Underwood, Peter; Waterson, Patrick
2014-07-01
The Swiss Cheese Model (SCM) is the most popular accident causation model and is widely used throughout various industries. A debate exists in the research literature over whether the SCM remains a viable tool for accident analysis. Critics of the model suggest that it provides a sequential, oversimplified view of accidents. Conversely, proponents suggest that it embodies the concepts of systems theory, as do contemporary systemic analysis techniques. The aim of this paper was to consider whether the SCM can provide a systems thinking approach and remain a viable option for accident analysis. To achieve this, the train derailment at Grayrigg was analysed with an SCM-based model (the ATSB accident investigation model) and two systemic accident analysis methods (AcciMap and STAMP). The analysis outputs and usage of the techniques were compared. The findings of the study showed that each model applied the systems thinking approach. However, the ATSB model and AcciMap graphically presented their findings in a more succinct manner, whereas STAMP more clearly embodied the concepts of systems theory. The study suggests that, whilst the selection of an analysis method is subject to trade-offs that practitioners and researchers must make, the SCM remains a viable model for accident analysis. Copyright © 2013 Elsevier Ltd. All rights reserved.
Simulation skill of APCC set of global climate models for Asian summer monsoon rainfall variability
NASA Astrophysics Data System (ADS)
Singh, U. K.; Singh, G. P.; Singh, Vikas
2015-04-01
The performance of 11 Asia-Pacific Economic Cooperation Climate Center (APCC) global climate models (both coupled and uncoupled) in simulating seasonal summer (June-August) monsoon rainfall variability over Asia (especially over India and East Asia) has been evaluated in detail using hind-cast data (3 months in advance) generated by APCC, which provides regional climate information products based on multi-model ensemble dynamical seasonal prediction systems. The skill of each global climate model over Asia was tested separately for a 21-year period (1983-2003), and simulated Asian summer monsoon rainfall (ASMR) was verified using various statistical measures for the Indian and East Asian land masses separately. The analysis found a larger variation in spatial ASMR simulated by the uncoupled models than by the coupled models (such as the Predictive Ocean Atmosphere Model for Australia, the National Centers for Environmental Prediction model and the Japan Meteorological Agency model). The ASMR simulated by the coupled models was closer to the Climate Prediction Center Merged Analysis of Precipitation (CMAP) than that of the uncoupled models, although both underestimated the amount of ASMR. The analysis also found a high spread in simulated ASMR among ensemble members, suggesting that model performance is highly dependent on initial conditions. Correlation analysis between sea surface temperature (SST) and ASMR shows that the coupled models are more strongly associated with ASMR than the uncoupled models, suggesting that air-sea interaction is well captured in the coupled models. Analysis of rainfall using various statistical measures suggests that the multi-model ensemble (MME) performed better than the individual models, and that separate analyses over the Indian and East Asian land masses are more informative than analysis of Asian monsoon rainfall as a whole. The results of the various statistical measures (skill of the multi-model ensemble, large spread among the ensemble members of individual models, strong teleconnection with SST, coefficient of variation, inter-annual variability, Taylor diagram analysis, etc.) suggest that improving coupled models, rather than uncoupled models, is needed for the development of a better dynamical seasonal forecast system.
Practical Use of Computationally Frugal Model Analysis Methods
Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; ...
2015-03-21
Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require tens of parallelizable model runs; their low cost frees computational effort for other uses. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.
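To make the scale concrete, here is a minimal sketch of the kind of computationally frugal, parallelizable local sensitivity analysis the abstract describes: one base run plus one perturbed run per parameter. The toy `head` function stands in for an expensive environmental model and is purely hypothetical.

```python
import numpy as np

def local_sensitivities(model, p0, rel_step=0.01):
    """One-at-a-time local sensitivity analysis: n+1 model runs for
    n parameters, each run independent and therefore parallelizable."""
    p0 = np.asarray(p0, dtype=float)
    y0 = model(p0)
    sens = np.empty(p0.size)
    for i in range(p0.size):
        p = p0.copy()
        dp = rel_step * (abs(p[i]) if p[i] != 0.0 else 1.0)
        p[i] += dp
        # dimensionless (scaled) sensitivity: d(ln y) / d(ln p_i)
        sens[i] = (model(p) - y0) / dp * p0[i] / y0
    return sens

# Hypothetical stand-in for an expensive groundwater model run:
head = lambda p: p[0] * np.log(p[1]) + p[2]
print(local_sensitivities(head, [2.0, 50.0, 10.0]))
```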
NASA Astrophysics Data System (ADS)
Wu, Sangwook
2016-04-01
Three-transmembrane and four-transmembrane helix models have been suggested for human vitamin K epoxide reductase (VKOR). In this study, we investigate the stability of the human three-transmembrane and four-transmembrane VKOR models by employing coarse-grained normal mode analysis and molecular dynamics simulation. Based on an analysis of the mobility of each transmembrane domain, we suggest that the three-transmembrane human VKOR model is more stable than the four-transmembrane model.
Discrete response patterns in the upper range of hypnotic suggestibility: A latent profile analysis.
Terhune, Devin Blair
2015-05-01
High hypnotic suggestibility is a heterogeneous condition and there is accumulating evidence that highly suggestible individuals may comprise discrete subtypes with dissimilar cognitive and phenomenological profiles. This study applied latent profile analysis to response patterns on a diverse battery of difficult hypnotic suggestions in a sample of individuals in the upper range of hypnotic suggestibility. Comparisons among models indicated that a four-class model was optimal. One class consisted of very highly suggestible (virtuoso) participants, two classes included highly suggestible participants who were alternately more responsive to inhibitory cognitive suggestions or posthypnotic amnesia suggestions, and the fourth class consisted primarily of medium suggestible participants. These results indicate that there are discrete response profiles in high hypnotic suggestibility. They further provide a number of insights regarding the optimization of hypnotic suggestibility measurement and have implications for the instrumental use of hypnosis for the modeling of different psychological conditions. Copyright © 2015 Elsevier Inc. All rights reserved.
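Latent profile analysis is often implemented as a Gaussian mixture model whose class count is chosen by an information criterion. The sketch below shows that model-comparison step on simulated stand-in data (the scores and class range are hypothetical, not the study's battery):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical stand-in: rows = participants, columns = scored
# responses to a battery of difficult hypnotic suggestions.
scores = rng.normal(size=(120, 8))

# Fit 1..6 latent profiles and select the class count by BIC,
# mirroring the comparison-among-models step in the abstract.
fits = [GaussianMixture(n_components=k, covariance_type="diag",
                        random_state=0).fit(scores)
        for k in range(1, 7)]
bic = [m.bic(scores) for m in fits]
best = fits[int(np.argmin(bic))]
print(best.n_components, np.bincount(best.predict(scores)))
```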
Isolating the anthropogenic component of Arctic warming
Chylek, Petr; Hengartner, Nicholas; Lesins, Glen; ...
2014-05-28
Structural equation modeling is used in statistical applications as both confirmatory and exploratory modeling to test models and to suggest the most plausible explanation for a relationship between the independent and the dependent variables. Although structural analysis cannot prove causation, it can suggest the most plausible set of factors that influence the observed variable. Here, we apply structural model analysis to the annual mean Arctic surface air temperature from 1900 to 2012 to find the most effective set of predictors and to isolate the anthropogenic component of the recent Arctic warming by subtracting the effects of natural forcing and variability from the observed temperature. We also find that radiative forcing by anthropogenic greenhouse gases and aerosols and the Atlantic Multidecadal Oscillation internal mode dominate Arctic temperature variability. Finally, our structural model analysis of observational data suggests that about half of the recent Arctic warming of 0.64 K/decade may have anthropogenic causes.
Improve FREQ macroscopic freeway analysis model
DOT National Transportation Integrated Search
2008-07-01
The primary objectives of this project have been to provide technical assistance on district freeway analysis projects, enhance the FREQ model based on guidance and suggestions from Caltrans staff members, and offer three freeway analysis workshops f...
Latent transition analysis of pre-service teachers' efficacy in mathematics and science
NASA Astrophysics Data System (ADS)
Ward, Elizabeth Kennedy
This study modeled changes in pre-service teacher efficacy in mathematics and science over the course of the final year of teacher preparation using latent transition analysis (LTA), a longitudinal form of analysis that builds on two modeling traditions (latent class analysis (LCA) and auto-regressive modeling). Data were collected using the STEBI-B, MTEBI-r, and the ABNTMS instruments. The findings suggest that LTA is a viable technique for use in teacher efficacy research. Teacher efficacy is modeled as a construct with two dimensions: personal teaching efficacy (PTE) and outcome expectancy (OE). Findings suggest that the mathematics and science teaching efficacy (PTE) of pre-service teachers is a multi-class phenomenon. The analyses revealed a four-class model of PTE at the beginning and end of the final year of teacher training. Results indicate that when pre-service teachers transition between classes, they tend to move from a lower efficacy class into a higher efficacy class. In addition, the findings suggest that time-varying variables (attitudes and beliefs) and time-invariant variables (previous coursework, previous experiences, and teacher perceptions) are statistically significant predictors of efficacy class membership. Further, analyses suggest that the measures used to assess outcome expectancy are not suitable for LCA and LTA procedures.
Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee
2013-07-01
Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
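For orientation, one common form of the standardized residual attributed to Hambleton et al. compares, within ability groups, the observed proportion correct with the model-implied item characteristic curve (a sketch of the general idea; the paper's own residual replaces the maximum-likelihood ICC estimate with a ratio estimate):

```latex
% Group examinees into ability intervals j = 1..J; compare the observed
% proportion correct O_j with the model-implied ICC value E_j.
\[
  z_j \;=\; \frac{O_j - E_j}{\sqrt{E_j\,(1 - E_j)/N_j}},
  \qquad E_j = P\!\left(u = 1 \mid \hat{\theta}_j\right),
\]
% where N_j is the number of examinees in interval j; under a fitting
% model, z_j is approximately standard normal.
```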
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenzel, Tom P.
This report presents a new approach to analyze the relationship between vehicle mass and risk: tracking fatality risk by vehicle model year and mass, for individual vehicle models. This approach is appealing as it greatly minimizes the influence of driver characteristics and behavior, and crash circumstances, on fatality risk. However, only the most popular vehicle models, with the largest number of fatalities, can be analyzed in this manner. While the analysis of all vehicle models of a given type suggests that there is a relationship between increased mass and fatality risk, analysis of the ten most popular four-door car models separately suggests that this relationship is weak: in many cases when the mass of a specific vehicle model is increased, societal fatality risk is unchanged or even increases. These results suggest that increasing the mass of an individual vehicle model does not necessarily lead to decreased societal fatality risk.
Nguyen, Hien D; Ullmann, Jeremy F P; McLachlan, Geoffrey J; Voleti, Venkatakaushik; Li, Wenze; Hillman, Elizabeth M C; Reutens, David C; Janke, Andrew L
2018-02-01
Calcium is a ubiquitous messenger in neural signaling events. An increasing number of techniques are enabling visualization of neurological activity in animal models via luminescent proteins that bind to calcium ions. These techniques generate large volumes of spatially correlated time series. A model-based functional data analysis methodology via Gaussian mixtures is proposed for clustering data from such visualizations. The methodology is theoretically justified and a computationally efficient approach to estimation is suggested. An example analysis of a zebrafish imaging experiment is presented.
A Noncentral "t" Regression Model for Meta-Analysis
ERIC Educational Resources Information Center
Camilli, Gregory; de la Torre, Jimmy; Chiu, Chia-Yi
2010-01-01
In this article, three multilevel models for meta-analysis are examined. Hedges and Olkin suggested that effect sizes follow a noncentral "t" distribution and proposed several approximate methods. Raudenbush and Bryk further refined this model; however, this procedure is based on a normal approximation. In the current research literature, this…
Zhu, Ming; Zhang, Jie; Nie, Shaofa; Yan, Weirong
2012-09-01
There have been many studies concerning the associations of angiotensin-converting enzyme (ACE) I/D and angiotensinogen (AGT) M235T polymorphisms with pregnancy-induced hypertension (PIH) among Chinese populations. However, the results were inconsistent, prompting the need for a meta-analysis. Studies published in English and Chinese were mainly searched in EMbase, PubMed and CBM up to January 2012. Twenty-three studies with 3,551 subjects for ACE I/D and seven studies with 1,296 subjects for AGT M235T were included. Significant associations were found between ACE I/D and PIH under dominant, recessive and allelic models. A separate analysis confined to preeclampsia suggested that ACE I/D was associated with preeclampsia under the recessive and allelic models, but not the dominant model. Stratified analyses were conducted as meta-regression analysis indicated that the sample size of the case group was a significant source of heterogeneity, which suggested no significant association between ACE I/D and PIH in the subgroup of more than 100 cases. Associations were found between AGT M235T and PIH under the dominant genetic model (OR = 1.59; 95% CI: 1.04-2.42), the recessive genetic model (OR = 1.60; 95% CI: 1.07-2.40), and the allelic model (OR = 1.40; 95% CI: 1.17-1.68). No publication bias was found in either meta-analysis. The present meta-analysis suggested significant associations between ACE I/D, AGT M235T and PIH in Chinese populations. However, no significant association was found between ACE I/D and PIH in the subgroup of more than 100 cases. Studies with larger sample sizes are necessary to investigate the associations between gene polymorphisms and PIH in Chinese populations.
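The pooling step behind such results is typically inverse-variance weighting on the log-odds-ratio scale. A minimal fixed-effect sketch (the per-study ORs and confidence limits below are hypothetical, not the meta-analysis data; a full analysis would also test heterogeneity and consider a random-effects model):

```python
import numpy as np

def pooled_or(or_values, ci_low, ci_high):
    """Fixed-effect inverse-variance pooling on the log-OR scale;
    standard errors are recovered from the 95% confidence limits."""
    log_or = np.log(or_values)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    w = 1.0 / se**2
    pooled = np.sum(w * log_or) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    ci = np.exp(pooled + np.array([-1.96, 1.96]) * pooled_se)
    return np.exp(pooled), ci

# Hypothetical per-study ORs with 95% CIs:
print(pooled_or([1.8, 1.3, 1.6], [1.1, 0.9, 1.0], [2.9, 1.9, 2.6]))
```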
A biologically based mathematical model for the induction of liver tumors in mice by dichloroacetic acid (DCA) has been developed from histopathologic analysis of the livers of exposed mice. This analysis suggests that following chronic exposure to DCA, carcinomas can arise dire...
Requirements analysis, domain knowledge, and design
NASA Technical Reports Server (NTRS)
Potts, Colin
1988-01-01
Two improvements to current requirements analysis practices are suggested: domain modeling, and the systematic application of analysis heuristics. Domain modeling is the representation of relevant application knowledge prior to requirements specification. Artificial intelligence techniques may eventually be applicable for domain modeling. In the short term, however, restricted domain modeling techniques, such as that in JSD, will still be of practical benefit. Analysis heuristics are standard patterns of reasoning about the requirements. They usually generate questions of clarification or issues relating to completeness. Analysis heuristics can be represented and therefore systematically applied in an issue-based framework. This is illustrated by an issue-based analysis of JSD's domain modeling and functional specification heuristics. They are discussed in the context of the preliminary design of simple embedded systems.
Molecular Modeling in Drug Design for the Development of Organophosphorus Antidotes/Prophylactics.
1986-06-01
multidimensional statistical QSAR analysis techniques to suggest new structures for synthesis and evaluation. C. Application of quantum chemical techniques to...compounds for synthesis and testing for antidotal potency. E. Use of computer-assisted methods to determine the steric constraints at the active site...modeling techniques to model the enzyme acetylcholinesterase. H. Suggestion of some novel compounds for synthesis and testing for reactivating
Model prototype utilization in the analysis of fault tolerant control and data processing systems
NASA Astrophysics Data System (ADS)
Kovalev, I. V.; Tsarev, R. Yu; Gruzenkin, D. V.; Prokopenko, A. V.; Knyazkov, A. N.; Laptenok, V. D.
2016-04-01
A procedure for assessing the profit of implementing a control and data processing system is presented in the paper. The case for creating and analyzing a model prototype follows from an approach that provides fault tolerance through structural and software redundancy. The developed procedure allows one to find the best ratio between the cost of developing and analyzing the model prototype and the earnings from its use and the information it produces. The suggested approach is illustrated by a model example of profit assessment and analysis for a control and data processing system.
Computational modeling of peripheral pain: a commentary.
Argüello, Erick J; Silva, Ricardo J; Huerta, Mónica K; Avila, René S
2015-06-11
This commentary is intended to find possible explanations for the low impact of computational modeling on pain research. We discuss the main strategies that have been used in building computational models for the study of pain. The analysis suggests that traditional models lack biological plausibility at some levels, they do not provide clinically relevant results, and they cannot capture the stochastic character of neural dynamics. On this basis, we provide some suggestions that may be useful in building computational models of pain with a wider range of applications.
NASA Astrophysics Data System (ADS)
Guerin, Marianne
2001-10-01
An analysis of tritium and 36Cl data collected at Yucca Mountain, Nevada suggests that fracture flow may occur at high velocities through the thick unsaturated zone. The mechanisms and extent of this "fast flow" in fractures at Yucca Mountain are investigated with data analysis, mixing models and several one-dimensional modeling scenarios. The model results and data analysis provide evidence substantiating the weeps model [Gauthier, J.H., Wilson, M.L., Lauffer, F.C., 1992. Proceedings of the Third Annual International High-level Radioactive Waste Management Conference, vol. 1, Las Vegas, NV. American Nuclear Society, La Grange Park, IL, pp. 891-989] and suggest that fast flow in fractures with minimal fracture-matrix interaction may comprise a substantial proportion of the total infiltration through Yucca Mountain. Mixing calculations suggest that bomb-pulse tritium measurements, in general, represent the tail end of travel times for thermonuclear-test-era (bomb-pulse) infiltration. The data analysis shows that bomb-pulse tritium and 36Cl measurements are correlated with discrete features such as horizontal fractures and areas where lateral flow may occur. The results presented here imply that fast flow in fractures may be ubiquitous at Yucca Mountain, occurring when transient infiltration (storms) generates flow in the connected fracture network.
Bardhan, Jaydeep P; Knepley, Matthew G
2011-09-28
We analyze the mathematically rigorous BIBEE (boundary-integral based electrostatics estimation) approximation of the mixed-dielectric continuum model of molecular electrostatics, using the analytically solvable case of a spherical solute containing an arbitrary charge distribution. Our analysis, which builds on Kirkwood's solution using spherical harmonics, clarifies important aspects of the approximation and its relationship to generalized Born models. First, our results suggest a new perspective for analyzing fast electrostatic models: the separation of variables between material properties (the dielectric constants) and geometry (the solute dielectric boundary and charge distribution). Second, we find that the eigenfunctions of the reaction-potential operator are exactly preserved in the BIBEE model for the sphere, which supports the use of this approximation for analyzing charge-charge interactions in molecular binding. Third, a comparison of BIBEE to the recent GBε theory suggests a modified BIBEE model capable of predicting electrostatic solvation free energies to within 4% of a full numerical Poisson calculation. This modified model leads to a projection-framework understanding of BIBEE and suggests opportunities for future improvements. © 2011 American Institute of Physics
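For orientation on the analytically solvable case, the l = 0 (Born) term of Kirkwood's spherical-harmonic solution gives the reaction energy of a central charge q in a sphere of radius b (Gaussian units; a textbook special case, not the paper's full multipole treatment):

```latex
% Interior dielectric e_in, solvent dielectric e_out; higher multipoles
% l >= 1 carry their own (e_in, e_out) prefactors, and it is this
% separation of material properties from geometry that BIBEE exploits.
\[
  \Delta G_{\mathrm{solv}}^{(l=0)} \;=\;
    -\,\frac{q^{2}}{2b}\left(\frac{1}{\epsilon_{\mathrm{in}}}
      - \frac{1}{\epsilon_{\mathrm{out}}}\right)
\]
```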
Differentiating Categories and Dimensions: Evaluating the Robustness of Taxometric Analyses
ERIC Educational Resources Information Center
Ruscio, John; Kaczetow, Walter
2009-01-01
Interest in modeling the structure of latent variables is gaining momentum, and many simulation studies suggest that taxometric analysis can validly assess the relative fit of categorical and dimensional models. The generation and parallel analysis of categorical and dimensional comparison data sets reduces the subjectivity required to interpret…
A Rational Analysis of the Selection Task as Optimal Data Selection.
ERIC Educational Resources Information Center
Oaksford, Mike; Chater, Nick
1994-01-01
Experimental data on human reasoning in hypothesis-testing tasks is reassessed in light of a Bayesian model of optimal data selection in inductive hypothesis testing. The rational analysis provided by the model suggests that reasoning in such tasks may be rational rather than subject to systematic bias. (SLD)
Guikema, Seth
2012-07-01
Intelligent adversary modeling has become increasingly important for risk analysis, and a number of different approaches have been proposed for incorporating intelligent adversaries in risk analysis models. However, these approaches are based on a range of often-implicit assumptions about the desirable properties of intelligent adversary models. This "Perspective" paper aims to further risk analysis for situations involving intelligent adversaries by fostering a discussion of the desirable properties for these models. A set of four basic necessary conditions for intelligent adversary models is proposed and discussed. These are: (1) behavioral accuracy to the degree possible, (2) computational tractability to support decision making, (3) explicit consideration of uncertainty, and (4) ability to gain confidence in the model. It is hoped that these suggested necessary conditions foster discussion about the goals and assumptions underlying intelligent adversary modeling in risk analysis. © 2011 Society for Risk Analysis.
Kowalczuk, Maria K; Dudbridge, Frank; Nanda, Shreeya; Harriman, Stephanie L; Patel, Jigisha; Moylan, Elizabeth C
2015-01-01
Objectives: To assess whether reports from reviewers recommended by authors show a bias in quality and recommendation for editorial decision, compared with reviewers suggested by other parties, and whether reviewer reports for journals operating on open or single-blind peer review models differ with regard to report quality and reviewer recommendations. Design: Retrospective analysis of the quality of reviewer reports using an established Review Quality Instrument, and analysis of reviewer recommendations and author satisfaction surveys. Setting: BioMed Central biology and medical journals. BMC Infectious Diseases and BMC Microbiology are similar in size, rejection rates, impact factors and editorial processes, but the former uses open peer review while the latter uses single-blind peer review. The Journal of Inflammation has operated under both peer review models. Sample: Two hundred reviewer reports submitted to BMC Infectious Diseases, 200 reviewer reports submitted to BMC Microbiology and 400 reviewer reports submitted to the Journal of Inflammation. Results: For each journal, author-suggested reviewers provided reports of comparable quality to non-author-suggested reviewers, but were significantly more likely to recommend acceptance, irrespective of the peer review model (p<0.0001 for BMC Infectious Diseases, BMC Microbiology and the Journal of Inflammation). For BMC Infectious Diseases, the overall quality of reviewer reports measured by the Review Quality Instrument was 5% higher than for BMC Microbiology (p=0.042). For the Journal of Inflammation, the quality of reports was the same irrespective of the peer review model used. Conclusions: Reviewers suggested by authors provide reports of comparable quality to non-author-suggested reviewers, but are significantly more likely to recommend acceptance. Open peer review reports for BMC Infectious Diseases were of higher quality than single-blind reports for BMC Microbiology. There was no difference in quality of peer review in the Journal of Inflammation under open peer review compared with single blind. PMID:26423855
Modeling imbalanced economic recovery following a natural disaster using input-output analysis.
Li, Jun; Crawford-Brown, Douglas; Syddall, Mark; Guan, Dabo
2013-10-01
Input-output analysis is frequently used in studies of large-scale weather-related (e.g., Hurricanes and flooding) disruption of a regional economy. The economy after a sudden catastrophe shows a multitude of imbalances with respect to demand and production and may take months or years to recover. However, there is no consensus about how the economy recovers. This article presents a theoretical route map for imbalanced economic recovery called dynamic inequalities. Subsequently, it is applied to a hypothetical postdisaster economic scenario of flooding in London around the year 2020 to assess the influence of future shocks to a regional economy and suggest adaptation measures. Economic projections are produced by a macro econometric model and used as baseline conditions. The results suggest that London's economy would recover over approximately 70 months by applying a proportional rationing scheme under the assumption of initial 50% labor loss (with full recovery in six months), 40% initial loss to service sectors, and 10-30% initial loss to other sectors. The results also suggest that imbalance will be the norm during the postdisaster period of economic recovery even though balance may occur temporarily. Model sensitivity analysis suggests that a proportional rationing scheme may be an effective strategy to apply during postdisaster economic reconstruction, and that policies in transportation recovery and in health care are essential for effective postdisaster economic recovery. © 2013 Society for Risk Analysis.
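The computational core of such studies is the open Leontief model. A minimal sketch with a proportional capacity shock (the coefficients and loss fractions below are hypothetical, and the paper's dynamic-inequalities formulation goes well beyond this static step):

```python
import numpy as np

# Open Leontief input-output model: x = (I - A)^(-1) d.
A = np.array([[0.10, 0.30],       # hypothetical technical coefficients
              [0.20, 0.15]])
d = np.array([100.0, 80.0])       # final demand by sector

x_pre = np.linalg.solve(np.eye(2) - A, d)   # pre-disaster output
capacity = x_pre * np.array([0.6, 0.9])     # e.g. 40% / 10% initial losses
x_post = np.minimum(x_pre, capacity)        # capacity-constrained output
print(x_pre.round(1), x_post.round(1))
```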
Analytic uncertainty and sensitivity analysis of models with input correlations
NASA Astrophysics Data System (ADS)
Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu
2018-03-01
Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often arise in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
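The standard first-order (delta-method) result makes the role of input correlations explicit; the covariance terms below are exactly what independence-based analyses drop (a sketch of the textbook expansion, not the paper's full derivation):

```latex
\[
  \operatorname{Var}(Y) \;\approx\;
    \sum_{i=1}^{n}\left(\frac{\partial g}{\partial x_i}\right)^{2}
      \operatorname{Var}(X_i)
    \;+\; 2\sum_{i<j}\frac{\partial g}{\partial x_i}\,
      \frac{\partial g}{\partial x_j}\,
      \operatorname{Cov}(X_i, X_j),
  \qquad Y = g(X_1,\dots,X_n)
\]
```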
Tamez-Peña, Jose-Gerardo; Rodriguez-Rojas, Juan-Andrés; Gomez-Rueda, Hugo; Celaya-Padilla, Jose-Maria; Rivera-Prieto, Roxana-Alicia; Palacios-Corona, Rebeca; Garza-Montemayor, Margarita; Cardona-Huerta, Servando; Treviño, Victor
2018-01-01
In breast cancer, well-known gene expression subtypes have been related to a specific clinical outcome. However, their impact on the breast tissue phenotype has been poorly studied. Here, we investigate the association of imaging data of tumors to gene expression signatures from 71 patients with breast cancer who underwent pre-treatment digital mammograms and tumor biopsies. From digital mammograms, a semi-automated radiogenomics analysis generated 1,078 features describing the shape, signal distribution, and texture of tumors, along with the contralateral image used as a control. From tumor biopsy, we estimated the OncotypeDX and PAM50 recurrence scores using gene expression microarrays. Then, we used multivariate analysis under stringent cross-validation to train models predicting recurrence scores. Few univariate features reached Spearman correlation coefficients above 0.4. Nevertheless, multivariate analysis yielded significantly correlated models for both signatures (correlation of OncotypeDX = 0.49 ± 0.07 and PAM50 = 0.32 ± 0.10 in stringent cross-validation and OncotypeDX = 0.83 and PAM50 = 0.78 for a unique model). Equivalent models trained from the unaffected contralateral breast were not correlated, suggesting that the image signatures were tumor-specific and that overfitting was not a considerable issue. We also noted that models were improved by combining clinical information (triple negative status and progesterone receptor). The models used mostly wavelets and fractal features, suggesting their importance in capturing tumor information. Our results suggest that molecular-based recurrence risk and breast cancer subtypes have observable radiographic phenotypes. To our knowledge, this is the first study associating mammographic information to gene expression recurrence signatures.
Kreienkamp, Amelia B.; Liu, Lucy Y.; Minkara, Mona S.; Knepley, Matthew G.; Bardhan, Jaydeep P.; Radhakrishnan, Mala L.
2013-01-01
We analyze and suggest improvements to a recently developed approximate continuum-electrostatic model for proteins. The model, called BIBEE/I (boundary-integral based electrostatics estimation with interpolation), was able to estimate electrostatic solvation free energies to within a mean unsigned error of 4% on a test set of more than 600 proteins—a significant improvement over previous BIBEE models. In this work, we tested the BIBEE/I model for its capability to predict residue-by-residue interactions in protein–protein binding, using the widely studied model system of trypsin and bovine pancreatic trypsin inhibitor (BPTI). Finding that the BIBEE/I model performs surprisingly less well in this task than simpler BIBEE models, we seek to explain this behavior in terms of the models’ differing spectral approximations of the exact boundary-integral operator. Calculations of analytically solvable systems (spheres and tri-axial ellipsoids) suggest two possibilities for improvement. The first is a modified BIBEE/I approach that captures the asymptotic eigenvalue limit correctly, and the second involves the dipole and quadrupole modes for ellipsoidal approximations of protein geometries. Our analysis suggests that fast, rigorous approximate models derived from reduced-basis approximation of boundary-integral equations might reach unprecedented accuracy, if the dipole and quadrupole modes can be captured quickly for general shapes. PMID:24466561
Using Structural Equation Models with Latent Variables to Study Student Growth and Development.
ERIC Educational Resources Information Center
Pike, Gary R.
1991-01-01
Analysis of data on freshman-to-senior developmental gains in 722 University of Tennessee-Knoxville students provides evidence of the advantages of structural equation modeling with latent variables and suggests that the group differences identified by traditional analysis of variance and covariance techniques may be an artifact of measurement…
Classes in the Balance: Latent Class Analysis and the Balance Scale Task
ERIC Educational Resources Information Center
Boom, Jan; ter Laak, Jan
2007-01-01
Latent class analysis (LCA) has been successfully applied to tasks measuring higher cognitive functioning, suggesting the existence of distinct strategies used in such tasks. With LCA it became possible to classify post hoc. This important step forward in modeling and analyzing cognitive strategies is relevant to the overlapping waves model for…
A case for poroelasticity in skeletal muscle finite element analysis: experiment and modeling.
Wheatley, Benjamin B; Odegard, Gregory M; Kaufman, Kenton R; Haut Donahue, Tammy L
2017-05-01
Finite element models of skeletal muscle typically ignore the biphasic nature of the tissue, associating any time dependence with a viscoelastic formulation. In this study, direct experimental measurement of permeability was conducted as a function of specimen orientation and strain. A finite element model was developed to identify how various permeability formulations affect compressive response of the tissue. Experimental and modeling results suggest the assumption of a constant, isotropic permeability is appropriate. A viscoelastic only model differed considerably from a visco-poroelastic model, suggesting the latter is more appropriate for compressive studies.
NASA Astrophysics Data System (ADS)
La Vigna, Francesco; Hill, Mary C.; Rossetto, Rudy; Mazza, Roberto
2016-09-01
With respect to model parameterization and sensitivity analysis, this work uses a practical example to suggest that approaches that start with simple models and use computationally frugal model analysis methods remain valuable in any model development toolbox. In this work, groundwater model calibration starts with a simple parameterization that evolves into a moderately complex model. The model is developed for a water management study of the Tivoli-Guidonia basin (Rome, Italy) where surface mining has been conducted in conjunction with substantial dewatering. The approach to model development used in this work employs repeated analysis using sensitivity and inverse methods, including use of a new observation-stacked parameter importance graph. The methods are highly parallelizable and require few model runs, which makes the repeated analyses and attendant insights possible. The success of a model development design can be measured by insights attained and demonstrated model accuracy relevant to predictions. Example insights were obtained: (1) A long-held belief that, except for a few distinct fractures, the travertine is homogeneous was found to be inadequate, and (2) The dewatering pumping rate is more critical to model accuracy than expected. The latter insight motivated additional data collection and improved pumpage estimates. Validation tests using three other recharge and pumpage conditions suggest good accuracy for the predictions considered. The model was used to evaluate management scenarios and showed that similar dewatering results could be achieved using 20% less pumped water, but would require installing newly positioned wells and cooperation between mine owners.
Weighted analysis of paired microarray experiments.
Kristiansson, Erik; Sjögren, Anders; Rudemo, Mats; Nerman, Olle
2005-01-01
In microarray experiments quality often varies, for example between samples and between arrays. The need for quality control is therefore strong. A statistical model and a corresponding analysis method are suggested for experiments with pairing, including designs with individuals observed before and after treatment and many experiments with two-colour spotted arrays. The model is of mixed type with some parameters estimated by an empirical Bayes method. Differences in quality are modelled by individual variances and correlations between repetitions. The method is applied to three real and several simulated datasets. Two of the real datasets are of Affymetrix type with patients profiled before and after treatment, and the third dataset is of two-colour spotted cDNA type. In all cases, the patients or arrays had different estimated variances, leading to distinctly unequal weights in the analysis. We also suggest plots that illustrate the variances and correlations that affect the weights computed by our analysis method. For simulated data the improvement relative to previously published methods without weighting is shown to be substantial.
The Communication Model Perspective of Oral Interpretation.
ERIC Educational Resources Information Center
Peterson, Eric E.
Communication models suggest that oral interpretation is a communicative process, that this process may be represented by specification of implicit and explicit content and structure, and that the models themselves are useful. This paper examines these assumptions through a comparative analysis of communication models employed by oral…
Robust Linear Models for Cis-eQTL Analysis.
Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C
2015-01-01
Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly with respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
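A minimal sketch of the contrast between a conventional and a robust eQTL fit, using Huber's M-estimator as one common robust choice (the simulated dosage-expression data and effect size are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
dosage = rng.integers(0, 3, size=n).astype(float)  # allelic dosage 0/1/2
# Heavy-tailed noise plus a few outliers, as the abstract describes:
expr = 0.4 * dosage + rng.standard_t(df=3, size=n)
expr[:3] += 8.0

X = sm.add_constant(dosage)
ols = sm.OLS(expr, X).fit()                              # conventional
rlm = sm.RLM(expr, X, M=sm.robust.norms.HuberT()).fit()  # robust
print(ols.params[1], rlm.params[1])  # compare estimated dosage effects
```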
The adaptive safety analysis and monitoring system
NASA Astrophysics Data System (ADS)
Tu, Haiying; Allanach, Jeffrey; Singh, Satnam; Pattipati, Krishna R.; Willett, Peter
2004-09-01
The Adaptive Safety Analysis and Monitoring (ASAM) system is a hybrid model-based software tool for assisting intelligence analysts to identify terrorist threats, to predict possible evolution of the terrorist activities, and to suggest strategies for countering terrorism. The ASAM system provides a distributed processing structure for gathering, sharing, understanding, and using information to assess and predict terrorist network states. In combination with counter-terrorist network models, it can also suggest feasible actions to inhibit potential terrorist threats. In this paper, we will introduce the architecture of the ASAM system, and discuss the hybrid modeling approach embedded in it, viz., Hidden Markov Models (HMMs) to detect and provide soft evidence on the states of terrorist network nodes based on partial and imperfect observations, and Bayesian networks (BNs) to integrate soft evidence from multiple HMMs. The functionality of the ASAM system is illustrated by way of application to the Indian Airlines Hijacking, as modeled from open sources.
NASA Astrophysics Data System (ADS)
di Stefano, Marco; Paulsen, Jonas; Lien, Tonje G.; Hovig, Eivind; Micheletti, Cristian
2016-10-01
Combining genome-wide structural models with phenomenological data is at the forefront of efforts to understand the organizational principles regulating the human genome. Here, we use chromosome-chromosome contact data as knowledge-based constraints for large-scale three-dimensional models of the human diploid genome. The resulting models remain minimally entangled and acquire several functional features that are observed in vivo and that were never used as input for the model. We find, for instance, that gene-rich, active regions are drawn towards the nuclear center, while gene poor and lamina associated domains are pushed to the periphery. These and other properties persist upon adding local contact constraints, suggesting their compatibility with non-local constraints for the genome organization. The results show that suitable combinations of data analysis and physical modelling can expose the unexpectedly rich functionally-related properties implicit in chromosome-chromosome contact data. Specific directions are suggested for further developments based on combining experimental data analysis and genomic structural modelling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-09-01
This document presents a modeling and control study of the Fluid Bed Gasification (FBG) unit at the Morgantown Energy Technology Center (METC). The work is performed under contract no. DE-FG21-94MC31384. The purpose of this study is to generate a simple FBG model from process data, and then use the model to suggest a control scheme that will improve operation of the gasifier. The work first develops a simple linear model of the gasifier, then suggests an improved gasifier pressure and MGCR control configuration, and finally suggests the use of a multivariable control strategy for the gasifier.
ERIC Educational Resources Information Center
Chou, Yeh-Tai; Wang, Wen-Chung
2010-01-01
Dimensionality is an important assumption in item response theory (IRT). Principal component analysis on standardized residuals has been used to check dimensionality, especially under the family of Rasch models. It has been suggested that an eigenvalue greater than 1.5 for the first eigenvalue signifies a violation of unidimensionality when there…
Report on Spending Trends Highlights Inequities in Model for Financing Colleges
ERIC Educational Resources Information Center
Blumenstyk, Goldie
2009-01-01
An analysis of spending trends that is designed to discourage policy makers' focus on finding new revenue rather than reining in spending suggests that the model for financing colleges has reinforced educational inequities and failed to increase the rate at which students graduate. According to the analysis, "serious fault lines" in the current…
Note on Professor Sizer's Paper.
ERIC Educational Resources Information Center
Balderston, Frederick E.
1979-01-01
Issues suggested by John Sizer's paper, an overview of the assessment of institutional performance, include: the efficient-frontier approach, multiple-criterion decision-making models, performance analysis approached as path analysis, and assessment of academic quality. (JMD)
Narayanan, Neethu; Gupta, Suman; Gajbhiye, V T; Manjaiah, K M
2017-04-01
A carboxy methyl cellulose-nano organoclay (nano montmorillonite modified with 35-45 wt % dimethyl dialkyl (C14-C18) amine (DMDA)) composite was prepared by the solution intercalation method. The prepared composite was characterized by infrared spectroscopy (FTIR), X-ray diffraction spectroscopy (XRD) and scanning electron microscopy (SEM). The composite was evaluated for its pesticide sorption efficiency toward atrazine, imidacloprid and thiamethoxam. The sorption data were fitted to Langmuir and Freundlich isotherms using linear and non-linear methods. The linear regression method suggested that the sorption data fitted best to the Type II Langmuir and Freundlich isotherms. To avoid the bias resulting from linearization, seven different error parameters were also analyzed by the non-linear regression method. The non-linear error analysis suggested that the sorption data fitted the Langmuir model better than the Freundlich model. The maximum sorption capacity, Q0 (μg/g), was highest for imidacloprid (2000), followed by thiamethoxam (1667) and atrazine (1429). The study suggests that the coefficient of determination from linear regression alone cannot be used to compare the fit of the Langmuir and Freundlich models, and non-linear error analysis is needed to avoid inaccurate results. Copyright © 2017 Elsevier Ltd. All rights reserved.
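Non-linear fitting of the two isotherms, which avoids the linearization bias the study warns about, can be sketched as follows (the equilibrium data here are hypothetical, and SSE is just one of the several error parameters such studies compare):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q0, b):        # qe = Q0*b*Ce / (1 + b*Ce)
    return q0 * b * c / (1.0 + b * c)

def freundlich(c, kf, n):      # qe = Kf * Ce**(1/n)
    return kf * c ** (1.0 / n)

# Hypothetical equilibrium data: Ce in mg/L, qe in ug/g.
ce = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
qe = np.array([300.0, 520.0, 800.0, 1150.0, 1400.0, 1550.0])

for name, f, p0 in [("Langmuir", langmuir, (1600.0, 0.1)),
                    ("Freundlich", freundlich, (400.0, 2.0))]:
    popt, _ = curve_fit(f, ce, qe, p0=p0, maxfev=10000)
    sse = np.sum((qe - f(ce, *popt)) ** 2)   # sum of squared errors
    print(name, popt.round(3), round(sse, 1))
```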
Hara, Kenju; Kuwano, Ryozo; Miyashita, Akinori; Kokubo, Yasumasa; Sasaki, Ryogen; Nakahara, Yasuo; Goto, Jun; Nishizawa, Masatoyo; Kuzuhara, Shigeki; Tsuji, Shoji
2007-11-01
Recent clinical research has revealed that more than 70% of patients with ALS/PDC, which is highly prevalent in the Hohara area of the Kii peninsula, have a family history. Eighty percent of Guamanian patients, whose pathological findings are identical to those of ALS/PDC in Kii, are also known to have a family history with a non-Mendelian trait. These facts suggest a strong genetic predisposition to ALS/PDC in both Kii and Guam. However, no genes associated with ALS/PDC have been identified by molecular genetic studies using a candidate gene approach. To identify the causative or susceptibility genes for ALS/PDC, we have conducted a genome-wide linkage analysis for five families with ALS/PDC in Hohara. The fact that affected individuals were ascertained in successive generations suggests an autosomal dominant (AD) inheritance, while the presence of consanguinity suggests an autosomal recessive (AR) inheritance. Although we can raise the possibilities of an AD model with incomplete penetrance or an AR model with high gene frequency (pseudo-dominant model), the mode of inheritance in these ALS/PDC families is complicated and controversial. Therefore, we are also conducting a model-free (non-parametric) linkage analysis to identify the disease locus without assuming a mode of inheritance. More family members and detailed clinical evaluations are required to obtain convincing evidence of linkage.
Allele-sharing models: LOD scores and accurate linkage tests.
Kong, A; Cox, N J
1997-11-01
Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested.
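Schematically, the LOD score in such a one-parameter allele-sharing model is the usual base-10 likelihood-ratio statistic, with the sharing parameter replacing the recombination fraction of classical linkage analysis:

```latex
% delta parametrizes excess allele sharing among affected relatives;
% delta_0 is its value under the null of no linkage (Mendelian
% expectation). A schematic form, not the paper's exact notation.
\[
  \mathrm{LOD} \;=\; \log_{10}
    \frac{L(\hat{\delta})}{L(\delta_{0})}
\]
```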
APPLE - An aeroelastic analysis system for turbomachines and propfans
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Bakhle, Milind A.; Srivastava, R.; Mehmed, Oral
1992-01-01
This paper reviews aeroelastic analysis methods for propulsion elements (advanced propellers, compressors and turbines) being developed and used at NASA Lewis Research Center. These aeroelastic models include both structural and aerodynamic components. The structural models include the typical section model, the beam model with and without disk flexibility, and the finite element blade model with plate bending elements. The aerodynamic models are based on the solution of equations ranging from the two-dimensional linear potential equation for a cascade to the three-dimensional Euler equations for multi-blade configurations. Typical results are presented for each aeroelastic model. Suggestions for further research are indicated. All the available aeroelastic models and analysis methods are being incorporated into a unified computer program named APPLE (Aeroelasticity Program for Propulsion at LEwis).
Correlating the EMC analysis and testing methods for space systems in MIL-STD-1541A
NASA Technical Reports Server (NTRS)
Perez, Reinaldo J.
1990-01-01
A study was conducted to improve the correlation between the electromagnetic compatibility (EMC) analysis models stated in MIL-STD-1541A and the suggested testing methods used for space systems. The test and analysis methods outlined in MIL-STD-1541A are described, and a comparative assessment of testing and analysis techniques as they relate to several EMC areas is presented. Modifications to the present analysis and test methods are suggested to bring the analysis and testing tools of MIL-STD-1541A into closer agreement. In particular, the test procedures in MIL-STD-1541A should be improved by providing alternatives to the present use of shielded enclosures as the primary test site, and the alternate use of anechoic chambers and open-field test sites should be considered.
Landsat analysis of tropical forest succession employing a terrain model
NASA Technical Reports Server (NTRS)
Barringer, T. H.; Robinson, V. B.; Coiner, J. C.; Bruce, R. C.
1980-01-01
Landsat multispectral scanner (MSS) data have yielded a dual classification of rain forest and shadow in an analysis of a semi-deciduous forest on Mindoro Island, Philippines. Both a spatial terrain model, using a fifth-order polynomial trend surface analysis to estimate the general spatial variation in the data set quantitatively, and a spectral terrain model, based on the MSS data, have been set up. A discriminant analysis using both sets of data suggests that shadowing effects may be due primarily to local variations in the spectral regions and can therefore be compensated for through the decomposition of the spatial variation in both elevation and MSS data.
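The spatial terrain model described here is a polynomial trend surface. As a rough illustration only, the sketch below fits a fifth-order surface by least squares; the arrays x, y and z are invented stand-ins for the ground coordinates and elevation/MSS values, not the Landsat data.

```python
# Minimal sketch of a fifth-order polynomial trend-surface fit, assuming
# hypothetical arrays x, y (ground coordinates) and z (elevation or MSS value).
import numpy as np

def trend_surface(x, y, z, order=5):
    """Fit z = sum_{i+j<=order} c_ij * x^i * y^j by least squares."""
    terms = [(i, j) for i in range(order + 1)
                    for j in range(order + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    fitted = A @ coeffs
    return coeffs, fitted, z - fitted   # trend coefficients, trend, residuals

# Example with synthetic data
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1, 500), rng.uniform(0, 1, 500)
z = 10 + 3 * x - 2 * y + x * y + rng.normal(0, 0.1, 500)
_, trend, residuals = trend_surface(x, y, z)
```

The residuals (observed minus trend) isolate local variation of the kind the discriminant analysis would then examine.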
Multivariate Analysis of Seismic Field Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, M. Kathleen
1999-06-01
This report includes the details of the model building procedure and prediction of seismic field data. Principal Components Regression, a multivariate analysis technique, was used to model seismic data collected as two pieces of equipment were cycled on and off. Models built that included only the two pieces of equipment of interest had trouble predicting data containing signals not included in the model. Evidence for poor predictions came from the prediction curves as well as spectral F-ratio plots. Once the extraneous signals were included in the model, predictions improved dramatically. While Principal Components Regression performed well for the present data sets, the analysis suggests that further work will be needed to develop more robust modeling methods as the data become more complex.
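A minimal sketch of Principal Components Regression of the general kind described, using scikit-learn on synthetic "spectra"; the channel count, number of components, and the split into calibration and prediction data are assumptions, not the report's actual setup.

```python
# Principal Components Regression sketch: PCA to compress correlated channels,
# then linear regression on the scores. Data below are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))            # synthetic "seismic spectra"
y = X[:, :3] @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, 200)

pcr = make_pipeline(PCA(n_components=5), LinearRegression())
pcr.fit(X[:150], y[:150])                 # calibrate on known conditions
pred = pcr.predict(X[150:])               # predict new data
residual = y[150:] - pred                 # large residuals flag unmodeled signals
```

As in the report, data containing signals absent from the calibration set would show up as large residuals, which is why including the extraneous signals in the model improves predictions.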
Transportation Impact Evaluation System
DOT National Transportation Integrated Search
1979-11-01
This report specifies a framework for spatial analysis and the general modelling steps required. It also suggests available urban and regional data sources, along with some typical existing urban and regional models. The goal is to develop a computer...
Using decision tree analysis to identify risk factors for relapse to smoking
Piper, Megan E.; Loh, Wei-Yin; Smith, Stevens S.; Japuntich, Sandra J.; Baker, Timothy B.
2010-01-01
This research used classification tree analysis and logistic regression models to identify risk factors related to short- and long-term abstinence. Baseline and cessation outcome data from two smoking cessation trials, conducted from 2001 to 2002, in two Midwestern urban areas, were analyzed. There were 928 participants (53.1% women, 81.8% white) with complete data. Both analyses suggest that relapse risk is produced by interactions of risk factors and that early and late cessation outcomes reflect different vulnerability factors. The results illustrate the dynamic nature of relapse risk and suggest the importance of efficient modeling of interactions in relapse prediction. PMID:20397871
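A minimal classification-tree sketch in the spirit of the analysis, on an invented toy table; the predictors and outcome below are hypothetical stand-ins for the trials' baseline and cessation variables.

```python
# Classification tree for abstinence vs. relapse; printing the tree exposes the
# kind of risk-factor interactions the study emphasizes. Data are invented.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.DataFrame({
    "cigs_per_day":  [10, 25, 30, 5, 40, 15, 20, 8],
    "dependence":    [3, 8, 9, 2, 10, 5, 6, 3],     # e.g., an FTND-like score
    "quit_attempts": [1, 0, 2, 3, 0, 1, 2, 4],
    "abstinent":     [1, 0, 0, 1, 0, 1, 0, 1],      # 1 = abstinent at follow-up
})
X, y = df.drop(columns="abstinent"), df["abstinent"]
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))  # inspect interactions
```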
Advanced Weapon System (AWS) Sensor Prediction Techniques Study. Volume II
1981-09-01
models are suggested. (Courant Computer Science Report #9, December 1975. Scene Analysis: A Survey. Carl Weiman, Courant Institute.) ...some crucial differences. In the psychological model of mechanical vision, the aim of scene analysis is to perceive and understand 2-D images of 3-D scenes. The meaning of this analogy can be clarified using a rudimentary informational model; this yields a natural hierarchy from physical
2016-12-01
“Conceptual Models,” includes a thorough analysis of Turkey’s involvement in the F-35 program, based on Allison’s Rational Actor and Organizational... TuAF, but also suggested an organizational structure similar to the U.S. DOD. In May 1949, the Turkish Parliament passed a law to reform the Turkish... organizational behavior model and a governmental politics model provide a base for improved explanations and predictions (Allison & Zelikow, 1999).
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1974-01-01
Included in the report are: (1) a review of the erythropoietic mechanisms; (2) an evaluation of existing models for the control of erythropoiesis; (3) a computer simulation of the model's response to hypoxia; (4) a hypothesis to explain observed decreases in red blood cell mass during weightlessness; (5) suggestions for further research; and (6) an assessment of the role that systems analysis can play in the Skylab hematological program.
Spanish Velar-Insertion and Analogy: A Usage-Based Diachronic Analysis
ERIC Educational Resources Information Center
Fondow, Steven Richard
2010-01-01
The theory of Analogical and Exemplar Modeling (AEM) suggests renewed discussion of the formalization of analogy and its possible incorporation in linguistic theory. AEM is a usage-based model founded upon Exemplar Modeling (Bybee 2007, Pierrehumbert 2001) that utilizes several principles of the Analogical Modeling of Language (Skousen 1992, 1995,…
Large-scale measurement and modeling of backbone Internet traffic
NASA Astrophysics Data System (ADS)
Roughan, Matthew; Gottlieb, Joel
2002-07-01
There is a brewing controversy in the traffic modeling community concerning how to model backbone traffic. The fundamental work on self-similarity in data traffic appears to be contradicted by recent findings that suggest that backbone traffic is smooth. The traffic analysis work to date has focused on high-quality but limited-scope packet trace measurements; this limits its applicability to high-speed backbone traffic. This paper uses more than one year's worth of SNMP traffic data covering an entire Tier 1 ISP backbone to address the question of how backbone network traffic should be modeled. Although the limitations of SNMP measurements do not permit us to comment on the fine timescale behavior of the traffic, careful analysis of the data suggests that irrespective of the variation at fine timescales, we can construct a simple traffic model that captures key features of the observed traffic. Furthermore, the model's parameters are measurable using existing network infrastructure, making this model practical in a present-day operational network. In addition to its practicality, the model verifies basic statistical multiplexing results, and thus provides insight into how smooth backbone traffic really is.
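A minimal sketch of a simple backbone-traffic model of the general kind described: a deterministic daily profile plus Gaussian fluctuations whose variance grows more slowly than the mean under multiplexing. The profile, peakedness, and provisioning rule below are illustrative assumptions, not the paper's fitted values.

```python
# Seasonal mean plus Gaussian fluctuation, variance proportional to mean.
import numpy as np

hours = np.arange(24 * 7)                      # one week of hourly SNMP samples
mean_profile = 500 + 200 * np.sin(2 * np.pi * (hours % 24) / 24)  # Mbit/s
peakedness = 5.0                               # assumed variance-to-mean ratio
rng = np.random.default_rng(2)
traffic = mean_profile + rng.normal(0, np.sqrt(peakedness * mean_profile))

# A link provisioned at mean + k * std overflows rarely for modest k,
# illustrating how smooth highly multiplexed traffic can be.
provisioned = mean_profile + 3 * np.sqrt(peakedness * mean_profile)
print(float(np.mean(traffic > provisioned)))   # empirical overflow fraction
```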
The role of minerals and mean annual temperature on soil carbon accumulation: A modeling analysis
NASA Astrophysics Data System (ADS)
Abramoff, R. Z.; Georgiou, K.; Tang, J.; Torn, M. S.; Riley, W. J.
2016-12-01
Soil organic carbon (SOC) is the largest actively cycling terrestrial C pool, with mean residence times that can exceed 10,000 years. There is strong evidence suggesting that SOC dynamics depend on soil temperature and C inputs to soil through net primary production (NPP), but the importance of these factors relative to SOC protection by minerals is unclear. Recent empirical studies have suggested that mineral protection explains more variation in SOC stock sizes and C respiration fluxes than does NPP or climate. Our previous modeling has demonstrated that representing the chemistry of mineral sorption in a microbially explicit model affects the temperature sensitivity of SOC dynamics. We apply this modeling framework to interpret observations of SOC stocks, mineral surface availability, mean annual temperature (MAT), and NPP collected along a 4,000 km transect in South America. We use a Random Forest machine learning algorithm and regression to analyze our model output and the empirical data. This analysis shows that mineral surface availability is the dominant control over C respiration and SOC stock, and that its effect is substantially larger than that of belowground NPP. We further show that minerals interact with MAT to determine the observed range of SOC stocks along this transect in the present day, as well as projected SOC stocks under long-term warming. Our model-data comparison suggests that soil mineralogy and MAT will explain the majority of the spatial variation in SOC stock over decadal-to-millennial timescales. We extend the analysis of these interactions using the ACME Land Model (ALM) coupled with an explicit representation of microbes, minerals, and vertical transport of solutes and gases. The model results confirm the dominant effects of minerals on organic matter decomposition throughout the soil column.
Requirements for psychological models to support design: Towards ecological task analysis
NASA Technical Reports Server (NTRS)
Kirlik, Alex
1991-01-01
Cognitive engineering is largely concerned with creating environmental designs to support skillful and effective human activity. A set of necessary conditions is proposed for psychological models capable of supporting this enterprise. An analysis of the psychological nature of the design product is used to identify a set of constraints that models must meet if they are to usefully guide design. It is concluded that cognitive engineering requires models with resources for describing the integrated human-environment system, and that these models must be capable of describing the activities underlying fluent and effective interaction. These features are required in order to be able to predict the cognitive activity that will be required given various design concepts, and to design systems that promote the acquisition of fluent, skilled behavior. These necessary conditions suggest that an ecological approach can provide valuable resources for psychological modeling to support design. Relying heavily on concepts from Brunswik's and Gibson's ecological theories, ecological task analysis is proposed as a framework in which to predict the types of cognitive activity required to achieve productive behavior, and to suggest how interfaces can be manipulated to alleviate certain types of cognitive demands. The framework is described and illustrated with an example from previous research on modeling skilled human-environment interaction.
A random effects meta-analysis model with Box-Cox transformation.
Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D
2017-07-19
In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and misspecification of the random effects distribution may result in a misleading estimate of the overall mean treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption on the random effects distribution, and propose a novel random effects meta-analysis model in which a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I2 from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model. The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining the robustness of traditional meta-analysis results against skewness in the observed treatment effect estimates. Further critical evaluation of the method is needed.
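A minimal frequentist sketch of the transform-then-pool idea: Box-Cox the observed effect estimates, fit a standard DerSimonian-Laird random-effects model on the transformed scale, and back-transform to a median and approximate interquartile range. The paper's actual estimation is Bayesian; the data and the treatment of within-study variances below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

y = np.array([0.8, 1.1, 1.9, 2.5, 3.8, 6.2])    # skewed effect estimates (>0)
v = np.full_like(y, 0.25)                        # within-study variances

yt, lam = stats.boxcox(y)                        # transformed effects, lambda
w = 1.0 / v                                      # (variances kept illustrative)
q = np.sum(w * (yt - np.average(yt, weights=w)) ** 2)
tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - (w**2).sum() / w.sum()))
w_star = 1.0 / (v + tau2)
mu_t = np.average(yt, weights=w_star)            # overall mean, transformed scale

def inv_boxcox(z, lam):
    return np.exp(z) if lam == 0 else (lam * z + 1) ** (1 / lam)

sd_t = np.sqrt(1 / w_star.sum() + tau2)
print(inv_boxcox(mu_t, lam))                                       # overall median
print(inv_boxcox(stats.norm.ppf([0.25, 0.75], mu_t, sd_t), lam))   # ~IQR
```

Because the back-transformation is monotone, the transformed-scale mean maps to a median on the original scale, which is why median and IQR are the natural summaries here.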
Structural analysis of two different stent configurations.
Simão, M; Ferreira, J M; Mora-Rodriguez, J; Ramos, H M
2017-06-01
Two different stent configurations (the well-known Palmaz-Schatz (PS) and a new stent configuration) are mechanically investigated. A finite element model was used to study the two geometries under combined loads, and a computational fluid dynamics model based on fluid-structure interaction was developed to investigate the plaque and artery wall reactions in a stented arterial segment. These models determine the stress and displacement fields of the two stents under internal pressure conditions. Results suggested that stent designs cause alterations in vascular anatomy that adversely affect arterial stress distributions within the wall, which in turn influence vessel responses such as restenosis. The hemodynamic analysis suggests that the new stent geometry gives a better biofluid-mechanical response in terms of deformation and the progressive amount of plaque growth.
Porro, Laura B; Holliday, Casey M; Anapol, Fred; Ontiveros, Lupita C; Ontiveros, Lolita T; Ross, Callum F
2011-08-01
The mechanical behavior of mammalian mandibles is well-studied, but a comprehensive biomechanical analysis (incorporating detailed muscle architecture, accurate material properties, and three-dimensional mechanical behavior) of an extant archosaur mandible has never been carried out. This makes it unclear how closely models of extant and extinct archosaur mandibles reflect reality and prevents comparisons of structure-function relationships in mammalian and archosaur mandibles. We tested hypotheses regarding the mechanical behavior of the mandible of Alligator mississippiensis by analyzing reaction forces and bending, shear, and torsional stress regimes in six models of varying complexity. Models included free body analysis using basic lever arm mechanics, 2D and 3D beam models, and three high-resolution finite element models of the Alligator mandible, incorporating, respectively, isotropic bone without sutures, anisotropic bone with sutures, and anisotropic bone with sutures and contact between the mandible and the pterygoid flange. Compared with the beam models, the Alligator finite element models exhibited less spatial variability in dorsoventral bending and sagittal shear stress, as well as lower peak values for these stresses, suggesting that Alligator mandibular morphology is in part designed to reduce these stresses during biting. However, the Alligator models exhibited greater variability in the distribution of mediolateral and torsional stresses than the beam models. Incorporating anisotropic bone material properties and sutures into the model reduced dorsoventral and torsional stresses within the mandible, but led to elevated mediolateral stresses. These mediolateral stresses were mitigated by the addition of a pterygoid-mandibular contact, suggesting important contributions from, and trade-offs between, material properties and external constraints in Alligator mandible design. Our results suggest that beam modeling does not accurately represent the mechanical behavior of the Alligator mandible, including important performance metrics such as magnitude and orientation of reaction forces, and mediolateral bending and torsional stress distributions. J. Morphol. 2011. © 2011 Wiley-Liss, Inc.
Perandini, Simone; Soardi, Gian Alberto; Motton, Massimiliano; Rossi, Arianna; Signorini, Manuel; Montemezzi, Stefania
2016-09-01
The aim of this study was to compare classification results from four major risk prediction models in a wide population of incidentally detected solitary pulmonary nodules (SPNs) which were selected to crossmatch inclusion criteria for the selected models. A total of 285 solitary pulmonary nodules with a definitive diagnosis were evaluated by means of four major risk assessment models developed from non-screening populations, namely the Mayo, Gurney, PKUPH and BIMC models. Accuracy was evaluated by receiver operating characteristic (ROC) area under the curve (AUC) analysis. Each model's fitness to provide reliable help in decision analysis was primarily assessed by adopting a surgical threshold of 65 % and an observation threshold of 5 % as suggested by ACCP guidelines. ROC AUC values, false positives, false negatives and indeterminate nodules were respectively 0.775, 3, 8, 227 (Mayo); 0.794, 41, 6, 125 (Gurney); 0.889, 42, 0, 144 (PKUPH); 0.898, 16, 0, 118 (BIMC). Resultant data suggests that the BIMC model may be of greater help than Mayo, Gurney and PKUPH models in preoperative SPN characterization when using ACCP risk thresholds because of overall better accuracy and smaller numbers of indeterminate nodules and false positive results. • The BIMC and PKUPH models offer better characterization than older prediction models • Both the PKUPH and BIMC models completely avoided false negative results • The Mayo model suffers from a large number of indeterminate results.
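A minimal sketch of the threshold-based evaluation described, assuming hypothetical malignancy probabilities from a risk model and true labels; nodules between the 5% observation and 65% surgical thresholds count as indeterminate, and overall accuracy is summarised by ROC AUC.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
truth = rng.integers(0, 2, 285)                               # 1 = malignant
prob = np.clip(truth * 0.5 + rng.uniform(0, 0.5, 285), 0, 1)  # toy model output

auc = roc_auc_score(truth, prob)
indeterminate = np.sum((prob >= 0.05) & (prob < 0.65))
false_pos = np.sum((prob >= 0.65) & (truth == 0))             # sent to surgery
false_neg = np.sum((prob < 0.05) & (truth == 1))              # sent to observation
print(auc, indeterminate, false_pos, false_neg)
```

Counting errors only outside the indeterminate band mirrors how the study weighed AUC against the number of nodules left without a firm recommendation.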
Improvements and validation of the erythropoiesis control model for bed rest simulation
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1977-01-01
The most significant improvement in the model is the explicit formulation of separate elements representing erythropoietin production and red cell production. Other modifications include bone marrow time-delays, capability to shift oxyhemoglobin affinity and an algorithm for entering experimental data as time-varying driving functions. An area of model development is suggested by applying the model to simulating onset, diagnosis and treatment of a hematologic disorder. Recommendations for further improvements in the model and suggestions for experimental application are also discussed. A detailed analysis of the hematologic response to bed rest including simulation of the recent Baylor Medical College bed rest studies is also presented.
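A highly simplified sketch of an erythropoiesis control loop of the general kind described: red cell mass sets oxygen delivery, which inversely drives erythropoietin, which in turn drives red cell production. All parameters and functional forms below are illustrative assumptions, not the model's actual formulation (which includes marrow time-delays and oxyhemoglobin affinity shifts).

```python
import numpy as np
from scipy.integrate import odeint

def erythro(state, t, hypoxia=0.0):
    epo, rbc = state
    o2 = rbc * (1.0 - hypoxia)       # oxygen delivery proxy
    depo = 1.0 / o2 - 0.5 * epo      # EPO rises as O2 delivery falls
    drbc = 0.05 * epo - 0.01 * rbc   # production minus senescence
    return [depo, drbc]

t = np.linspace(0, 200, 1000)
baseline = odeint(erythro, [2.0, 5.0], t)
hypoxic = odeint(erythro, [2.0, 5.0], t, args=(0.3,))  # simulated hypoxia step
```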
Papanastasiou, Giorgos; Williams, Michelle C; Kershaw, Lucy E; Dweck, Marc R; Alam, Shirjel; Mirsadraee, Saeed; Connell, Martin; Gray, Calum; MacGillivray, Tom; Newby, David E; Semple, Scott Ik
2015-02-17
Mathematical modeling of cardiovascular magnetic resonance perfusion data allows absolute quantification of myocardial blood flow. Saturation of left ventricle signal during standard contrast administration can compromise the input function used when applying these models. This saturation effect is evident during application of standard Fermi models in single bolus perfusion data. Dual bolus injection protocols have been suggested to eliminate saturation but are much less practical in the clinical setting. The distributed parameter model can also be used for absolute quantification but has not been applied in patients with coronary artery disease. We assessed whether distributed parameter modeling might be less dependent on arterial input function saturation than Fermi modeling in healthy volunteers. We validated the accuracy of each model in detecting reduced myocardial blood flow in stenotic vessels versus gold-standard invasive methods. Eight healthy subjects were scanned using a dual bolus cardiac perfusion protocol at 3T. We performed both single and dual bolus analysis of these data using the distributed parameter and Fermi models. For the dual bolus analysis, a scaled pre-bolus arterial input function was used. In single bolus analysis, the arterial input function was extracted from the main bolus. We also performed analysis using both models of single bolus data obtained from five patients with coronary artery disease and findings were compared against independent invasive coronary angiography and fractional flow reserve. Statistical significance was defined as two-sided P value < 0.05. Fermi models overestimated myocardial blood flow in healthy volunteers due to arterial input function saturation in single bolus analysis compared to dual bolus analysis (P < 0.05). No difference was observed in these volunteers when applying distributed parameter-myocardial blood flow between single and dual bolus analysis. In patients, distributed parameter modeling was able to detect reduced myocardial blood flow at stress (<2.5 mL/min/mL of tissue) in all 12 stenotic vessels compared to only 9 for Fermi modeling. Comparison of single bolus versus dual bolus values suggests that distributed parameter modeling is less dependent on arterial input function saturation than Fermi modeling. Distributed parameter modeling showed excellent accuracy in detecting reduced myocardial blood flow in all stenotic vessels.
Preliminary synchrotron analysis of lead in hair from a lead smelter worker
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, R.R.; Kempson, I.M.; Naftel, S.J.
2008-06-09
Synchrotron X-ray fluorescence has been used to study the distribution of lead in a hair sample collected from a lead smelter worker. A mathematical model was used to simulate the transverse scan signal based on the analysis volume and concentration profiles. The results suggest that the Pb originates both from ingestion and from environmental exposure; however, direct deposition from the environment is the more important source of hair lead. The model could apply equally to any other analysis involving a thin cylindrical sample.
Dimensional Model for Estimating Factors influencing Childhood Obesity: Path Analysis Based Modeling
Kheirollahpour, Maryam; Shohaimi, Shamarina
2014-01-01
The main objective of this study is to identify and develop a comprehensive model which estimates and evaluates the overall relations among the factors that lead to weight gain in children by using structural equation modeling. The proposed models in this study explore the connections among the socioeconomic status of the family, parental feeding practice, and physical activity. Six structural models were tested to identify the direct and indirect relationships among socioeconomic status, parental feeding practice, general level of physical activity, and weight status of children. Finally, a comprehensive model was devised to show how these factors relate to each other as well as to the body mass index (BMI) of the children simultaneously. Concerning the methodology of the current study, confirmatory factor analysis (CFA) was applied to reveal the hidden (secondary) effect of socioeconomic factors on feeding practice and ultimately on the weight status of the children and also to determine the degree of model fit. The comprehensive structural model tested in this study suggested that there are significant direct and indirect relationships among variables of interest. Moreover, the results suggest that parental feeding practice and physical activity are mediators in the structural model. PMID:25097878
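A minimal sketch of a path-style mediation check in the spirit of the study, using simple OLS chains rather than full SEM software; the variables (ses, feeding, activity, bmi) and effect sizes are hypothetical stand-ins.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300
ses = rng.normal(size=n)
feeding = 0.5 * ses + rng.normal(size=n)           # mediator 1
activity = -0.3 * ses + rng.normal(size=n)         # mediator 2
bmi = 0.4 * feeding - 0.5 * activity + 0.1 * ses + rng.normal(size=n)

m1 = sm.OLS(feeding, sm.add_constant(ses)).fit()
m2 = sm.OLS(bmi, sm.add_constant(np.column_stack([ses, feeding, activity]))).fit()
direct = m2.params[1]                              # ses -> bmi, adjusted
indirect = m1.params[1] * m2.params[2]             # ses -> feeding -> bmi
print(direct, indirect)
```

Decomposing the total effect into direct and indirect paths is the regression analogue of the "hidden (secondary) effect" the CFA is meant to reveal.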
Enhancing Consumer Choice: Are We Making Appropriate Recommendations?
ERIC Educational Resources Information Center
Lee, Jinkook; Geistfeld, Loren V.
1998-01-01
This study used conjoint analysis to identify consumer choice models. Results suggest a need to base choice-making aids on ideal choice models if the aid is to lead consumers to decisions consistent with true preferences. (Author/JOW)
Hagopian, Louis P.; Rooker, Griffin W.; Zarcone, Jennifer R.; Bonner, Andrew C.; Arevalo, Alexander R.
2017-01-01
Hagopian, Rooker, and Zarcone (2015) evaluated a model for subtyping automatically reinforced self-injurious behavior (SIB) based on its sensitivity to changes in functional analysis conditions and the presence of self-restraint. The current study tested the generality of the model by applying it to all datasets of automatically reinforced SIB published from 1982 to 2015. We identified 49 datasets that included sufficient data to permit subtyping. Similar to the original study, Subtype-1 SIB was generally amenable to treatment using reinforcement alone, whereas Subtype-2 SIB was not. Conclusions could not be drawn about Subtype-3 SIB due to the small number of datasets. Nevertheless, the findings support the generality of the model and suggest that sensitivity of SIB to disruption by alternative reinforcement is an important dimension of automatically reinforced SIB. Findings also suggest that automatically reinforced SIB should no longer be considered a single category and that additional research is needed to better understand and treat Subtype-2 SIB. PMID:28032344
ERIC Educational Resources Information Center
Bryant, Alyssa N.
2011-01-01
Based upon a national longitudinal dataset of 14,527 college students generated by the UCLA Spirituality in Higher Education Project, this study used structural equation modeling to test the applicability of a model of ecumenical worldview development for students of diverse genders, races, and worldviews. The model suggests that challenging…
A Comparative Analysis of the Research Utilization Process.
ERIC Educational Resources Information Center
Lippitt, Ronald; And Others
A suggested model for adequate dissemination of research findings considers four primary barriers to effective communication: (1) division of personnel labor into task roles, (2) institutional distinctions, (3) development of professional reference groups, and (4) geographical divisions. Suggested solutions include linking systems and roles,…
Fundamental Travel Demand Model Example
NASA Technical Reports Server (NTRS)
Hanssen, Joel
2010-01-01
Instances of transportation models are abundant and detailed "how to" instruction is available in the form of transportation software help documentation. The purpose of this paper is to look at the fundamental inputs required to build a transportation model by developing an example passenger travel demand model. The example model reduces the scale to a manageable size for the purpose of illustrating the data collection and analysis required before the first step of the model begins. This aspect of the model development would not reasonably be discussed in software help documentation (it is assumed the model developer comes prepared). Recommendations are derived from the example passenger travel demand model to suggest future work regarding the data collection and analysis required for a freight travel demand model.
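As a rough illustration of the kind of fundamental step such a model contains, the sketch below runs a singly constrained gravity-model trip distribution on an invented three-zone example; the zone totals, travel times, and impedance exponent are assumptions, not the paper's example model.

```python
import numpy as np

productions = np.array([100., 200., 150.])   # trips produced per zone
attractions = np.array([180., 120., 150.])   # trips attracted per zone
time = np.array([[5., 15., 20.],
                 [15., 5., 10.],
                 [20., 10., 5.]])            # zone-to-zone travel times
friction = time ** -2.0                      # assumed impedance function

trips = np.outer(productions, attractions) * friction
trips *= productions[:, None] / trips.sum(axis=1, keepdims=True)  # balance rows
print(trips.round(1))                        # origin-destination trip table
```

Even this toy version makes the paper's point: the zone totals and travel times must be collected and validated before the first modeling step can run.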
Guan, Ming
2017-01-01
Since 1978, rural-to-urban migrants have been the main contributors to Chinese urbanization. The purpose of this paper is to examine the effects of socioeconomic factors on their mental health, measured by the 12-item General Health Questionnaire (GHQ-12). The study sample comprised 5925 migrants obtained from the 2009 Rural-to-Urban Migrants survey (RUMiC). The relationships among the instruments were assessed by correlation analysis. A one-factor model (overall items), a two-factor model (positive vs. negative items), and a model derived from principal component analysis were tested with confirmatory factor analysis (CFA). On the basis of the three CFA models, three multiple indicators multiple causes (MIMIC) models with age, gender, marriage, ethnicity, and employment were constructed to investigate the concurrent associations between socioeconomic factors and the GHQ-12. Of the sample, only 1.94% were ethnic minorities, and the mean age was 31.63 (SD = 10.43) years. The one-factor, two-factor, and three-factor structures (i.e. semi-positive/negative/independent usefulness) all had good model fits in the CFA, in the order 2-factor > 3-factor > 1-factor, which suggests that the three models can be used to assess psychological symptoms of migrants in urban China. All MIMIC models had acceptable fit, in the order one-dimensional > two-dimensional > three-dimensional. There were weak associations of socioeconomic factors with mental health among migrants in urban China. The policy discussion suggests that improving the socioeconomic status of rural-to-urban migrants and strengthening mental health systems in urban China should be priorities.
Model invariance across genders of the Broad Autism Phenotype Questionnaire.
Broderick, Neill; Wade, Jordan L; Meyer, J Patrick; Hull, Michael; Reeve, Ronald E
2015-10-01
ASD is one of the most heritable neuropsychiatric disorders, though comprehensive genetic liability remains elusive. To facilitate genetic research, researchers employ the concept of the broad autism phenotype (BAP), a milder presentation of traits in undiagnosed relatives. Research suggests that the BAP Questionnaire (BAPQ) demonstrates psychometric properties superior to other self-report measures. To examine evidence regarding validity of the BAPQ, the current study used confirmatory factor analysis to test the assumption of model invariance across genders. Results of the current study upheld model invariance at each level of parameter constraint; however, model fit indices suggested limited goodness-of-fit between the proposed model and the sample. Exploratory analyses investigated alternate factor structure models but ultimately supported the proposed three-factor structure model.
NASA Technical Reports Server (NTRS)
Christensen, E. J.; Haines, B. J.; Mccoll, K. C.; Nerem, R. S.
1994-01-01
We have compared Global Positioning System (GPS)-based dynamic and reduced-dynamic TOPEX/Poseidon orbits over three 10-day repeat cycles of the ground-track. The results suggest that the prelaunch joint gravity model (JGM-1) introduces geographically correlated errors (GCEs) which have a strong meridional dependence. The global distribution and magnitude of these GCEs are consistent with a prelaunch covariance analysis, with estimated and predicted global rms error statistics of 2.3 and 2.4 cm rms, respectively. Repeating the analysis with the post-launch joint gravity model (JGM-2) suggests that a portion of the meridional dependence observed in JGM-1 still remains, with global rms error of 1.2 cm.
Modeling learning in brain stem and cerebellar sites responsible for VOR plasticity
NASA Technical Reports Server (NTRS)
Quinn, K. J.; Didier, A. J.; Baker, J. F.; Peterson, B. W.
1998-01-01
A simple model of vestibuloocular reflex (VOR) function was used to analyze several hypotheses currently held concerning the characteristics of VOR plasticity. The network included a direct vestibular pathway and an indirect path via the cerebellum. An optimization analysis of this model suggests that regulation of brain stem sites is critical for the proper modification of VOR gain. A more physiologically plausible learning rule was also applied to this network. Analysis of these simulation results suggests that the preferred error correction signal controlling gain modification of the VOR is the direct output of the accessory optic system (AOS) to the vestibular nuclei vs. a signal relayed through the cerebellum via floccular Purkinje cells. The potential anatomical and physiological basis for this conclusion is discussed, in relation to our current understanding of the latency of the adapted VOR response.
Shandra, John M; Nobles, Jenna; London, Bruce; Williamson, John B
2004-07-01
This study presents quantitative, sociological models designed to account for cross-national variation in infant mortality rates. We consider variables linked to four different theoretical perspectives: the economic modernization, social modernization, political modernization, and dependency perspectives. The study is based on a panel regression analysis of a sample of 59 developing countries. Our preliminary analysis based on additive models replicates prior studies to the extent that we find that indicators linked to economic and social modernization have beneficial effects on infant mortality. We also find support for hypotheses derived from the dependency perspective suggesting that multinational corporate penetration fosters higher levels of infant mortality. Subsequent analysis incorporating interaction effects suggests that the level of political democracy conditions the effects of dependency relationships based upon exports, investments from multinational corporations, and international lending institutions. Transnational economic linkages associated with exports, multinational corporations, and international lending institutions adversely affect infant mortality more strongly at lower levels of democracy than at higher levels: intranational political factors interact with international economic forces to affect infant mortality. We conclude with some brief policy recommendations and suggestions for the direction of future research.
Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.
2009-01-01
We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.
Stochastic model for gene transcription on Drosophila melanogaster embryos
NASA Astrophysics Data System (ADS)
Prata, Guilherme N.; Hornos, José Eduardo M.; Ramos, Alexandre F.
2016-02-01
We examine immunostaining experimental data for the formation of stripe 2 of even-skipped (eve) transcripts on D. melanogaster embryos. An estimate of the factor converting immunofluorescence intensity units into molecular numbers is given. The analysis of the eve dynamics at the region of stripe 2 suggests that the promoter site of the gene has two distinct regimes: an earlier phase when it is predominantly activated, until a critical time when it becomes mainly repressed. This motivates a stochastic binary model for gene transcription in D. melanogaster embryos. Our model has two random variables: the number of transcripts and the state of the source of mRNAs, given as active or repressed. We are able to reproduce available experimental data for the average number of transcripts. An analysis of the random fluctuations in the number of eve transcripts and their consequences for the spatial precision of stripe 2 is presented. We show that the position of the anterior or posterior borders fluctuates around its average position by ~1% of the embryo length, which is similar to what is found experimentally. The fitting of data by such a simple model suggests that it can be useful for understanding the functions of randomness during developmental processes.
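A minimal Gillespie-style sketch of a two-state (binary) promoter of the kind the model describes: the source switches between active and repressed, producing transcripts only while active, and transcripts decay first order. All rates below are illustrative, not the fitted values.

```python
import numpy as np

rng = np.random.default_rng(5)
k_on, k_off = 0.05, 0.02        # promoter activation / repression rates
k_syn, k_deg = 2.0, 0.05        # synthesis (active only) and decay rates

t, t_end, active, mrna = 0.0, 500.0, 1, 0
times, counts = [0.0], [0]
while t < t_end:
    rates = np.array([k_on * (1 - active), k_off * active,
                      k_syn * active, k_deg * mrna])
    total = rates.sum()
    t += rng.exponential(1 / total)            # time to next reaction
    event = rng.choice(4, p=rates / total)     # which reaction fires
    if event == 0:   active = 1
    elif event == 1: active = 0
    elif event == 2: mrna += 1
    else:            mrna -= 1
    times.append(t); counts.append(mrna)
```

Repeated runs give the transcript-number fluctuations whose spatial consequences for stripe borders the paper analyzes.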
Jorquera, Mercedes; Baños, Rosa María; Cebolla, Ausiàs; Rasal, Paloma; Etchemendy, Ernestina
2012-05-01
The purpose of the present study was to analyse the psychometric properties of the 'Questionnaire of Sociocultural Influences on the Aesthetic Body Shape Model' (CIMEC-26) in a Spanish adolescent population. This questionnaire measures the influence of agents and situations that transmit the current aesthetic model, and assesses environmental influences favouring thinness. The CIMEC-26 was administered to a sample of 4031 female primary and secondary school students ranging in age from 10 to 17 years (M = 14, SD = 1.34). Results suggested that the CIMEC-26 has acceptable internal consistency (α = .93). The oldest group (15-17 years) had the highest scores on all factors and the highest total scores, suggesting greater influence of the aesthetic body shape model and higher vulnerability to social pressure to achieve it. Factor analysis suggested three moderately interrelated components of the scale. Confirmatory factor analysis showed that both the three-factor solution and the original five-factor structure had good fit indices, although the latter showed the best fit. The CIMEC-26 proved to be an effective instrument for research on the social influence on the aesthetic body model in female adolescents. Copyright © 2012 John Wiley & Sons, Ltd and Eating Disorders Association.
McAuley, E; Duncan, T; Tammen, V V
1989-03-01
The present study was designed to assess selected psychometric properties of the Intrinsic Motivation Inventory (IMI) (Ryan, 1982), a multidimensional measure of subjects' experience with regard to experimental tasks. Subjects (N = 116) competed in a basketball free-throw shooting game, following which they completed the IMI. The LISREL VI computer program was employed to conduct a confirmatory factor analysis to assess the tenability of a five factor hierarchical model representing four first-order factors or dimensions and a second-order general factor representing intrinsic motivation. Indices of model acceptability tentatively suggest that the sport data adequately fit the hypothesized five factor hierarchical model. Alternative models were tested but did not result in significant improvements in the goodness-of-fit indices, suggesting the proposed model to be the most accurate of the models tested. Coefficient alphas for the four dimensions and the overall scale indicated adequate reliability. The results are discussed with regard to the importance of accurate assessment of psychological constructs and the use of linear structural equations in confirming the factor structures of measures.
ERIC Educational Resources Information Center
Caplan, Joel M.; Kennedy, Leslie W.; Piza, Eric L.
2013-01-01
Violent crime incidents occurring in Irvington, New Jersey, in 2007 and 2008 are used to assess the joint analytical capabilities of point pattern analysis, hotspot mapping, near-repeat analysis, and risk terrain modeling. One approach to crime analysis suggests that the best way to predict future crime occurrence is to use past behavior, such as…
Abramov, Yuriy A
2015-06-01
The main purpose of this study is to identify the major limiting factor in the accuracy of quantitative structure-property relationship (QSPR) models of the thermodynamic intrinsic aqueous solubility of drug-like compounds. To do this, the thermodynamic intrinsic aqueous solubility was treated as indirectly "measured" from the contributions of a solid-state property, ΔGfus, and a nonsolid-state property, ΔGmix, each estimated by a corresponding QSPR model. The QSPR models of the ΔGfus and ΔGmix properties were built on a set of drug-like compounds with accurate measurements of fusion and thermodynamic solubility properties available. For consistency, the ΔGfus and ΔGmix models were developed using similar algorithms and descriptor sets, and validated against similar test compounds. Analysis of the relative performances of these two QSPR models clearly demonstrates that the solid-state contribution is the limiting factor in the accuracy and predictive power of QSPR models of thermodynamic intrinsic solubility. The analysis highlights the need to develop new descriptor sets that accurately describe the long-range order (periodicity) of the crystalline state. The proposed approach to analyzing the limitations of QSPR-type models, and the suggestions for improving them, may be generalized to other applications in the pharmaceutical industry.
Attribution Theory and Crisis Intervention Therapy.
ERIC Educational Resources Information Center
Skilbeck, William M.
It was proposed that existing therapeutic procedures may influence attributions about emotional states. Therefore an attributional analysis of crisis intervention, a model of community-based, short-term consultation, was presented. This analysis suggested that crisis intervention provides attributionally-relevant information about both the source…
Chen, Hongming; Carlsson, Lars; Eriksson, Mats; Varkonyi, Peter; Norinder, Ulf; Nilsson, Ingemar
2013-06-24
A novel methodology was developed to build Free-Wilson like local QSAR models by combining R-group signatures and the SVM algorithm. Unlike Free-Wilson analysis this method is able to make predictions for compounds with R-groups not present in a training set. Eleven public data sets were chosen as test cases for comparing the performance of our new method with several other traditional modeling strategies, including Free-Wilson analysis. Our results show that the R-group signature SVM models achieve better prediction accuracy compared with Free-Wilson analysis in general. Moreover, the predictions of R-group signature models are also comparable to the models using ECFP6 fingerprints and signatures for the whole compound. Most importantly, R-group contributions to the SVM model can be obtained by calculating the gradient for R-group signatures. For most of the studied data sets, a significant correlation with that of a corresponding Free-Wilson analysis is shown. These results suggest that the R-group contribution can be used to interpret bioactivity data and highlight that the R-group signature based SVM modeling method is as interpretable as Free-Wilson analysis. Hence the signature SVM model can be a useful modeling tool for any drug discovery project.
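For context, a minimal sketch of the classical Free-Wilson baseline the paper compares against: activity is modeled as additive contributions of the R-groups at each substitution site, encoded as one-hot indicators. The R-groups and activities below are toy data, and the paper's signature-plus-SVM machinery is not reproduced.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

data = pd.DataFrame({
    "R1": ["H", "Me", "Cl", "H", "Me", "Cl"],
    "R2": ["OMe", "OMe", "OMe", "NH2", "NH2", "NH2"],
    "pIC50": [5.0, 5.6, 6.1, 5.4, 6.0, 6.5],
})
X = pd.get_dummies(data[["R1", "R2"]])             # one indicator per R-group
model = LinearRegression().fit(X, data["pIC50"])
contrib = pd.Series(model.coef_, index=X.columns)  # per-R-group contributions
print(contrib)
# Unlike the signature-based SVM, this baseline cannot score R-groups that are
# absent from the training table.
```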
Modeling the Pulse Signal by Wave-Shape Function and Analyzing by Synchrosqueezing Transform.
Wu, Hau-Tieng; Wu, Han-Kuei; Wang, Chun-Li; Yang, Yueh-Lung; Wu, Wen-Hsiang; Tsai, Tung-Hu; Chang, Hen-Hong
2016-01-01
We apply the recently developed adaptive non-harmonic model based on the wave-shape function, as well as the time-frequency analysis tool called synchrosqueezing transform (SST) to model and analyze oscillatory physiological signals. To demonstrate how the model and algorithm work, we apply them to study the pulse wave signal. By extracting features called the spectral pulse signature, and based on functional regression, we characterize the hemodynamics from the radial pulse wave signals recorded by the sphygmomanometer. Analysis results suggest the potential of the proposed signal processing approach to extract health-related hemodynamics features.
Developing guidance for budget impact analysis.
Trueman, P; Drummond, M; Hutton, J
2001-01-01
The role of economic evaluation in the efficient allocation of healthcare resources has been widely debated. Whilst economic evidence is undoubtedly useful to purchasers, it does not address the issue of affordability which is an increasing concern. Healthcare purchasers are concerned not just with maximising efficiency but also with the more simplistic goal of remaining within their annual budgets. These two objectives are not necessarily consistent. This paper examines the issue of affordability, the relationship between affordability and efficiency and builds the case for why there is a growing need for budget impact models to complement economic evaluation. Guidance currently available for such models is also examined and it is concluded that this guidance is currently insufficient. Some of these insufficiencies are addressed and some thoughts on what constitutes best practice in budget impact modelling are suggested. These suggestions include consideration of transparency, clarity of perspective, reliability of data sources, the relationship between intermediate and final end-points and rates of adoption of new therapies. They also include the impact of intervention by population subgroups or indications, reporting of results, probability of re-deploying resources, the time horizon, exploring uncertainty and sensitivity analysis, and decision-maker access to the model. Due to the nature of budget impact models, the paper does not deliver stringent methodological guidance on modelling. The intention was to provide some suggestions of best practice in addition to some foundations upon which future research can build.
Van Steen, Kristel; Curran, Desmond; Kramer, Jocelyn; Molenberghs, Geert; Van Vreckem, Ann; Bottomley, Andrew; Sylvester, Richard
2002-12-30
Clinical and quality of life (QL) variables from an EORTC clinical trial of first line chemotherapy in advanced breast cancer were used in a prognostic factor analysis of survival and response to chemotherapy. For response, different final multivariate models were obtained from forward and backward selection methods, suggesting a disconcerting instability. Quality of life was measured using the EORTC QLQ-C30 questionnaire completed by patients. Subscales on the questionnaire are known to be highly correlated, and therefore it was hypothesized that multicollinearity contributed to model instability. A correlation matrix indicated that global QL was highly correlated with 7 out of 11 variables. In a first attempt to explore multicollinearity, we used global QL as dependent variable in a regression model with other QL subscales as predictors. Afterwards, standard diagnostic tests for multicollinearity were performed. An exploratory principal components analysis and factor analysis of the QL subscales identified at most three important components and indicated that inclusion of global QL made minimal difference to the loadings on each component, suggesting that it is redundant in the model. In a second approach, we advocate a bootstrap technique to assess the stability of the models. Based on these analyses and since global QL exacerbates problems of multicollinearity, we therefore recommend that global QL be excluded from prognostic factor analyses using the QLQ-C30. The prognostic factor analysis was rerun without global QL in the model, and selected the same significant prognostic factors as before. Copyright 2002 John Wiley & Sons, Ltd.
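A minimal sketch of the kind of multicollinearity diagnostics described, assuming a hypothetical data frame of QLQ-C30-like subscale scores with a global_ql column; variance inflation factors flag predictors that are nearly linear combinations of the others.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(6)
base = rng.normal(size=(200, 3))
ql = pd.DataFrame({
    "physical": base[:, 0], "emotional": base[:, 1], "fatigue": base[:, 2],
})
ql["global_ql"] = ql.mean(axis=1) + rng.normal(0, 0.2, 200)  # highly correlated

X = sm.add_constant(ql)
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)   # a large VIF for global_ql supports dropping it from the model
```

Pairing such diagnostics with a bootstrap of the variable-selection step, as the paper advocates, separates genuine prognostic signal from selection instability.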
NASA Astrophysics Data System (ADS)
Ciocirlan, Cristina E.
The environmental economics literature consistently suggests that properly designed and implemented economic incentives are superior to command-and-control regulation in reducing pollution. Economic incentives, such as green taxes, cap-and-trade programs, tax incentives, are able to reduce pollution in a cost-effective manner, provide flexibility to industry and stimulate innovation in cleaner technologies. In the past few decades, both federal and state governments have shown increased use of economic incentives in environmental policy. Some states have embraced them in an active manner, while others have failed to do so. This research uses a three-step analysis. First, it asks why some states employ more economic incentives than others to stimulate consumption of renewable energy by the residential, commercial and industrial sectors. Second, it asks why some states employ stronger incentives than others. And third, it asks why certain states employ certain instruments, such as electricity surcharges, cap-and-trade programs, tax incentives or grants, while others do not. The first two analyses were conducted using factor analysis and multiple regression analysis, while the third analysis employed logistic regression models to analyze the data. Data for all three analyses were obtained from a combination of primary and secondary sources. To address these questions, a theory of instrument choice at the state level, which includes both internal and external determinants of policy-making, was developed and tested. The state level of analysis was chosen. States have proven to be pioneers in designing policies to address greenhouse gases (see, for instance, the recent cap-and-trade legislation passed in California). The theory was operationalized with the help of four models: needs/responsiveness, interest group influence, professionalism/capacity and innovation-and-diffusion. The needs/responsiveness model suggests that states tend to choose more and stronger economic incentives when they are more dependent on conventional sources of energy, such as coal, oil and gas or when they have the potential to produce renewable energy. The interest group influence model suggests that instrument choice is ultimately a political decision, most likely to benefit some groups more than others. The professionalism/capacity model posits that states with more professional legislatures, with legislators who make more use of policy analysis, with more capacity to generate nonpartisan policy research and with larger agencies tend to employ more and stronger instruments to stimulate renewable energy consumption and production. And last, the innovation-and-diffusion model suggests that states with a proven innovation record in climate change tend to employ more and stronger economic incentives than states without such record. Also, this model explains states' instrument choice decisions as a function of the choices made by their neighbors.
NASA Technical Reports Server (NTRS)
Waszak, Martin R.
1998-01-01
This report describes the formulation of a model of the dynamic behavior of the Benchmark Active Controls Technology (BACT) wind tunnel model for active control design and analysis applications. The model is formed by combining the equations of motion for the BACT wind tunnel model with actuator models and a model of wind tunnel turbulence. The primary focus of this report is the development of the equations of motion from first principles by using Lagrange's equations and the principle of virtual work. A numerical form of the model is generated by making use of parameters obtained from both experiment and analysis. Comparisons between experimental and analytical data obtained from the numerical model show excellent agreement and suggest that simple coefficient-based aerodynamics are sufficient to accurately characterize the aeroelastic response of the BACT wind tunnel model. The equations of motion developed herein have been used to aid in the design and analysis of a number of flutter suppression controllers that have been successfully implemented.
The influence of rear turn-signal characteristics on crash risk.
Sullivan, John M; Flannagan, Michael J
2012-02-01
The relationship between the relative risk of a rear-end collision during a turn, merge, or lane change maneuver and the characteristics of the rear turn-signal configuration was examined using crash data from seven states in the United States. Rear turn-signal characteristics (including color, optics, separation, and light source) were identified for 55 vehicle models and used in a logistic regression analysis to model the odds of a rear-end collision. Additional variables, including driver demographics (gender, age), vehicle age, and light condition, were also modeled. Risk was assessed using a contrast group of striking vehicles in similar collisions. The results suggest that the odds of being the struck vehicle were 3% to 28% lower among vehicles equipped with amber versus red turn signals. Although the analysis suggests that there may be a safety benefit associated with amber rear turn signals, it is unclear whether turn-signal color alone is responsible. The results suggest that aspects of a vehicle's rear signal characteristics may influence crash risk. Copyright © 2012 Elsevier Ltd. All rights reserved.
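A minimal sketch of the kind of case-contrast logistic regression described, on simulated data: each row is a vehicle in a rear-end crash, the outcome is struck (1) versus striking (0), and the exponentiated coefficient for an amber-signal indicator gives the odds ratio. The covariates and effect sizes are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2000
amber = rng.integers(0, 2, n)                      # 1 = amber turn signal
night = rng.integers(0, 2, n)                      # 1 = dark light condition
logit = -0.1 - 0.2 * amber + 0.3 * night           # toy "true" effects
struck = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(pd.DataFrame({"amber": amber, "night": night}))
fit = sm.Logit(struck, X).fit(disp=0)
print(np.exp(fit.params["amber"]))  # odds ratio; <1 suggests lower struck odds
```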
Rasch model based analysis of the Force Concept Inventory
NASA Astrophysics Data System (ADS)
Planinic, Maja; Ivanjek, Lana; Susac, Ana
2010-06-01
The Force Concept Inventory (FCI) is an important diagnostic instrument which is widely used in the field of physics education research. It is therefore very important to evaluate and monitor its functioning using different tools for statistical analysis. One such tool is the stochastic Rasch model, which enables construction of linear measures for persons and items from raw test scores and which can provide important insight into the structure and functioning of the test (how item difficulties are distributed within the test, how well the items fit the model, and how well the items work together to define the underlying construct). The data for the Rasch analysis come from the large-scale research conducted in 2006-07, which investigated Croatian high school students’ conceptual understanding of mechanics on a representative sample of 1676 students (age 17-18 years). The instrument used in the research was the FCI. The average FCI score for the whole sample was found to be (27.7±0.4)%, indicating that most of the students were still non-Newtonians at the end of high school, despite the fact that physics is a compulsory subject in Croatian schools. The large set of obtained data was analyzed with the Rasch measurement computer software WINSTEPS 3.66. Since the FCI is routinely used as pretest and post-test on two very different types of population (non-Newtonian and predominantly Newtonian), an additional predominantly Newtonian sample (N=141, average FCI score of 64.5%) of first-year students enrolled in an introductory physics course at the University of Zagreb was also analyzed. The Rasch model-based analysis suggests that the FCI has succeeded in defining a sufficiently unidimensional construct for each population. The analysis of fit of data to the model found no grossly misfitting items which would degrade measurement. Some items with larger misfit and items with significantly different difficulties in the two samples of students do require further examination. The analysis revealed some problems with item distribution in the FCI and suggested that the FCI may function differently in non-Newtonian and predominantly Newtonian populations. Some possible improvements of the test are suggested.
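A minimal joint-maximum-likelihood sketch of the dichotomous Rasch model, P(correct) = sigmoid(theta_person - b_item), fit by alternating gradient steps on synthetic responses. Real analyses (here, WINSTEPS) add centering conventions, fit statistics, and bias corrections that this toy omits.

```python
import numpy as np

rng = np.random.default_rng(8)
n_persons, n_items = 500, 30
theta_true = rng.normal(0, 1, n_persons)         # person abilities
b_true = rng.normal(0, 1, n_items)               # item difficulties
p = 1 / (1 + np.exp(-(theta_true[:, None] - b_true[None, :])))
X = rng.binomial(1, p)                           # 0/1 response matrix

theta = np.zeros(n_persons)
b = np.zeros(n_items)
for _ in range(500):
    e = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))  # expected scores
    theta += 0.5 * (X - e).mean(axis=1)          # persons: raw minus expected
    b -= 0.5 * (X - e).mean(axis=0)              # items: expected minus raw
    b -= b.mean()                                # identification constraint
print(np.corrcoef(b, b_true)[0, 1])              # recovery of item difficulties
```

The estimated item difficulties are the linear measures whose distribution across the test, and whose stability across non-Newtonian and Newtonian samples, the paper examines.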
Modeling the outcomes of nursing home care.
Rohrer, J E; Hogan, A J
1987-01-01
In this exploratory analysis using data on 290 patients, we use regression analysis to model patient outcomes in two Veterans Administration nursing homes. We find resource use, as measured with minutes of nursing time, to be associated with outcomes when case mix is controlled. Our results suggest that, under case-based reimbursement systems, nursing homes could increase their revenues by withholding unskilled and psychosocial care and discouraging physicians' visits. Implications for nursing home policy are discussed.
Groff, Shannon C.; Loftin, Cynthia S.; Drummond, Frank; Bushmann, Sara; McGill, Brian J.
2016-01-01
Non-native honeybees historically have been managed for crop pollination; however, recent population declines draw attention to pollination services provided by native bees. We applied the InVEST Crop Pollination model, developed to predict native bee abundance from habitat resources, in Maine's wild blueberry crop landscape. We evaluated model performance with parameters informed by four approaches: 1) expert opinion; 2) sensitivity analysis; 3) sensitivity-analysis-informed model optimization; and 4) simulated annealing (uninformed) model optimization. Uninformed optimization improved model performance by 29% compared to the expert-opinion-informed model, while sensitivity-analysis-informed optimization improved model performance by 54%. This suggests that expert opinion may not result in the best parameter values for the InVEST model. The proportion of deciduous/mixed forest within 2000 m of a blueberry field also reliably predicted native bee abundance in blueberry fields; however, the InVEST model provides an efficient tool to estimate bee abundance beyond the field perimeter.
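A minimal sketch of uninformed optimization of model parameters via simulated annealing, with scipy's dual_annealing standing in for the study's optimizer. The wrapper model_error, its parameter names, and the bounds are hypothetical; in practice the objective would run the pollination model and score it against observed bee abundance.

```python
import numpy as np
from scipy.optimize import dual_annealing

def model_error(params):
    # Placeholder objective: in practice, run the model with these parameters
    # and return, e.g., sum of squared error against observed bee abundance.
    nesting_weight, floral_weight, foraging_km = params
    return (nesting_weight - 0.6) ** 2 + (floral_weight - 0.3) ** 2 \
        + (foraging_km - 1.2) ** 2

bounds = [(0, 1), (0, 1), (0.1, 3.0)]        # plausible parameter ranges
result = dual_annealing(model_error, bounds, seed=42)
print(result.x, result.fun)                  # best parameters and error
```

Annealing explores the parameter space without expert priors, which is exactly how the uninformed optimization can escape parameter values that expert opinion would have fixed.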
Complex networks untangle competitive advantage in Australian football
NASA Astrophysics Data System (ADS)
Braham, Calum; Small, Michael
2018-05-01
We construct player-based complex network models of Australian football teams for the 2014 Australian Football League season, modelling the passes between players as weighted, directed edges. We show that analysis of network measures can give an insight into the underlying structure and strategy of Australian football teams, quantitatively distinguishing different playing styles. The relationships observed between network properties and match outcomes suggest that successful teams exhibit well-connected passing networks with the passes distributed between all 22 players as evenly as possible. Linear regression models of team scores and match margins show significant improvements in R2 and Bayesian information criterion when network measures are added to models that use conventional measures, demonstrating that network analysis measures contain useful, extra information. Several measures, particularly the mean betweenness centrality, are shown to be useful in predicting the outcomes of future matches, suggesting they measure some aspect of the intrinsic strength of teams. In addition, several local centrality measures are shown to be useful in analysing individual players' differing contributions to the team's structure.
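A sketch of the basic pipeline described above, using the networkx library; the player names and pass counts here are fabricated, and a real team network would have 22 nodes:

```python
import networkx as nx

# Hypothetical passing counts between a handful of players.
passes = [("A", "B", 12), ("B", "C", 7), ("C", "A", 9), ("B", "D", 4), ("D", "A", 3)]

G = nx.DiGraph()
G.add_weighted_edges_from(passes)  # weight = number of passes

# Betweenness centrality on the directed, weighted network; networkx treats
# weights as distances, so invert pass counts to make frequent links "short".
for u, v, d in G.edges(data=True):
    d["distance"] = 1.0 / d["weight"]
bc = nx.betweenness_centrality(G, weight="distance")
mean_bc = sum(bc.values()) / len(bc)  # team-level summary used as a predictor
print(mean_bc)
```

Inverting pass counts into distances is one common convention, so that frequently used passing links count as short paths; the mean of the resulting betweenness values is the kind of team-level summary the regression models use.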
The probability heuristics model of syllogistic reasoning.
Chater, N; Oaksford, M
1999-03-01
A probability heuristics model (PHM) for syllogistic reasoning is proposed. An informational ordering over quantified statements suggests simple probability-based heuristics for syllogistic reasoning. The most important is the "min-heuristic": choose the type of the least informative premise as the type of the conclusion. The rationality of this heuristic is confirmed by an analysis of the probabilistic validity of syllogistic reasoning which treats logical inference as a limiting case of probabilistic inference. A meta-analysis of past experiments reveals close fits with PHM. PHM also compares favorably with alternative accounts, including mental logics, mental models, and deduction as verbal reasoning. Crucially, PHM extends naturally to generalized quantifiers, such as Most and Few, which have not been characterized logically and are, consequently, beyond the scope of current mental logic and mental model theories. Two experiments confirm the novel predictions of PHM when generalized quantifiers are used in syllogistic arguments. PHM suggests that syllogistic reasoning performance may be determined by simple but rational informational strategies justified by probability theory rather than by logic. Copyright 1999 Academic Press.
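A minimal sketch of the min-heuristic, assuming the informativeness ordering over quantifiers described in the PHM literature (All > Most > Few > Some > None > Some-not); the ordering is stated here from memory and should be checked against the source:

```python
# Informativeness ordering over statement types, most to least informative
# (an assumption here, not quoted from the paper).
INFORMATIVENESS = ["All", "Most", "Few", "Some", "None", "Some-not"]

def min_heuristic(premise1_type: str, premise2_type: str) -> str:
    """Min-heuristic: the conclusion takes the type of the least
    informative premise."""
    rank = {q: i for i, q in enumerate(INFORMATIVENESS)}
    return max((premise1_type, premise2_type), key=lambda q: rank[q])

print(min_heuristic("All", "Some"))   # -> "Some"
print(min_heuristic("Most", "Few"))   # -> "Few"
```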
Szalai, M; Szirmai, A; Füge, K; Makai, A; Erdélyi, G; Prémusz, V; Bódis, J
2017-11-01
Tumour-related peer support groups (PSGs) show long-term development in quality of life and coping, and decrease distress in cancer care. To clarify channels of social support in oncologic rehabilitation by combined exercise and psychosocial therapy, individual semi-structured interviews were conducted after 1 year additional belly dance rehabilitation in a closed PSG among 51 patients with malignant tumour diagnosis in Budapest, Hungary. Interview data were transcribed and analysed using qualitative content analysis (ATLAS.ti 6 Win). Results suggest that group experience provides emotional-, practical- and informational support. We could point out specific social effects of "role model" function and extend the coping model. The group dispose all the features of effective suggestion and may be effectively applied as additional therapy for patients with malignancies. The extended coping model and the introduction of "role model" function could be useful for PSGs' efficacy assessment. © 2017 John Wiley & Sons Ltd.
Mathematics and mallard management
Cowardin, L.M.; Johnson, D.H.
1979-01-01
Waterfowl managers can effectively use simple population models to aid in making management decisions. We present a basic model of the change in population size as related to survival and recruitment. A management technique designed to increase survival of mallards (Anas platyrhynchos) by limiting harvest on the Chippewa National Forest, Minnesota, is used to illustrate the application of models in decision making. The analysis suggests that the management technique would be of limited effectiveness. In a second example, the change in mallard population in central North Dakota is related to implementing programs to create dense nesting cover with or without supplementary predator control. The analysis suggests that large tracts of land would be required to achieve a hypothetical management objective of increasing harvest by 50% while maintaining a stable population. Less land would be required if predator reduction were used in combination with cover management, but questions about the effectiveness and ecological implications of large-scale predator reduction remain unresolved. The use of models as a guide to planning research responsive to the needs of management is illustrated.
Li, Chen; Yichao, Jin; Jiaxin, Lin; Yueting, Zhang; Qin, Lu; Tonghua, Yang
2015-01-01
Reported evidence supports a role for methylenetetrahydrofolate reductase (MTHFR) in the risk of chronic myelogenous leukemia (CML). However, these reports arrived at inconclusive and even conflicting results regarding the association between two common MTHFR polymorphisms (C677T and A1298C) and CML risk. Thus, a meta-analysis was carried out to clarify a more precise association between these two polymorphisms and CML risk by updating the available publications. Pooled odds ratios (OR) with corresponding 95% confidence intervals (95% CI) and stratification analysis were used to estimate the relationship between MTHFR polymorphisms and the risk of CML under different genetic comparison models. Data from the meta-analysis showed no significant association between the MTHFR C677T polymorphism and CML risk. However, significant associations were found between MTHFR A1298C variants and CML risk under the homozygous comparison model (CC vs AA, OR=1.62, 95% CI=1.11-2.36, p=0.01) and the dominant comparison model (CC+AC vs AA, OR=1.68, 95% CI=1.17-2.43, p=0.005) in the overall population; even more pronounced effects were noticed for Asian populations in subgroup analysis under the homozygous model (CC vs AA, OR=2.00, 95% CI=1.25-3.21, p=0.004) and the dominant model (CC+AC vs AA, OR=2.49, 95% CI=1.42-4.36, p=0.001), but not in Caucasian populations. The results of this meta-analysis suggested no significant association between the MTHFR C677T polymorphism and CML risk, while an increased CML risk was noticed for 1298C variant carriers, especially in Asian populations but not in Caucasian populations, suggesting ethnic differences in the association between the MTHFR A1298C polymorphism and CML risk.
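For readers who want to reproduce this kind of pooling, a minimal fixed-effect sketch of inverse-variance weighting on the log-odds scale; the 2×2 cell counts below are fabricated, not the study data:

```python
import math

# Hypothetical 2x2 tables per study: (a, b, c, d) =
# (case-exposed, case-unexposed, control-exposed, control-unexposed).
studies = [(30, 70, 20, 80), (25, 75, 15, 85), (40, 60, 35, 65)]

num = den = 0.0
for a, b, c, d in studies:
    log_or = math.log((a * d) / (b * c))
    var = 1/a + 1/b + 1/c + 1/d        # variance of the log odds ratio
    w = 1.0 / var                      # inverse-variance weight
    num += w * log_or
    den += w

pooled = num / den
se = math.sqrt(1.0 / den)
print(f"OR = {math.exp(pooled):.2f}, "
      f"95% CI = {math.exp(pooled - 1.96*se):.2f}-{math.exp(pooled + 1.96*se):.2f}")
```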
Priestley, Tony; Chappa, Arvind K; Mould, Diane R; Upton, Richard N; Shusterman, Neil; Passik, Steven; Tormo, Vicente J; Camper, Stephen
2017-09-29
To develop a model to predict buprenorphine plasma concentrations during the transition from transdermal to buccal administration, a population pharmacokinetic model-based meta-analysis of available buprenorphine pharmacokinetic data in healthy adults, extracted as aggregate (mean) data from the published literature, was performed to explore potential conversion from transdermal to buccal buprenorphine. The time course of mean buprenorphine plasma concentrations following application of a transdermal patch or buccal film was digitized from the available literature, and a meta-model was developed using specific pharmacokinetic parameters (e.g., absorption rate, apparent clearance, and volumes of distribution) derived from analysis of pharmacokinetic data for intravenously, transdermally, and buccally administered buprenorphine. Data from six studies were included in this analysis. The final transdermal absorption model employed a zero-order input rate that was scaled to reflect a nominal patch delivery rate and time after patch application (with decline in rate over time). The transdermal absorption rate constant became zero following patch removal. Buccal absorption was a first-order process with a time lag and bioavailability term. Simulations of conversion from transdermal 20 mcg/h and 10 mcg/h to buccal administration suggest that the transition can be made rapidly (beginning 12 hours after patch removal) using the recommended buccal formulation titration increments (75-150 mcg) and schedule (every four days) described in the product labeling. Computer modeling and simulations using a meta-model built from data extracted from publications suggest that rapid and straightforward conversion from transdermal to buccal buprenorphine is feasible. © 2017 American Academy of Pain Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Soares, Sérgio R A; Bernardes, Ricardo S; Netto, Oscar de M Cordeiro
2002-01-01
Understanding the relations among sanitation infrastructure, public health, and the environment is a fundamental prerequisite for planning sanitation infrastructure in urban areas. This article thus suggests elements for developing a planning model for sanitation infrastructure. The authors performed a historical survey of environmental and public health issues related to the sector, an analysis of the conceptual frameworks involving public health and sanitation systems, and a systematization of the various effects that water supply and sanitation have on public health and the environment. Evaluation of these effects should guarantee the correct analysis of possible alternatives, address environmental and public health objectives (the main purpose of sanitation infrastructure), and provide the most reasonable indication of actions. The suggested systematization of sanitation system effects at each step of implementation is an advance, considering the association between the fundamental elements for formulating a planning model for sanitation infrastructure.
REVIEWS OF TOPICAL PROBLEMS: Nonlinear dynamics of the brain: emotion and cognition
NASA Astrophysics Data System (ADS)
Rabinovich, Mikhail I.; Muezzinoglu, M. K.
2010-07-01
Experimental investigations of neural system functioning and brain activity are standardly based on the assumption that perceptions, emotions, and cognitive functions can be understood by analyzing steady-state neural processes and static tomographic snapshots. The new approaches discussed in this review are based on the analysis of transient processes and metastable states. Transient dynamics is characterized by two basic properties, structural stability and information sensitivity. The ideas and methods that we discuss provide an explanation for the occurrence of and successive transitions between metastable states observed in experiments, and offer new approaches to behavior analysis. Models of the emotional and cognitive functions of the brain are suggested. The mathematical object that represents the observed transient brain processes in the phase space of the model is a structurally stable heteroclinic channel. The possibility of using the suggested models to construct a quantitative theory of some emotional and cognitive functions is illustrated.
An Effective Model of the Retinoic Acid Induced HL-60 Differentiation Program.
Tasseff, Ryan; Jensen, Holly A; Congleton, Johanna; Dai, David; Rogers, Katharine V; Sagar, Adithya; Bunaciu, Rodica P; Yen, Andrew; Varner, Jeffrey D
2017-10-30
In this study, we present an effective model of All-Trans Retinoic Acid (ATRA)-induced differentiation of HL-60 cells. The model describes reinforcing feedback between an ATRA-inducible signalsome complex involving many proteins, including Vav1, a guanine nucleotide exchange factor, and the activation of the mitogen-activated protein kinase (MAPK) cascade. We decomposed the effective model into three modules: a signal initiation module that sensed and transformed an ATRA signal into program activation signals; a signal integration module that controlled the expression of upstream transcription factors; and a phenotype module which encoded the expression of functional differentiation markers from the ATRA-inducible transcription factors. We identified an ensemble of effective model parameters using measurements taken from ATRA-induced HL-60 cells. Using these parameters, model analysis predicted that MAPK activation was bistable as a function of ATRA exposure. Confirmatory experiments supported ATRA-induced bistability. Additionally, the model captured intermediate and phenotypic gene expression data. Knockout analysis suggested Gfi-1 and PPARγ were critical to the ATRA-induced differentiation program. These findings, combined with other literature evidence, suggest that reinforcing feedback is central to hyperactive signaling in a diversity of cell fate programs.
Gary Bentrup
2001-01-01
Collaborative planning processes have become increasingly popular for addressing environmental planning issues, resulting in a number of conceptual models for collaboration. A model proposed by Selin and Chavez suggests that collaboration emerges from a series of antecedents and then proceeds sequentially through problem-setting, direction-setting, implementation, and...
Global climate change impacts on forests and markets
Xiaohui Tian; Brent Sohngen; John B Kim; Sara Ohrel; Jefferson Cole
2016-01-01
This paper develops an economic analysis of climate change impacts in the global forest sector. It illustrates how potential future climate change impacts can be integrated into a dynamic forestry economics model using data from a global dynamic vegetation model, the MC2 model. The results suggest that climate change will cause forest outputs (such as timber) to increase...
ERIC Educational Resources Information Center
Larson, Kathleen G.; Long, George R.; Briggs, Michael W.
2012-01-01
The mental models of both novice and advanced chemistry students were observed while the students performed a periodic table activity. The mental model framework seems to be an effective way of analyzing student behavior during learning activities. The analysis suggests that students do not recognize periodic trends through the examination of…
ERIC Educational Resources Information Center
Becher, Ayelet; Orland-Barak, Lily
2016-01-01
This study suggests an integrative qualitative methodological framework for capturing complexity in mentoring activity. Specifically, the model examines how historical developments of a discipline direct mentors' mediation of professional knowledge through the language that they use. The model integrates social activity theory and a framework of…
Bartolucci, Chiara; Lombardo, Giovanni Pietro
2017-01-01
This article examines research on hypnosis and suggestion, starting with the nineteenth-century model proposed by Enrico Morselli (1852-1929), an illustrious Italian psychiatrist and psychologist. Morselli conducted an original psychophysiological analysis of hypnosis, distancing his work from the neuropathological concept of the time and proposing a model based on a naturalistic approach to investigating mental processes. The issues investigated by Morselli, including the definition of hypnosis and the analysis of specific mental processes such as attention and memory, are reviewed in light of modern research. From the view of modern neuroscientific concepts, some problems that originated in the nineteenth century still appear to be present and pose still-open questions.
Raudies, Florian; Neumann, Heiko
2012-01-01
The analysis of motion crowds is concerned with the detection of potential hazards for individuals of the crowd. Existing methods analyze the statistics of pixel motion to classify non-dangerous or dangerous behavior, to detect outlier motions, or to estimate the mean throughput of people for an image region. We suggest a biologically inspired model for the analysis of motion crowds that extracts motion features indicative of potential dangers in crowd behavior. Our model consists of stages for motion detection, integration, and pattern detection that model functions of the primate primary visual cortex area (V1), the middle temporal area (MT), and the medial superior temporal area (MST), respectively. This model allows for the processing of motion transparency, the appearance of multiple motions in the same visual region, in addition to processing opaque motion. We suggest that motion transparency helps to identify “danger zones” in motion crowds. For instance, motion transparency occurs in small exit passages during evacuation. However, motion transparency also occurs for non-dangerous crowd behavior when people move in opposite directions organized into separate lanes. Our analysis suggests that the combination of motion transparency and a slow motion speed can be used for labeling of candidate regions that contain dangerous behavior. In addition, locally detected decelerations or negative speed gradients of motions are a precursor of danger in crowd behavior, as are globally detected motion patterns that show a contraction toward a single point. In sum, motion transparency, image speeds, motion patterns, and speed gradients extracted from visual motion in videos are important features to describe the behavioral state of a motion crowd. PMID:23300930
Christoper J. Schmitt; A. Dennis Lemly; Parley V. Winger
1993-01-01
Data from several sources were collated and analyzed by correlation, regression, and principal components analysis to define surrogate variables for use in the brook trout (Salvelinus fontinalis) habitat suitability index (HSI) model, and to evaluate the applicability of the model for assessing habitat in high elevation streams of the southern Blue Ridge Province (...
Parameter estimation and sensitivity analysis in an agent-based model of Leishmania major infection
Jones, Douglas E.; Dorman, Karin S.
2009-01-01
Computer models of disease take a systems biology approach toward understanding host-pathogen interactions. In particular, data-driven computer model calibration is the basis for inference of immunological and pathogen parameters, assessment of model validity, and comparison between alternative models of immune or pathogen behavior. In this paper, we describe the calibration and analysis of an agent-based model of Leishmania major infection. A model of macrophage loss following uptake of necrotic tissue is proposed to explain macrophage depletion following peak infection. Using Gaussian processes to approximate the computer code, we perform a sensitivity analysis to identify important parameters and to characterize their influence on the simulated infection. The analysis indicates that increasing growth rate can favor or suppress pathogen loads, depending on the infection stage and the pathogen’s ability to avoid detection. Subsequent calibration of the model against previously published biological observations suggests that L. major has a relatively slow growth rate and can replicate for an extended period of time before damaging the host cell. PMID:19837088
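A toy version of the emulator-based sensitivity step, with scikit-learn standing in for the Gaussian-process machinery and a cheap analytic function standing in for the agent-based simulator (all function names, inputs, and ranges are hypothetical):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):
    """Stand-in for an expensive agent-based run; the two inputs play the
    role of, e.g., pathogen growth rate and a detection-avoidance parameter."""
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(40, 2))   # small design of simulator runs
y = simulator(X)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(X, y)

# Crude variance-based sensitivity via the cheap surrogate: fix one input
# on a grid, average the emulator prediction over the other input, and
# measure how much that conditional mean varies.
for j in range(2):
    cond_means = []
    for v in np.linspace(0, 1, 25):
        pts = rng.uniform(0, 1, size=(200, 2))
        pts[:, j] = v
        cond_means.append(gp.predict(pts).mean())
    print(f"input {j}: variance of conditional mean = {np.var(cond_means):.4f}")
```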
Conceptual Models of Depression in Primary Care Patients: A Comparative Study
Karasz, Alison; Garcia, Nerina; Ferri, Lucia
2009-01-01
Conventional psychiatric treatment models are based on a biopsychiatric model of depression. A plausible explanation for low rates of depression treatment utilization among ethnic minorities and the poor is that members of these communities do not share the cultural assumptions underlying the biopsychiatric model. The study examined conceptual models of depression among depressed patients from various ethnic groups, focusing on the degree to which patients’ conceptual models ‘matched’ a biopsychiatric model of depression. The sample included 74 primary care patients from three ethnic groups screening positive for depression. We administered qualitative interviews assessing patients’ conceptual representations of depression. The analysis proceeded in two phases. The first phase involved a strategy called ‘quantitizing’ the qualitative data. A rating scheme was developed and applied to the data by a rater blind to study hypotheses. The data was subjected to statistical analyses. The second phase of the analysis involved the analysis of thematic data using standard qualitative techniques. Study hypotheses were largely supported. The qualitative analysis provided a detailed picture of primary care patients’ conceptual models of depression and suggested interesting directions for future research. PMID:20182550
Factor Analysis of Drawings: Application to college student models of the greenhouse effect
NASA Astrophysics Data System (ADS)
Libarkin, Julie C.; Thomas, Stephen R.; Ording, Gabriel
2015-09-01
Exploratory factor analysis was used to identify models underlying drawings of the greenhouse effect made by over 200 entering university freshmen. Initial content analysis allowed deconstruction of drawings into salient features, with grouping of these features via factor analysis. A resulting 4-factor solution explains 62% of the data variance, suggesting that 4 archetype models of the greenhouse effect dominate thinking within this population. Factor scores, indicating the extent to which each student's drawing aligned with representative models, were compared to performance on conceptual understanding and attitudes measures, demographics, and non-cognitive features of drawings. Student drawings were also compared to drawings made by scientists to ascertain the extent to which models reflect more sophisticated and accurate models. Results indicate that student and scientist drawings share some similarities, most notably the presence of some features of the most sophisticated non-scientific model held among the study population. Prior knowledge, prior attitudes, gender, and non-cognitive components are also predictive of an individual student's model. This work presents a new technique for analyzing drawings, with general implications for the use of drawings in investigating student conceptions.
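A compact sketch of the deconstruct-then-factor pipeline using scikit-learn; the binary feature matrix is simulated, and a careful treatment of binary indicators would consider tetrachoric correlations rather than this off-the-shelf Gaussian factor model:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical binary matrix: rows = drawings, columns = salient features
# identified in content analysis (sun, atmosphere layer, trapped rays, ...).
rng = np.random.default_rng(2)
features = (rng.random((200, 12)) < 0.4).astype(float)

fa = FactorAnalysis(n_components=4, random_state=0).fit(features)
loadings = fa.components_          # (4 factors x 12 features)
scores = fa.transform(features)    # per-drawing factor scores

# Each drawing can then be aligned with the archetype model (factor) on
# which it scores highest, for comparison with attitude and demographic data.
dominant_model = scores.argmax(axis=1)
print(np.bincount(dominant_model))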
Windt, Jennifer M; Noreika, Valdas
2011-12-01
In this paper, we address the different ways in which dream research can contribute to interdisciplinary consciousness research. As a second global state of consciousness aside from wakefulness, dreaming is an important contrast condition for theories of waking consciousness. However, programmatic suggestions for integrating dreaming into broader theories of consciousness, for instance by regarding dreams as a model system of standard or pathological wake states, have not yielded straightforward results. We review existing proposals for using dreaming as a model system, taking into account concerns about the concept of modeling and the adequacy and practical feasibility of dreaming as a model system. We conclude that existing modeling approaches are premature and rely on controversial background assumptions. Instead, we suggest that contrastive analysis of dreaming and wakefulness presents a more promising strategy for integrating dreaming into a broader research context and solving many of the problems involved in the modeling approach. Copyright © 2010 Elsevier Inc. All rights reserved.
Kruger, Jen; Pollard, Daniel; Basarir, Hasan; Thokala, Praveen; Cooke, Debbie; Clark, Marie; Bond, Rod; Heller, Simon; Brennan, Alan
2015-10-01
Health economic modeling has paid limited attention to the effects that patients' psychological characteristics have on the effectiveness of treatments. This case study tests 1) the feasibility of incorporating psychological prediction models of treatment response within an economic model of type 1 diabetes, 2) the potential value of providing treatment to a subgroup of patients, and 3) the cost-effectiveness of providing treatment to a subgroup of responders defined using 5 different algorithms. Multiple linear regressions were used to investigate relationships between patients' psychological characteristics and treatment effectiveness. Two psychological prediction models were integrated with a patient-level simulation model of type 1 diabetes. Expected value of individualized care analysis was undertaken. Five different algorithms were used to provide treatment to a subgroup of predicted responders. A cost-effectiveness analysis compared using the algorithms to providing treatment to all patients. The psychological prediction models had low predictive power for treatment effectiveness. Expected value of individualized care results suggested that targeting education at responders could be of value. The cost-effectiveness analysis suggested, for all 5 algorithms, that providing structured education to a subgroup of predicted responders would not be cost-effective. The psychological prediction models tested did not have sufficient predictive power to make targeting treatment cost-effective. The psychological prediction models are simple linear models of psychological behavior. Collection of data on additional covariates could potentially increase statistical power. By collecting data on psychological variables before an intervention, we can construct predictive models of treatment response to interventions. These predictive models can be incorporated into health economic models to investigate more complex service delivery and reimbursement strategies. © The Author(s) 2015.
Systemic Analysis Approaches for Air Transportation
NASA Technical Reports Server (NTRS)
Conway, Sheila
2005-01-01
Air transportation system designers have had only limited success using traditional operations research and parametric modeling approaches in their analyses of innovations. They need a systemic methodology for modeling of safety-critical infrastructure that is comprehensive, objective, and sufficiently concrete, yet simple enough to be used with reasonable investment. The methodology must also be amenable to quantitative analysis so issues of system safety and stability can be rigorously addressed. However, air transportation has proven itself an extensive, complex system whose behavior is difficult to describe, let alone predict. There is a wide range of system analysis techniques available, but some are more appropriate for certain applications than others. Specifically in the area of complex system analysis, the literature suggests that both agent-based models and network analysis techniques may be useful. This paper discusses the theoretical basis for each approach in these applications, and explores their historic and potential further use for air transportation analysis.
Functional Relationships and Regression Analysis.
ERIC Educational Resources Information Center
Preece, Peter F. W.
1978-01-01
Using a degenerate multivariate normal model for the distribution of organismic variables, the form of least-squares regression analysis required to estimate a linear functional relationship between variables is derived. It is suggested that the two conventional regression lines may be considered to describe functional, not merely statistical,…
Prediction of ball and roller bearing thermal and kinematic performance by computer analysis
NASA Technical Reports Server (NTRS)
Pirvics, J.; Kleckner, R. J.
1983-01-01
Characteristics of good computerized analysis software are suggested. These general remarks and an overview of representative software precede a more detailed discussion of load support system analysis program structure. Particular attention is directed at a recent cylindrical roller bearing analysis as an example of the available design tools. Selected software modules are then examined to reveal the detail inherent in contemporary analysis. This leads to a brief section on current design computation which seeks to suggest when and why computerized analysis is warranted. An example concludes the argument offered for such design methodology. Finally, remarks are made concerning needs for model development to address effects which are now considered to be secondary but are anticipated to emerge to primary status in the near future.
Yang, Yi-Feng
2014-02-01
This paper discusses the effects of transformational leadership on cooperative conflict resolution (management) by evaluating several alternative models related to the mediating role of job satisfaction and change commitment. Samples of data from customer service personnel in Taiwan were analyzed. Based on the bootstrap sample technique, an empirical study was carried out to yield the best fitting model. The procedure of hierarchical nested model analysis was used, incorporating the methods of bootstrapping mediation, PRODCLIN2, and structural equation modeling (SEM) comparison. The analysis suggests that leadership that promotes integration (change commitment) and provides inspiration and motivation (job satisfaction), in the proper order, creates the means for cooperative conflict resolution.
TED analysis of the Si(113) surface structure
NASA Astrophysics Data System (ADS)
Suzuki, T.; Minoda, H.; Tanishiro, Y.; Yagi, K.
1999-09-01
We carried out a TED (transmission electron diffraction) analysis of the Si(113) surface structure. The TED patterns taken at room temperature showed reflections due to the 3×2 reconstructed structure. The TED pattern indicated that a glide plane parallel to the <332> direction suggested in some models is excluded. We calculated the R-factors (reliability factors) for six surface structure models proposed previously. All structure models with energy-optimized atomic positions have large R-factors. After revision of the atomic positions, the R-factors of all the structure models decreased below 0.3, and the revised version of Dabrowski's 3×2 model has the smallest R-factor of 0.17.
Roberson-Nay, R.; Kendler, K. S.
2014-01-01
Background: Panic disorder (PD) is a heterogeneous syndrome that can present with a variety of symptom profiles that potentially reflect distinct etiologic pathways. The present study represents the most comprehensive examination of phenotypic variance in PD with and without agoraphobia for the purpose of identifying clinically relevant and etiologically meaningful subtypes. Method: Latent class (LC) and factor mixture analysis were used to examine panic symptom data ascertained from three national epidemiologic surveys [Epidemiological Catchment Area (ECA), National Comorbidity Study (NCS), National Epidemiologic Survey on Alcohol and Related Conditions (NESARC), Wave 1], a twin study [Virginia Adult Twin Study of Psychiatric and Substance Use Disorders (VATSPSUD)] and a clinical trial [Cross-National Collaborative Panic Study (CNCPS)]. Results: Factor mixture models (versus LC) generally provided better fit to panic symptom data and suggested two panic classes for the ECA, VATSPSUD and CNCPS, with one class typified by prominent respiratory symptoms. The NCS yielded two classes, but suggested both qualitative and quantitative differences. The more contemporary NESARC sample supported both a two- and a three-class model, with the three-class model suggesting two variants of respiratory panic. The NESARC's three-class model continued to provide the best fit when the model was restricted to a more severe form of PD/panic disorder with agoraphobia. Conclusions: Results from epidemiologic and clinical samples suggest two panic subtypes, with one subtype characterized by a respiratory component and a second class typified by general somatic symptoms. Results are discussed in light of their relevance to the etiopathogenesis of PD. PMID:21557895
A smart growth evaluation model based on data envelopment analysis
NASA Astrophysics Data System (ADS)
Zhang, Xiaokun; Guan, Yongyi
2018-04-01
With the rapid spread of urbanization, smart growth (SG) has attracted plenty of attention from all over the world. In this paper, through the establishment of an index system for smart growth, a data envelopment analysis (DEA) model is suggested to evaluate the SG level of the current growth situation in cities. In order to further improve the information from both radial and non-radial detection, we introduced a non-Archimedean infinitesimal to form the C²GS² control model. Finally, we evaluated the SG level in Canberra and identified a series of problems, verifying the applicability of the model and providing further information for improvement.
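For illustration, a basic input-oriented CCR DEA model (a simpler relative of the C²GS² formulation above, without the non-Archimedean infinitesimal) can be solved per city with scipy's linear programming; the indicator data below are fabricated:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical smart-growth data: columns = cities (DMUs),
# X = inputs (e.g., land consumed, infrastructure cost),
# Y = outputs (e.g., jobs created, transit ridership).
X = np.array([[4.0, 6.0, 5.0], [3.0, 2.0, 4.0]])
Y = np.array([[20.0, 25.0, 30.0], [10.0, 8.0, 12.0]])
n = X.shape[1]

for k in range(n):
    # Variables: [theta, lambda_1..lambda_n]; minimize theta subject to
    #   X @ lam <= theta * x_k   and   Y @ lam >= y_k,   lam >= 0.
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.block([[-X[:, [k]], X],
                     [np.zeros((Y.shape[0], 1)), -Y]])
    b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    print(f"DMU {k}: efficiency = {res.x[0]:.3f}")
```

A DMU with efficiency 1 sits on the frontier; as the abstract notes, the infinitesimal refinement is aimed at reflecting both radial and non-radial information in the evaluation.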
In-Depth Analysis of a Teacher's Experience Implementing Sport Education in an After-School Context
ERIC Educational Resources Information Center
Wahl-Alexander, Zachary; Schwamberger, Ben; Neels, Darren
2017-01-01
The Comprehensive School Physical Activity Program approach has been suggested to provide students with additional opportunities for physical activity (PA) outside of traditional physical education (PE). Although research suggests that this program is successful at increasing children's levels of PA, research on implementing pedagogical models to…
Cost Accounting: Problems and Research Related to Cost Definitions and Collection of Data
ERIC Educational Resources Information Center
Lyons, John M.
1978-01-01
Recent evidence suggests that traditional cost analysis may not be the most appropriate way to justify educational budgets. This article suggests that using constructed cost models to develop operating budget requests can help ensure that the distinction between legitimate information needs and managerial autonomy is maintained. (LBH)
Analysis of the NAEG model of transuranic radionuclide transport and dose
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kercher, J.R.; Anspaugh, L.R.
We analyze the model for estimating the dose from ²³⁹Pu developed for the Nevada Applied Ecology Group (NAEG) by using sensitivity analysis and uncertainty analysis. Sensitivity analysis results suggest that the air pathway is the critical pathway for the organs receiving the highest dose. Soil concentration and the factors controlling air concentration are the most important parameters. The only organ whose dose is sensitive to parameters in the ingestion pathway is the GI tract; it receives 95% of its dose via ingestion. The air pathway accounts for 100% of the dose to the lung, upper respiratory tract, and thoracic lymph nodes. Leafy vegetable ingestion accounts for 70% of the dose from the ingestion pathway regardless of organ; peeled vegetables, 20%; accidental soil ingestion, 5%; ingestion of beef liver, 4%; and beef muscle, 1%. Only a handful of model parameters control the dose for any one organ; the number of important parameters is usually less than 10. Uncertainty analysis indicates that choosing a uniform distribution for the input parameters produces a lognormal distribution of the dose. The ratio of the square root of the variance to the mean is three times greater for the doses than it is for the individual parameters. As found by the sensitivity analysis, the uncertainty analysis suggests that only a few parameters control the dose for each organ. All organs have similar distributions and variance-to-mean ratios except for the lymph nodes.
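The lognormality finding is a generic property of multiplicative pathway models with uncertain factors, and is easy to demonstrate by Monte Carlo. A sketch with a hypothetical four-factor dose chain (illustrative only, not the NAEG model itself):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

# Dose as a product of uncertain factors (soil concentration, resuspension
# factor, breathing rate, dose-conversion factor), each uniform over a range.
soil = rng.uniform(0.5, 1.5, N)
resusp = rng.uniform(1e-6, 1e-4, N)
breath = rng.uniform(15, 25, N)
dcf = rng.uniform(0.8, 1.2, N)
dose = soil * resusp * breath * dcf

def cv(x):  # ratio of the square root of the variance to the mean
    return x.std() / x.mean()

# The product of several uniform factors is approximately lognormal, and the
# dose CV exceeds the individual input CVs, echoing the reported pattern.
print("dose CV:", round(cv(dose), 2), "| input CVs:",
      [round(cv(v), 2) for v in (soil, resusp, breath, dcf)])
```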
Assessment of Managed Aquifer Recharge Site Suitability Using a GIS and Modeling.
Russo, Tess A; Fisher, Andrew T; Lockwood, Brian S
2015-01-01
We completed a two-step regional analysis of a coastal groundwater basin to (1) assess regional suitability for managed aquifer recharge (MAR), and (2) quantify the relative impact of MAR activities on groundwater levels and sea water intrusion. The first step comprised an analysis of surface and subsurface hydrologic properties and conditions, using a geographic information system (GIS). Surface and subsurface data coverages were compiled, georeferenced, reclassified, and integrated (including novel approaches for combining related datasets) to derive a spatial distribution of MAR suitability values. In the second step, results from the GIS analysis were used with a regional groundwater model to assess the hydrologic impact of potential MAR placement and operating scenarios. For the region evaluated in this study, the Pajaro Valley Groundwater Basin, California, GIS results suggest that about 7% (15 km²) of the basin may be highly suitable for MAR. Modeling suggests that simulated MAR projects placed near the coast help to reduce sea water intrusion more rapidly, but these projects also result in increased groundwater flows to the ocean. In contrast, projects placed farther inland result in more long-term reduction in sea water intrusion and less groundwater flowing to the ocean. This work shows how combined GIS analysis and modeling can assist with regional water supply planning, including evaluation of options for enhancing groundwater resources. © 2014, National Ground Water Association.
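The GIS step described above is, at its core, a weighted overlay: each layer is reclassified to a common suitability scale and the layers are combined with weights. A numpy sketch with fabricated rasters (the layer names, weights, and "highly suitable" cutoff are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)
shape = (100, 100)  # raster grid over the basin

# Fabricated input layers, already reclassified to a 0-1 suitability scale:
# infiltration capacity, depth to water table, recharge-suitable geology.
infiltration = rng.random(shape)
depth_to_wt = rng.random(shape)
geology = rng.random(shape)

# Weighted overlay: combine the layers into a MAR suitability map.
weights = {"infiltration": 0.5, "depth_to_wt": 0.3, "geology": 0.2}
suitability = (weights["infiltration"] * infiltration
               + weights["depth_to_wt"] * depth_to_wt
               + weights["geology"] * geology)

highly_suitable = suitability > np.quantile(suitability, 0.93)
print(f"{100 * highly_suitable.mean():.1f}% of cells highly suitable")
```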
ERIC Educational Resources Information Center
Hannan, Michael T.
This document is part of a series of chapters described in SO 011 759. Addressing the question of effective models to measure change and the change process, the author suggests that linear structural equation systems may be viewed as steady state outcomes of continuous-change models and have rich sociological grounding. Two interpretations of the…
Climatic impact of Amazon deforestation - a mechanistic model study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ning Zeng; Dickinson, R.E.; Xubin Zeng
1996-04-01
Recent general circulation model (GCM) experiments suggest a drastic change in the regional climate, especially the hydrological cycle, after hypothesized Amazon basinwide deforestation. To facilitate the theoretical understanding of such a change, we develop an intermediate-level model for tropical climatology, including atmosphere-land-ocean interaction. The model consists of linearized steady-state primitive equations with simplified thermodynamics. A simple hydrological cycle is also included. Special attention has been paid to land-surface processes. It generally simulates tropical climatology and the ENSO anomaly better than many of the previous simple models. The climatic impact of Amazon deforestation is studied in the context of this model. Model results show a much weakened Atlantic Walker-Hadley circulation as a result of the existence of a strong positive feedback loop in the atmospheric circulation system and the hydrological cycle. The regional climate is highly sensitive to albedo change and sensitive to evapotranspiration change. The pure dynamical effect of surface roughness length on convergence is small, but the surface flow anomaly displays intriguing features. Analysis of the thermodynamic equation reveals that the balance between convective heating, adiabatic cooling, and radiation largely determines the deforestation response. Studies of the consequences of hypothetical continuous deforestation suggest that the replacement of forest by desert may be able to sustain a dry climate. Scaling analysis motivated by our modeling efforts also helps to interpret the common results of many GCM simulations. When a simple mixed-layer ocean model is coupled with the atmospheric model, the results suggest a 1°C decrease in SST gradient across the equatorial Atlantic Ocean in response to Amazon deforestation. The magnitude depends on the coupling strength.
Sensitivity analysis in a Lassa fever deterministic mathematical model
NASA Astrophysics Data System (ADS)
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
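Such parameter rankings are commonly computed as normalized forward sensitivity indices, Υp = (∂R0/∂p)·(p/R0). A numerical sketch with a hypothetical R0 expression (the paper's actual R0 would be derived from its five-compartment model, and the parameter names below are placeholders):

```python
def R0(params):
    """Hypothetical basic reproduction number for illustration only."""
    return (params["beta"] * params["Lambda"]) / (
        params["mu"] * (params["mu"] + params["gamma"]))

def sensitivity_index(params, name, h=1e-6):
    """Normalized forward sensitivity index (dR0/dp) * (p / R0),
    estimated by central finite differences."""
    up, down = dict(params), dict(params)
    up[name] *= 1 + h
    down[name] *= 1 - h
    dR0_dp = (R0(up) - R0(down)) / (2 * h * params[name])
    return dR0_dp * params[name] / R0(params)

p = {"beta": 0.3, "Lambda": 0.05, "mu": 0.02, "gamma": 0.1}
for name in p:  # index of +1 means a 1% parameter rise lifts R0 by 1%
    print(name, round(sensitivity_index(p, name), 3))
```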
A Review of Recent Aeroelastic Analysis Methods for Propulsion at NASA Lewis Research Center
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Bakhle, Milind A.; Srivastava, R.; Mehmed, Oral; Stefko, George L.
1993-01-01
This report reviews aeroelastic analyses for propulsion components (propfans, compressors and turbines) being developed and used at NASA LeRC. These aeroelastic analyses include both structural and aerodynamic models. The structural models include a typical section, a beam (with and without disk flexibility), and a finite-element blade model (with plate bending elements). The aerodynamic models are based on the solution of equations ranging from the two-dimensional linear potential equation to the three-dimensional Euler equations for multibladed configurations. Typical calculated results are presented for each aeroelastic model. Suggestions for further research are made. Many of the currently available aeroelastic models and analysis methods are being incorporated in a unified computer program, APPLE (Aeroelasticity Program for Propulsion at LEwis).
Lee, Yeonok; Wu, Hulin
2012-01-01
Differential equation models are widely used for the study of natural phenomena in many fields. The study usually involves unknown factors such as initial conditions and/or parameters. It is important to investigate the impact of unknown factors (parameters and initial conditions) on model outputs in order to better understand the system the model represents. Apportioning the uncertainty (variation) of output variables of a model according to the input factors is referred to as sensitivity analysis. In this paper, we focus on the global sensitivity analysis of ordinary differential equation (ODE) models over a time period using the multivariate adaptive regression spline (MARS) as a meta-model, based on the concept of the variance of conditional expectation (VCE). We suggest evaluating the VCE analytically using the MARS model structure of univariate tensor-product functions, which is more computationally efficient. Our simulation studies show that the MARS model approach performs very well and helps to significantly reduce the computational cost. We present an application example of sensitivity analysis of ODE models for influenza infection to further illustrate the usefulness of the proposed method.
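The VCE at the heart of this method can also be approximated by brute force, binning Monte Carlo samples on one input at a time; dividing by the total output variance yields a first-order Sobol-type index. A numpy-only sketch on a toy function (the paper's contribution is replacing exactly this expensive step with an analytic evaluation over the fitted MARS structure):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
X = rng.uniform(0, 1, size=(n, 3))                 # three uncertain inputs
Y = np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1]    # toy model output

def vce(x, y, bins=50):
    """Variance of E[Y | X_j], estimated by binning samples on x."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var()

for j in range(3):  # first-order index = VCE / total variance
    print(f"S{j+1} ~ {vce(X[:, j], Y) / Y.var():.3f}")
```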
A Study of Pupil Control Ideology: A Person-Oriented Approach to Data Analysis
ERIC Educational Resources Information Center
Adwere-Boamah, Joseph
2010-01-01
Responses of urban school teachers to the Pupil Control Ideology questionnaire were studied using Latent Class Analysis. The results of the analysis suggest that the best fitting model to the data is a two-cluster solution. In particular, the pupil control ideology of the sample delineates into two clusters of teachers, those with humanistic and…
Word Fluency: A Task Analysis.
ERIC Educational Resources Information Center
Laine, Matti
It is suggested that models of human problem solving are useful in the analysis of word fluency (WF) test performance. In problem-solving terms, WF tasks would require the subject to define and clarify the conditions of the task (task acquisition), select and employ appropriate strategies, and monitor one's performance. In modern neuropsychology,…
The Relations Among Inhibition and Interference Control Functions: A Latent-Variable Analysis
ERIC Educational Resources Information Center
Friedman, Naomi P.; Miyake, Akira
2004-01-01
This study used data from 220 adults to examine the relations among 3 inhibition-related functions. Confirmatory factor analysis suggested that Prepotent Response Inhibition and Resistance to Distractor Interference were closely related, but both were unrelated to Resistance to Proactive Interference. Structural equation modeling, which combined…
Hurtado Rúa, Sandra M; Mazumdar, Madhu; Strawderman, Robert L
2015-12-30
Bayesian meta-analysis is an increasingly important component of clinical research, with multivariate meta-analysis a promising tool for studies with multiple endpoints. Model assumptions, including the choice of priors, are crucial aspects of multivariate Bayesian meta-analysis (MBMA) models. In a given model, two different prior distributions can lead to different inferences about a particular parameter. A simulation study was performed in which the impact of families of prior distributions for the covariance matrix of a multivariate normal random effects MBMA model was analyzed. Inferences about effect sizes were not particularly sensitive to prior choice, but the related covariance estimates were. A few families of prior distributions with small relative biases, tight mean squared errors, and close to nominal coverage for the effect size estimates were identified. Our results demonstrate the need for sensitivity analysis and suggest some guidelines for choosing prior distributions in this class of problems. The MBMA models proposed here are illustrated in a small meta-analysis example from the periodontal field and a medium meta-analysis from the study of stroke. Copyright © 2015 John Wiley & Sons, Ltd.
Bullying Prevention and the Parent Involvement Model
ERIC Educational Resources Information Center
Kolbert, Jered B.; Schultz, Danielle; Crothers, Laura M.
2014-01-01
A recent meta-analysis of bullying prevention programs provides support for social-ecological theory, in which parent involvement addressing child bullying behaviors is seen as important in preventing school-based bullying. The purpose of this manuscript is to suggest how Epstein and colleagues' parent involvement model can be used as a…
Professional Learning: A Fuzzy Logic-Based Modelling Approach
ERIC Educational Resources Information Center
Gravani, M. N.; Hadjileontiadou, S. J.; Nikolaidou, G. N.; Hadjileontiadis, L. J.
2007-01-01
Studies have suggested that professional learning is influenced by two key parameters, i.e., climate and planning, and their associated variables (mutual respect, collaboration, mutual trust, supportiveness, openness). In this paper, we applied analysis of the relationships between the proposed quantitative, fuzzy logic-based model and a series of…
The Effect of Urban Life on Traditional Values
ERIC Educational Resources Information Center
Fischer, Claude S.
1975-01-01
Three models are elaborated that predict an association between urbanism and nontraditional behavior. Secondary analysis of American survey data on religiosity, church attendance, and attitudes toward alcohol and birth control confirm the general urbanism-deviance association and suggest the accuracy of the model which regards such behavior as due…
Inferential ecosystem models, from network data to prediction
James S. Clark; Pankaj Agarwal; David M. Bell; Paul G. Flikkema; Alan Gelfand; Xuanlong Nguyen; Eric Ward; Jun Yang
2011-01-01
Recent developments suggest that predictive modeling could begin to play a larger role not only for data analysis, but also for data collection. We address the example of efficient wireless sensor networks, where inferential ecosystem models can be used to weigh the value of an observation against the cost of data collection. Transmission costs make observations…
PESTEL Model Analysis and Legal Guarantee of Tourism Environmental Protection in China
NASA Astrophysics Data System (ADS)
Zhiyong, Xian
2017-08-01
On the basis of summarizing the general situation of tourism environmental protection in China, this paper analyses the macro-level factors affecting tourism environmental protection using the PESTEL model. On this basis, it explores paths for improving tourism environmental protection grounded in the PESTEL analysis. Finally, it puts forward suggestions for the legal safeguarding of tourism environmental protection.
Longitudinal Factor Structure of Posttraumatic Stress Symptoms Related to Intimate Partner Violence
ERIC Educational Resources Information Center
Krause, Elizabeth D.; Kaltman, Stacey; Goodman, Lisa A.; Dutton, Mary Ann
2007-01-01
Confirmatory factor analysis (CFA) studies have suggested that a model of posttraumatic stress disorder (PTSD) that is characterized by 4 factors is preferable to competing models. However, the composition of these 4 factors has varied across studies, with 1 model splitting avoidance and numbing symptoms (e.g., D. W. King, G. A. Leskin, L. A.…
Ford, W; King, K; Williams, M; Williams, J; Fausey, N
2015-07-01
Numerical modeling is an economical and feasible approach for quantifying the effects of best management practices on dissolved reactive phosphorus (DRP) loadings from agricultural fields. However, tools that simulate both surface and subsurface DRP pathways are limited and have not been robustly evaluated in tile-drained landscapes. The objectives of this study were to test the ability of the Agricultural Policy/Environmental eXtender (APEX), a widely used field-scale model, to simulate surface and tile P loadings over management, hydrologic, biologic, tile, and soil gradients and to better understand the behavior of P delivery at the edge-of-field in tile-drained midwestern landscapes. To do this, a global, variance-based sensitivity analysis was performed, and model outputs were compared with measured P loads obtained from 14 surface and subsurface edge-of-field sites across central and northwestern Ohio. Results of the sensitivity analysis showed that response variables for DRP were highly sensitive to coupled interactions between presumed important parameters, suggesting nonlinearity of DRP delivery at the edge-of-field. Comparison of model results to edge-of-field data showcased the ability of APEX to simulate surface and subsurface runoff and the associated DRP loading at monthly to annual timescales; however, some high DRP concentrations and fluxes were not reflected in the model, suggesting the presence of preferential flow. Results from this study provide new insights into baseline tile DRP loadings that exceed thresholds for algal proliferation. Further, negative feedbacks between surface and subsurface DRP delivery suggest caution is needed when implementing DRP-based best management practices designed for a specific flow pathway. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
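Variance-based global sensitivity analyses of this kind are commonly run with the SALib library: generate a Saltelli sample over the parameter space, evaluate the model at each point, and decompose the output variance, with the second-order indices exposing the coupled interactions noted above. A sketch in which `run_apex` is a placeholder for a real APEX evaluation and the parameter names and bounds are invented:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["p_sorption", "tile_conductivity", "curve_number"],  # placeholders
    "bounds": [[0.1, 1.0], [1.0, 50.0], [60.0, 95.0]],
}

def run_apex(params):
    """Placeholder returning a DRP load; a real study would invoke the
    APEX model executable here."""
    return params[0] * np.log(params[1]) + 0.01 * params[2] * params[0]

X = saltelli.sample(problem, 1024)          # Saltelli sampling design
Y = np.array([run_apex(row) for row in X])
Si = sobol.analyze(problem, Y)
print(Si["S1"])   # first-order indices
print(Si["S2"])   # second-order (interaction) indices
print(Si["ST"])   # total-order indices
```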
Beyond factor analysis: Multidimensionality and the Parkinson's Disease Sleep Scale-Revised.
Pushpanathan, Maria E; Loftus, Andrea M; Gasson, Natalie; Thomas, Meghan G; Timms, Caitlin F; Olaithe, Michelle; Bucks, Romola S
2018-01-01
Many studies have sought to describe the relationship between sleep disturbance and cognition in Parkinson's disease (PD). The Parkinson's Disease Sleep Scale (PDSS) and its variants (the Parkinson's Disease Sleep Scale-Revised, PDSS-R, and the Parkinson's Disease Sleep Scale-2, PDSS-2) quantify a range of symptoms impacting sleep in only 15 items. However, data from these scales may be problematic, as included items have considerable conceptual breadth and there may be overlap in the constructs assessed. Multidimensional measurement models, accounting for the tendency for items to measure multiple constructs, may be useful for modelling variance more accurately than traditional confirmatory factor analysis. In the present study, we tested the hypothesis that a multidimensional model (a bifactor model) is more appropriate than traditional factor analysis for data generated by these types of scales, using data collected using the PDSS-R as an exemplar. 166 participants diagnosed with idiopathic PD participated in this study. Using PDSS-R data, we compared three models: a unidimensional model; a 3-factor model consisting of sub-factors measuring insomnia, motor symptoms and obstructive sleep apnoea (OSA) and REM sleep behaviour disorder (RBD) symptoms; and a confirmatory bifactor model with both a general factor and the same three sub-factors. Only the confirmatory bifactor model achieved satisfactory model fit, suggesting that PDSS-R data are multidimensional. There were differential associations between factor scores and patient characteristics, suggesting that some PDSS-R items, but not others, are influenced by mood and personality in addition to sleep symptoms. Multidimensional measurement models may also be a helpful tool for the PDSS and PDSS-2 scales and may improve the sensitivity of these instruments.
Wilkin, John L.; Rosenfeld, Leslie; Allen, Arthur; Baltes, Rebecca; Baptista, Antonio; He, Ruoying; Hogan, Patrick; Kurapov, Alexander; Mehra, Avichal; Quintrell, Josie; Schwab, David; Signell, Richard; Smith, Jane
2017-01-01
This paper outlines strategies that would advance coastal ocean modelling, analysis and prediction as a complement to the observing and data management activities of the coastal components of the US Integrated Ocean Observing System (IOOS®) and the Global Ocean Observing System (GOOS). The views presented are the consensus of a group of US-based researchers with a cross-section of coastal oceanography and ocean modelling expertise and community representation drawn from Regional and US Federal partners in IOOS. Priorities for research and development are suggested that would enhance the value of IOOS observations through model-based synthesis, deliver better model-based information products, and assist the design, evaluation, and operation of the observing system itself. The proposed priorities are: model coupling, data assimilation, nearshore processes, cyberinfrastructure and model skill assessment, modelling for observing system design, evaluation and operation, ensemble prediction, and fast predictors. Approaches are suggested to accomplish substantial progress in a 3–8-year timeframe. In addition, the group proposes steps to promote collaboration between research and operations groups in Regional Associations, US Federal Agencies, and the international ocean research community in general that would foster coordination on scientific and technical issues, and strengthen federal–academic partnerships benefiting IOOS stakeholders and end users.
Surface Winds and Dust Biases in Climate Models
NASA Astrophysics Data System (ADS)
Evan, A. T.
2018-01-01
An analysis of North African dust from models participating in the Fifth Climate Models Intercomparison Project (CMIP5) suggested that, when forced by observed sea surface temperatures, these models were unable to reproduce any aspects of the observed year-to-year variability in dust from North Africa. Consequently, there would be little reason to have confidence in the models' projections of changes in dust over the 21st century. However, no subsequent study has elucidated the root causes of the disagreement between CMIP5 and observed dust. Here I develop an idealized model of dust emission and then use this model to show that, over North Africa, such biases in CMIP5 models are due to errors in the surface wind fields and not due to the representation of dust emission processes. These results also suggest that because the surface wind field over North Africa is highly spatially autocorrelated, intermodel differences in the spatial structure of dust emission have little effect on the relative change in year-to-year dust emission over the continent. I use these results to show that similar biases in North African dust in the NASA Modern-Era Retrospective analysis for Research and Applications (MERRA) version 2 arise from surface wind field biases, but that these wind biases were not present in the first version of MERRA.
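Idealized emission models of the kind used here typically make dust flux a cubic function of wind speed above a threshold, which is why modest surface wind biases translate into large dust biases. An illustrative sketch (the functional form, coefficient, and threshold are generic, not the paper's exact model):

```python
import numpy as np

def dust_flux(u, u_t=6.0, C=1.0):
    """Idealized emission: F = C * u^2 * (u - u_t) for u > u_t, else 0.
    The cubic wind dependence amplifies any wind-speed bias."""
    return np.where(u > u_t, C * u**2 * (u - u_t), 0.0)

winds = np.array([5.0, 6.5, 7.0, 8.0])   # surface wind speeds (m/s)
biased = winds * 1.10                     # a 10% high wind bias
rel_err = (dust_flux(biased).sum() - dust_flux(winds).sum()) / dust_flux(winds).sum()
print(f"10% wind bias -> {100 * rel_err:.0f}% emission bias")
```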
Optimal control analysis of Ebola disease with control strategies of quarantine and vaccination.
Ahmad, Muhammad Dure; Usman, Muhammad; Khan, Adnan; Imran, Mudassar
2016-07-13
The 2014 Ebola epidemic is the largest in history, affecting multiple countries in West Africa. Some isolated cases were also observed in other regions of the world. In this paper, we introduce a deterministic SEIR-type model with additional hospitalization, quarantine and vaccination components in order to understand the disease dynamics. Optimal control strategies, both for hospitalization (with and without quarantine) and for vaccination, are used to predict the possible future outcome in terms of resource utilization for disease control and the effectiveness of vaccination on sick populations. Further, with the help of uncertainty and sensitivity analysis, we identified the parameters that most effectively change the disease dynamics. We applied dynamical systems tools, numerical simulations and optimal control strategies to our Ebola virus models. The original model, which allowed transmission of Ebola virus via human contact, was extended to include imperfect vaccination and quarantine. After the qualitative analysis of all three forms of the Ebola model, numerical techniques, using MATLAB as a platform, were formulated and analyzed in detail. Our simulation results support the claims made in the qualitative section. Our model incorporates an important component of individuals with a high level of exposure to the disease, such as front-line health care workers, family members of EVD patients and individuals involved in the burial of deceased EVD patients, rather than the general population in the affected areas. Our analysis suggests that in order for R0 (i.e., the basic reproduction number) to be less than one, which is the basic requirement for disease elimination, the transmission rate of isolated individuals should be less than one-fourth of that for non-isolated ones. Our analysis also predicts that we need high levels of medication and hospitalization at the beginning of an epidemic. Further, optimal control analysis of the model suggests control strategies that may be adopted by public health authorities in order to reduce the impact of epidemics like Ebola.
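A minimal sketch, not the authors' exact system, of an SEIR model extended with an isolated class, integrated with SciPy; all rates are illustrative, and the isolated-class transmission rate is set below one-fourth of the free transmission rate to echo the threshold quoted above.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, beta_j = 0.30, 0.06             # transmission: free vs. isolated (0.06 < 0.30/4)
sigma, gamma, q = 1/9.4, 1/7.0, 0.2   # incubation, recovery, isolation rates

def rhs(t, y):
    s, e, i, j, r = y                 # susceptible, exposed, infectious, isolated, removed
    new_inf = (beta * i + beta_j * j) * s
    return [-new_inf,
            new_inf - sigma * e,
            sigma * e - (gamma + q) * i,
            q * i - gamma * j,
            gamma * (i + j)]

sol = solve_ivp(rhs, (0, 300), [0.999, 0.0, 0.001, 0.0, 0.0], max_step=1.0)
print("peak infectious fraction: %.4f" % sol.y[2].max())
```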
TWave: High-Order Analysis of Functional MRI
Barnathan, Michael; Megalooikonomou, Vasileios; Faloutsos, Christos; Faro, Scott; Mohamed, Feroze B.
2011-01-01
The traditional approach to functional image analysis models images as matrices of raw voxel intensity values. Although such a representation is widely utilized and heavily entrenched both within neuroimaging and in the wider data mining community, the strong interactions among space, time, and categorical modes such as subject and experimental task inherent in functional imaging yield a dataset with “high-order” structure, which matrix models are incapable of exploiting. Reasoning across all of these modes of data concurrently requires a high-order model capable of representing relationships between all modes of the data in tandem. We thus propose to model functional MRI data using tensors, which are high-order generalizations of matrices equivalent to multidimensional arrays or data cubes. However, several unique challenges exist in the high-order analysis of functional medical data: naïve tensor models are incapable of exploiting spatiotemporal locality patterns, standard tensor analysis techniques exhibit poor efficiency, and mixtures of numeric and categorical modes of data are very often present in neuroimaging experiments. Formulating the problem of image clustering as a form of Latent Semantic Analysis and using the WaveCluster algorithm as a baseline, we propose a comprehensive hybrid tensor and wavelet framework for clustering, concept discovery, and compression of functional medical images which successfully addresses these challenges. Our approach reduced runtime and dataset size on a 9.3 GB finger opposition motor task fMRI dataset by up to 98% while exhibiting improved spatiotemporal coherence relative to standard tensor, wavelet, and voxel-based approaches. Our clustering technique was capable of automatically differentiating between the frontal areas of the brain responsible for task-related habituation and the motor regions responsible for executing the motor task, in contrast to a widely used fMRI analysis program, SPM, which only detected the latter region. Furthermore, our approach discovered latent concepts suggestive of subject handedness nearly 100x faster than standard approaches. These results suggest that a high-order model is an integral component of accurate, scalable functional neuroimaging. PMID:21729758
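A hedged sketch of the core high-order step, a rank-5 PARAFAC (CP) decomposition of a voxel x time x subject array using the TensorLy library; the data are a random stand-in, and the wavelet preprocessing and clustering of the hybrid framework are omitted.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
fmri = tl.tensor(rng.standard_normal((500, 120, 12)))   # voxels x time x subjects

weights, factors = parafac(fmri, rank=5, n_iter_max=100)
space, time, subject = factors          # one factor matrix per mode
print(space.shape, time.shape, subject.shape)           # (500, 5) (120, 5) (12, 5)
```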
NoSQL Based 3D City Model Management System
NASA Astrophysics Data System (ADS)
Mao, B.; Harrie, L.; Cao, J.; Wu, Z.; Shen, J.
2014-04-01
To manage increasingly complicated 3D city models, a framework based on NoSQL database is proposed in this paper. The framework supports import and export of 3D city model according to international standards such as CityGML, KML/COLLADA and X3D. We also suggest and implement 3D model analysis and visualization in the framework. For city model analysis, 3D geometry data and semantic information (such as name, height, area, price and so on) are stored and processed separately. We use a Map-Reduce method to deal with the 3D geometry data since it is more complex, while the semantic analysis is mainly based on database query operation. For visualization, a multiple 3D city representation structure CityTree is implemented within the framework to support dynamic LODs based on user viewpoint. Also, the proposed framework is easily extensible and supports geoindexes to speed up the querying. Our experimental results show that the proposed 3D city management system can efficiently fulfil the analysis and visualization requirements.
Using partial site aggregation to reduce bias in random utility travel cost models
NASA Astrophysics Data System (ADS)
Lupi, Frank; Feather, Peter M.
1998-12-01
We propose a "partial aggregation" strategy for defining the recreation sites that enter choice sets in random utility models. Under the proposal, the most popular sites and sites that will be the subject of policy analysis enter choice sets as individual sites, while the remaining sites are aggregated into groups of similar sites. The scheme balances the desire to include all potential substitute sites in the choice sets against practical data and modeling constraints. Our analysis and empirical applications suggest that, unlike fully aggregate models, the partial aggregation approach reasonably approximates the results of a disaggregate model. The partial aggregation approach offers all of the data and computational advantages of models with aggregate sites but does not suffer from the same degree of bias as fully aggregate models.
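A minimal numpy sketch of the idea under illustrative numbers: popular and policy-relevant sites enter a multinomial logit individually, while each aggregate of M similar sites enters with its mean utility plus a log-size correction ln(M), a standard device for aggregated alternatives.

```python
import numpy as np

v_individual = np.array([1.2, 0.8, 0.5])   # popular / policy-relevant sites
v_group_mean = np.array([0.3, 0.1])        # mean utility within each aggregate
group_sizes = np.array([12, 25])           # number of sites per aggregate

# aggregate alternatives get a ln(M) size correction
v = np.concatenate([v_individual, v_group_mean + np.log(group_sizes)])
p = np.exp(v) / np.exp(v).sum()            # choice probabilities
print(p.round(3))
```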
Stability analysis of free piston Stirling engines
NASA Astrophysics Data System (ADS)
Bégot, Sylvie; Layes, Guillaume; Lanzetta, François; Nika, Philippe
2013-03-01
This paper presents a stability analysis of a free piston Stirling engine. The model and the detailed calculation of pressure losses are presented. Stability of the machine is studied by observing the eigenvalues of the model matrix. Model validation, based on comparison with NASA experimental results, is described. The influence of operational and construction parameters on performance and stability is discussed. The results show that most parameters that are beneficial for machine power seem to induce irregular mechanical characteristics with load, suggesting that self-sustained oscillations could be difficult to maintain and control.
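A hedged sketch of the stability test described: write the linearized piston/displacer dynamics as x' = Ax and inspect the eigenvalues of A; the matrix below is an arbitrary illustration, not the authors' engine model.

```python
import numpy as np

# state: [piston pos, piston vel, displacer pos, displacer vel] (illustrative)
A = np.array([[0.0,    1.0,   0.0,   0.0],
              [-120.0, -0.8,  35.0,  0.0],
              [0.0,    0.0,   0.0,   1.0],
              [40.0,   0.0,  -95.0, -0.5]])

eig = np.linalg.eigvals(A)
print(eig.round(3))
# A self-sustained oscillation needs a near-marginal complex pair:
# Re(lambda) < 0 decays, Re(lambda) > 0 grows without bound.
print("max real part:", eig.real.max().round(3))
```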
Vasil'ev, G F
2013-01-01
Owing to methodological shortcomings, control theory has yet to realize its potential for the analysis of biological systems. To obtain the full benefit of the method, a parametric model of control is proposed for use in addition to the algorithmic model of control (to date the only model used in control theory); the reasoning behind this is explained. The suggested approach makes it possible to bring the full potential of modern control theory to bear on the analysis of biological systems. The cybernetic approach is illustrated using the system governing the rise of glucose concentration in the blood as an example.
Water pollution and income relationships: A seemingly unrelated partially linear analysis
NASA Astrophysics Data System (ADS)
Pandit, Mahesh; Paudel, Krishna P.
2016-10-01
We used a seemingly unrelated partially linear model (SUPLM) to address potential correlation between pollutants (nitrogen, phosphorus, dissolved oxygen and mercury) in an environmental Kuznets curve study. Simulation studies show that the SUPLM performs well in addressing potential correlation among pollutants. We find that the relationship between income and pollution follows an inverted U-shaped curve for nitrogen and dissolved oxygen and a cubic-shaped curve for mercury. Model specification tests suggest that the SUPLM is better specified than a parametric model for studying the income-pollution relationship. Results suggest a need to continually assess the policy effectiveness of pollution reduction as income increases.
NASA Technical Reports Server (NTRS)
Waszak, Martin R.
1996-01-01
This paper describes the formulation of a model of the dynamic behavior of the Benchmark Active Controls Technology (BACT) wind-tunnel model for application to design and analysis of flutter suppression controllers. The model is formed by combining the equations of motion for the BACT wind-tunnel model with actuator models and a model of wind-tunnel turbulence. The primary focus of this paper is the development of the equations of motion from first principles using Lagrange's equations and the principle of virtual work. A numerical form of the model is generated using values for parameters obtained from both experiment and analysis. A unique aspect of the BACT wind-tunnel model is that it has upper- and lower-surface spoilers for active control. Comparisons with experimental frequency responses and other data show excellent agreement and suggest that simple coefficient-based aerodynamics are sufficient to accurately characterize the aeroelastic response of the BACT wind-tunnel model. The equations of motion developed herein have been used to assist the design and analysis of a number of flutter suppression controllers that have been successfully implemented.
The climatic effect of explosive volcanic activity: Analysis of the historical data
NASA Technical Reports Server (NTRS)
Bryson, R. A.; Goodman, B. M.
1982-01-01
By using the most complete available records of direct beam radiation and volcanic eruptions, an historical analysis of the role of the latter in modulating the former was made. A very simple fallout and dispersion model was applied to the historical chronology of explosive eruptions. The resulting time series explains about 77 percent of the radiation variance and suggests that tropical and subpolar eruptions are more important than mid-latitude eruptions in their impact on the stratospheric aerosol optical depth. The simpler climatic models indicate that past hemispheric temperature can be simulated very well with volcanic and CO2 inputs, and suggest that climate forecasting will also require volcano forecasting. There is some evidence that this is possible some years in advance.
Visual modeling in an analysis of multidimensional data
NASA Astrophysics Data System (ADS)
Zakharova, A. A.; Vekhter, E. V.; Shklyar, A. V.; Pak, A. J.
2018-01-01
The article proposes an approach to solving visualization problems and the subsequent analysis of multidimensional data. Requirements for the properties of visual models created to solve analysis problems are described. The active use of factors of subjective perception and of dynamic visualization is suggested as a promising direction for the development of visual analysis tools for multidimensional and voluminous data. Practical results of solving the problem of multidimensional data analysis are shown using the example of a visual model of empirical data on the current state of research into processes of obtaining silicon carbide by the electric arc method. Solving this problem yielded several results: first, an indication of how a development strategy for the domain might be determined; second, an assessment of the reliability of the published data on this subject; and third, a picture of how the areas of researchers' attention have changed over time.
Extracting falsifiable predictions from sloppy models.
Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P
2007-12-01
Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.
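A minimal sketch of the Monte Carlo point under stated assumptions: for a sloppy two-exponential model fit to synthetic data, sample parameter sets consistent with the data using a crude Metropolis walk, and read the prediction uncertainty from the ensemble rather than from a linearized covariance.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 3.0, 20)

def model(p, t):
    return p[0] * np.exp(-p[1] * t) + p[2] * np.exp(-p[3] * t)

p_true = np.array([1.0, 0.5, 0.8, 2.0])
data = model(p_true, t) + 0.02 * rng.standard_normal(t.size)

def cost(p):                          # chi-squared / 2 for sigma = 0.02
    return np.sum((model(p, t) - data) ** 2) / (2 * 0.02 ** 2)

samples, logp = [], np.log(p_true)
c = cost(np.exp(logp))
for _ in range(20000):                # Metropolis walk in log-parameters
    trial = logp + 0.05 * rng.standard_normal(4)
    ct = cost(np.exp(trial))
    if rng.random() < np.exp(c - ct):
        logp, c = trial, ct
    samples.append(np.exp(logp))

pred = np.array([model(s, np.array([5.0]))[0] for s in samples[5000:]])
print("prediction at t=5: %.4f +/- %.4f" % (pred.mean(), pred.std()))
```

The spread of the extrapolated prediction is typically far wider than a local quadratic approximation around the best fit would suggest, which is the danger the abstract flags for linear uncertainty estimates.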
NASA Technical Reports Server (NTRS)
Winters, J. M.; Stark, L.
1984-01-01
Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques are used and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.), the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.
NASA Astrophysics Data System (ADS)
Shi, Guoliang; Peng, Xing; Huangfu, Yanqi; Wang, Wei; Xu, Jiao; Tian, Yingze; Feng, Yinchang; Ivey, Cesunica E.; Russell, Armistead G.
2017-07-01
Source apportionment technologies are used to understand the impacts of important sources of particulate matter (PM) on air quality, and are widely used for both scientific studies and air quality management. Generally, receptor models apportion speciated PM data from a single sampling site. With the development of large-scale monitoring networks, PM speciation is observed at multiple sites in an urban area. For these situations, the models should account for three factors, or dimensions, of the PM, including the chemical species concentrations, sampling periods and sampling site information, suggesting the potential power of a three-dimensional source apportionment approach. However, the principle of the three-dimensional Parallel Factor Analysis (ordinary PARAFAC) model does not always work well in real environmental situations for multi-site receptor datasets. In this work, a new three-way receptor model, called the "multi-site three-way factor analysis" model, is proposed to deal with multi-site receptor datasets. Synthetic datasets were developed and introduced into the new model to test its performance. The average absolute error (AAE, between estimated and true contributions) was less than 50% for all extracted sources. Additionally, three-dimensional ambient datasets from a Chinese mega-city, Chengdu, were analyzed using the new model to assess its application. Four factors are extracted by the multi-site WFA3 model: secondary sources have the highest contributions (64.73 and 56.24 μg/m3), followed by vehicular exhaust (30.13 and 33.60 μg/m3), crustal dust (26.12 and 29.99 μg/m3) and coal combustion (10.73 and 14.83 μg/m3). The model was also compared to PMF, with general agreement, though PMF suggested a lower crustal contribution.
Kong, Angela; Vijayasiri, Ganga; Fitzgibbon, Marian L; Schiffer, Linda A; Campbell, Richard T
2015-07-01
Validation work of the Child Feeding Questionnaire (CFQ) in low-income minority samples suggests a need for further conceptual refinement of this instrument. Using confirmatory factor analysis, this study evaluated 5- and 6-factor models on a large sample of African-American and Hispanic mothers with preschool-age children (n = 962). The 5-factor model included: 'perceived responsibility', 'concern about child's weight', 'restriction', 'pressure to eat', and 'monitoring' and the 6-factor model also tested 'food as a reward'. Multi-group analysis assessed measurement invariance by race/ethnicity. In the 5-factor model, two low-loading items from 'restriction' and one low-variance item from 'perceived responsibility' were dropped to achieve fit. Only removal of the low-variance item was needed to achieve fit in the 6-factor model. Invariance analyses demonstrated differences in factor loadings. This finding suggests African-American and Hispanic mothers may vary in their interpretation of some CFQ items and use of cognitive interviews could enhance item interpretation. Our results also demonstrated that 'food as a reward' is a plausible construct among a low-income minority sample and adds to the evidence that this factor resonates conceptually with parents of preschoolers; however, further testing is needed to determine the validity of this factor with older age groups. Copyright © 2015 Elsevier Ltd. All rights reserved.
A genome-wide longitudinal transcriptome analysis of the aging model Podospora anserina.
Philipp, Oliver; Hamann, Andrea; Servos, Jörg; Werner, Alexandra; Koch, Ina; Osiewacz, Heinz D
2013-01-01
Aging of biological systems is controlled by various processes which have a potential impact on gene expression. Here we report a genome-wide transcriptome analysis of the fungal aging model Podospora anserina. Total RNA of three individuals of defined age was pooled and analyzed by SuperSAGE (serial analysis of gene expression). A bioinformatics analysis identified different molecular pathways affected during aging. While the abundance of transcripts linked to ribosomes and to the proteasome quality control system was found to decrease during aging, that of transcripts associated with autophagy increased, suggesting that autophagy may act as a compensatory quality control pathway. Transcript profiles associated with the energy metabolism, including mitochondrial functions, were found to fluctuate during aging. Comparison of wild-type transcripts, which are continuously down-regulated during aging, with those down-regulated in the long-lived, copper-uptake mutant grisea, validated the relevance of age-related changes in cellular copper metabolism. Overall, we (i) present a unique age-related data set of a longitudinal study of the experimental aging model P. anserina which represents a reference resource for future investigations in a variety of organisms, (ii) suggest autophagy to be a key quality control pathway that becomes active once other pathways fail, and (iii) present testable predictions for subsequent experimental investigations.
ERIC Educational Resources Information Center
Dishion, Thomas J.; Capaldi, Deborah M.; Yoerger, Karen
1999-01-01
This study examined antecedents to early patterned alcohol and tobacco use and marijuana experimentation between ages 11 and 16 for an at-risk male sample. Findings suggested that family, peer, and child characteristics were inextricably connected within an ecology of development. A structural equation prediction model suggested a higher order…
Van Dessel, E; Fierens, K; Pattyn, P; Van Nieuwenhove, Y; Berrevoet, F; Troisi, R; Ceelen, W
2009-01-01
Approximately 5%-20% of colorectal cancer (CRC) patients present with synchronous, potentially resectable liver metastatic disease. Preclinical and clinical studies suggest a benefit of the 'liver first' approach, i.e. resection of the liver metastasis followed by resection of the primary tumour. A formal decision analysis may support a rational choice among several therapy options. Survival and morbidity data were retrieved from relevant clinical studies identified by a Web of Science search. Data were entered into decision analysis software (TreeAge Pro 2009, Williamstown, MA, USA). Transition probabilities, including the risk of death from complications or disease progression associated with individual therapy options, were entered into the model. Sensitivity analysis was performed to evaluate the model's validity under a variety of assumptions. The result of the decision analysis confirms the superiority of the 'liver first' approach. Sensitivity analysis demonstrated that this conclusion holds on condition that the mortality associated with hepatectomy performed first is < 4.5%, and that the mortality of colectomy performed after hepatectomy is < 3.2%. The results of this decision analysis suggest that, in patients with synchronous resectable colorectal liver metastases, the 'liver first' approach is to be preferred. Randomized trials will be needed to confirm the results of this simulation-based outcome.
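A hedged numerical sketch of the decision-tree logic, with illustrative probabilities rather than the paper's data: expected survival under each sequencing strategy is the product of surviving the first operation, avoiding interval progression, and surviving the second operation, and the hepatectomy mortality is varied as in the sensitivity analysis.

```python
def strategy_ev(op1_mort, op2_mort, completed_surv, dropout_surv, progression=0.15):
    # survive op 1; either complete the sequence or progress and drop out
    after_op1 = 1 - op1_mort
    finish = after_op1 * (1 - progression) * (1 - op2_mort)
    dropout = after_op1 * progression
    return finish * completed_surv + dropout * dropout_surv

for hep_mort in (0.02, 0.045, 0.07):
    lf = strategy_ev(hep_mort, 0.02, 0.45, 0.10)   # liver first (illustrative)
    pf = strategy_ev(0.02, hep_mort, 0.42, 0.05)   # primary first (illustrative)
    print(f"hepatectomy mortality {hep_mort:.1%}: "
          f"liver-first EV {lf:.3f}, primary-first EV {pf:.3f}")
```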
Examining evolving performance on the Force Concept Inventory using factor analysis
NASA Astrophysics Data System (ADS)
Semak, M. R.; Dietz, R. D.; Pearson, R. H.; Willis, C. W.
2017-06-01
The application of factor analysis to the Force Concept Inventory (FCI) has proven to be problematic. Some studies have suggested that factor analysis of test results serves as a helpful tool in assessing the recognition of Newtonian concepts by students. Other work has produced at best ambiguous results. For the FCI administered as a pre- and post-test, we see factor analysis as a tool by which the changes in conceptual associations made by our students may be gauged given the evolution of their response patterns. This analysis allows us to identify and track conceptual linkages, affording us insight into how our students have matured due to instruction. We report on our analysis of 427 pre- and post-tests. The factor models for the pre- and post-tests are explored and compared, along with the methodology by which these models were fit to the data. The post-test factor pattern is more aligned with an expert's interpretation of the questions' content, as it allows for a more readily identifiable relationship between factors and physical concepts. We discuss this evolution in the context of approaching the characteristics of an expert with force concepts. Also, we find that certain test items do not significantly contribute to the pre- or post-test factor models, and we attempt to explain why this is so. It may be that such questions are not effective in probing the conceptual understanding of our students.
A stylistic classification of Russian-language texts based on the random walk model
NASA Astrophysics Data System (ADS)
Kramarenko, A. A.; Nekrasov, K. A.; Filimonov, V. V.; Zhivoderov, A. A.; Amieva, A. A.
2017-09-01
A formal approach to text analysis is suggested, based on the random walk model. The frequencies and reciprocal positions of the vowel letters are modelled by a process of quasi-particle migration. Statistically significant differences in the migration parameters are found between texts of different functional styles, demonstrating that texts can be classified using the suggested method. Five groups of texts are singled out that can be distinguished from one another by the parameters of the quasi-particle migration process.
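A hedged sketch of the kind of statistic such a method rests on: treat each vowel as a step of a migrating quasi-particle and collect the distribution of gaps between consecutive vowels. The Latin vowel set and the toy sentence stand in for the Russian material, and the paper's actual migration parameters are more elaborate.

```python
from collections import Counter

VOWELS = set("aeiouy")   # Latin stand-in for the Russian vowel set

def vowel_gaps(text):
    positions = [i for i, ch in enumerate(text.lower()) if ch in VOWELS]
    return [b - a for a, b in zip(positions, positions[1:])]

sample = "a formal approach to text analysis based on the random walk model"
gaps = vowel_gaps(sample)
print(Counter(gaps))
print("mean gap:", round(sum(gaps) / len(gaps), 2))
```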
Analysis of multigrid methods on massively parallel computers: Architectural implications
NASA Technical Reports Server (NTRS)
Matheson, Lesley R.; Tarjan, Robert E.
1993-01-01
We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10^6 and 10^9, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages (up to 1000 words), or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.
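A minimal sketch, with illustrative constants, of the kind of cost model the analysis uses: per-level computation plus halo-exchange communication for a 2-D V-cycle on P processors, where the multistage network charges a log2(P) factor per message.

```python
import math

def vcycle_time(n_side, P, t_flop=1e-8, t_start=1e-4, t_word=1e-6, multistage=True):
    total = 0.0
    for level in range(int(math.log2(n_side))):
        points = (n_side / 2 ** level) ** 2 / P   # grid points per processor
        comp = 10 * max(points, 1.0) * t_flop     # ~10 flops per point per sweep
        words = max(points ** 0.5, 1.0)           # halo exchange message length
        hops = math.log2(P) if multistage else 1.0
        comm = 4 * (t_start + words * t_word) * hops
        total += comp + comm
    return total

for P in (256, 4096, 16384):
    print(P, "%.4f s" % vcycle_time(1000, P))
```

With constants like these, the fixed start-up term dominates on the coarse levels, which is exactly the regime where long-message support or cheaper message initiation would pay off.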
Syed, Khajamohiddin; Shale, Karabo; Pagadala, Nataraj Sekhar; Tuszynski, Jack
2014-01-01
Genome sequencing of basidiomycetes, a group of fungi capable of degrading/mineralizing plant material, revealed the presence of numerous cytochrome P450 monooxygenases (P450s) in their genomes, with some exceptions. Considering the large repertoire of P450s found in fungi, it is difficult to identify P450s that play an important role in fungal metabolism and the adaptation of fungi to diverse ecological niches. In this study, we followed Sir Charles Darwin’s theory of natural selection to identify such P450s in model basidiomycete fungi showing a preference for different types of plant components degradation. Any P450 family comprising a large number of member P450s compared to other P450 families indicates its natural selection over other P450 families by its important role in fungal physiology. Genome-wide comparative P450 analysis in the basidiomycete species, Phanerochaete chrysosporium, Phanerochaete carnosa, Agaricus bisporus, Postia placenta, Ganoderma sp. and Serpula lacrymans, revealed enrichment of 11 P450 families (out of 68 P450 families), CYP63, CYP512, CYP5035, CYP5037, CYP5136, CYP5141, CYP5144, CYP5146, CYP5150, CYP5348 and CYP5359. Phylogenetic analysis of the P450 family showed species-specific alignment of P450s across the P450 families with the exception of P450s of Phanerochaete chrysosporium and Phanerochaete carnosa, suggesting paralogous evolution of P450s in model basidiomycetes. P450 gene-structure analysis revealed high conservation in the size of exons and the location of introns. P450s with the same gene structure were found tandemly arranged in the genomes of selected fungi. This clearly suggests that extensive gene duplications, particularly tandem gene duplications, led to the enrichment of selective P450 families in basidiomycetes. Functional analysis and gene expression profiling data suggest that members of the P450 families are catalytically versatile and possibly involved in fungal colonization of plant material. To our knowledge, this is the first report on the identification and comparative-evolutionary analysis of P450 families enriched in model basidiomycetes. PMID:24466198
A channel dynamics model for real-time flood forecasting
Hoos, Anne B.; Koussis, Antonis D.; Beale, Guy O.
1989-01-01
A new channel dynamics scheme (alternative system predictor in real time (ASPIRE)), designed specifically for real-time river flow forecasting, is introduced to reduce uncertainty in the forecast. ASPIRE is a storage routing model that limits the influence of catchment model forecast errors to the downstream station closest to the catchment. Comparisons with the Muskingum routing scheme in field tests suggest that the ASPIRE scheme can provide more accurate forecasts, probably because discharge observations are used to a maximum advantage and routing reaches (and model errors in each reach) are uncoupled. Using ASPIRE in conjunction with the Kalman filter did not improve forecast accuracy relative to a deterministic updating procedure. Theoretical analysis suggests that this is due to a large process noise to measurement noise ratio.
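A minimal sketch of the classical Muskingum scheme that ASPIRE is compared against: outflow is a weighted combination of the current and previous inflow and the previous outflow; the travel time K, weighting X, and hydrograph are illustrative.

```python
import numpy as np

def muskingum(inflow, K=2.0, X=0.2, dt=1.0):
    d = K * (1 - X) + dt / 2
    c0 = (dt / 2 - K * X) / d
    c1 = (dt / 2 + K * X) / d
    c2 = (K * (1 - X) - dt / 2) / d
    out = np.empty_like(inflow)
    out[0] = inflow[0]
    for t in range(1, len(inflow)):
        out[t] = c0 * inflow[t] + c1 * inflow[t - 1] + c2 * out[t - 1]
    return out

hydrograph = np.array([10, 20, 50, 80, 60, 40, 25, 15, 12, 10], dtype=float)
print(muskingum(hydrograph).round(1))   # attenuated, delayed outflow
```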
Nanoengineering Testbed for Nanosolar Cell and Piezoelectric Compounds
2012-02-29
...element mesh. The third model was a 3D finite element mesh that included a complete geometric representation of the Berkovich tip. This model allows for a ... height of the specimen. These simulations suggest the proper specimen size to approximate a body of semi-infinite extent for a given indentation depth. ... The tip nanoindentation model was the third and final finite element mesh created for analysis and comparison. The material model and the finite element ...
Modeling Choice Under Uncertainty in Military Systems Analysis
1991-11-01
...operators rather than fuzzy operators. This is suggested for further research. In AHP, objectives, functions and ... [Recoverable section headings: 4.1 Imprecisely Specified Multiple Attribute Utility Theory; 4.2 Fuzzy Decision Analysis; 4.3 Analytic Hierarchical Process (AHP); 4.4 Subjective Transfer Function Approach]
Audience Diversion Due to Cable Television: A Statistical Analysis of New Data.
ERIC Educational Resources Information Center
Park, Rolla Edward
A statistical analysis of new data suggests that television broadcasting will continue to prosper, despite increasing competition from cable television carrying distant signals. Data on cable and non-cable audiences in 121 counties with well defined signal choice support generalized least squares estimates of two models: total audience and…
ERIC Educational Resources Information Center
Yorek, Nurettin; Ugulu, Ilker
2015-01-01
In this study, artificial neural networks are suggested as a model that can be "trained" to yield qualitative results out of a huge amount of categorical data. It can be said that this is a new approach applied in educational qualitative data analysis. In this direction, a cascade-forward back-propagation neural network (CFBPN) model was…
Enfield, Kyle B; Schafer, Katherine; Zlupko, Mike; Herasevich, Vitaly; Novicoff, Wendy M; Gajic, Ognjen; Hoke, Tracey R; Truwit, Jonathon D
2012-01-01
Hospitals are increasingly compared based on clinical outcomes adjusted for severity of illness. Multiple methods exist to adjust for differences between patients. The challenge for consumers of this information, both the public and healthcare providers, is interpreting differences in risk adjustment models, particularly when models differ in their use of administrative and physiologic data. We set out to examine how administrative and physiologic models compare to each other when applied to critically ill patients. We prospectively abstracted variables for a physiologic and an administrative model of mortality from two intensive care units in the United States. Predicted mortality was compared through Pearson's product-moment coefficient and Bland-Altman analysis. A subgroup of patients admitted directly from the emergency department was analyzed to remove potential confounding from changes in condition prior to ICU admission. We included 556 patients from two academic medical centers in this analysis. The predicted mortalities of the administrative and physiologic models for the combined cohort were 15.3% (95% CI 13.7%, 16.8%) and 24.6% (95% CI 22.7%, 26.5%), respectively (t-test p-value<0.001). The r(2) for these models was 0.297. The Bland-Altman plot suggests that at low predicted mortality there was good agreement; however, as mortality increased the models diverged. Similar results were found when analyzing the subgroup of patients admitted directly from the emergency department. When comparing the two hospitals, there was a statistical difference when using the administrative model but not the physiologic model. Unexplained mortality, defined as those patients who died but had a predicted mortality of less than 10%, was a rare event by either model. In conclusion, while it has been shown that administrative models provide estimates of mortality that are similar to physiologic models in non-critically ill patients with pneumonia, our results suggest this finding cannot be applied globally to patients admitted to intensive care units. As patients and providers increasingly use publicly reported information in making health care decisions and referrals, it is critical that the provided information be understood. Our results suggest that severity of illness may influence the mortality index in administrative models. We suggest that when interpreting "report cards" or metrics, health care providers determine how the risk adjustment was made and how it compares to other risk adjustment models.
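A hedged sketch of the agreement analysis used: Pearson correlation plus the Bland-Altman bias and limits of agreement for two sets of predicted mortalities; the arrays are synthetic stand-ins for the two models' outputs, sized to the 556-patient cohort.

```python
import numpy as np

rng = np.random.default_rng(2)
physiologic = np.clip(rng.beta(2, 6, 556), 0, 1)             # predicted mortality
administrative = np.clip(0.6 * physiologic
                         + 0.02 * rng.standard_normal(556), 0, 1)

r = np.corrcoef(administrative, physiologic)[0, 1]
diff = administrative - physiologic
mean = (administrative + physiologic) / 2
bias, sd = diff.mean(), diff.std(ddof=1)
print("r^2 = %.3f, bias = %.3f, LoA = (%.3f, %.3f)"
      % (r ** 2, bias, bias - 1.96 * sd, bias + 1.96 * sd))
# divergence at high severity appears as a trend of diff against mean
print("slope of diff vs mean:", np.polyfit(mean, diff, 1)[0].round(3))
```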
Hill, Mary C.; Foglia, L.; Mehl, S. W.; Burlando, P.
2013-01-01
Model adequacy is evaluated with alternative models rated using model selection criteria (AICc, BIC, and KIC) and three other statistics. The model selection criteria are tested with cross-validation experiments, and insights for using alternative models to evaluate model structural adequacy are provided. The study is conducted using the computer codes UCODE_2005 and MMA (MultiModel Analysis). One recharge alternative is simulated using the TOPKAPI hydrological model. The predictions evaluated include eight heads and three flows located where ecological consequences and model precision are of concern. Cross-validation is used to obtain measures of prediction accuracy. Sixty-four models were designed deterministically and differ in their representation of the river, recharge, bedrock topography, and hydraulic conductivity. Results include: (1) What may seem like inconsequential choices in model construction may be important to predictions. Analysis of predictions from alternative models is advised. (2) None of the model selection criteria consistently identified models with more accurate predictions. This is a disturbing result that suggests reconsidering the utility of model selection criteria and/or the cross-validation measures used in this work to measure model accuracy. (3) KIC displayed poor performance for the present regression problems; theoretical considerations suggest that the difficulties are associated with wide variations in the sensitivity term of KIC, resulting from the models being nonlinear and the problems being ill-posed due to parameter correlations and insensitivity. The other criteria performed somewhat better, and similarly to each other. (4) Quantities with high leverage are more difficult to predict. The results are expected to be generally applicable to models of environmental systems.
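A minimal sketch of the selection-criterion arithmetic referenced above, for a least-squares fit with Gaussian errors; the sum-of-squares values, observation count, and parameter counts for the two alternative models are invented for illustration.

```python
import numpy as np

def aicc_bic(sse, n, k):
    loglik = -n / 2 * (np.log(2 * np.pi * sse / n) + 1)   # Gaussian max-likelihood
    aic = 2 * k - 2 * loglik
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)            # small-sample correction
    bic = k * np.log(n) - 2 * loglik
    return aicc, bic

for name, sse, k in [("simple recharge", 3.2, 4), ("TOPKAPI recharge", 2.1, 7)]:
    aicc, bic = aicc_bic(sse, n=30, k=k)
    print(f"{name}: AICc = {aicc:.1f}, BIC = {bic:.1f}")
```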
Using Two Models in Optics: Students' Difficulties and Suggestions for Teaching.
ERIC Educational Resources Information Center
Colin, P.; Viennot, L.
2001-01-01
Focuses on difficulties linked to situations in physics involving two models--geometrical optics and wave optics. Presents content analysis underlining two important features required for addressing such situations: (1) awareness of the status of the drawings; and (2) the 'backward selection' of paths of light. (Contains 24 references.)…
The Impact of Childhood Cancer: A Two-Factor Model of Coping.
ERIC Educational Resources Information Center
Zevon, Michael A.; Armstrong, Gordon D.
A review of existing stress and coping models and an analysis of the distress caused by childhood cancer suggest that a broader conceptualization of coping that includes "pleasure management" is needed. Presently, successful coping is identified as the employment of strategies which allow the individual to adapt to stress. Traditional…
Characters and Episodes that Provide Models for Middle School Writers
ERIC Educational Resources Information Center
Pelttari, Carole
2012-01-01
While conducting a content analysis of award-winning, middle school fiction, I identified a number of episodes and characters that might be used as models for students' writing. Research suggests that teachers can motivate students (Bruning & Horn, 2000; Codling, Gambrell, Kennedy, Palmer, & Graham, 1996) to respond to character-writers (Van…
A Social Psychological Model for Predicting Sexual Harassment.
ERIC Educational Resources Information Center
Pryor, John B.; And Others
1995-01-01
Presents a Person X Situation (PXS) model of sexual harassment suggesting that sexually harassing behavior may be predicted from an analysis of social situational and personal factors. Research on sexual harassment proclivities in men is reviewed, and a profile of men who have a high a likelihood to sexually harass is discussed. Possible PXS…
Impact of Facebook Usage on Students' Academic Achievement: Role of Self-Regulation and Trust
ERIC Educational Resources Information Center
Rouis, Sana; Limayem, Moez; Salehi-Sangari, Esmail
2011-01-01
Introduction: The paper provides a preliminary analysis of the effects of Facebook usage by undergraduate students at Lulea University of Technology in Sweden. The proposed research model tests the perceived effect of personality traits, self-regulation, and trust on students' achievements. Based on flow theory, the model suggests negative…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Young-Mo; Schmidt, Brian; Kidwai, Afshan S.
Salmonella enterica serovar Typhimurium (S. Typhimurium) is a facultative pathogen that uses complex mechanisms to invade and proliferate within mammalian host cells. To investigate possible contributions of metabolic processes in S. Typhimurium grown under conditions known to induce expression of virulence genes, we used a metabolomics-driven systems biology approach coupled with genome scale modeling. First, we identified distinct metabolite profiles associated with bacteria grown in either rich or virulence-inducing media and report the most comprehensive coverage of the S. Typhimurium metabolome to date. Second, we applied an omics-informed genome scale modeling analysis of the functional consequences of adaptive alterations in S. Typhimurium metabolism during growth under our conditions. Excitingly, we observed possible sequestration of metabolites recently suggested to have immune modulating roles. Modeling efforts highlighted a decreased cellular capability to both produce and utilize intracellular amino acids during stationary phase culture in virulence conditions, despite significant abundance increases for these molecules as observed by our metabolomics measurements. Model-guided analysis suggested that alterations in metabolism prioritized other activities necessary for pathogenesis instead, such as lipopolysaccharide biosynthesis.
NASA Astrophysics Data System (ADS)
Singh, Upendra K.; Tiwari, R. K.; Singh, S. B.
2013-03-01
This paper presents the effects of several parameters on the artificial neural network (ANN) inversion of vertical electrical sounding (VES) data. The sensitivity of ANN parameters was examined through the performance of adaptive backpropagation (ABP) and the Levenberg-Marquardt algorithm (LMA), testing robustness to noisy synthetic as well as field geophysical data and the resolving capability of these methods for predicting subsurface resistivity layers. We trained, tested and validated the ANN using synthetic VES data as input to the networks and the layer parameters of the models as network output. The ANN learning parameters were varied and the corresponding observations recorded. The sensitivity analysis of the synthetic data and the real model demonstrates that ANN algorithms applied to VES data inversion should be judged not only in terms of accuracy but also in terms of computational effort. The analysis also suggests that ANN models and their various controlling parameters are largely data-dependent, and hence no unique architecture can be designed for VES data analysis. The ANN-based methods were also applied to actual VES field data obtained from the tectonically vital geothermal areas of Jammu and Kashmir, India. The analysis suggests that both ABP and the LMA are suitable methods for 1-D VES modeling, but the LMA provides a greater degree of robustness than ABP for 2-D VES modeling. The inversion results correlate well with the known lithology and also reveal an additional significant feature, a layer of reconsolidated breccia about 7.0 m thick beneath the overburden in some cases, such as at sounding point RDC-5. We may therefore conclude that ANN-based methods are significantly faster and more efficient for the detection of complex layered resistivity structures, with a relatively greater degree of precision and resolution.
Estimating species richness and accumulation by modeling species occurrence and detectability
Dorazio, R.M.; Royle, J. Andrew; Soderstrom, B.; Glimskarc, A.
2006-01-01
A statistical model is developed for estimating species richness and accumulation by formulating these community-level attributes as functions of model-based estimators of species occurrence while accounting for imperfect detection of individual species. The model requires a sampling protocol wherein repeated observations are made at a collection of sample locations selected to be representative of the community. This temporal replication provides the data needed to resolve the ambiguity between species absence and nondetection when species are unobserved at sample locations. Estimates of species richness and accumulation are computed for two communities, an avian community and a butterfly community. Our model-based estimates suggest that detection failures in many bird species were attributable to low rates of occurrence, as opposed to simply low rates of detection. We estimate that the avian community contains a substantial number of uncommon species and that species richness greatly exceeds the number of species actually observed in the sample. In fact, predictions of species accumulation suggest that even doubling the number of sample locations would not have revealed all of the species in the community. In contrast, our analysis of the butterfly community suggests that many species are relatively common and that the estimated richness of species in the community is nearly equal to the number of species actually detected in the sample. Our predictions of species accumulation suggest that the number of sample locations actually used in the butterfly survey could have been cut in half and the asymptotic richness of species still would have been attained. Our approach of developing occurrence-based summaries of communities while allowing for imperfect detection of species is broadly applicable and should prove useful in the design and analysis of surveys of biodiversity.
Inside the Green House "Black Box": Opportunities for High-Quality Clinical Decision Making.
Bowers, Barbara; Roberts, Tonya; Nolet, Kimberly; Ryther, Brenda
2016-02-01
To develop a conceptual model that explained common and divergent care processes in Green House (GH) nursing homes with high and low hospital transfer rates. Eighty-four face-to-face, semistructured interviews were conducted with direct care, professional, and administrative staff with knowledge of care processes in six GH organizations in six states. The qualitative grounded theory method was used for data collection and analysis. Data were analyzed using open, axial, and selective coding. Data collection and analysis occurred iteratively. Elements of the GH model created significant opportunities to identify, communicate, and respond to early changes in resident condition. Staff in GH homes with lower hospital transfer rates employed care processes that maximized these opportunities. Staff in GH homes with higher transfer rates failed to maximize, or actively undermined, these opportunities. Variations in how the GH model was implemented across GH homes suggest possible explanations for inconsistencies found in past research on the care outcomes, including hospital transfer rates, in culture change models. The findings further suggest that the details of culture change implementation are important considerations in model replication and policies that create incentives for care improvements. © Health Research and Educational Trust.
Bus accident analysis of routes with/without bus priority.
Goh, Kelvin Chun Keong; Currie, Graham; Sarvi, Majid; Logan, David
2014-04-01
This paper summarises findings on road safety performance and bus-involved accidents in Melbourne along roads where bus priority measures had been applied. Results from an empirical analysis of the accident types revealed a significant reduction in the proportion of accidents involving buses hitting stationary objects and vehicles, which suggests the effect of bus priority in addressing manoeuvrability issues for buses. Mixed-effects negative binomial (MENB) regression and back-propagation neural network (BPNN) modelling of bus accidents, considering wider influences on accident rates at a route-section level, also revealed significant safety benefits when bus priority is provided. Sensitivity analyses done on the BPNN model showed general agreement in the predicted accident frequency between both models. The slightly better performance recorded by the MENB model suggests merit in adopting a mixed-effects modelling approach for accident count prediction in practice, given its capability to account for unobserved location- and time-specific factors. A major implication of this research is that bus priority in Melbourne's context acts to improve road safety and should be a major consideration for road management agencies when implementing bus priority and road schemes. Copyright © 2013 Elsevier Ltd. All rights reserved.
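A hedged sketch of the count-model side of the comparison: a fixed-effects negative binomial regression of route-section accident counts on traffic and a bus-priority indicator with statsmodels; the data and covariate names are synthetic, and the mixed-effects (MENB) extension of the paper is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 300
df = pd.DataFrame({
    "traffic": rng.uniform(2e3, 4e4, n),        # vehicles/day on the section
    "bus_priority": rng.integers(0, 2, n),      # 1 if priority measures present
    "stops_per_km": rng.uniform(1, 6, n),
})
mu = np.exp(-6 + 0.7 * np.log(df["traffic"])
            - 0.3 * df["bus_priority"] + 0.1 * df["stops_per_km"])
df["accidents"] = rng.poisson(mu * rng.gamma(2.0, 0.5, n))   # overdispersed counts

fit = smf.negativebinomial(
    "accidents ~ np.log(traffic) + bus_priority + stops_per_km", data=df
).fit(disp=0)
print(fit.params.round(3))
```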
System Dynamics Modeling for Supply Chain Information Sharing
NASA Astrophysics Data System (ADS)
Feng, Yang
In this paper, we use the method of system dynamics to model supply chain information sharing. First, we determine the model boundaries, establish a system dynamics model of the supply chain before information sharing, analyze the model's simulation results under different parameter changes, and suggest improvements. Then, we establish a system dynamics model of the supply chain with information sharing and compare and analyze the two models' simulation results, to show the importance of information sharing in supply chain management. We hope that these simulations will provide scientific support for enterprise decision-making.
Applying the take-grant protection model
NASA Technical Reports Server (NTRS)
Bishop, Matt
1990-01-01
The Take-Grant Protection Model has in the past been used to model multilevel security hierarchies and simple protection systems. The models are extended to include theft of rights and sharing of information, and additional security policies are examined. The analysis suggests that in some cases the basic rules of the Take-Grant Protection Model should be augmented to represent the policy properly; when appropriate, such modifications are made and their effects with respect to the policy and its Take-Grant representation are discussed.
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Rives, T. B.
1987-01-01
An analytical analysis of the HOSC Generic Peripheral processing system was conducted. The results are summarized and indicate that the maximum delay in performing screen change requests should be less than 2.5 sec, occurring for a slow VAX host-to-video-screen I/O rate of 50 KBps. This delay is due to the average I/O rate from the video terminals to their host computer. The software structure of the main computers and the host computers will have a greater impact on screen change or refresh response times. The HOSC data system model was updated with a newly coded PASCAL-based simulation program installed on the HOSC VAX system. This model is described and documented. Suggestions are offered to fine-tune the performance of the Ethernet interconnection network. Suggestions for using the Nutcracker by Excelan to trace itinerant packets, which appear on the network from time to time, were offered in discussions with HOSC personnel. Several visits were made to the HOSC facility to install and demonstrate the simulation model.
NASA Astrophysics Data System (ADS)
Tian, F.; Sivapalan, M.; Li, H.; Hu, H.
2007-12-01
The importance of diagnostic analysis of hydrological models is increasingly recognized by the scientific community (M. Sivapalan, et al., 2003; H. V. Gupta, et al., 2007). Model diagnosis refers to identifying model structures and parameters not only by statistical comparison of system state variables and outputs but also by process understanding in a specific watershed. Process understanding can be gained by the analysis of observational data and model results at the specific watershed as well as through regionalization. Although remote sensing technology can provide valuable data about the inputs, state variables, and outputs of the hydrological system, observational rainfall-runoff data still constitute the most accurate, reliable, and direct, and thus a fundamental, component of hydrology-related databases. One critical question in model diagnostic analysis is, therefore, what signature characteristics we can extract from rainfall and runoff data. To date, only a few studies have focused on this question (e.g., Merz et al., 2006; Lana-Renault et al., 2007), and none has related event analysis to model diagnosis in an explicit, rigorous, and systematic manner. Our work focuses on the identification of the dominant runoff generation mechanisms from event analysis of rainfall-runoff data, including correlation analysis and analysis of timing patterns. The correlation analysis involves identifying the complex relationship among rainfall depth, intensity, runoff coefficient, and antecedent conditions, and the timing pattern analysis aims to identify the clustering pattern of runoff events in relation to the patterns of rainfall events. Our diagnostic analysis illustrates the changing pattern of runoff generation mechanisms in the DMIP2 test watersheds located in the Oklahoma region, which is also well reproduced by numerical simulations based on the TsingHua Representative Elementary Watershed (THREW) model. The result suggests the usefulness of rainfall-runoff event analysis for model development as well as model diagnostics.
Eco-Advertising: Taking a Closer Look
ERIC Educational Resources Information Center
Ritz, William C.
1975-01-01
Offers an approach to the analysis of ecology oriented advertising based on the Essentia Project of the American Geological Institute (ESSENCE) model. Suggests several classroom activities to be done with this topic. (CP)
The declared barriers of the large developing countries waste management projects: The STAR model.
Bufoni, André Luiz; Oliveira, Luciano Basto; Rosa, Luiz Pinguelli
2016-06-01
The aim of this study is to investigate and describe the system of barriers that precludes the feasibility, or limits the performance, of waste management projects, through an analysis of the barriers declared by the 432 large waste management projects registered as CDM projects during the period 2004-2014. The final product is a proposed conceptual model of waste management barriers (STAR), supported by the literature and corroborated by project design documents. This paper uses computer-assisted qualitative content analysis (CAQCA) with the qualitative data analysis (QDA) software NVivo®, applied to 890 text fragments, to investigate the motives that support our conclusions. Results suggest classifying the main barriers into five types: sociopolitical, technological, regulatory, financial, and human resources constraints. Results also suggest that, beyond the waste management industry, projects face additional disadvantages related to the same barriers inherent in other renewable energy initiatives. The STAR model sheds some light on the interactivity and dynamics of the main constraints of the industry, describing the mutual influences and relationships among them. Future research is needed to understand these relationships better and more comprehensively and to ease the development of tools to alleviate or eliminate them. Copyright © 2016 Elsevier Ltd. All rights reserved.
A general health policy model: update and applications.
Kaplan, R M; Anderson, J P
1988-01-01
This article describes the development of a General Health Policy Model that can be used for program evaluation, population monitoring, clinical research, and policy analysis. An important component of the model, the Quality of Well-being scale (QWB) combines preference-weighted measures of symptoms and functioning to provide a numerical point-in-time expression of well-being, ranging from 0 for death to 1.0 for asymptomatic optimum functioning. The level of wellness at particular points in time is governed by the prognosis (transition rates or probabilities) generated by the underlying disease or injury under different treatment (control) variables. Well-years result from integrating the level of wellness, or health-related quality of life, over the life expectancy. Several issues relevant to the application of the model are discussed. It is suggested that a quality of life measure need not have separate components for social and mental health. Social health has been difficult to define; social support may be a poor criterion for resource allocation; and some evidence suggests that aspects of mental health are captured by the general measure. Although it has been suggested that measures of child health should differ from those used for adults, we argue that a separate conceptualization of child health creates new problems for policy analysis. After offering several applications of the model for the evaluation of prevention programs, we conclude that many of the advantages of general measures have been overlooked and should be given serious consideration in future studies. PMID:3384669
Uncertainty analysis of hydrological modeling in a tropical area using different algorithms
NASA Astrophysics Data System (ADS)
Rafiei Emam, Ammar; Kappas, Martin; Fassnacht, Steven; Linh, Nguyen Hoang Khanh
2018-01-01
Hydrological modeling outputs are subject to uncertainty resulting from different sources of error (e.g., error in input data, model structure, and model parameters), making quantification of uncertainty in hydrological modeling imperative if the reliability of modeling results is to be improved. Uncertainty analysis must overcome difficulties in the calibration of hydrological models, which increase further in areas with data scarcity. The purpose of this study is to apply four uncertainty analysis algorithms to a semi-distributed hydrological model, quantifying different sources of uncertainty (especially parameter uncertainty) and evaluating their performance. In this study, the Soil and Water Assessment Tool (SWAT) eco-hydrological model was implemented for a watershed in the center of Vietnam. The sensitivity of parameters was analyzed, and the model was calibrated. The uncertainty analysis for the hydrological model was conducted with four algorithms: Generalized Likelihood Uncertainty Estimation (GLUE), Sequential Uncertainty Fitting (SUFI), the Parameter Solution method (ParaSol) and Particle Swarm Optimization (PSO). The performance of the algorithms was compared using the P-factor and R-factor, the coefficient of determination (R^2), the Nash-Sutcliffe coefficient of efficiency (NSE) and Percent Bias (PBIAS). The results showed the high performance of SUFI and PSO, with P-factor>0.83, R-factor<0.56, R^2>0.91, NSE>0.89, and 0.18
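A minimal sketch of the GLUE procedure evaluated above: sample parameter sets, retain the 'behavioral' ones whose likelihood measure (here NSE) exceeds a threshold, and read the P-factor from how many observations the resulting ensemble band brackets; the two-parameter model and discharge series are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
time = np.linspace(0, 6, 50)
observed = np.sin(time) + 1.5                 # stand-in discharge series

def simulate(a, b):
    return a * np.sin(time) + b

def nse(sim, obs):
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

params = rng.uniform([0.2, 0.0], [2.0, 3.0], size=(5000, 2))
sims = np.array([simulate(a, b) for a, b in params])
scores = np.array([nse(s, observed) for s in sims])
behavioral = sims[scores > 0.5]               # GLUE behavioral threshold

lo, hi = np.percentile(behavioral, [2.5, 97.5], axis=0)
p_factor = np.mean((observed >= lo) & (observed <= hi))
print("behavioral sets:", len(behavioral), " P-factor:", round(float(p_factor), 2))
```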
Automatic computation for optimum height planning of apartment buildings to improve solar access
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seong, Yoon-Bok; Kim, Yong-Yee; Seok, Ho-Tae
2011-01-15
The objective of this study is to suggest a mathematical model and an optimal algorithm for determining the height of apartment buildings to satisfy the solar rights of survey buildings or survey housing units. The objective is also to develop an automatic computation model for the optimum height of apartment buildings and then to clarify its performance and expected effects. To accomplish the objective of this study, the following procedures were followed: (1) The necessity of height planning of obstruction buildings to satisfy the solar rights of survey buildings or survey housing units is demonstrated by analyzing, through a literature review, the recent trend of disputes related to solar rights and by examining the social requirements in terms of solar rights. In addition, the necessity of an automatic computation system for height planning of apartment buildings is demonstrated, and a suitable analysis method for this system is chosen by investigating the characteristics of analysis methods for solar rights assessment. (2) A case study on the process of height planning of apartment buildings is briefly described, and the problems occurring in this process are examined carefully. (3) To develop an automatic computation model for height planning of apartment buildings, the geometrical elements forming apartment buildings are defined by analyzing the geometrical characteristics of apartment buildings. In addition, design factors and regulations required in height planning of apartment buildings are investigated. Based on this knowledge, the methodology and mathematical algorithm to adjust the height of apartment buildings by automatic computation are suggested, probable problems and ways to resolve them are discussed, and the methodology and algorithm for the optimization are presented. (4) Based on the suggested methodology and mathematical algorithm, the automatic computation model for optimum height of apartment buildings is developed, and the developed system is verified through application to several cases. The effects of the suggested model are then demonstrated quantitatively and qualitatively. (author)
Evaluating Sustainability Models for Interoperability through Brokering Software
NASA Astrophysics Data System (ADS)
Pearlman, Jay; Benedict, Karl; Best, Mairi; Fyfe, Sue; Jacobs, Cliff; Michener, William; Nativi, Stefano; Powers, Lindsay; Turner, Andrew
2016-04-01
Sustainability of software and research support systems is an element of innovation that is not often discussed. Yet, sustainment is essential if we expect research communities to make the time investment to learn and adopt new technologies. As the Research Data Alliance (RDA) is developing new approaches to interoperability, the question of uptake and sustainability is important. Brokering software sustainability is one of the areas that is being addressed in RDA. The Business Models Team of the Research Data Alliance Brokering Governance Working Group examined several support models proposed to promote the long-term sustainability of brokering middleware. The business model analysis includes examination of funding source, implementation frameworks and challenges, and policy and legal considerations. Results of this comprehensive analysis highlight advantages and disadvantages of the various models with respect to the specific requirements for brokering services. We offer recommendations based on the outcomes of this analysis that suggest that hybrid funding models present the most likely avenue to long term sustainability.
Scaling in sensitivity analysis
Link, W.A.; Doherty, P.F.
2002-01-01
Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
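The standard eigenvector formulas behind such sensitivity and elasticity calculations, s_ij = v_i w_j / ⟨v, w⟩ and e_ij = (a_ij / λ) s_ij, can be sketched as follows (the 2×2 projection matrix is a made-up illustration, not the killer whale data):

```python
import numpy as np

A = np.array([[0.0, 1.5],     # hypothetical stage-structured projection matrix:
              [0.4, 0.8]])    # fecundities on the top row, survival below

vals, W = np.linalg.eig(A)
k = np.argmax(vals.real)
lam = vals.real[k]            # finite rate of increase, lambda
w = np.abs(W[:, k].real)      # right eigenvector: stable stage distribution

valsL, V = np.linalg.eig(A.T)
v = np.abs(V[:, np.argmax(valsL.real)].real)  # left eigenvector: reproductive values

S = np.outer(v, w) / (v @ w)  # sensitivities: d(lambda) / d(a_ij)
E = (A / lam) * S             # elasticities: proportional sensitivities, sum to 1
print(lam, "\n", S, "\n", E, E.sum())
```

The scaling issue the abstract raises is visible here: S and E rank the same entries of A differently, because one measures absolute and the other proportional perturbations.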
Sequential Exposure of Bortezomib and Vorinostat is Synergistic in Multiple Myeloma Cells
Nanavati, Charvi; Mager, Donald E.
2018-01-01
Purpose To examine the combination of bortezomib and vorinostat in multiple myeloma cells (U266) and xenografts, and to assess the nature of their potential interactions with semi-mechanistic pharmacodynamic models and biomarkers. Methods U266 proliferation was examined for a range of bortezomib and vorinostat exposure times and concentrations (alone and in combination). A non-competitive interaction model was used with interaction parameters that reflect the nature of drug interactions after simultaneous and sequential exposures. p21 and cleaved PARP were measured using immunoblotting to assess critical biomarker dynamics. For xenografts, data were extracted from literature and modeled with a PK/PD model with an interaction parameter. Results Estimated model parameters for simultaneous in vitro and xenograft treatments suggested additive drug effects. The sequence of bortezomib preincubation for 24 hours, followed by vorinostat for 24 hours, resulted in an estimated interaction term significantly less than 1, suggesting synergistic effects. p21 and cleaved PARP were also up-regulated the most in this sequence. Conclusions Semi-mechanistic pharmacodynamic modeling suggests synergistic pharmacodynamic interactions for the sequential administration of bortezomib followed by vorinostat. Increased p21 and cleaved PARP expression can potentially explain mechanisms of their enhanced effects, which require further PK/PD systems analysis to suggest an optimal dosing regimen. PMID:28101809
Snell, Kym I E; Hua, Harry; Debray, Thomas P A; Ensor, Joie; Look, Maxime P; Moons, Karel G M; Riley, Richard D
2016-01-01
Our aim was to improve meta-analysis methods for summarizing a prediction model's performance when individual participant data are available from multiple studies for external validation. We suggest multivariate meta-analysis for jointly synthesizing calibration and discrimination performance, while accounting for their correlation. The approach estimates a prediction model's average performance, the heterogeneity in performance across populations, and the probability of "good" performance in new populations. This allows different implementation strategies (e.g., recalibration) to be compared. Application is made to a diagnostic model for deep vein thrombosis (DVT) and a prognostic model for breast cancer mortality. In both examples, multivariate meta-analysis reveals that calibration performance is excellent on average but highly heterogeneous across populations unless the model's intercept (baseline hazard) is recalibrated. For the cancer model, the probability of "good" performance (defined by C statistic ≥0.7 and calibration slope between 0.9 and 1.1) in a new population was 0.67 with recalibration but 0.22 without recalibration. For the DVT model, even with recalibration, there was only a 0.03 probability of "good" performance. Multivariate meta-analysis can be used to externally validate a prediction model's calibration and discrimination performance across multiple populations and to evaluate different implementation strategies. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
Application of NASTRAN to propeller-induced ship vibration
NASA Technical Reports Server (NTRS)
Liepins, A. A.; Conaway, J. H.
1975-01-01
An application of the NASTRAN program to the analysis of propeller-induced ship vibration is presented. The essentials of the model, the computational procedure, and experience are described. Desirable program enhancements are suggested.
Li, Huiyi; Dou, Huanjing; Zhang, Yuhai; Li, Zhigang; Wang, Ruiyong; Chang, Junbiao
2015-02-05
FNC (2'-deoxy-2'-β-fluoro-4'-azidocytidine) is a novel nucleoside analogue with pharmacologic effects on several human diseases. In this work, the binding of FNC to human hemoglobin (HHb) has been investigated by absorption spectroscopy, fluorescence quenching, synchronous fluorescence, three-dimensional fluorescence and molecular modeling methods. Analysis of the fluorescence data showed that the binding of FNC to HHb occurred via a static quenching mechanism. Thermodynamic analysis and molecular modeling suggest that hydrogen bonding and van der Waals forces are the main binding forces in the binding of FNC to HHb. Copyright © 2014 Elsevier B.V. All rights reserved.
Gibbons, Chris J; Thornton, Everard W; Ealing, John; Shaw, Pamela J; Talbot, Kevin; Tennant, Alan; Young, Carolyn A
2013-11-15
Social withdrawal is described as the condition in which an individual experiences a desire to make social contact but is unable to satisfy that desire. It is an important issue for patients with motor neurone disease (MND), who are likely to experience severe physical impairment. This study aims to reassess the psychometric and scaling properties of the MND Social Withdrawal Scale (MND-SWS) domains and examine the feasibility of a summary scale, by applying scale data to the Rasch model. The MND Social Withdrawal Scale was administered to 298 patients with a diagnosis of MND, alongside the Hospital Anxiety and Depression Scale. The factor structure of the MND Social Withdrawal Scale was assessed using confirmatory factor analysis. Model fit, category threshold analysis, differential item functioning (DIF), dimensionality and local dependency were evaluated. Factor analysis confirmed the suitability of the four-factor solution suggested by the original authors. Mokken scale analysis suggested the removal of item five. Rasch analysis removed a further three items, from the Community (one item) and Emotional (two items) withdrawal subscales. Following item reduction, each scale exhibited excellent fit to the Rasch model. A 14-item Summary scale was shown to fit the Rasch model after subtesting the items into three subtests corresponding to the Community, Family and Emotional subscales, indicating that items from these three subscales could be summed to create a total measure of social withdrawal. Removal of four items from the Social Withdrawal Scale led to a four-factor solution with a 14-item hierarchical Summary scale, all unidimensional, free from DIF and well fitted to the Rasch model. The scale is reliable and allows clinicians and researchers to measure social withdrawal in MND along a unidimensional construct. © 2013. Published by Elsevier B.V. All rights reserved.
Acquisition and production of skilled behavior in dynamic decision-making tasks
NASA Technical Reports Server (NTRS)
Kirlik, Alex
1992-01-01
Detailed summaries of two NASA-funded research projects are provided. The first project was an ecological task analysis of the Star Cruiser model. Star Cruiser is a psychological model designed to test a subject's level of cognitive activity. Ecological task analysis is used as a framework to predict the types of cognitive activity required to achieve productive behavior and to suggest how interfaces can be manipulated to alleviate certain types of cognitive demands. The second project is presented in the form of a Master's thesis. The thesis discusses the modeling of decision-making through the use of neural-network and genetic-algorithm machine learning technologies.
Sage Simulation Model for Technology Demonstration Convertor by a Step-by-Step Approach
NASA Technical Reports Server (NTRS)
Demko, Rikako; Penswick, L. Barry
2006-01-01
The development of a Stirling model using the 1-D Sage design code was completed using a step-by-step approach, a method of gradually increasing the complexity of the Sage model while observing the energy balance and energy losses at each step of the development. This step-by-step model development and energy-flow analysis can clarify where the losses occur and their impact, and suggest possible opportunities for design improvement.
NASA Astrophysics Data System (ADS)
Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.
2014-03-01
Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in ternary mixture, namely, Partial Least Squares (PLS) as a traditional chemometric model and Artificial Neural Networks (ANN) as an advanced model. PLS and ANN were applied with and without a variable selection procedure (Genetic Algorithm, GA) and a data compression procedure (Principal Component Analysis, PCA). The chemometric methods applied are PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and pharmaceutical dosage form by processing the UV spectral data. A 3-factor, 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten mixtures were used as a validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was assessed using the standard addition technique.
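A minimal PLS-1-style calibration of the kind described, sketched with scikit-learn on simulated spectra (the spectra, noise level, and number of latent variables are illustrative assumptions, not the paper's data):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
S = rng.random((3, 200))                     # pure-component UV spectra (made up)
C_cal = rng.uniform(0.2, 1.0, size=(15, 3))  # 15 calibration mixtures (AML, VAL, HCT)
X_cal = C_cal @ S + 0.01 * rng.standard_normal((15, 200))  # mixture spectra + noise

# PLS-1: one model per analyte, each regressing all wavelengths on a single drug
models = [PLSRegression(n_components=3).fit(X_cal, C_cal[:, j]) for j in range(3)]

C_val = rng.uniform(0.2, 1.0, size=(10, 3))  # 10-mixture validation set
X_val = C_val @ S + 0.01 * rng.standard_normal((10, 200))
for j, m in enumerate(models):
    print(f"analyte {j}: R^2 = {m.score(X_val, C_val[:, j]):.3f}")
```

The GA and PCA variants mentioned above would, respectively, select a subset of wavelength columns before fitting, and replace the raw spectra with their leading principal-component scores as ANN inputs.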
Multivariate Longitudinal Analysis with Bivariate Correlation Test.
Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory
2016-01-01
In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects, when the residual terms of the different dimensions are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions of the model's parameter estimators. These estimators can be used in the framework of multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. Using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets, of longitudinal multivariate type and multivariate multilevel type, respectively, the usefulness of the test is illustrated.
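The likelihood ratio test described above reduces to a chi-square comparison of the joint fit against a restricted fit in which the cross-outcome random-effect correlations are fixed at zero; a generic sketch (the log-likelihood values and degrees of freedom are placeholders):

```python
from scipy.stats import chi2

def lr_test(loglik_full, loglik_restricted, df):
    """LRT: 2 * (llf_full - llf_restricted) ~ chi2(df) under H0."""
    stat = 2.0 * (loglik_full - loglik_restricted)
    return stat, chi2.sf(stat, df)

# E.g., a bivariate model whose H0 sets all q1 x q2 cross-correlations between
# the two outcomes' random effects to zero (values are hypothetical):
stat, p = lr_test(loglik_full=-1520.3, loglik_restricted=-1527.9, df=4)
print(f"LRT = {stat:.2f}, p = {p:.4f}")   # small p favors joint modeling
```

Since a zero correlation lies in the interior of the parameter space (unlike a zero variance), the usual chi-square reference distribution applies here.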
Kinetic analysis of polyoxometalate (POM) oxidation of non-phenolic lignin model compound
Tomoya Yokoyama; Hou-min Chang; Ira A. Weinstock; Richard S. Reiner; John F. Kadla
2003-01-01
The kinetics and reaction mechanisms of non-phenolic lignin model compounds under anaerobic polyoxometalate (POM), Na5(+1.9)[SiV1(-0.1)MoW10(+0.1)O40], bleaching conditions were examined. Analyses using a syringyl-type model, 1-(3,4,5-trimethoxyphenyl)ethanol (1), a guaiacyl-type model, 1-(3,4-dimethoxyphenyl)ethanol (2), and 1-(4-ethoxy-3,5-dimethoxyphenyl)ethanol (3) suggest...
A coupled ice-ocean model of upwelling in the marginal ice zone
NASA Technical Reports Server (NTRS)
Roed, L. P.; Obrien, J. J.
1983-01-01
A dynamical coupled ice-ocean numerical model for the marginal ice zone (MIZ) is suggested and used to study upwelling dynamics in the MIZ. The nonlinear sea ice model has a variable ice concentration and includes internal ice stress. The model is forced by stresses on the air/ocean and air/ice surfaces. The main coupling between the ice and the ocean is in the form of an interfacial stress at the ice/ocean interface. The ocean model is a linear reduced gravity model. The wind stress exerted by the atmosphere on the ocean is proportional to the fraction of open water, while the ice/ocean interfacial stress is proportional to the ice concentration. A new mechanism for ice edge upwelling is suggested based on a geostrophic equilibrium solution for the sea ice medium. The upwelling reported in previous models invoking a stationary ice cover is shown to be replaced by a weak downwelling due to the ice motion. Most of the upwelling dynamics can be understood by analysis of the divergence of the across-ice-edge upper ocean transport. On the basis of the numerical model, an analytical model is suggested that reproduces most of the upwelling dynamics of the more complex numerical model.
Garavito, A.M.; Kooi, H.; Neuzil, C.E.
2006-01-01
We have numerically modeled evolving fluid pressures and concentrations from a nine-year in situ osmosis experiment in the Pierre Shale, South Dakota. These data were obtained and recently interpreted by one of us (C.E.N.) as indicating a potentially significant role for chemical osmosis in media like the Pierre Shale. That analysis considered only the final pressure differentials among boreholes that were assumed to represent osmotic equilibrium. For this study, the system evolution was modeled using a recently developed transient model for membrane transport. The model simulates hydraulically and chemically driven fluid and solute transport. The results yield an estimate of the thickness of the water film between the clay platelets b of 40 Å, which corresponds to an osmotic efficiency of 0.21 for the ambient pore water salinity of 3.5 g/l TDS. These values largely confirm the results of the earlier equilibrium analysis. However, the new model analysis provides additional constraints suggesting that intrinsic permeability k = 1.4 × 10⁻¹⁹ m², specific storage Ss = 1.7 × 10⁻⁵ m⁻¹, and diffusion coefficient D* = 6 × 10⁻¹¹ m²/s. The k value is larger than certain independent estimates, which range from 10⁻²¹ to 10⁻²⁰ m²; it may indicate opening of microcracks during the experiments. The fact that the complex transient pressure and concentration behavior for the individual wells could be reproduced quite accurately, and the inferred parameter values appear to be realistic for the Pierre Shale, suggests that the new model is a useful tool for modeling transient coupled flows in groundwater systems. © 2005 Elsevier Ltd. All rights reserved.
Modeled forest inventory data suggest climate benefits from fuels management
Jeremy S. Fried; Theresa B. Jain; Jonathan. Sandquist
2013-01-01
As part of a recent synthesis addressing fuel management in dry, mixed-conifer forests we analyzed more than 5,000 Forest Inventory and Analysis (FIA) plots, a probability sample that represents 33 million acres of these forests throughout Washington, Oregon, Idaho, Montana, Utah, and extreme northern California. We relied on the BioSum analysis framework that...
Users' Perceptions of the Web As Revealed by Transaction Log Analysis.
ERIC Educational Resources Information Center
Moukdad, Haidar; Large, Andrew
2001-01-01
Describes the results of a transaction log analysis of a Web search engine, WebCrawler, to analyze users' queries for information retrieval. Results suggest that most users do not employ advanced search features, and that the linguistic structure of queries often resembles a human-human communication model that is not always successful in human-computer communication.…
A Nested Analysis for Data Collected from Groups: Making Crowding Research More Efficient
ERIC Educational Resources Information Center
Schiffenbauer, Allen; And Others
1978-01-01
This paper examines the difference between dependent and independent responses and suggests that, for the case of independence, a nested analysis of variance model is appropriate. The advantages of this analytic approach are explained, and conditions are discussed under which more powerful tests of treatment effects may be obtained. (Author/MA)
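The nested design tests the treatment effect against the group-within-treatment mean square rather than the subject-level error; a compact sketch, assuming a balanced design (equal groups per treatment and subjects per group):

```python
import numpy as np
from scipy.stats import f as f_dist

def nested_anova(data):
    """data[t][g] = 1-D array of subject scores for group g nested in treatment t."""
    t, g, n = len(data), len(data[0]), len(data[0][0])
    grand = np.mean([s for treat in data for grp in treat for s in grp])
    treat_means = [np.mean(np.concatenate(treat)) for treat in data]
    grp_means = [[np.mean(grp) for grp in treat] for treat in data]

    ss_treat = g * n * sum((m - grand) ** 2 for m in treat_means)
    ss_grp = n * sum((grp_means[i][j] - treat_means[i]) ** 2
                     for i in range(t) for j in range(g))
    ms_treat = ss_treat / (t - 1)
    ms_grp = ss_grp / (t * (g - 1))           # groups are the error term here,
    F = ms_treat / ms_grp                     # not individual subjects
    return F, f_dist.sf(F, t - 1, t * (g - 1))
```

Treating the (non-independent) subjects within a crowded room as the error term would inflate the degrees of freedom; using groups as replicates is the correction the paper advocates.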
Computational modeling of the EGFR network elucidates control mechanisms regulating signal dynamics
2009-01-01
Background The epidermal growth factor receptor (EGFR) signaling pathway plays a key role in regulation of cellular growth and development. While highly studied, it is still not fully understood how the signal is orchestrated. One of the reasons for the complexity of this pathway is the extensive network of inter-connected components involved in the signaling. In the aim of identifying critical mechanisms controlling signal transduction we have performed extensive analysis of an executable model of the EGFR pathway using the stochastic pi-calculus as a modeling language. Results Our analysis, done through simulation of various perturbations, suggests that the EGFR pathway contains regions of functional redundancy in the upstream parts; in the event of low EGF stimulus or partial system failure, this redundancy helps to maintain functional robustness. Downstream parts, like the parts controlling Ras and ERK, have fewer redundancies, and more than 50% inhibition of specific reactions in those parts greatly attenuates signal response. In addition, we suggest an abstract model that captures the main control mechanisms in the pathway. Simulation of this abstract model suggests that without redundancies in the upstream modules, signal transduction through the entire pathway could be attenuated. In terms of specific control mechanisms, we have identified positive feedback loops whose role is to prolong the active state of key components (e.g., MEK-PP, Ras-GTP), and negative feedback loops that help promote signal adaptation and stabilization. Conclusions The insights gained from simulating this executable model facilitate the formulation of specific hypotheses regarding the control mechanisms of the EGFR signaling, and further substantiate the benefit to construct abstract executable models of large complex biological networks. PMID:20028552
Biomechanical stability analysis of the lambda-model controlling one joint.
Lan, L; Zhu, K Y
2007-06-01
Computer modeling and control of the human motor system might be helpful for understanding the mechanism of human motor system and for the diagnosis and treatment of neuromuscular disorders. In this paper, a brief view of the equilibrium point hypothesis for human motor system modeling is given, and the lambda-model derived from this hypothesis is studied. The stability of the lambda-model based on equilibrium and Jacobian matrix is investigated. The results obtained in this paper suggest that the lambda-model is stable and has a unique equilibrium point under certain conditions.
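The equilibrium/Jacobian analysis can be illustrated with a crude single-joint caricature: a spring-damper muscle torque referenced to a threshold angle λ (this toy system is our assumption, not the paper's λ-model equations). Linearize at the equilibrium and check that all Jacobian eigenvalues have negative real parts:

```python
import numpy as np

k, b, I, lam = 8.0, 1.5, 0.2, 0.5   # stiffness, damping, inertia, threshold angle

def f(x):
    """State x = (theta, omega); toy single-joint dynamics, muscle-like torque."""
    theta, omega = x
    torque = -k * (theta - lam) - b * omega
    return np.array([omega, torque / I])

def jacobian(f, x0, eps=1e-6):
    """Numerical Jacobian of f at x0 by central differences."""
    n = len(x0)
    J = np.zeros((n, n))
    for j in range(n):
        d = np.zeros(n); d[j] = eps
        J[:, j] = (f(x0 + d) - f(x0 - d)) / (2 * eps)
    return J

x_eq = np.array([lam, 0.0])          # equilibrium: joint at threshold, at rest
eigs = np.linalg.eigvals(jacobian(f, x_eq))
print("stable:", bool(np.all(eigs.real < 0)), eigs)
```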
Modeling Perceptual Decision Processes
2014-09-17
Previous research suggests that playing action video games improves performance on sensory, perceptual, and ... estimate the contribution of several underlying psychological processes. Their analysis indicated that playing action video games leads to faster ... third condition in which no video games were played at all. Behavioral data and diffusion model parameters showed similar practice effects for the
Future market scenarios for pulpwood supply from agricultural short-rotation woody crops
Alexander N. Moiseyev; Daniel G. de la Torre Ugarte; Peter J. Ince
2000-01-01
The North American Pulp And Paper (NAPAP) model and USDA POLYSYS agricultural policy analysis model were linked to project future market scenarios for pulpwood supply from agricultural short-rotation woody crops in the United States. Results suggest that pulpwood supply from fast- growing hybrid poplars and cottonwoods will become marginally economical but fairly...
ERIC Educational Resources Information Center
Sternod, Brandon M.
2011-01-01
In this article, the author examines popular written news media discourse from the United States concerning the "boy crisis," the gender gap, and male teachers as role models employing genealogical methodologies and theoretical concepts suggested by Foucault (1984, 1990, 1995). It is argued that such discourses reveal how "common…
Orbital evolution of space debris due to aerodynamic forces
NASA Astrophysics Data System (ADS)
Crowther, R.
1993-08-01
The concepts used in the AUDIT (Assessment Using Debris Impact Theory) debris modelling suite are introduced. A sensitivity analysis is carried out to determine the dominant parameters in the modelling process. A test case simulating the explosion of a satellite suggests that at the parent altitude there is a greater probability of collision with more massive fragments.
Tuition at PhD-Granting Institutions: A Supply and Demand Model.
ERIC Educational Resources Information Center
Koshal, Rajindar K.; And Others
1994-01-01
Builds and estimates a model that explains educational supply and demand behavior at PhD-granting institutions in the United States. The statistical analysis based on 1988-89 data suggests that student quantity, educational costs, average SAT score, class size, percentage of faculty with a PhD, graduation rate, ranking, and existence of a medical…
The Dubious Benefits of Multi-Level Modeling
ERIC Educational Resources Information Center
Gorard, Stephen
2007-01-01
This paper presents an argument against the wider adoption of complex forms of data analysis, using multi-level modeling (MLM) as an extended case study. MLM was devised to overcome some deficiencies in existing datasets, such as the bias caused by clustering. The paper suggests that MLM has an unclear theoretical and empirical basis, has not led…
1983-06-16
has been advocated by Gnanadesikan and Wilk (1969), and others in the literature. This suggests that, if we use the formal significance test type...American Statistical Asso., 62, 1159-1178. Gnanadesikan, R., and Wilk, M.B. (1969). Data Analytic Methods in Multivariate Statistical Analysis. In
ERIC Educational Resources Information Center
Museus, Samuel D.; Vue, Rican
2013-01-01
The purpose of this study is to examine socioeconomic differences in the interpersonal factors that influence college access among Asian Americans and Pacific Islanders (AAPIs). Data on 1,460 AAPIs from the Education Longitudinal Study (ELS: 02/06) were analyzed using structural equation modeling techniques. Findings suggest that parental…
ERIC Educational Resources Information Center
Al-Alwan, Ahmed F.
2014-01-01
The author proposed a model to explain how parental involvement and school engagement relate to academic performance. Participants were 671 9th- and 10th-grade students who completed two scales, "parental involvement" and "school engagement," in their regular classrooms. Results of the path analysis suggested that the…
ERIC Educational Resources Information Center
Xia, Xinrong
2010-01-01
Based on an analysis of a questionnaire survey of the learning motivation and learning needs of postgraduates, and of their demands and suggestions regarding English teaching, the paper explores an English course model for postgraduates in agricultural universities. Under the guidance of academic game theory, the "language skills+…
MetaDP: a comprehensive web server for disease prediction of 16S rRNA metagenomic datasets.
Xu, Xilin; Wu, Aiping; Zhang, Xinlei; Su, Mingming; Jiang, Taijiao; Yuan, Zhe-Ming
2016-01-01
High-throughput sequencing-based metagenomics has garnered considerable interest in recent years. Numerous methods and tools have been developed for the analysis of metagenomic data. However, it is still a daunting task to install a large number of tools and complete a complicated analysis, especially for researchers with minimal bioinformatics backgrounds. To address this problem, we constructed an automated software platform named MetaDP for 16S rRNA sequencing data analysis, including data quality control, operational taxonomic unit clustering, diversity analysis, and disease risk prediction modeling. Furthermore, a support vector machine-based prediction model for irritable bowel syndrome (IBS) was built by applying MetaDP to microbial 16S sequencing data from 108 children. The success of the IBS prediction model suggests that the platform may also be applied to other diseases related to gut microbes, such as obesity, metabolic syndrome, or intestinal cancer, among others (http://metadp.cn:7001/).
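The disease-prediction step amounts to a standard supervised pipeline on the OTU abundance table; a minimal sketch with scikit-learn (the feature counts and labels are simulated stand-ins, and MetaDP's actual feature processing is more involved):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.poisson(5, size=(108, 300)).astype(float)  # 108 samples x 300 OTU counts
X = X / X.sum(axis=1, keepdims=True)               # convert to relative abundances
y = rng.integers(0, 2, size=108)                   # IBS vs. control labels (dummy)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())     # cross-validated accuracy
```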
NASA Astrophysics Data System (ADS)
Nevitt, Johanna M.; Warren, Jessica M.; Kidder, Steven; Pollard, David D.
2017-03-01
Granitic plutons commonly preserve evidence for jointing, faulting, and ductile fabric development during cooling. Constraining the spatial variation and temporal evolution of temperature during this deformation could facilitate an integrated analysis of heterogeneous deformation over multiple length-scales through time. Here, we constrain the evolving temperature of the Lake Edison granodiorite within the Mount Abbot Quadrangle (central Sierra Nevada, CA) during late Cretaceous deformation by combining microstructural analysis, titanium-in-quartz thermobarometry (TitaniQ), and thermal modeling. Microstructural and TitaniQ analyses were applied to 12 samples collected throughout the pluton, representative of either the penetrative "regional" fabric or the locally strong "fault-related" fabric. Overprinting textures and mineral assemblages indicate the temperature decreased from 400-500°C to <350°C during faulting. TitaniQ reveals consistently lower Ti concentrations for partially reset fault-related fabrics (average: 12 ± 4 ppm) than for regional fabrics (average: 31 ± 12 ppm), suggesting fault-related fabrics developed later, following a period of pluton cooling. Uncertainties, particularly in TiO2 activity, significantly limit further quantitative thermal estimates using TitaniQ. In addition, we present a 1-D heat conduction model that suggests average pluton temperature decreased from 585°C at 85 Ma to 332°C at 79 Ma, consistent with radiometric age data for the field. Integrated with the model results, microstructural temperature constraints suggest faulting initiated by ~83 Ma, when the temperature was nearly uniform across the pluton. Thus, spatially heterogeneous deformation cannot be attributed to a persistent temperature gradient, but may be related to regional structures that develop in cooling plutons.
Groundwater model of the Blue River basin, Nebraska-Twenty years later
Alley, W.M.; Emery, P.A.
1986-01-01
Groundwater flow models have become almost a routine tool of the practicing hydrologist. Yet, surprisingly little attention has been given to true verification analysis of studies using these models. This paper examines predictions for 1982 of water-level declines and streamflow depletions that were made in 1965 using an electric analog groundwater model of the Blue River basin in southeastern Nebraska. Analysis of the model's predictions suggests that the analog model used too low an estimate of net groundwater withdrawals, yet overestimated water-level declines. The model predicted that almost all of the net groundwater pumpage would come from storage in the Pleistocene aquifer within the Blue River basin. It appears likely that the model underestimated the contributions of other sources of water to the pumpage, and that the aquifer storage coefficients used in the model were too low. There is some evidence that groundwater pumpage has had a greater than predicted effect on streamflow. Considerable uncertainty about the basic conceptualization of the hydrology of the Blue River basin greatly limits the reliability of groundwater models developed for the basin. The paper concludes with general perspectives on groundwater modeling gained from this post-audit analysis. © 1986.
New upper mantle model for North America: no longer a pyrolite composition?
NASA Astrophysics Data System (ADS)
Perchuc, E.; Malinowski, M.
2009-04-01
We compare traveltime data for P and S waves from long-range seismic profiles and from earthquakes recorded to offsets of 3000 km with theoretical traveltimes predicted by the standard seismological models PREM, IASP-91, and AK-135, and especially by the seismo-petrological model PREF (Cammarano and Romanowicz, 2007). We also used data from the North American array. Our analysis suggests that for several events in the distance range 2000-3000 km, the first arrivals are characterized by a relatively high velocity of 8.7-8.9 km/s, about 2.5% higher than the P-wave velocity of the Lehmann phases observed at nearer offsets and about 3% lower than the velocity below the 410 km discontinuity. The S-wave model suggests significant differences in the Vp/Vs ratio. We suggest that this is a new first-order seismological boundary which can be interpreted as the top of the mantle transition zone. Seismological arguments for the existence of such a boundary are as follows: refracted waves with velocity 8.7-8.9 km/s, and reflected waves found by Warren et al. (1967) and by Thybo and Perchuc (1997b). Several recent publications have suggested the existence of a low-velocity zone above the 410-km discontinuity, and we also see this feature in our studies. An important suggestion is the existence of a 300 km discontinuity below cold areas; it is difficult to exclude this boundary elsewhere, although phases from it appear only in secondary impulses. The depth of this boundary depends strongly on the thermal state of the mantle in particular regions. In conclusion, the mantle transition zone starts much shallower, and the lower part of the upper mantle is much faster, than predicted by a purely pyrolitic mantle model. Several petrological studies suggest the influence of fluids (especially H2O) on the character of the 410 km discontinuity and of the transition zone. All the differences in the experimental data can be explained by the effect of temperature on the phase transformations within the olivine-wadsleyite system.
Evaluation model of distribution network development based on ANP and grey correlation analysis
NASA Astrophysics Data System (ADS)
Ma, Kaiqiang; Zhan, Zhihong; Zhou, Ming; Wu, Qiang; Yan, Jun; Chen, Genyong
2018-06-01
The existing distribution network evaluation system cannot scientifically and comprehensively reflect the status of distribution network development. Furthermore, the existing evaluation model is monotonous and unsuitable for horizontal comparison across regional power grids. For these reasons, this paper constructs a universally applicable evaluation index system and model for distribution network development. First, the evaluation system is built around power supply capability, grid structure, technical equipment, intelligence level, grid efficiency, and grid development benefit. Then the comprehensive weights of the indices are calculated by combining AHP with grey correlation analysis. Finally, the index scoring functions are obtained by fitting the index evaluation criteria to curves, and a multiply-and-add (weighted sum) operator is used to produce the evaluation result for each sample. The example analysis shows that the model can reflect the development of a distribution network and identify the strengths and weaknesses of that development. The model also provides suggestions for the development and construction of distribution networks.
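The grey correlation (grey relational analysis) step can be sketched as follows: normalize each index, measure each grid's deviation from an ideal reference series, and convert deviations to relational coefficients. The distinguishing coefficient rho = 0.5 is the conventional choice, the index matrix is made up, and equal index weights stand in for the paper's AHP-derived weights:

```python
import numpy as np

def grey_relational_grades(X, rho=0.5):
    """X: rows = regional grids, cols = evaluation indices (larger = better)."""
    Z = (X - X.min(0)) / (X.max(0) - X.min(0))   # min-max normalize each index
    ref = Z.max(axis=0)                          # ideal reference series
    delta = np.abs(Z - ref)                      # deviation from the reference
    xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return xi.mean(axis=1)                       # grade per grid (equal weights)

X = np.array([[0.82, 1.4, 0.65],                 # hypothetical index values for
              [0.91, 1.1, 0.80],                 # three regional grids
              [0.75, 1.6, 0.70]])
print(grey_relational_grades(X))
```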
Signal Processing in Periodically Forced Gradient Frequency Neural Networks
Kim, Ji Chul; Large, Edward W.
2015-01-01
Oscillatory instability at the Hopf bifurcation is a dynamical phenomenon that has been suggested to characterize active non-linear processes observed in the auditory system. Networks of oscillators poised near Hopf bifurcation points and tuned to tonotopically distributed frequencies have been used as models of auditory processing at various levels, but systematic investigation of the dynamical properties of such oscillatory networks is still lacking. Here we provide a dynamical systems analysis of a canonical model for gradient frequency neural networks driven by a periodic signal. We use linear stability analysis to identify various driven behaviors of canonical oscillators for all possible ranges of model and forcing parameters. The analysis shows that canonical oscillators exhibit qualitatively different sets of driven states and transitions for different regimes of model parameters. We classify the parameter regimes into four main categories based on their distinct signal processing capabilities. This analysis will lead to deeper understanding of the diverse behaviors of neural systems under periodic forcing and can inform the design of oscillatory network models of auditory signal processing. PMID:26733858
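A minimal numerical sketch of a periodically forced oscillator of the type analyzed, using the truncated canonical form ż = z(α + iω + β|z|²) + F e^{iω₀t}; the full gradient-frequency network model includes higher-order and coupling terms omitted here, and the parameter values are illustrative:

```python
import numpy as np

alpha, omega, beta = -0.1, 2*np.pi*1.0, -1.0   # oscillator parameters (illustrative)
F, omega0 = 0.2, 2*np.pi*1.1                   # forcing amplitude and frequency

dt, T = 1e-3, 50.0
z = 0.01 + 0j
amps = []
for n in range(int(T / dt)):                   # forward-Euler integration
    t = n * dt
    dz = z * (alpha + 1j*omega + beta*abs(z)**2) + F*np.exp(1j*omega0*t)
    z += dt * dz
    amps.append(abs(z))

print("late-time amplitude:", np.mean(amps[-5000:]))  # steady driven amplitude
```

Sweeping alpha through zero (the Hopf point) and the detuning omega0 − omega is how one maps the qualitatively different driven states, phase-locked versus drifting, that the linear stability analysis classifies.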
Analysis of the Temperature and Strain-Rate Dependences of Strain Hardening
NASA Astrophysics Data System (ADS)
Kreyca, Johannes; Kozeschnik, Ernst
2018-01-01
A classical constitutive modeling-based Ansatz for the impact of thermal activation on the stress-strain response of metallic materials is compared with the state parameter-based Kocks-Mecking model. The predicted functional dependencies suggest that, in the first approach, only the dislocation storage mechanism is a thermally activated process, whereas, in the second approach, only the mechanism of dynamic recovery is. In contradiction to each of these individual approaches, our analysis and comparison with experimental evidence shows that thermal activation contributes both to dislocation generation and annihilation.
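For reference, the dislocation-density evolution law usually associated with the Kocks-Mecking model takes the standard textbook form below (not copied from this paper's equations):

```latex
\frac{d\rho}{d\varepsilon} = k_1 \sqrt{\rho} \; - \; k_2(\dot{\varepsilon}, T)\,\rho,
\qquad
\sigma = \sigma_0 + \alpha M G b \sqrt{\rho}
```

Here ρ is the dislocation density, k₁√ρ the athermal storage term, and k₂ the dynamic-recovery coefficient; placing the strain-rate and temperature dependence entirely in k₂ is the state-parameter reading contrasted above with the constitutive Ansatz, in which thermal activation instead enters the storage mechanism.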
An ideal clamping analysis for a cross-ply laminate
NASA Technical Reports Server (NTRS)
Valisetty, R. R.; Murthy, P. L. N.; Rehfield, L. W.
1988-01-01
Different elementary clamping models are discussed for a three-layer cross-ply laminate to study the sensitivity of clamping to the definition of cross-sectional rotation. All of these models leave considerable residual warping at the edges. Using a complementary energy principle and the principle of superposition, an analysis is conducted to reduce this residual warping, which led to the identification of the exact interior solution corresponding to ideal clamping. The study also suggests the presence of stress singularities at the corners and between different layers near the fixed edge.
Observation uncertainty in reversible Markov chains.
Metzner, Philipp; Weber, Marcus; Schütte, Christof
2010-09-01
In many applications one is interested in finding a simplified model which captures the essential dynamical behavior of a real-life process. If the essential dynamics can be assumed to be (approximately) memoryless, then a reasonable choice for a model is a Markov model whose parameters are estimated by means of Bayesian inference from an observed time series. We propose an efficient Markov chain Monte Carlo framework to assess the uncertainty of the Markov model and related observables. The derived Gibbs sampler allows for sampling distributions of transition matrices subject to reversibility and/or sparsity constraints. The performance of the suggested sampling scheme is demonstrated and discussed for a variety of model examples. The uncertainty analysis of functions of the Markov model under investigation is discussed in application to the identification of conformations of the trialanine molecule via Robust Perron Cluster Analysis (PCCA+).
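A much-simplified stand-in for the sampling idea: draw transition matrices from the row-wise Dirichlet posterior given observed transition counts and propagate each draw through an observable. Note that the reversibility and sparsity constraints the paper's Gibbs sampler enforces are not imposed in this sketch, and the count matrix is invented:

```python
import numpy as np

rng = np.random.default_rng(0)
C = np.array([[90,  8,  2],     # observed transition counts between 3 states
              [10, 70, 20],
              [ 3, 15, 82]])

def stationary(P):
    """Stationary distribution: dominant left eigenvector of P."""
    vals, V = np.linalg.eig(P.T)
    pi = np.abs(V[:, np.argmax(vals.real)].real)
    return pi / pi.sum()

samples = []
for _ in range(2000):
    P = np.array([rng.dirichlet(row + 1) for row in C])  # posterior draw per row
    samples.append(stationary(P)[0])                     # observable: pi_1

print("pi_1 mean +/- sd:", np.mean(samples), np.std(samples))
```

The spread of the sampled observable is exactly the kind of uncertainty statement the paper derives, except that its sampler restricts the draws to the reversible (detailed-balance) subset of transition matrices.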
Nano- and micro-electromechanical switch dynamics
NASA Astrophysics Data System (ADS)
Pulskamp, Jeffrey S.; Proie, Robert M.; Polcawich, Ronald G.
2013-01-01
This paper reports theoretical analysis and experimental results on the dynamics of piezoelectric MEMS mechanical logic relays. The multiple degree of freedom analytical model, based on modal decomposition, utilizes modal parameters obtained from finite element analysis and an analytical model of piezoelectric actuation. The model accounts for exact device geometry, damping, drive waveform variables, and high electric field piezoelectric nonlinearity. The piezoelectrically excited modal force is calculated directly and provides insight into design optimization for switching speed. The model accurately predicts the propagation delay dependence on actuation voltage of mechanically distinct relay designs. The model explains the observed discrepancies in switching speed of these devices relative to single degree of freedom switching speed models and suggests the strong potential for improved switching speed performance in relays designed for mechanical logic and RF circuits through the exploitation of higher order vibrational modes.
Theoretical and experimental study of polycyclic aromatic compounds as β-tubulin inhibitors.
Olazarán, Fabian E; García-Pérez, Carlos A; Bandyopadhyay, Debasish; Balderas-Rentería, Isaias; Reyes-Figueroa, Angel D; Henschke, Lars; Rivera, Gildardo
2017-03-01
In this work, through a docking analysis of compounds from the ZINC chemical library against human β-tubulin using a high-performance computer cluster, we report new polycyclic aromatic compounds that bind with high energy at the colchicine binding site of β-tubulin, suggesting three new key amino acids. However, molecular dynamics analysis showed low stability in the interaction between ligand and receptor. The results were confirmed experimentally in in vitro and in vivo models, suggesting that molecular dynamics simulation is the best option to find new potential β-tubulin inhibitors. Graphical abstract: Bennett's acceptance ratio (BAR) method.
2011-01-01
Background Xenobiotics represent an environmental stress and as such are a source for antibiotics, including the isoquinoline (IQ) compound IQ-143. Here, we demonstrate the utility of complementary analysis of both host and pathogen datasets in assessing bacterial adaptation to IQ-143, a synthetic analog of the novel type N,C-coupled naphthyl-isoquinoline alkaloid ancisheynine. Results Metabolite measurements, gene expression data and functional assays were combined with metabolic modeling to assess the effects of IQ-143 on Staphylococcus aureus, Staphylococcus epidermidis and human cell lines, as a potential paradigm for novel antibiotics. Genome annotation and PCR validation identified novel enzymes in the primary metabolism of staphylococci. Gene expression response analysis and metabolic modeling demonstrated the adaptation of enzymes to IQ-143, including those not affected by significant gene expression changes. At lower concentrations, IQ-143 was bacteriostatic, and at higher concentrations bactericidal, while the analysis suggested that the mode of action was a direct interference in nucleotide and energy metabolism. Experiments in human cell lines supported the conclusions from pathway modeling and found that IQ-143 had low cytotoxicity. Conclusions The data suggest that IQ-143 is a promising lead compound for antibiotic therapy against staphylococci. The combination of gene expression and metabolite analyses with in silico modeling of metabolite pathways allowed us to study metabolic adaptations in detail and can be used for the evaluation of metabolic effects of other xenobiotics. PMID:21418624
The high-energy-density counterpropagating shear experiment and turbulent self-heating
Doss, F. W.; Fincke, J. R.; Loomis, E. N.; ...
2013-12-06
The counterpropagating shear experiment has previously demonstrated the ability to create regions of shock-driven shear, balanced symmetrically in pressure and experiencing minimal net drift. This allows for the creation of a high-Mach-number high-energy-density shear environment. New data from the counterpropagating shear campaign are presented, and both hydrocode modeling and theoretical analysis in the context of a Reynolds-averaged Navier-Stokes model suggest that turbulent dissipation of energy from the supersonic flow bounding the layer is a significant driver in its expansion. A theoretical minimum shear-flow Mach number threshold is suggested for substantial thermal-turbulence coupling.
Séror, Ann C
2002-12-01
The Internet and emergent telecommunications infrastructures are transforming the future of health care management. The costs of health care delivery systems, products, and services continue to rise everywhere, but performance of health care delivery is associated with institutional and ideological considerations as well as availability of financial and technological resources. The objective was to identify the effects of ideological differences on health care market infrastructures, including the Internet and telecommunications technologies, through a comparative case analysis of two large health care organizations: the British National Health Service and the California-based Kaiser Permanente health maintenance organization. A qualitative comparative analysis of the two organizations shows how system infrastructures vary according to market dynamics dominated by health care institutions ("push") or by consumer demand ("pull"). System control mechanisms may be technologically embedded, institutional, or behavioral. The analysis suggests that telecommunications technologies and the Internet may contribute significantly to health care system performance in a context of ideological diversity. The study offers evidence to validate alternative models of health care governance: the national constitution model and the enterprise business contract model. This evidence also suggests important questions for health care policy makers as well as researchers in telecommunications, organizational theory, and health care management. PMID:12554552
Conceição, Cristiano Sena da; Neto, Mansueto Gomes; Neto, Anolino Costa; Mendes, Selena M D; Baptista, Abrahão Fontes; Sá, Kátia Nunes
2016-01-01
To test the reliability and validity of the AOFAS scale in a sample of rheumatoid arthritis patients. The scale was applied to rheumatoid arthritis patients, twice by interviewer 1 and once by interviewer 2. The AOFAS was subjected to test-retest reliability analysis (with 20 rheumatoid arthritis subjects). The psychometric properties were investigated using Rasch analysis on 33 rheumatoid arthritis patients. Intra-class correlation coefficients (ICC) were (0.90
Tobin, David L; Banker, Judith D; Weisberg, Laura; Bowers, Wayne
2007-12-01
Although several studies have shown that eating disorders clinicians do not generally use treatment manuals, findings regarding what they do use have typically been vague, or closely linked to a particular theoretical approach. Our goal was to identify what eating disorder clinicians do with their patients in a more theoretically neutral context. We also sought to describe an empirically defined approach to psychotherapeutic practice as defined by clinicians via factor analysis. A survey developed for this study was administered to 265 clinicians recruited online and at regional and international meetings for eating disorders professionals. Only 6% of respondents reported they adhered closely to treatment manuals and 98% of the respondents indicated they used both behavioral and dynamically informed interventions. Factor analysis of clinicians' use of 32 therapeutic strategies suggested seven dimensions: Psychodynamic Interventions, Coping Skills Training, Family History, CBT, Contracts, Therapist Disclosure, and Patient Feelings. The findings of this study suggest that most clinicians use a wide array of eating disorder treatment interventions drawn from empirically supported treatments, such as CBT-BN, and from treatments that have no randomized controlled trial support. Factor analysis suggested theoretically linked dimensions of treatment, but also dimensions that are common across models. (c) 2007 by Wiley Periodicals, Inc.
New segregation analysis of panic disorder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vieland, V.J.; Fyer, A.J.; Chapman, T.
1996-04-09
We performed simple segregation analyses of panic disorder using 126 families of probands with DSM-III-R panic disorder who were ascertained for a family study of anxiety disorders at an anxiety disorders research clinic. We present parameter estimates for dominant, recessive, and arbitrary single major locus models without sex effects, as well as for a nongenetic transmission model, and compare these models to each other and to models obtained by other investigators. We rejected the nongenetic transmission model when comparing it to the recessive model. Consistent with some previous reports, we find comparable support for dominant and recessive models, and in both cases estimate nonzero phenocopy rates. The effect of restricting the analysis to families of probands without any lifetime history of comorbid major depression (MDD) was also examined. No notable differences in parameter estimates were found in that subsample, although the power of that analysis was low. Consistency between the findings in our sample and in another independently collected sample suggests the possibility of pooling such samples in the future in order to achieve the necessary power for more complex analyses. 32 refs., 4 tabs.
Advanced Fault Diagnosis Methods in Molecular Networks
Habibi, Iman; Emamian, Effat S.; Abdi, Ali
2014-01-01
Analysis of the failure of cell signaling networks is an important topic in systems biology and has applications in target discovery and drug development. In this paper, some advanced methods for fault diagnosis in signaling networks are developed and then applied to a caspase network and an SHP2 network. The goal is to understand how, and to what extent, the dysfunction of molecules in a network contributes to the failure of the entire network. Network dysfunction (failure) is defined as failure to produce the expected outputs in response to the input signals. Vulnerability level of a molecule is defined as the probability of the network failure, when the molecule is dysfunctional. In this study, a method to calculate the vulnerability level of single molecules for different combinations of input signals is developed. Furthermore, a more complex yet biologically meaningful method for calculating the multi-fault vulnerability levels is suggested, in which two or more molecules are simultaneously dysfunctional. Finally, a method is developed for fault diagnosis of networks based on a ternary logic model, which considers three activity levels for a molecule instead of the previously published binary logic model, and provides equations for the vulnerabilities of molecules in a ternary framework. Multi-fault analysis shows that the pairs of molecules with high vulnerability typically include a highly vulnerable molecule identified by the single fault analysis. The ternary fault analysis for the caspase network shows that predictions obtained using the more complex ternary model are about the same as the predictions of the simpler binary approach. This study suggests that by increasing the number of activity levels the complexity of the model grows; however, the predictive power of the ternary model does not appear to be increased proportionally. PMID:25290670
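The single-fault vulnerability computation can be sketched on a toy Boolean network: fix one molecule at a dysfunctional value, sweep the input combinations, and record how often the network output deviates from the fault-free output. The three-node network below is an illustration of the method only, not the caspase or SHP2 network:

```python
import itertools

def network(x1, x2, x3, stuck=None):
    """Toy signaling network; `stuck` maps a node name to a forced value."""
    nodes = {}
    nodes["a"] = x1 and x2          # AND gate
    nodes["b"] = not x3             # inhibitory input
    if stuck:
        nodes.update(stuck)         # impose the dysfunction
    return nodes["a"] or nodes["b"] # network output

def vulnerability(node):
    inputs = list(itertools.product([0, 1], repeat=3))
    failures = sum(network(*u) != network(*u, stuck={node: 0})  # stuck-at-0 fault
                   for u in inputs)
    return failures / len(inputs)   # P(network failure | node dysfunctional)

for node in ("a", "b"):
    print(node, vulnerability(node))
```

The multi-fault and ternary extensions described above replace the single `stuck` entry with pairs of stuck nodes, and the binary values with three activity levels, respectively.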
Evaluation of the Edinburgh Post Natal Depression Scale using Rasch analysis
Pallant, Julie F; Miller, Renée L; Tennant, Alan
2006-01-01
Background The Edinburgh Postnatal Depression Scale (EPDS) is a 10 item self-rating post-natal depression scale which has seen widespread use in epidemiological and clinical studies. Concern has been raised over the validity of the EPDS as a single summed scale, with suggestions that it measures two separate aspects, one of depressive feelings, the other of anxiety. Methods As part of a larger cross-sectional study conducted in Melbourne, Australia, a community sample (324 women, ranging in age from 18 to 44 years: mean = 32 yrs, SD = 4.6), was obtained by inviting primiparous women to participate voluntarily in this study. Data from the EPDS were fitted to the Rasch measurement model and tested for appropriate category ordering, for item bias through Differential Item Functioning (DIF) analysis, and for unidimensionality through tests of the assumption of local independence. Results Rasch analysis of the data from the ten item scale initially demonstrated a lack of fit to the model with a significant Item-Trait Interaction total chi-square (chi-square = 82.8, df = 40; p < .001). Removal of two items (items 7 and 8) resulted in a non-significant Item-Trait Interaction total chi-square with a residual mean value for items of -0.467 with a standard deviation of 0.850, showing fit to the model. No DIF existed in the final 8-item scale (EPDS-8) and all items showed fit to model expectations. Principal Components Analysis of the residuals supported the local independence assumption, and unidimensionality of the revised EPDS-8 scale. Revised cut points were identified for EPDS-8 to maintain the case identification of the original scale. Conclusion The results of this study suggest that EPDS, in its original 10 item form, is not a viable scale for the unidimensional measurement of depression. Rasch analysis suggests that a revised eight item version (EPDS-8) would provide a more psychometrically robust scale. The revised cut points of 7/8 and 9/10 for the EPDS-8 show high levels of agreement with the original case identification for the EPDS-10. PMID:16768803
Nonlinear Poisson Equation for Heterogeneous Media
Hu, Langhua; Wei, Guo-Wei
2012-01-01
The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarization in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration hyperpolarization effects due to intensive charges and possibly nonlinear, anisotropic, and heterogeneous media. A variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation to solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using the geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and by the solvation analysis of a set of 17 small molecules whose experimental measurements are also available for comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical predictions, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. PMID:22947937
Immortal time bias in observational studies of time-to-event outcomes.
Jones, Mark; Fowler, Robert
2016-12-01
The purpose of the study is to show, through simulation and example, the magnitude and direction of immortal time bias when an inappropriate analysis is used. We compare 4 methods of analysis for observational studies of time-to-event outcomes: logistic regression, standard Cox model, landmark analysis, and time-dependent Cox model using an example data set of patients critically ill with influenza and a simulation study. For the example data set, logistic regression, standard Cox model, and landmark analysis all showed some evidence that treatment with oseltamivir provides protection from mortality in patients critically ill with influenza. However, when the time-dependent nature of treatment exposure is taken account of using a time-dependent Cox model, there is no longer evidence of a protective effect of treatment. The simulation study showed that, under various scenarios, the time-dependent Cox model consistently provides unbiased treatment effect estimates, whereas standard Cox model leads to bias in favor of treatment. Logistic regression and landmark analysis may also lead to bias. To minimize the risk of immortal time bias in observational studies of survival outcomes, we strongly suggest time-dependent exposures be included as time-dependent variables in hazard-based analyses. Copyright © 2016 Elsevier Inc. All rights reserved.
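The recommended analysis splits each treated patient's follow-up at the moment treatment starts, so that no person-time is counted as "treated" before the drug is actually received; a sketch using the lifelines CoxTimeVaryingFitter on a tiny invented dataset (the column names and values are our own, not the influenza study's data):

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Long format: one row per (patient, interval); `treated` switches from 0 to 1
# at the time treatment begins, instead of being coded as a baseline covariate.
rows = [
    # id, start, stop, treated, event
    (1, 0.0,  2.0, 0, 0),   # patient 1: untreated until day 2...
    (1, 2.0, 14.0, 1, 1),   # ...then treated; died on day 14
    (2, 0.0,  5.0, 0, 1),   # patient 2: never treated; died on day 5
    (3, 0.0,  3.0, 0, 0),
    (3, 3.0, 20.0, 1, 0),   # patient 3: treated from day 3; censored
    (4, 0.0, 18.0, 0, 0),   # patient 4: never treated; censored
]
df = pd.DataFrame(rows, columns=["id", "start", "stop", "treated", "event"])

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()
```

Coding `treated` as a fixed baseline covariate instead would credit the pre-treatment ("immortal") interval to the treated group, producing exactly the spurious protective effect the simulation study demonstrates.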
An operational global-scale ocean thermal analysis system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancy, R. M.; Pollak, K.D.; Phoebus, P.A.
1990-04-01
The Optimum Thermal Interpolation System (OTIS) is an ocean thermal analysis system designed for operational use at FNOC. It is based on the optimum interpolation data assimilation technique and functions in an analysis-prediction-analysis data assimilation cycle with the TOPS mixed-layer model. OTIS provides a rigorous framework for combining real-time data, climatology, and predictions from numerical ocean prediction models to produce a large-scale synoptic representation of ocean thermal structure. The techniques and assumptions used in OTIS are documented and results of operational tests of global-scale OTIS at FNOC are presented. The tests involved comparisons of OTIS against an existing operational ocean thermal structure model and were conducted during February, March, and April 1988. Qualitative comparison of the two products suggests that OTIS gives a more realistic representation of subsurface anomalies and horizontal gradients and that it also gives a more accurate analysis of the thermal structure, with improvements largest below the mixed layer. 37 refs.
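The optimum interpolation update at the heart of such systems has a compact linear-algebra form; the toy sketch below (illustrative covariances and sizes, not OTIS's operational statistics) blends a background field with observations weighted by their error statistics.

```python
import numpy as np

# Optimum interpolation (OI) analysis update:
#   x_a = x_b + K (y - H x_b),  K = B H^T (H B H^T + R)^(-1)
# x_b: background (e.g., climatology or model forecast), y: observations.
def oi_update(x_b, y, H, B, R):
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain: data vs background weight
    return x_b + K @ (y - H @ x_b)

x_b = np.zeros(5)                                   # 5-point background field
H = np.array([[1., 0, 0, 0, 0], [0, 0, 1., 0, 0]])  # two observed grid points
B = np.exp(-np.abs(np.subtract.outer(np.arange(5), np.arange(5))) / 2.0)
R = 0.1 * np.eye(2)                                 # observation error covariance
print(oi_update(x_b, np.array([1.0, 0.5]), H, B, R).round(3))
```

Because B carries spatial correlations, the two observations also pull neighboring unobserved grid points toward the data.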
Gurarie, David; King, Charles H.
2014-01-01
Mathematical modeling is widely used for predictive analysis of control options for infectious agents. Challenging problems arise for modeling host-parasite systems having complex life-cycles and transmission environments. Macroparasites, like Schistosoma, inhabit highly fragmented habitats that shape their reproductive success and distribution. Overdispersion and mating success are important factors to consider in modeling control options for such systems. Simpler models based on mean worm burden (MWB) formulations do not take these into account and overestimate transmission. Proposed MWB revisions have employed prescribed distributions and mating factor corrections to derive modified MWB models that have qualitatively different equilibria, including ‘breakpoints’ below which the parasite goes to extinction, suggesting the possibility of elimination via long-term mass-treatment control. Despite common use, no one has attempted to validate the scope and hypotheses underlying such MWB approaches. We conducted a systematic analysis of both the classical MWB and more recent “stratified worm burden” (SWB) modeling that accounts for mating and reproductive hurdles (Allee effect). Our analysis reveals some similarities, including breakpoints, between MWB and SWB, but also significant differences between the two types of model. We show the classic MWB has inherent inconsistencies, and propose SWB as a reliable alternative for projection of long-term control outcomes. PMID:25549362
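The breakpoint behavior discussed above can be illustrated with a deliberately simplified mean-worm-burden ODE that includes an Allee-type mating factor; the functional forms and parameter values below are invented for illustration, not the MWB/SWB formulations analyzed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy mean-worm-burden model with a crude mating-success factor phi(W):
# low burdens suppress reproduction (Allee effect), creating a breakpoint.
def dW_dt(t, W, beta=0.8, sigma=0.25, k=0.3):
    phi = W / (W + k)                      # mating factor: -> 0 at low burden
    return beta * phi * W / (1 + W) - sigma * W

for W0 in (0.1, 2.0):
    sol = solve_ivp(dW_dt, (0, 400), [W0])
    print(f"W0 = {W0}: W(400) = {sol.y[0, -1]:.3f}")
# below the breakpoint the burden collapses to 0; above it, an endemic level
```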
Radiomics-based Prognosis Analysis for Non-Small Cell Lung Cancer
NASA Astrophysics Data System (ADS)
Zhang, Yucheng; Oikonomou, Anastasia; Wong, Alexander; Haider, Masoom A.; Khalvati, Farzad
2017-04-01
Radiomics characterizes tumor phenotypes by extracting large numbers of quantitative features from radiological images. Radiomic features have been shown to provide prognostic value in predicting clinical outcomes in several studies. However, several challenges including feature redundancy, unbalanced data, and small sample sizes have led to relatively low predictive accuracy. In this study, we explore different strategies for overcoming these challenges and improving predictive performance of radiomics-based prognosis for non-small cell lung cancer (NSCLC). CT images of 112 patients (mean age 75 years) with NSCLC who underwent stereotactic body radiotherapy were used to predict recurrence, death, and recurrence-free survival using a comprehensive radiomics analysis. Different feature selection and predictive modeling techniques were used to determine the optimal configuration of prognosis analysis. To address feature redundancy, comprehensive analysis indicated that Random Forest models and Principal Component Analysis were optimum predictive modeling and feature selection methods, respectively, for achieving high prognosis performance. To address unbalanced data, Synthetic Minority Over-sampling technique was found to significantly increase predictive accuracy. A full analysis of variance showed that data endpoints, feature selection techniques, and classifiers were significant factors in affecting predictive accuracy, suggesting that these factors must be investigated when building radiomics-based predictive models for cancer prognosis.
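A minimal version of the pipeline the abstract describes, with synthetic stand-ins for the radiomic feature matrix and endpoint: SMOTE to rebalance the training data, PCA against feature redundancy, and a Random Forest classifier.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical radiomic features (112 patients x 400 features) and a binary,
# unbalanced recurrence endpoint; shapes echo the study, values are random.
rng = np.random.default_rng(0)
X = rng.normal(size=(112, 400))
y = (rng.random(112) < 0.25).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_tr, y_tr = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # balance training set
pca = PCA(n_components=10).fit(X_tr)                         # reduce redundancy
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(pca.transform(X_tr), y_tr)
print(roc_auc_score(y_te, clf.predict_proba(pca.transform(X_te))[:, 1]))
```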
Statistical analysis of cascading failures in power grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chertkov, Michael; Pfitzner, Rene; Turitsyn, Konstantin
2010-12-01
We introduce a new microscopic model of cascading failures in transmission power grids. This model accounts for automatic response of the grid to load fluctuations that take place on the scale of minutes, when optimum power flow adjustments and load shedding controls are unavailable. We describe extreme events, caused by load fluctuations, which cause cascading failures of loads, generators and lines. Our model is quasi-static in the causal, discrete time and sequential resolution of individual failures. The model, in its simplest realization based on the Direct Current (DC) description of the power flow problem, is tested on three standard IEEE systems consisting of 30, 39 and 118 buses. Our statistical analysis suggests a straightforward classification of cascading and islanding phases in terms of the ratios between average number of removed loads, generators and links. The analysis also demonstrates sensitivity to variations in line capacities. Future research challenges in modeling and control of cascading outages over real-world power networks are discussed.
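The quasi-static DC cascade logic can be sketched in a few lines: solve the DC power flow, trip any line above its capacity, and repeat until no overloads remain. The three-bus network, reactances, and uniform limit below are toy values, not the IEEE test systems.

```python
import numpy as np

# Toy quasi-static DC cascade on a 3-bus ring network.
def dc_flows(n, lines, x, P):
    B = np.zeros((n, n))
    for (i, j), xij in zip(lines, x):
        B[i, i] += 1 / xij; B[j, j] += 1 / xij
        B[i, j] -= 1 / xij; B[j, i] -= 1 / xij
    theta = np.zeros(n)
    theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])   # bus 0 as slack/reference
    return [(theta[i] - theta[j]) / xij for (i, j), xij in zip(lines, x)]

lines, x = [(0, 1), (1, 2), (0, 2)], [0.1, 0.1, 0.2]
P = np.array([0.0, 1.0, -1.0])                       # net injections per bus
limit = 0.5                                          # uniform line capacity
while lines:
    f = dc_flows(3, lines, x, P)
    over = [k for k, fk in enumerate(f) if abs(fk) > limit]
    if not over:
        break
    lines = [l for k, l in enumerate(lines) if k not in over]   # trip overloads
    x = [xi for k, xi in enumerate(x) if k not in over]
print("surviving lines:", lines)    # this toy case cascades to a full blackout
```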
Mathematical supply-chain modelling: Product analysis of cost and time
NASA Astrophysics Data System (ADS)
Easters, D. J.
2014-03-01
Establishing a mathematical supply-chain model is a proposition that has received attention due to its inherent benefits of evolving global supply-chain efficiencies. This paper discusses the prevailing relationships found within apparel supply-chain environments, and contemplates the complex issues indicated for constituting a mathematical model. Principal results identified within the data suggest that the multifarious nature of global supply-chain activities requires a degree of simplification in order to fully delineate the factors which affect each sub-section of the chain. Subsequently, the research findings allowed the division of supply-chain components into sub-sections, which together formed a coherent method of product development activity. Concurrently, the supply-chain model was found to allow systematic mathematical analysis of cost and time within the multiple contexts of each sub-section encountered. The paper indicates the supply-chain model structure, the mathematics, and considers how product analysis of cost and time can improve the comprehension of product lifecycle management.
Robertson, Colin; Sawford, Kate; Gunawardana, Walimunige S. N.; Nelson, Trisalyn A.; Nathoo, Farouk; Stephen, Craig
2011-01-01
Surveillance systems tracking health patterns in animals have potential for early warning of infectious disease in humans, yet there are many challenges that remain before this can be realized. Specifically, there remains the challenge of detecting early warning signals for diseases that are not known or are not part of routine surveillance for named diseases. This paper reports on the development of a hidden Markov model for analysis of frontline veterinary sentinel surveillance data from Sri Lanka. Field veterinarians collected data on syndromes and diagnoses using mobile phones. A model for submission patterns accounts for both sentinel-related and disease-related variability. Models for commonly reported cattle diagnoses were estimated separately. Region-specific weekly average prevalence was estimated for each diagnosis and partitioned into normal and abnormal periods. Visualization of state probabilities was used to indicate areas and times of unusual disease prevalence. The analysis suggests that hidden Markov modelling is a useful approach for surveillance datasets from novel populations and/or with little historical baseline data. PMID:21949763
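A two-state hidden Markov model of the kind described (normal vs abnormal prevalence) can be prototyped with the hmmlearn package; the weekly series below is simulated, and a Gaussian emission model stands in for whatever emission distributions the authors used.

```python
import numpy as np
from hmmlearn import hmm

# Simulated weekly prevalence with an embedded "outbreak" segment.
rng = np.random.default_rng(0)
normal = rng.normal(2.0, 0.5, size=80)
outbreak = rng.normal(5.0, 1.0, size=20)
series = np.concatenate([normal[:50], outbreak, normal[50:]]).reshape(-1, 1)

model = hmm.GaussianHMM(n_components=2, covariance_type="diag",
                        n_iter=100, random_state=0)
model.fit(series)
states = model.predict(series)        # most likely state per week
post = model.predict_proba(series)    # state probabilities, as visualized in the paper
print(states[45:75])                  # state switch around the outbreak weeks
```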
Model-Based Reasoning in Humans Becomes Automatic with Training.
Economides, Marcos; Kurth-Nelson, Zeb; Lübbert, Annika; Guitart-Masip, Marc; Dolan, Raymond J
2015-09-01
Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load--a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders.
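The algorithmic distinction the abstract draws can be made concrete in a few lines; the toy 2-state, 2-action world below (illustrative values, not the paper's two-step task) contrasts a cached model-free update with a model-based lookahead computed on demand.

```python
import numpy as np

# Learnt world model: transitions T[s, a, s'] and rewards R[s, a].
T = np.zeros((2, 2, 2)); R = np.zeros((2, 2))
T[0, 0] = [0.7, 0.3]; T[0, 1] = [0.3, 0.7]
T[1, 0] = [1.0, 0.0]; T[1, 1] = [0.0, 1.0]
R[1] = [0.0, 1.0]                         # reward only for action 1 in state 1

Q_mf = np.zeros((2, 2)); alpha = 0.1      # model-free: cached values

def mf_update(s, a, r, s_next):           # prediction-error update
    Q_mf[s, a] += alpha * (r + Q_mf[s_next].max() - Q_mf[s, a])

def mb_value(s, a, depth=1, gamma=0.9):   # model-based: recomputed from T and R
    if depth == 0:
        return R[s, a]
    return R[s, a] + gamma * sum(T[s, a, s2] * max(mb_value(s2, a2, depth - 1)
                                                   for a2 in (0, 1))
                                 for s2 in (0, 1))

mf_update(1, 1, 1.0, 0)                   # one rewarded experience
mf_update(0, 1, 0.0, 1)                   # value propagates back only slowly
print(Q_mf, mb_value(0, 0), mb_value(0, 1))  # lookahead already prefers action 1
```

The flexibility/cost trade-off is visible directly: the model-based value is correct after a single sweep but requires the nested computation, while the cached values need many experiences to converge.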
Gaze distribution analysis and saliency prediction across age groups.
Krishna, Onkar; Helo, Andrea; Rämä, Pia; Aizawa, Kiyoharu
2018-01-01
Knowledge of the human visual system helps to develop better computational models of visual attention. State-of-the-art models have been developed to mimic the visual attention system of young adults that, however, largely ignore the variations that occur with age. In this paper, we investigated how visual scene processing changes with age and we propose an age-adapted framework that helps to develop a computational model that can predict saliency across different age groups. Our analysis uncovers how the explorativeness of an observer varies with age, how well saliency maps of an age group agree with fixation points of observers from the same or different age groups, and how age influences the center bias tendency. We analyzed the eye movement behavior of 82 observers belonging to four age groups while they explored visual scenes. Explorativeness was quantified in terms of the entropy of a saliency map, and the area under the curve (AUC) metric was used to quantify the agreement analysis and the center bias tendency. Analysis results were used to develop age-adapted saliency models. Our results suggest that the proposed age-adapted saliency model outperforms existing saliency models in predicting the regions of interest across age groups.
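Schematic versions of the two metrics named here are easy to write down: entropy of a normalized saliency map for explorativeness, and AUC agreement between a map and fixation locations. The map and fixations below are random stand-ins, not the study's eye-tracking data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def saliency_entropy(S):
    p = S.ravel() / S.sum()               # treat the map as a distribution
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def saliency_auc(S, fixations, rng):
    pos = S[fixations[:, 0], fixations[:, 1]]          # saliency at fixated pixels
    neg = S[rng.integers(0, S.shape[0], len(pos)),
            rng.integers(0, S.shape[1], len(pos))]     # random negatives
    labels = np.r_[np.ones(len(pos)), np.zeros(len(neg))]
    return roc_auc_score(labels, np.r_[pos, neg])

rng = np.random.default_rng(0)
S = rng.random((60, 80))                               # toy saliency map
fix = np.column_stack([rng.integers(0, 60, 30), rng.integers(0, 80, 30)])
print(saliency_entropy(S), saliency_auc(S, fix, rng))
```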
Gun control and suicide: the impact of state firearm regulations in the United States, 1995-2004.
Rodríguez Andrés, Antonio; Hempstead, Katherine
2011-06-01
To empirically assess the impact of firearm regulation on male suicides. A negative binomial regression model was applied by using a panel of state-level data for the years 1995-2004. The model was used to identify the association between several firearm regulations and male suicide rates. Our empirical analysis suggests that firearm regulations which function to reduce overall gun availability have a significant deterrent effect on male suicide, while regulations that seek to prohibit high-risk individuals from owning firearms have a lesser effect. Restricting access to lethal means has been identified as an effective approach to suicide prevention, and firearms regulations are one way to reduce gun availability. The analysis suggests that gun control measures such as permit and licensing requirements have a negative effect on suicide rates among males. Since there is considerable heterogeneity among states with regard to gun control, these results suggest that there are opportunities for many states to reduce suicide by expanding their firearms regulations. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
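A negative binomial count regression with a population offset, the core of this kind of analysis, can be sketched with statsmodels; the data, variable names, and effect size below are simulated stand-ins for the state-year panel.

```python
import numpy as np
import statsmodels.api as sm

# Simulated state-year suicide counts with one illustrative regulation dummy.
rng = np.random.default_rng(0)
n = 500
permit_law = rng.integers(0, 2, n)               # example regulation indicator
pop = rng.uniform(0.5e6, 5e6, n)
mu = np.exp(-9.0 - 0.15 * permit_law) * pop      # regulation lowers the rate
y = rng.negative_binomial(5, 5 / (5 + mu))       # overdispersed counts

X = sm.add_constant(permit_law.astype(float))
model = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.2),
               offset=np.log(pop)).fit()         # offset makes it a rate model
print(model.params)                              # negative coefficient on permit_law
```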
Leitch, Judith
2014-01-01
Designed as a measure of perceptions of collaboration, the original psychometric testing of the Interdisciplinary Education Perception Scale (IEPS) indicated a four-factor solution to this measure, although subsequent research has suggested a three-factor solution may have better fit indices. This study aimed to better understand the psychometric properties of the IEPS in a new population, health graduate students in the United States, to determine which sub-scale structure may be a better fit. Additionally, this research explores the IEPS through a targeted literature review and content analysis in combination with factor analysis to better understand what constructs can be assessed by this measure. Results showed that the three-factor model was the best fitting model for the IEPS, suggesting this structure should be used when looking at graduate-level health students. Results also suggested that the IEPS may be able to serve as a measure of perceived professional prestige, for which there is currently no existing measure. The dimension of professional prestige should be explored in further research to create a more robust understanding of its role in collaboration between professions.
[A competency model of rural general practitioners: theory construction and empirical study].
Yang, Xiu-Mu; Qi, Yu-Long; Shne, Zheng-Fu; Han, Bu-Xin; Meng, Bei
2015-04-01
To perform theory construction and an empirical study of the competency model of rural general practitioners. Through literature study, job analysis, interviews, and expert team discussion, the questionnaire of rural general practitioner competency was constructed. A total of 1458 rural general practitioners were surveyed by the questionnaire in 6 central provinces. The common factors were constructed using the principal component method of exploratory factor analysis and confirmatory factor analysis. The influence of the competency characteristics on working performance was analyzed using regression equation analysis. The Cronbach's alpha coefficient of the questionnaire was 0.974. The model consisted of 9 dimensions and 59 items. The 9 competency dimensions included basic public health service ability, basic clinical skills, system analysis capability, information management capability, communication and cooperation ability, occupational moral ability, non-medical professional knowledge, personal traits and psychological adaptability. The cumulative explained variance was 76.855%. The model fit indices were χ2/df=1.88, GFI=0.94, NFI=0.96, NNFI=0.98, PNFI=0.91, RMSEA=0.068, CFI=0.97, IFI=0.97, RFI=0.96, indicating good model fit. Regression analysis showed that the competency characteristics had a significant effect on job performance. The rural general practitioner competency model provides a reference for rural doctor training, order-oriented training of medical students for rural service, and competency-based performance management of rural general practitioners.
Bai, Mei; Dixon, Jane K
2014-01-01
The purpose of this study was to reexamine the factor pattern of the 12-item Functional Assessment of Chronic Illness Therapy-Spiritual Well-Being Scale (FACIT-Sp-12) using exploratory factor analysis in people newly diagnosed with advanced cancer. Principal components analysis (PCA) and 3 common factor analysis methods were used to explore the factor pattern of the FACIT-Sp-12. Factorial validity was assessed in association with quality of life (QOL). Principal factor analysis (PFA), iterative PFA, and maximum likelihood suggested retaining 3 factors: Peace, Meaning, and Faith. Both Peace and Meaning positively related to QOL, whereas only Peace uniquely contributed to QOL. This study supported the 3-factor model of the FACIT-Sp-12. Suggestions for revision of items and further validation of the identified factor pattern were provided.
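An exploratory factor analysis of this shape can be reproduced with the factor_analyzer package; the synthetic item responses below merely stand in for the FACIT-Sp-12 data, with a planted three-factor simple structure.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Synthetic 12-item data with three correlated-noise-free latent factors.
rng = np.random.default_rng(0)
F = rng.normal(size=(300, 3))                        # latent Peace, Meaning, Faith
load = np.zeros((12, 3))
load[0:4, 0] = load[4:8, 1] = load[8:12, 2] = 0.8    # planted simple structure
items = pd.DataFrame(F @ load.T + 0.5 * rng.normal(size=(300, 12)),
                     columns=[f"item{i+1}" for i in range(12)])

fa = FactorAnalyzer(n_factors=3, method="principal", rotation="oblimin")
fa.fit(items)
print(pd.DataFrame(fa.loadings_, index=items.columns).round(2))
```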
The Virtual Brain: Modeling Biological Correlates of Recovery after Chronic Stroke
Falcon, Maria Inez; Riley, Jeffrey D.; Jirsa, Viktor; McIntosh, Anthony R.; Shereen, Ahmed D.; Chen, E. Elinor; Solodkin, Ana
2015-01-01
There currently remains considerable variability in stroke survivor recovery. To address this, developing individualized treatment has become an important goal in stroke treatment. As a first step, it is necessary to determine brain dynamics associated with stroke and recovery. While recent methods have made strides in this direction, we still lack physiological biomarkers. The Virtual Brain (TVB) is a novel application for modeling brain dynamics that simulates an individual’s brain activity by integrating their own neuroimaging data with local biophysical models. Here, we give a detailed description of the TVB modeling process and explore model parameters associated with stroke. In order to establish a parallel between this new type of modeling and those currently in use, in this work we establish an association between a specific TVB parameter (long-range coupling) that increases after stroke and metrics derived from graph analysis. We used TVB to simulate the individual BOLD signals for 20 patients with stroke and 10 healthy controls. We performed graph analysis on their structural connectivity matrices calculating degree centrality, betweenness centrality, and global efficiency. Linear regression analysis demonstrated that long-range coupling is negatively correlated with global efficiency (P = 0.038), but is not correlated with degree centrality or betweenness centrality. Our results suggest that the larger influence of local dynamics seen through the long-range coupling parameter is closely associated with a decreased efficiency of the system. We thus propose that the increase in the long-range parameter in TVB (indicating a bias toward local over global dynamics) is deleterious because it reduces communication as suggested by the decrease in efficiency. The new model platform TVB hence provides a novel perspective to understanding biophysical parameters responsible for global brain dynamics after stroke, allowing the design of focused therapeutic interventions. PMID:26579071
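The graph metrics named here are standard and straightforward to reproduce; the sketch below computes them with networkx on a random stand-in for a structural connectivity matrix (note that networkx's global_efficiency is defined on unweighted graphs, and betweenness treats edge weights as distances).

```python
import numpy as np
import networkx as nx

# Random symmetric "connectivity matrix", thresholded to sparsify.
rng = np.random.default_rng(0)
W = rng.random((20, 20)); W = (W + W.T) / 2
W[W < 0.6] = 0; np.fill_diagonal(W, 0)

G = nx.from_numpy_array(W)
deg = nx.degree_centrality(G)
btw = nx.betweenness_centrality(G, weight="weight")
print(f"global efficiency: {nx.global_efficiency(G):.3f},",
      f"max degree centrality: {max(deg.values()):.3f}")
```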
La Peyre, M.K.; Mendelssohn, I.A.; Reams, M.A.; Templet, P.H.; Grace, J.B.
2001-01-01
Integrated management and policy models suggest that solutions to environmental issues may be linked to the socioeconomic and political characteristics of a nation. In this study, we empirically explore these suggestions by applying them to the wetland management activities of nations. Structural equation modeling was used to evaluate a model of national wetland management effort and one of national wetland protection. Using five predictor variables of social capital, economic capital, environmental and political characteristics, and land-use pressure, the multivariate models were able to explain 60% of the variation in nations' wetland protection efforts based on data from 90 nations, as defined by level of participation in the international wetland convention. Social capital had the largest direct effect on wetland protection efforts, suggesting that increased social development may eventually lead to better wetland protection. In contrast, increasing economic development had a negative linear relationship with wetland protection efforts, suggesting the need for explicit wetland protection programs as nations continue to focus on economic development. Government, environmental characteristics, and land-use pressure also had a positive direct effect on wetland protection, and mediated the effect of social capital on wetland protection. Explicit wetland protection policies, combined with a focus on social development, would lead to better wetland protection at the national level.
NASA Astrophysics Data System (ADS)
Wang, Zi-han; Wang, Chun-mei; Tang, Hua-xin; Zuo, Cheng-ji; Xu, Hong-ming
2009-06-01
Ignition timing control is of great importance in homogeneous charge compression ignition engines. The effect of hydrogen addition on methane combustion was investigated using a CHEMKIN multi-zone model. Results show that hydrogen addition advances ignition timing and enhances peak pressure and temperature. A brief analysis of the chemical kinetics of methane blended with hydrogen is also performed in order to investigate the scope of its application, and the analysis suggests that the OH radical plays an important role in the oxidation. Hydrogen addition increases NOx while decreasing HC and CO emissions. Exhaust gas recirculation (EGR) also advances ignition timing; however, its effects on emissions are generally the opposite. By adjusting the hydrogen addition and EGR rate, the ignition timing can be regulated with a low emission level. Investigation of the individual zones suggests that NOx is mostly formed in core zones while HC and CO mostly originate in the crevice and the quench layer.
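The ignition-advance effect of hydrogen blending can be probed with a single-zone constant-volume reactor in Cantera, a simplified stand-in for the multi-zone CHEMKIN analysis; the temperature, pressure, equivalence ratio, and ignition criterion below are illustrative.

```python
import cantera as ct

# Constant-volume ignition of CH4/H2 blends with GRI-Mech 3.0.
for h2_frac in (0.0, 0.2):
    gas = ct.Solution("gri30.yaml")
    gas.set_equivalence_ratio(0.4, fuel={"CH4": 1 - h2_frac, "H2": h2_frac},
                              oxidizer={"O2": 1.0, "N2": 3.76})
    gas.TP = 1000.0, 20.0 * ct.one_atm
    reactor = ct.IdealGasReactor(gas)
    net = ct.ReactorNet([reactor])
    t, t_ign = 0.0, None
    while t < 1.0 and t_ign is None:
        t = net.step()
        if reactor.T > 1500.0:          # crude temperature-rise ignition criterion
            t_ign = t
    print(f"H2 fraction {h2_frac:.1f}: ignition at", t_ign, "s")
```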
Modes and emergent time scales of embayed beach dynamics
NASA Astrophysics Data System (ADS)
Ratliff, Katherine M.; Murray, A. Brad
2014-10-01
In this study, we use a simple numerical model (the Coastline Evolution Model) to explore alongshore transport-driven shoreline dynamics within generalized embayed beaches (neglecting cross-shore effects). Using principal component analysis (PCA), we identify two primary orthogonal modes of shoreline behavior that describe shoreline variation about its unchanging mean position: the rotation mode, which has been previously identified and describes changes in the mean shoreline orientation, and a newly identified breathing mode, which represents changes in shoreline curvature. Wavelet analysis of the PCA mode time series reveals characteristic time scales of these modes (typically years to decades) that emerge within even a statistically constant white-noise wave climate (without changes in external forcing), suggesting that these time scales can arise from internal system dynamics. The time scales of both modes increase linearly with shoreface depth, suggesting that the embayed beach sediment transport dynamics exhibit a diffusive scaling.
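The PCA step used to separate rotation and breathing modes amounts to a singular value decomposition of the demeaned shoreline-position matrix; the synthetic tilt and curvature signals below merely stand in for Coastline Evolution Model output.

```python
import numpy as np

# Synthetic shoreline positions: time x alongshore position.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 100)                                # alongshore coordinate
t = np.arange(500)
rotation = np.outer(np.sin(2 * np.pi * t / 120), x)        # odd (tilt) mode
breathing = np.outer(np.cos(2 * np.pi * t / 80), x**2 - (x**2).mean())  # curvature mode
Y = rotation + 0.5 * breathing + 0.1 * rng.normal(size=(500, 100))

Y = Y - Y.mean(axis=0)                    # variation about the mean shoreline
U, s, Vt = np.linalg.svd(Y, full_matrices=False)   # EOFs in Vt, amplitudes in U*s
var_frac = s**2 / (s**2).sum()
print("variance fraction of first two modes:", var_frac[:2].round(3))
```

Wavelet analysis of the mode amplitude time series (columns of U*s) would then expose the emergent time scales the paper reports.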
A nurse staffing analysis at the largest hospital in the Gulf region
NASA Astrophysics Data System (ADS)
Louly, M.; Gharbi, A.; Azaiez, M. N.; Bouras, A.
2014-12-01
The paper considers a staffing problem at a local hospital. The managers consider the hospital understaffed and try to overcome the staffing deficit through overtime rather than hiring additional nurses. However, the large budget allocated for overtime has become a concern and requires assessment, analysis and justification. The hospital's current estimate suggests that the shortage at the hospital level corresponds to 300 full time equivalent (FTE) nurses, but this figure is not based on a rigorous scientific approach. This paper presents a staffing model that provides the required scientific evidence on the deficit level. It also gives accurate information on the overtime components. As a result, the suggested staffing model shows that some nursing units are unnecessarily overstaffed. Moreover, the current study reveals that the real deficit is of only 215 FTE, resulting in a potential saving of 28%.
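The back-of-the-envelope arithmetic such staffing models formalize is worth seeing once; every number below is illustrative, not the hospital's actual data.

```python
# Workload-based FTE requirement: care hours demanded / productive hours supplied.
care_hours_per_patient_day = 6.0        # required nursing hours per patient day
patient_days_per_year = 95_000
productive_hours_per_fte = 1_800        # contracted hours minus leave/training

required_fte = (care_hours_per_patient_day * patient_days_per_year
                / productive_hours_per_fte)
available_fte = 210.0
print(f"required: {required_fte:.0f} FTE,",
      f"deficit: {required_fte - available_fte:.0f} FTE")
```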
The Rank Hypothesis and Lexical Decision: A Reply to Adelman and Brown (2008)
ERIC Educational Resources Information Center
Murray, Wayne S.; Forster, Kenneth I.
2008-01-01
J. S. Adelman and G. D. A. Brown (2008) provided an extensive analysis of the form of word frequency and contextual diversity effects on lexical decision time. In this reply, the current authors suggest that their analysis provides a valuable tool for the evaluation of models of lexical access and that the results they report are broadly…
Structural synthesis: Precursor and catalyst
NASA Technical Reports Server (NTRS)
Schmit, L. A.
1984-01-01
More than twenty-five years have elapsed since it was recognized that a rather general class of structural design optimization tasks could be properly posed as an inequality constrained minimization problem. It is suggested that, independent of primary discipline area, it will be useful to think about: (1) posing design problems in terms of an objective function and inequality constraints; (2) generating design oriented approximate analysis methods (giving special attention to behavior sensitivity analysis); (3) distinguishing between decisions that lead to an analysis model and those that lead to a design model; (4) finding ways to generate a sequence of approximate design optimization problems that capture the essential characteristics of the primary problem, while still having an explicit algebraic form that is matched to one or more of the established optimization algorithms; (5) examining the potential of optimum design sensitivity analysis to facilitate quantitative trade-off studies as well as participation in multilevel design activities. It should be kept in mind that multilevel methods are inherently well suited to a parallel mode of operation in computer terms or to a division of labor between task groups in organizational terms. Based on structural experience with multilevel methods general guidelines are suggested.
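Point (1), posing design as an objective with inequality constraints, maps directly onto modern optimizers; a toy two-bar sizing problem under a stress limit might look like this (all numbers are illustrative).

```python
import numpy as np
from scipy.optimize import minimize

# Minimize the weight of two bars subject to a stress limit per bar.
L, P, rho, sigma_max = 1.0, 1000.0, 7850.0, 150e6   # m, N, kg/m^3, Pa

def weight(a):                        # a: cross-sectional areas (m^2)
    return rho * L * (a[0] + a[1])

def stress_margin(a):                 # g(a) >= 0 enforces stress <= sigma_max
    return sigma_max - P / a          # one inequality constraint per bar

res = minimize(weight, x0=[1e-4, 1e-4], method="SLSQP",
               bounds=[(1e-6, None)] * 2,
               constraints=[{"type": "ineq", "fun": stress_margin}])
print(res.x, weight(res.x))           # areas shrink until the stress limit is active
```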
A model of clutter for complex, multivariate geospatial displays.
Lohrenz, Maura C; Trafton, J Gregory; Beck, R Melissa; Gendron, Marlin L
2009-02-01
A novel model of measuring clutter in complex geospatial displays was compared with human ratings of subjective clutter as a measure of convergent validity. The new model is called the color-clustering clutter (C3) model. Clutter is a known problem in displays of complex data and has been shown to affect target search performance. Previous clutter models are discussed and compared with the C3 model. Two experiments were performed. In Experiment 1, participants performed subjective clutter ratings on six classes of information visualizations. Empirical results were used to set two free parameters in the model. In Experiment 2, participants performed subjective clutter ratings on aeronautical charts. Both experiments compared and correlated empirical data to model predictions. The first experiment resulted in a .76 correlation between ratings and C3. The second experiment resulted in a .86 correlation, significantly better than results from a model developed by Rosenholtz et al. Outliers to our correlation suggest further improvements to C3. We suggest that (a) the C3 model is a good predictor of subjective impressions of clutter in geospatial displays, (b) geospatial clutter is a function of color density and saliency (primary C3 components), and (c) pattern analysis techniques could further improve C3. The C3 model could be used to improve the design of electronic geospatial displays by suggesting when a display will be too cluttered for its intended audience.
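A clutter measure in the spirit of C3 (emphatically not the authors' implementation) can be prototyped by clustering pixel colors and scoring how many well-populated color clusters the display contains; the image and thresholds below are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

# Cluster pixel colors and count "significant" color clusters as a crude
# color-density proxy for display clutter.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(120, 160, 3))     # stand-in display image
pixels = image.reshape(-1, 3).astype(float)

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(pixels)
counts = np.bincount(km.labels_, minlength=8)
significant = (counts / counts.sum() > 0.02).sum()   # clusters covering >2% of pixels
print(f"clutter score: {significant / 8.0:.2f}")     # crude 0-1 score
```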
Bao, Jie; Liu, Pan; Yu, Hao; Xu, Chengcheng
2017-09-01
The primary objective of this study was to investigate how to incorporate human activity information in spatial analysis of crashes in urban areas using Twitter check-in data. This study used the data collected from the City of Los Angeles in the United States to illustrate the procedure. The following five types of data were collected: crash data, human activity data, traditional traffic exposure variables, road network attributes and social-demographic data. A web crawler written in Python was developed to collect the venue type information from the Twitter check-in data automatically. The human activities were classified into seven categories by the obtained venue types. The collected data were aggregated into 896 Traffic Analysis Zones (TAZ). Geographically weighted regression (GWR) models were developed to establish a relationship between the crash counts reported in a TAZ and various contributing factors. Comparative analyses were conducted to compare the performance of GWR models which considered traditional traffic exposure variables only, Twitter-based human activity variables only, and both traditional traffic exposure and Twitter-based human activity variables. The model specification results suggested that human activity variables significantly affected the crash counts in a TAZ. The results of comparative analyses suggested that the models which considered both traditional traffic exposure and human activity variables had the best goodness-of-fit in terms of the highest R² and lowest AICc values. The finding seems to confirm the benefits of incorporating human activity information in spatial analysis of crashes using Twitter check-in data. Copyright © 2017 Elsevier Ltd. All rights reserved.
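A GWR of this shape can be sketched with the mgwr package; the arrays below are synthetic stand-ins with the shapes mgwr expects, and the attribute names follow its documented interface (this is an assumption, not the paper's code).

```python
import numpy as np
from mgwr.gwr import GWR
from mgwr.sel_bw import Sel_BW

# Synthetic TAZ centroids, covariates (e.g., VMT, road density, check-in
# activity), and crash counts on a log-ish scale.
rng = np.random.default_rng(0)
n = 200
coords = list(zip(rng.uniform(0, 50, n), rng.uniform(0, 50, n)))
X = rng.normal(size=(n, 3))
y = (1.0 + X @ np.array([0.5, 0.3, 0.4]) + rng.normal(0, 0.5, n)).reshape(-1, 1)

bw = Sel_BW(coords, y, X).search()      # bandwidth selection (e.g., by AICc)
results = GWR(coords, y, X, bw).fit()   # locally varying coefficients per TAZ
print(bw, results.R2)
```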
Falat, Lukas; Marcek, Dusan; Durisova, Maria
2016-01-01
This paper deals with application of quantitative soft computing prediction models into financial area as reliable and accurate prediction models can be very helpful in management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for the horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate genetic algorithm as an optimizing technique for adapting parameters of ANN which is then compared with standard backpropagation and backpropagation combined with K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can be helpful in eliminating the risk of making a bad decision in the decision-making process. PMID:26977450
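The RBF-plus-moving-average idea can be sketched compactly: fit an RBF network with K-means centers, then smooth its residuals with a moving average as a correction term. The genetic-algorithm tuning step is omitted for brevity, and the series below is synthetic, not USD/CAD data.

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf_design(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

rng = np.random.default_rng(0)
t = np.arange(600)
y = np.sin(t / 20.0) + 0.1 * rng.normal(size=600)        # stand-in for returns
X = np.column_stack([y[i:-(5 - i)] for i in range(5)])   # 5 lagged inputs
target = y[5:]

centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
Phi = rbf_design(X, centers, width=1.0)
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)         # output-layer weights
errors = target - Phi @ w
ma = np.convolve(errors, np.ones(5) / 5, mode="same")    # moving average of residuals
print(np.abs(errors).mean(), np.abs(errors - ma).mean()) # MA correction helps
```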
ERIC Educational Resources Information Center
Dintzer, Leonard; Wortman, Camille B.
1978-01-01
The reformulated learned helplessness model of depression (Abramson, Seligman, Teasdale 1978) was examined. Argues that unless it is possible to specify the conditions under which a given attribution will be made, the model becomes circular and lacks predictive power. Discusses Abramson et al.'s suggestions for therapy and prevention. (Editor/RK)
Laurence Lin; J.R. Webster
2012-01-01
The constant nutrient addition technique has been used extensively to measure nutrient uptake in streams. However, this technique is impractical for large streams, and the pulse nutrient addition (PNA) has been suggested as an alternative. We developed a computer model to simulate Monod kinetics nutrient uptake in large rivers and used this model to evaluate the...
ERIC Educational Resources Information Center
Chatterji, Monojit; Seaman, Paul
2006-01-01
A considerable sum of money is allocated to UK universities on the basis of Research Assessment Exercise performance. In this paper we analyse the two main funding models used in the United Kingdom and discuss their strengths and weaknesses. We suggest that the benchmarking used by the two main models have significant weaknesses, and propose an…
Taking a systems approach to ecological systems
Grace, James B.
2015-01-01
Increasingly, there is interest in a systems-level understanding of ecological problems, which requires the evaluation of more complex, causal hypotheses. In this issue of the Journal of Vegetation Science, Soliveres et al. use structural equation modeling to test a causal network hypothesis about how tree canopies affect understorey communities. Historical analysis suggests structural equation modeling has been under-utilized in ecology.
ERIC Educational Resources Information Center
Hannah, David R.; Venkatachary, Ranga
2010-01-01
In this article, the authors present a retrospective analysis of an instructor's multiyear redesign of a course on organization theory into what is called a hybrid Classroom-as-Organization model. It is suggested that this new course design served to apprentice students to function in quasi-real organizational structures. The authors further argue…
A case for multi-model and multi-approach based event attribution: The 2015 European drought
NASA Astrophysics Data System (ADS)
Hauser, Mathias; Gudmundsson, Lukas; Orth, René; Jézéquel, Aglaé; Haustein, Karsten; Seneviratne, Sonia Isabelle
2017-04-01
Science on the role of anthropogenic influence on extreme weather events such as heat waves or droughts has evolved rapidly over the past years. The approach of "event attribution" compares the occurrence probability of an event in the present, factual world with the probability of the same event in a hypothetical, counterfactual world without human-induced climate change. Every such analysis necessarily faces multiple methodological choices including, but not limited to: the event definition, climate model configuration, and the design of the counterfactual world. Here, we explore the role of such choices for an attribution analysis of the 2015 European summer drought (Hauser et al., in preparation). While some GCMs suggest that anthropogenic forcing made the 2015 drought more likely, others suggest no impact, or even a decrease in the event probability. These results additionally differ for single GCMs, depending on the reference used for the counterfactual world. Observational results do not suggest a historical tendency towards more drying, but the record may be too short to provide robust assessments because of the large interannual variability of drought occurrence. These results highlight the need for a multi-model and multi-approach framework in event attribution research. This is especially important for events with low signal to noise ratio and high model dependency such as regional droughts. Hauser, M., L. Gudmundsson, R. Orth, A. Jézéquel, K. Haustein, S.I. Seneviratne, in preparation. A case for multi-model and multi-approach based event attribution: The 2015 European drought.
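The core attribution quantities are simple to compute once factual and counterfactual ensembles exist: the probability ratio PR = p1/p0 and its sampling uncertainty. The ensembles, threshold, and bootstrap below are synthetic illustrations, not the paper's GCM output.

```python
import numpy as np

# Probability ratio of exceeding an event threshold in factual vs
# counterfactual ensembles, with a simple bootstrap for uncertainty.
rng = np.random.default_rng(0)
factual = rng.normal(0.3, 1.0, 500)            # e.g., drought-index anomalies
counterfactual = rng.normal(0.0, 1.0, 500)
threshold = 1.5                                 # event magnitude

def prob_ratio(f, c, thr):
    return (f > thr).mean() / max((c > thr).mean(), 1e-9)

pr = prob_ratio(factual, counterfactual, threshold)
boot = [prob_ratio(rng.choice(factual, 500), rng.choice(counterfactual, 500),
                   threshold) for _ in range(1000)]
print(f"PR = {pr:.2f}, 5-95% bootstrap: {np.percentile(boot, [5, 95]).round(2)}")
```

The wide bootstrap interval even in this clean toy case illustrates the low signal-to-noise problem the abstract highlights for regional droughts.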
Meta-analysis suggests choosy females get sexy sons more than "good genes".
Prokop, Zofia M; Michalczyk, Łukasz; Drobniak, Szymon M; Herdegen, Magdalena; Radwan, Jacek
2012-09-01
Female preferences for specific male phenotypes have been documented across a wide range of animal taxa, including numerous species where males contribute only gametes to offspring production. Yet, selective pressures maintaining such preferences are among the major unknowns of evolutionary biology. Theoretical studies suggest that preferences can evolve if they confer genetic benefits in terms of increased attractiveness of sons ("Fisherian" models) or overall fitness of offspring ("good genes" models). These two types of models predict, respectively, that male attractiveness is heritable and genetically correlated with fitness. In this meta-analysis, we draw general conclusions from over two decades' worth of empirical studies testing these predictions (90 studies on 55 species in total). We found evidence for heritability of male attractiveness. However, attractiveness showed no association with traits directly associated with fitness (life-history traits). Interestingly, it did show a positive correlation with physiological traits, which include immunocompetence and condition. In conclusion, our results support "Fisherian" models of preference evolution, while providing equivocal evidence for "good genes." We pinpoint research directions that should stimulate progress in our understanding of the evolution of female choice. © 2012 The Author(s). Evolution © 2012 The Society for the Study of Evolution.
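The pooling machinery behind such a meta-analysis is compact; a DerSimonian-Laird random-effects estimate on Fisher-z transformed correlations (toy effect sizes, not the study's 90-study data set) looks like this.

```python
import numpy as np

# DerSimonian-Laird random-effects pooling of correlations via Fisher's z.
r = np.array([0.25, 0.10, 0.40, 0.05, 0.30])   # per-study correlations
n = np.array([40, 60, 35, 120, 50])            # per-study sample sizes

z = np.arctanh(r)                               # Fisher z transform
v = 1.0 / (n - 3)                               # sampling variance of z
w = 1.0 / v
Q = (w * (z - (w * z).sum() / w.sum()) ** 2).sum()
tau2 = max(0.0, (Q - (len(z) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
w_re = 1.0 / (v + tau2)                         # random-effects weights
z_pooled = (w_re * z).sum() / w_re.sum()
se = np.sqrt(1.0 / w_re.sum())
print(f"pooled r = {np.tanh(z_pooled):.3f}, 95% CI "
      f"[{np.tanh(z_pooled - 1.96*se):.3f}, {np.tanh(z_pooled + 1.96*se):.3f}]")
```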
Integrative Analysis of Desert Dust Size and Abundance Suggests Less Dust Climate Cooling
NASA Technical Reports Server (NTRS)
Kok, Jasper F.; Ridley, David A.; Zhou, Qing; Miller, Ron L.; Zhao, Chun; Heald, Colette L.; Ward, Daniel S.; Albani, Samuel; Haustein, Karsten
2017-01-01
Desert dust aerosols affect Earth's global energy balance through interactions with radiation, clouds, and ecosystems. But the magnitudes of these effects are so uncertain that it remains unclear whether atmospheric dust has a net warming or cooling effect on global climate. Consequently, it is still uncertain whether large changes in atmospheric dust loading over the past century have slowed or accelerated anthropogenic climate change, and the climate impact of possible future alterations in dust loading is similarly disputed. Here we use an integrative analysis of dust aerosol sizes and abundance to constrain the climatic impact of dust through direct interactions with radiation. Using a combination of observational, experimental, and model data, we find that atmospheric dust is substantially coarser than represented in current climate models. Since coarse dust warms global climate, the dust direct radiative effect (DRE) is likely less cooling than the −0.4 W m⁻² estimated by models in a current ensemble. We constrain the dust DRE to −0.20 (−0.48 to +0.20) W m⁻², which suggests that the dust DRE produces only about half the cooling that current models estimate, and raises the possibility that the dust DRE is actually net warming the planet.
Analytical Problems and Suggestions in the Analysis of Behavioral Economic Demand Curves.
Yu, Jihnhee; Liu, Liu; Collins, R Lorraine; Vincent, Paula C; Epstein, Leonard H
2014-01-01
Behavioral economic demand curves (Hursh, Raslear, Shurtleff, Bauman, & Simmons, 1988) are innovative approaches to characterize the relationships between consumption of a substance and its price. In this article, we investigate common analytical issues in the use of behavioral economic demand curves, which can cause inconsistent interpretations of demand curves, and then we provide methodological suggestions to address those analytical issues. We first demonstrate that log transformation with different added values for handling zeros changes model parameter estimates dramatically. Second, demand curves are often analyzed using an overparameterized model that results in an inefficient use of the available data and a lack of assessment of the variability among individuals. To address these issues, we apply a nonlinear mixed effects model based on multivariate error structures that has not been used previously to analyze behavioral economic demand curves in the literature. We also propose analytical formulas for the relevant standard errors of derived values such as Pmax, Omax, and elasticity. The proposed model stabilizes the derived values regardless of using different added increments and provides substantially smaller standard errors. We illustrate the data analysis procedure using data from a relative reinforcement efficacy study of simulated marijuana purchasing.
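For orientation, one widely used demand parameterization is the exponential model of Hursh and Silberberg (2008), fit below with toy purchase data; the paper's nonlinear mixed-effects formulation generalizes a curve like this across individuals with multivariate error structures.

```python
import numpy as np
from scipy.optimize import curve_fit

# Exponential demand: log10 Q = log10 Q0 + k*(exp(-alpha*Q0*C) - 1).
K = 3.0  # range constant, often fixed across a data set

def log_demand(C, Q0, alpha):
    return np.log10(Q0) + K * (np.exp(-alpha * Q0 * C) - 1)

prices = np.array([0.1, 0.5, 1, 2, 5, 10], dtype=float)
consumption = np.array([10, 9, 8, 6, 3, 1], dtype=float)    # toy purchases

(Q0, alpha), _ = curve_fit(log_demand, prices, np.log10(consumption),
                           p0=[10.0, 0.01])
# the unit-elasticity price Pmax scales as 1/(alpha*Q0), up to a k-dependent constant
print(f"Q0 = {Q0:.2f}, alpha = {alpha:.4f}, 1/(alpha*Q0) = {1/(alpha*Q0):.2f}")
```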
A Monte Carlo Uncertainty Analysis of Ozone Trend Predictions in a Two Dimensional Model. Revision
NASA Technical Reports Server (NTRS)
Considine, D. B.; Stolarski, R. S.; Hollandsworth, S. M.; Jackman, C. H.; Fleming, E. L.
1998-01-01
We use Monte Carlo analysis to estimate the uncertainty in predictions of total O3 trends between 1979 and 1995 made by the Goddard Space Flight Center (GSFC) two-dimensional (2D) model of stratospheric photochemistry and dynamics. The uncertainty is caused by gas-phase chemical reaction rates, photolysis coefficients, and heterogeneous reaction parameters which are model inputs. The uncertainty represents a lower bound to the total model uncertainty assuming the input parameter uncertainties are characterized correctly. Each of the Monte Carlo runs was initialized in 1970 and integrated for 26 model years through the end of 1995. This was repeated 419 times using input parameter sets generated by Latin Hypercube Sampling. The standard deviation (σ) of the Monte Carlo ensemble of total O3 trend predictions is used to quantify the model uncertainty. The 34% difference between the model trend in globally and annually averaged total O3 using nominal inputs and atmospheric trends calculated from Nimbus 7 and Meteor 3 total ozone mapping spectrometer (TOMS) version 7 data is less than the 46% calculated 1σ model uncertainty, so there is no significant difference between the modeled and observed trends. In the northern hemisphere midlatitude spring the modeled and observed total O3 trends differ by more than 1σ but less than 2σ, which we refer to as marginal significance. We perform a multiple linear regression analysis of the runs which suggests that only a few of the model reactions contribute significantly to the variance in the model predictions. The lack of significance in these comparisons suggests that they are of questionable use as guides for continuing model development. Large model/measurement differences which are many multiples of the input parameter uncertainty are seen in the meridional gradients of the trend and the peak-to-peak variations in the trends over an annual cycle. These discrepancies unambiguously indicate model formulation problems and provide a measure of model performance which can be used in attempts to improve such models.
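Latin Hypercube designs of exactly this shape are available off the shelf; the sketch below draws 419 stratified samples over 10 illustrative parameters (standing in for the full set of reaction rates, photolysis coefficients, and heterogeneous parameters) and maps them to lognormal rate multipliers.

```python
import numpy as np
from scipy.stats import norm, qmc

# 419 Latin Hypercube samples over 10 uncertain inputs.
n_runs, n_params = 419, 10
sampler = qmc.LatinHypercube(d=n_params, seed=0)
u = sampler.random(n=n_runs)              # stratified uniform [0,1) design
z = norm.ppf(u)                           # map to standard normal quantiles
multipliers = np.exp(np.log(1.3) * z)     # lognormal multipliers, 1-sigma factor 1.3
print(multipliers.shape, multipliers.mean().round(3))
```

Each row of `multipliers` would then scale the model's nominal inputs for one of the 419 integrations.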
NASA Technical Reports Server (NTRS)
Herman, J. R.; Hudson, R. D.; Serafino, G.
1990-01-01
Arguments are presented showing that the basic empirical model of the solar backscatter UV (SBUV) instrument degradation used by Cebula et al. (1988) in their analysis of the SBUV data is likely to lead to an incorrect estimate of the ozone trend. A correction factor is given as a function of time and altitude that brings the SBUV data into approximate agreement with the SAGE, SME, and Dobson network ozone trends. It is suggested that the currently archived SBUV ozone data should be used with caution for periods of analysis exceeding 1 yr, since it is likely that the yearly decreases contained in the archived data are too large.
Flame trench analysis of NLS vehicles
NASA Technical Reports Server (NTRS)
Zeytinoglu, Nuri
1993-01-01
The present study takes the initial steps toward establishing better flame trench design criteria for future National Launch System vehicles. A three-dimensional finite element computer model for predicting the transient thermal and structural behavior of the flame trench walls was developed using both I-DEAS and MSC/NASTRAN software packages. The results of JANNAF Standardized Plume flowfield calculations of the sea-level exhaust plumes of the Space Shuttle Main Engine (SSME), Space Transportation Main Engine (STME), and Advanced Solid Rocket Motors (ASRM) were analyzed for different axial distances. The results of sample calculations, using the developed finite element model, are included. Further suggestions for enhancing the overall analysis of the flame trench model are also reported.
Re-Thinking the Use of the OML Model in Electric-Sail Development
NASA Technical Reports Server (NTRS)
Stone, Nobie H.
2016-01-01
The Orbit Motion Limited (OML) model commonly forms the basis for calculations made to determine the effect of the long, biased wires of an Electric Sail on solar wind protons and electrons (which determines the thrust generated and the required operating power). A new analysis of the results of previously conducted ground-based experimental studies of spacecraft-space plasma interactions indicates that the expected thrust created by deflected solar wind protons and the current of collected solar wind electrons could be considerably higher than the OML model would suggest. Herein the experimental analysis will be summarized, and the assumptions and approximations required to derive the OML equation, together with the limitations they impose, will be considered.
Neumann, Craig S.; Malterer, Melanie B.; Newman, Joseph P.
2010-01-01
Recent exploratory factor analysis (EFA) of the Psychopathic Personality Inventory (PPI; Lilienfeld, 1990) with a community sample suggested that the PPI subscales may be comprised of two higher-order factors (Benning et al., 2003). However, little research has examined the PPI structure in offenders. The current study attempted to replicate the Benning et al. two-factor solution using a large (N=1224) incarcerated male sample. Confirmatory factor analysis (CFA) of this model with the full sample resulted in poor model fit. Next, to identify a factor solution that would summarize the offender data, EFA was conducted using a split-half of the total sample, followed by an attempt to replicate the EFA solution via CFA with the other split-half sample. Using the recommendations of Prooijen and van der Kloot (2001) for recovering EFA solutions, model fit results provided some evidence that the EFA solution could be recovered via CFA. However, this model involved extensive cross-loadings of the subscales across three factors, suggesting item overlap across PPI subscales. In sum, the two-factor solution reported by Benning et al. (2003) was not a viable model for the current sample of offenders, and additional research is needed to elucidate the latent structure of the PPI. PMID:18557694
Modelling hard and soft states of Cygnus X-1 with propagating mass accretion rate fluctuations
NASA Astrophysics Data System (ADS)
Rapisarda, S.; Ingram, A.; van der Klis, M.
2017-12-01
We present a timing analysis of three Rossi X-ray Timing Explorer observations of the black hole binary Cygnus X-1 with the propagating mass accretion rate fluctuations model PROPFLUC. The model simultaneously predicts power spectra, time lags and coherence of the variability as a function of energy. The observations cover the soft and hard states of the source, and the transition between the two. We find good agreement between model predictions and data in the hard and soft states. Our analysis suggests that in the soft state the fluctuations propagate in an optically thin hot flow extending up to large radii above and below a stable optically thick disc. In the hard state, our results are consistent with a truncated disc geometry, where the hot flow extends radially inside the inner radius of the disc. In the transition from soft to hard state, the characteristics of the rapid variability are too complex to be successfully described with PROPFLUC. The surface density profile of the hot flow predicted by our model and the lack of quasi-periodic oscillations in the soft and hard states suggest that the spin of the black hole is aligned with the inner accretion disc and therefore probably with the rotational axis of the binary system.
Einstein's steady-state theory: an abandoned model of the cosmos
NASA Astrophysics Data System (ADS)
O'Raifeartaigh, Cormac; McCann, Brendan; Nahm, Werner; Mitton, Simon
2014-09-01
We present a translation and analysis of an unpublished manuscript by Albert Einstein in which he attempted to construct a `steady-state' model of the universe. The manuscript, which appears to have been written in early 1931, demonstrates that Einstein once explored a cosmic model in which the mean density of matter in an expanding universe is maintained constant by the continuous formation of matter from empty space. This model is very different to previously known Einsteinian models of the cosmos (both static and dynamic) but anticipates the later steady-state cosmology of Hoyle, Bondi and Gold in some ways. We find that Einstein's steady-state model contains a fundamental flaw and suggest that it was abandoned for this reason. We also suggest that he declined to explore a more sophisticated version because he found such theories rather contrived. The manuscript is of historical interest because it reveals that Einstein debated between steady-state and evolving models of the cosmos decades before a similar debate took place in the cosmological community.
NASA Astrophysics Data System (ADS)
Schöniger, Anneli; Wöhling, Thomas; Nowak, Wolfgang
2014-05-01
Bayesian model averaging ranks the predictive capabilities of alternative conceptual models based on Bayes' theorem. The individual models are weighted with their posterior probability to be the best one in the considered set of models. Finally, their predictions are combined into a robust weighted average and the predictive uncertainty can be quantified. This rigorous procedure does not, however, yet account for possible instabilities due to measurement noise in the calibration data set. This is a major drawback, since posterior model weights may suffer a lack of robustness related to the uncertainty in noisy data, which may compromise the reliability of model ranking. We present a new statistical concept to account for measurement noise as source of uncertainty for the weights in Bayesian model averaging. Our suggested upgrade reflects the limited information content of data for the purpose of model selection. It allows us to assess the significance of the determined posterior model weights, the confidence in model selection, and the accuracy of the quantified predictive uncertainty. Our approach rests on a brute-force Monte Carlo framework. We determine the robustness of model weights against measurement noise by repeatedly perturbing the observed data with random realizations of measurement error. Then, we analyze the induced variability in posterior model weights and introduce this "weighting variance" as an additional term into the overall prediction uncertainty analysis scheme. We further determine the theoretical upper limit in performance of the model set which is imposed by measurement noise. As an extension to the merely relative model ranking, this analysis provides a measure of absolute model performance. To finally decide whether better data or longer time series are needed to ensure a robust basis for model selection, we resample the measurement time series and assess the convergence of model weights for increasing time series length. We illustrate our suggested approach with an application to model selection between different soil-plant models following up on a study by Wöhling et al. (2013). Results show that measurement noise compromises the reliability of model ranking and causes a significant amount of weighting uncertainty, if the calibration data time series is not long enough to compensate for its noisiness. This additional contribution to the overall predictive uncertainty is neglected without our approach. Thus, we strongly advocate including our suggested upgrade in the Bayesian model averaging routine.
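The brute-force Monte Carlo idea is simple to demonstrate: perturb the calibration data with noise realizations, recompute the posterior model weights each time, and report their spread as the "weighting variance". The two rival models below are toy polynomial regressions, not the soil-plant models of the study.

```python
import numpy as np

# Robustness of BMA weights to measurement noise, in miniature.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y_obs = 1.0 + 2.0 * x + rng.normal(0, 0.1, x.size)     # synthetic observations
sigma = 0.1                                             # measurement error std

def log_like(y, pred, s):
    return -0.5 * np.sum(((y - pred) / s) ** 2 + np.log(2 * np.pi * s * s))

def weights(y):
    preds = [np.polyval(np.polyfit(x, y, d), x) for d in (1, 2)]  # two rival models
    ll = np.array([log_like(y, p, sigma) for p in preds])
    w = np.exp(ll - ll.max())
    return w / w.sum()

W = np.array([weights(y_obs + rng.normal(0, sigma, x.size)) for _ in range(500)])
print("mean weights:", W.mean(axis=0).round(3),
      "weighting std:", W.std(axis=0).round(3))
```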
Fan, Lijuan; Fu, Guoning; Ding, Yuanyuan; Lv, Peng; Li, Hongyun
2017-03-01
Bactericidal/permeability increasing protein (BPI) gene polymorphisms have been extensively investigated in terms of their associations with inflammatory bowel disease (IBD), with contradictory results. The aim of this meta-analysis was to evaluate associations between BPI gene polymorphisms and the risk of IBD, Crohn's disease (CD), and ulcerative colitis (UC). Eligible studies from PubMed, Embase, and Cochrane library databases were identified. Ten studies (five CD and five UC) published in five papers were included in this meta-analysis. G645A polymorphism was associated with a decreased risk of UC in allele model, dominant model, and homozygous model. Our data suggested that BPI G645A polymorphism was associated with a decreased risk of UC; the BPI G645A polymorphism was not associated with the risk of CD.
Lander, Tonya A; Klein, Etienne K; Oddou-Muratorio, Sylvie; Candau, Jean-Noël; Gidoin, Cindy; Chalon, Alain; Roig, Anne; Fallour, Delphine; Auger-Rozenberg, Marie-Anne; Boivin, Thomas
2014-01-01
Understanding how invasive species establish and spread is vital for developing effective management strategies for invaded areas and identifying new areas where the risk of invasion is highest. We investigated the explanatory power of dispersal histories reconstructed based on local-scale wind data and a regional-scale wind-dispersed particle trajectory model for the invasive seed chalcid wasp Megastigmus schimitscheki (Hymenoptera: Torymidae) in France. The explanatory power was tested by: (1) survival analysis of empirical data on M. schimitscheki presence, absence and year of arrival at 52 stands of the wasp's obligate hosts, Cedrus (true cedar trees); and (2) Approximate Bayesian analysis of M. schimitscheki genetic data using a coalescence model. The Bayesian demographic modeling and traditional population genetic analysis suggested that initial invasion across the range was the result of long-distance dispersal from the longest established sites. The survival analyses of the windborne expansion patterns derived from a particle dispersal model indicated that there was an informative correlation between the M. schimitscheki presence/absence data from the annual surveys and the scenarios based on regional-scale wind data. These three very different analyses produced highly congruent results supporting our proposal that wind is the most probable vector for passive long-distance dispersal of this invasive seed wasp. This result confirms that long-distance dispersal from introduction areas is a likely driver of secondary expansion of alien invasive species. Based on our results, management programs for this and other windborne invasive species may consider (1) focusing effort at the longest-established sites and (2) continuing to monitor outlying populations, which remain critically important due to their influence on rates of spread. We also suggest that there is a distinct need for new analysis methods that have the capacity to combine empirical spatiotemporal field data, genetic data, and environmental data to investigate dispersal and invasion. PMID:25558356
Chen, Min; Tang, Wenjing; Hou, Lei; Liu, Ruozhuo; Dong, Zhao; Han, Xun; Zhang, Xiaofei; Wan, Dongjun; Yu, Shengyuan
2015-01-01
Background and Objective Conflicting data have been reported on the association between tumor necrosis factor (TNF) –308G>A and nitric oxide synthase 3 (NOS3) +894G>T polymorphisms and migraine. We performed a meta-analysis of case-control studies to evaluate whether the TNF –308G>A and NOS3 +894G>T polymorphisms confer genetic susceptibility to migraine. Method We performed an updated meta-analysis for TNF –308G>A and a meta-analysis for NOS3 +894G>T based on studies published up to July 2014. We calculated study specific odds ratios (OR) and 95% confidence intervals (95% CI) assuming allele contrast, dominant model, recessive model, and co-dominant model as pooled effect estimates. Results Eleven studies in 6682 migraineurs and 22591 controls for TNF –308G>A and six studies in 1055 migraineurs and 877 controls for NOS3 +894G>T were included in the analysis. Neither indicated overall associations between gene polymorphisms and migraine risk. Subgroup analyses suggested that the “A” allele of the TNF –308G>A variant increases the risk of migraine among non-Caucasians (dominant model: pooled OR = 1.82; 95% CI 1.15 – 2.87). The risk of migraine with aura (MA) was increased among both Caucasians and non-Caucasians. Subgroup analyses suggested that the “T” allele of the NOS3 +894G>T variant increases the risk of migraine among non-Caucasians (co-dominant model: pooled OR = 2.10; 95% CI 1.14 – 3.88). Conclusions Our findings appear to support the hypothesis that the TNF –308G>A polymorphism may act as a genetic susceptibility factor for migraine among non-Caucasians and that the NOS3 +894G>T polymorphism may modulate the risk of migraine among non-Caucasians. PMID:26098763
de Andrade, Luciano; Lynch, Catherine; Carvalho, Elias; Rodrigues, Clarissa Garcia; Vissoci, João Ricardo Nickenig; Passos, Guttenberg Ferreira; Pietrobon, Ricardo; Nihei, Oscar Kenji; de Barros Carvalho, Maria Dalva
2014-01-01
Background Mortality rates amongst ST segment elevation myocardial infarction (STEMI) patients remain high, especially in developing countries. The aim of this study was to evaluate the factors associated with delays in the treatment of STEMI patients to support a strategic plan toward structural and personnel modifications in a primary hospital, aligning its process with international guidelines. Methods and Findings The study was conducted in a primary hospital located in Foz do Iguaçu, Brazil. We utilized an integrated qualitative and quantitative analysis including on-site observations, interviews, medical records analysis, Qualitative Comparative Analysis (QCA) and System Dynamics Modeling (SD). The main causes of delay were categorized into three themes: a) professional, b) equipment and c) transportation logistics. QCA confirmed four main stages of delay in STEMI patients' care in relation to the 'Door-in-Door-out' time at the primary hospital. These stages and their average delays in minutes were: a) First Medical Contact (from Door-In to the first contact with the nurse and/or physician): 7 minutes; b) Electrocardiogram acquisition and review by a physician: 28 minutes; c) ECG transmission and Percutaneous Coronary Intervention Center team feedback time: 76 minutes; and d) Patient's Transfer Waiting Time: 78 minutes. The SD baseline model confirmed the system's behavior with all occurring delays and the need for improvements. Moreover, after model validation and sensitivity analysis, results suggested that an overall improvement of 40% to 50% in each of these identified stages would reduce the delay. Conclusions This evaluation suggests that investment in health personnel training, reduction of bureaucracy, and management of guidelines might lead to important improvements, decreasing the delay in STEMI patients' care. In addition, this work provides evidence that SD modeling may highlight areas where health system managers can implement and evaluate the necessary changes in order to improve the process of care. PMID:25079362
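A system dynamics treatment of the four delay stages can be sketched as a chain of first-order material delays. The code below is a minimal stand-in for the study's SD model, not the model itself; it uses the reported mean stage delays and a 45% improvement, a value inside the 40-50% range the authors tested:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Four sequential stages with the reported mean delays (minutes):
# first medical contact, ECG acquisition/review, PCI-team feedback, transfer wait.
delays = np.array([7.0, 28.0, 76.0, 78.0])

def chain(t, s, tau):
    # Stock-and-flow chain: the outflow of each stage feeds the next stage.
    inflow = np.concatenate(([0.0], s[:-1] / tau[:-1]))
    return inflow - s / tau

def door_in_door_out(tau, horizon=800.0):
    # Release one cohort into stage 1; mean time in system = integral of
    # the fraction still inside (E[T] = int P(T > t) dt).
    sol = solve_ivp(chain, (0.0, horizon), [1.0, 0.0, 0.0, 0.0],
                    args=(tau,), dense_output=True)
    t = np.linspace(0.0, horizon, 4000)
    return np.trapz(sol.sol(t).sum(axis=0), t)

base = door_in_door_out(delays)
improved = door_in_door_out(delays * (1.0 - 0.45))  # 45% stage improvement
print(f"baseline {base:.0f} min -> improved {improved:.0f} min")
```

For first-order delays the mean door-in-door-out time is simply the sum of the stage delays (about 189 min at baseline), so the integral mainly serves as a template onto which feedback loops and capacity constraints can be added.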
Analyzing developmental processes on an individual level using nonstationary time series modeling.
Molenaar, Peter C M; Sinclair, Katerina O; Rovine, Michael J; Ram, Nilam; Corneal, Sherry E
2009-01-01
Individuals change over time, often in complex ways. Generally, studies of change over time have combined individuals into groups for analysis, which is inappropriate in most, if not all, studies of development. The authors explain how to identify appropriate levels of analysis (individual vs. group) and demonstrate how to estimate changes in developmental processes over time using a multivariate nonstationary time series model. They apply this model to describe the changing relationships between a biological son and father and a stepson and stepfather at the individual level. The authors also explain how to use an extended Kalman filter with iteration and smoothing estimator to capture how dynamics change over time. Finally, they suggest further applications of the multivariate nonstationary time series model and detail the next steps in the development of statistical models used to analyze individual-level data.
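One concrete way to let relationships drift over time, in the spirit of a nonstationary time series model, is to treat a regression coefficient as a random-walk state estimated by a Kalman filter. A minimal univariate sketch with synthetic data; the paper's multivariate model and its extended Kalman filter with iteration and smoothing are more elaborate:

```python
import numpy as np

def kalman_tv_regression(y, x, q=1e-3, r=1.0):
    """Track beta_t in y_t = x_t * beta_t + e_t, with beta_t a random walk."""
    beta_hat = np.zeros(len(y))
    b, P = 0.0, 1.0                    # state estimate and its variance
    for t in range(len(y)):
        P += q                         # predict: random-walk drift
        S = x[t] * P * x[t] + r        # innovation variance
        K = P * x[t] / S               # Kalman gain
        b += K * (y[t] - x[t] * b)     # update with the new observation
        P = (1.0 - K * x[t]) * P
        beta_hat[t] = b
    return beta_hat

# Synthetic dyad whose coupling strengthens over time.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
true_beta = np.linspace(0.2, 1.5, n)
y = true_beta * x + 0.3 * rng.normal(size=n)
print(kalman_tv_regression(y, x, q=5e-3, r=0.09)[[0, 99, 199]])
```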
Slavinskaya, N. A.; Abbasi, M.; Starcke, J. H.; ...
2017-01-24
An automated data-centric infrastructure, Process Informatics Model (PrIMe), was applied to validation and optimization of a syngas combustion model. The Bound-to-Bound Data Collaboration (B2BDC) module of PrIMe was employed to discover the limits of parameter modifications based on uncertainty quantification (UQ) and consistency analysis of the model–data system and experimental data, including shock-tube ignition delay times and laminar flame speeds. Existing syngas reaction models are reviewed, and the selected kinetic data are described in detail. Empirical rules were developed and applied to evaluate the uncertainty bounds of the literature experimental data. Here, the initial H2/CO reaction model, assembled from 73 reactions and 17 species, was subjected to a B2BDC analysis. For this purpose, a dataset was constructed that included a total of 167 experimental targets and 55 active model parameters. Consistency analysis of the composed dataset revealed disagreement between models and data. Further analysis suggested that removing 45 experimental targets, 8 of which were self-inconsistent, would lead to a consistent dataset. This dataset was subjected to a correlation analysis, which highlights possible directions for parameter modification and model improvement. Additionally, several methods of parameter optimization were applied, some of them unique to the B2BDC framework. The optimized models demonstrated improved agreement with experiments compared to the initially assembled model, and their predictions for experiments not included in the initial dataset (i.e., a blind prediction) were investigated. The results demonstrate benefits of applying the B2BDC methodology for developing predictive kinetic models.
Preliminary study of soil permeability properties using principal component analysis
NASA Astrophysics Data System (ADS)
Yulianti, M.; Sudriani, Y.; Rustini, H. A.
2018-02-01
Soil permeability measurement is undoubtedly important in carrying out soil-water research such as rainfall-runoff modelling, irrigation water distribution systems, etc. It is also known that acquiring reliable soil permeability data is rather laborious, time-consuming, and costly. Therefore, it is desirable to develop a prediction model. Several studies of empirical equations for predicting permeability have been undertaken by many researchers. These studies derived their models from areas whose soil characteristics differ from those of Indonesian soils, which suggests that these permeability models may be site-specific. The purpose of this study is to identify which soil parameters correspond most strongly to soil permeability and to propose a preliminary model for permeability prediction. Principal component analysis (PCA) was applied to 16 parameters analysed from 37 sites, consisting of 91 samples, obtained from the Batanghari Watershed. Findings indicated five variables that have a strong correlation with soil permeability, and we recommend a preliminary permeability model with potential for further development.
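The PCA step itself is compact: standardize the measured soil parameters, decompose, and inspect which parameters load on the leading components. A sketch with a hypothetical 91 x 16 data matrix standing in for the Batanghari samples:

```python
import numpy as np

def pca_loadings(X, n_components=5):
    """PCA via SVD of the standardized data; rows of Vt are loadings."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    return Vt[:n_components], explained[:n_components]

rng = np.random.default_rng(1)
X = rng.normal(size=(91, 16))          # placeholder for 16 soil parameters
loadings, evr = pca_loadings(X)
print("explained variance ratios:", np.round(evr, 3))
# Parameters with large absolute loadings on the components most correlated
# with permeability are the candidates for a regression-based predictor.
```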
Zhai, Hong Lin; Zhai, Yue Yuan; Li, Pei Zhen; Tian, Yue Li
2013-01-21
A very simple approach to quantitative analysis is proposed based on the technology of digital image processing using three-dimensional (3D) spectra obtained by high-performance liquid chromatography coupled with a diode array detector (HPLC-DAD). As the region-based shape features of a grayscale image, Zernike moments with an inherent invariance property were employed to establish the linear quantitative models. This approach was applied to the quantitative analysis of three compounds in mixed samples using 3D HPLC-DAD spectra, and three linear models were obtained, respectively. The correlation coefficients (R²) for training and test sets were more than 0.999, and the statistical parameters and strict validation supported the reliability of the established models. The analytical results suggest that the Zernike moments selected by stepwise regression can be used in the quantitative analysis of target compounds. Our study provides a new idea for quantitative analysis using 3D spectra, which can be extended to the analysis of other 3D spectra obtained by different methods or instruments.
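A sketch of the feature-extraction step: render the 3D HPLC-DAD spectrum as a grayscale image, compute rotation-invariant Zernike moments, and fit a linear calibration. The mahotas library is one available implementation of Zernike moments; whether the authors used it is not stated, and the image below is synthetic:

```python
import numpy as np
import mahotas  # provides mahotas.features.zernike_moments

def zernike_features(gray_image, radius=64, degree=8):
    """Region-based Zernike moments of a grayscale image (e.g., a
    time x wavelength intensity map rendered from a 3D spectrum)."""
    return mahotas.features.zernike_moments(gray_image, radius, degree=degree)

def fit_linear_calibration(F, y):
    """Least-squares linear model: concentration ~ selected moments."""
    A = np.c_[np.ones(len(F)), F]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

img = np.zeros((128, 128))
img[32:96, 40:80] = 1.0                # synthetic 'chromatographic' blob
print(zernike_features(img)[:4])
```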
Oxygen Diffusion and Reaction Kinetics in Continuous Fiber Ceramic Matrix Composites
NASA Technical Reports Server (NTRS)
Halbig, Michael C.; Eckel, Andrew J.; Cawley, James D.
1999-01-01
Previous stressed oxidation tests of C/SiC composites at elevated temperatures (350 °C to 1500 °C) and sustained stresses (69 MPa and 172 MPa) have led to the development of a finite difference cracked matrix model. The times to failure in the samples suggest oxidation occurred in two kinetic regimes defined by the rate controlling mechanisms (i.e. diffusion controlled and reaction controlled kinetics). Microstructural analysis revealed preferential oxidation along as-fabricated matrix microcracks and also suggested two regimes of oxidation kinetics dependent on the oxidation temperature. Based on experimental results, observation, and theory, a finite difference model was developed. The model simulates the diffusion of oxygen into a matrix crack bridged by carbon fibers. The model facilitates the study of the relative importance of temperature, the reaction rate constant, and the diffusion coefficient on the overall oxidation kinetics.
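The essence of such a finite difference model can be shown in one dimension: oxygen diffuses down a crack while being consumed by carbon along the way, and a Thiele-type ratio kL²/D separates the two kinetic regimes. The geometry, diffusivity, and rate constant below are assumed for illustration, not values from the tests:

```python
import numpy as np

L, nx = 1e-3, 101                  # crack depth (m), grid points (assumed)
D, k = 1e-6, 50.0                  # O2 diffusivity (m^2/s), reaction rate (1/s)
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D               # stable explicit time step
C = np.zeros(nx)
C[0] = 1.0                         # normalized O2 concentration at crack mouth

for _ in range(25000):             # march to (near) steady state
    lap = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dx**2
    C[1:-1] += dt * (D * lap - k * C[1:-1])   # diffusion minus consumption
    C[-1] = C[-2]                  # zero-flux condition at the crack tip
    C[0] = 1.0

# sqrt(k L^2 / D) large: O2 consumed near the mouth (diffusion-controlled);
# small: nearly uniform profile (reaction-controlled).
print("Thiele-type modulus:", np.sqrt(k * L**2 / D))
print("profile:", np.round(C[::20], 3))
```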
Hortin, Mitchell S; Bowden, Anton E
2016-11-01
Data have been published that quantify the nonlinear, anisotropic material behaviour and pre-strain behaviour of the anterior longitudinal, supraspinous (SSL), and interspinous ligaments of the human lumbar spine. Additionally, data have been published on localized material properties of the SSL. These results have been incrementally incorporated into a previously validated finite element model of the human lumbar spine. Results suggest that the effects of increased ligament model fidelity on bone strain energy were moderate and the effects on disc pressure were slight; these effects do not justify a change in modelling strategy for most clinical applications. There were significant effects on the stresses of the ligaments that were directly modified, suggesting that these phenomena should be included in FE models where ligament stresses are the desired metric.
The modeling and analysis of the word-of-mouth marketing
NASA Astrophysics Data System (ADS)
Li, Pengdeng; Yang, Xiaofan; Yang, Lu-Xing; Xiong, Qingyu; Wu, Yingbo; Tang, Yuan Yan
2018-03-01
As compared to traditional advertising, word-of-mouth (WOM) communications have striking advantages such as significantly lower cost and much faster propagation, and this is especially the case with the popularity of online social networks. This paper focuses on the modeling and analysis of WOM marketing. A dynamic model, known as the SIPNS model, capturing the WOM marketing processes with both positive and negative comments is established. On this basis, a measure of the overall profit of a WOM marketing campaign is proposed. The SIPNS model is shown to admit a unique equilibrium, and the equilibrium is determined. The impact of different factors on the equilibrium of the SIPNS model is illuminated through theoretical analysis. Extensive experimental results suggest that the equilibrium is very likely to be globally attracting. Finally, the influence of different factors on the expected overall profit of a WOM marketing campaign is ascertained both theoretically and experimentally. Thereby, some promotion strategies are recommended. To our knowledge, this is the first time WOM marketing has been treated in this way.
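The flavor of such an analysis can be reproduced with a small compartment model: unaware consumers S, positive spreaders P, and negative spreaders N. The equations below are an assumed illustration, not the paper's SIPNS system, whose exact form is given in the article:

```python
import numpy as np
from scipy.integrate import solve_ivp

def wom(t, y, beta_p=0.4, beta_n=0.2, delta=0.05):
    """Assumed positive/negative word-of-mouth compartments."""
    S, P, N = y
    dS = -(beta_p * P + beta_n * N) * S + delta * (P + N)  # contact + churn
    dP = beta_p * P * S - delta * P
    dN = beta_n * N * S - delta * N
    return [dS, dP, dN]

sol = solve_ivp(wom, (0.0, 400.0), [0.98, 0.01, 0.01])
print("state near equilibrium:", np.round(sol.y[:, -1], 4))
# Integrating profit per unit time of the P compartment (minus campaign cost)
# would give an overall-profit measure analogous to the one proposed above.
```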
Space system operations and support cost analysis using Markov chains
NASA Technical Reports Server (NTRS)
Unal, Resit; Dean, Edwin B.; Moore, Arlene A.; Fairbairn, Robert E.
1990-01-01
This paper evaluates the use of Markov chain process in probabilistic life cycle cost analysis and suggests further uses of the process as a design aid tool. A methodology is developed for estimating operations and support cost and expected life for reusable space transportation systems. Application of the methodology is demonstrated for the case of a hypothetical space transportation vehicle. A sensitivity analysis is carried out to explore the effects of uncertainty in key model inputs.
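The standard machinery for such an analysis is an absorbing Markov chain: transient operational states, an absorbing "retired" state, and the fundamental matrix N = (I - Q)^-1 giving expected state visits. The transition probabilities and per-state costs below are illustrative, not values from the paper:

```python
import numpy as np

# Transient states: 0 = operational, 1 = scheduled maintenance, 2 = repair.
# Row sums < 1; the remainder is the probability of absorption (retirement).
Q = np.array([[0.90, 0.06, 0.03],
              [0.75, 0.10, 0.10],
              [0.65, 0.05, 0.20]])
cost = np.array([1.0, 3.0, 8.0])         # cost per visit, arbitrary units

N = np.linalg.inv(np.eye(3) - Q)         # fundamental matrix
visits = N[0]                            # expected visits, starting operational
print("expected life (cycles):", visits.sum())
print("expected O&S cost:", visits @ cost)
# Sensitivity: perturb entries of Q or cost and recompute, mirroring the
# uncertainty exploration described in the abstract.
```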
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zocher, Marvin Anthony; Hammerberg, James Edward
The experiments of Juanicotena and Szarynski, namely T101, T102, and T105, are modeled for the purpose of gaining a better understanding of the FLAG friction model frictmultiscale2. This exercise has been conducted as a first step toward model validation. It is shown that with the inclusion of the friction model in the numerical analysis, the results of Juanicotena and Szarynski are predicted reasonably well. Without the friction model, simulation results do not match the experimental data nearly as well. Suggestions for follow-on work are included.
Zeng, Xiang Xia; Tang, Yunliang; Hu, Kaixiang; Zhou, Xi; Wang, Jiao; Zhu, Lingyan; Liu, Jianying; Xu, Jixiong
2018-01-01
Abstract Background: To investigate the efficacy of febuxostat in hyperuricemic patients with chronic kidney disease (CKD), relevant randomized clinical trials (RCTs) were analyzed. Methods: We used PubMed, Medline, ISI Web of Science, CBMdisc, and Cochrane Library databases to conduct a systematic literature search. A fixed-effects model was used to evaluate the standardized mean differences (SMDs) with 95% confidence intervals (CIs). We conducted subgroup analysis, sensitivity analysis, and analyzed publication bias, to comprehensively estimate the renoprotective effects of febuxostat in hyperuricemic patients with CKD. Results: Among 296 retrieved studies, 5 relevant RCTs were included in the meta-analysis. The result showed that serum estimated glomerular filtration rate (eGFR) was improved after febuxostat treatment in hyperuricemic patients with CKD, with an SMD (95% CI) of 0.24 [−0.17 to 0.43] and P = .67 (fixed-effects model). No heterogeneity was observed across studies (I² = 0% and P = .67). Subgroup analysis suggested that treatment-related reductions in serum eGFR levels were not related to drug doses, intervention times, or region. Conclusions: The present meta-analysis suggests that febuxostat may slow the progression of mild-to-moderate CKD. Given the limited number of included studies, additional large sample-size RCTs are required to determine the long-term renoprotective effects of febuxostat in hyperuricemic patients with CKD. PMID:29595642
Chen, Kaiyuan; Wang, Na; Zhang, Jie; Hong, Xiaohong; Xu, Haiyun; Zhao, Xiaofeng; Huang, Qingjun
2017-06-01
Although emerging evidence has suggested an association between the Val66Met (rs6265) polymorphism in the brain-derived neurotrophic factor (BDNF) gene and panic disorder, the evidence remains inconclusive given the mixed results. This meta-analysis reviewed and analyzed the recent studies addressing the potential association between the Val66Met polymorphism and panic disorder susceptibility. Related case-control studies were retrieved by database searching and selected according to established inclusion criteria. Six articles were identified, which explored the association between the BDNF Val66Met polymorphism and panic disorder. Statistical analyses revealed no association for the allele contrast and the dominant model. However, the recessive model showed a significant association between the BDNF Val66Met polymorphism and panic disorder (odds ratio = 1.26, 95% confidence interval = 1.04-1.52, z = 2.39, P = 0.02). Despite some limitations, this meta-analysis suggests that the Val66Met polymorphism of the BDNF gene is a susceptibility factor for panic disorder. © 2015 Wiley Publishing Asia Pty Ltd.
Peeters, Yvette; Boersma, Sandra N; Koopman, Hendrik M
2008-01-01
Background The aim of this study is to further explore predictors of health-related quality of life in children with asthma using factors derived from the extended stress-coping model. While the stress-coping model has often been used as a frame of reference in studying health-related quality of life in chronic illness, few have actually tested the model in children with asthma. Method In this survey study, data were obtained by means of self-report questionnaires from seventy-eight children with asthma and their parents. Based on data derived from these questionnaires, the constructs of the extended stress-coping model were assessed using regression analysis and path analysis. Results The results of both regression analysis and path analysis reveal tentative support for the proposed relationships between predictors and health-related quality of life in the stress-coping model. Moreover, as indicated in the stress-coping model, HRQoL is only directly predicted by coping. Both the coping strategies 'emotional reaction' (significantly) and 'avoidance' are directly related to HRQoL. Conclusion In children with asthma, the extended stress-coping model appears to be a useful theoretical framework for understanding the impact of the illness on their quality of life. Consequently, the factors suggested by this model should be taken into account when designing optimal psychosocial-care interventions. PMID:18366753
April 2013 MOVES Model Review Work Group Meeting Materials
Presentations from the meeting on April 30th of 2013 include a focus on the next version of MOtor Vehicle Emission Simulator (MOVES), evaluating proposed data sources and analysis methods, and commenting on or suggesting features or enhancements.
Independent and cooperative motions of the Kv1.2 channel: voltage sensing and gating.
Yeheskel, Adva; Haliloglu, Turkan; Ben-Tal, Nir
2010-05-19
Voltage-gated potassium (Kv) channels, such as Kv1.2, are involved in the generation and propagation of action potentials. The Kv channel is a homotetramer, and each monomer is composed of a voltage-sensing domain (VSD) and a pore domain (PD). We analyzed the fluctuations of a model structure of Kv1.2 using elastic network models. The analysis suggested a network of coupled fluctuations of eight rigid structural units and seven hinges that may control the transition between the active and inactive states of the channel. For the most part, the network is composed of amino acids that are known to affect channel activity. The results suggested allosteric interactions and cooperativity between the subunits in the coupling between the motion of the VSD and the selectivity filter of the PD, in accordance with recent empirical data. There are no direct contacts between the VSDs of the four subunits, and the contacts between these and the PDs are loose, suggesting that the VSDs are capable of functioning independently. Indeed, they manifest many inherent fluctuations that are decoupled from the rest of the structure. In general, the analysis suggests that the two domains contribute to the channel function both individually and cooperatively. Copyright 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Laplace transform analysis of a multiplicative asset transfer model
NASA Astrophysics Data System (ADS)
Sokolov, Andrey; Melatos, Andrew; Kieu, Tien
2010-07-01
We analyze a simple asset transfer model in which the transfer amount is a fixed fraction f of the giver's wealth. The model is analyzed in a new way by Laplace transforming the master equation, solving it analytically and numerically for the steady-state distribution, and exploring the solutions for various values of f ∈ (0, 1). The Laplace transform analysis is superior to agent-based simulations as it does not depend on the number of agents, enabling us to study entropy and inequality in regimes that are costly to address with simulations. We demonstrate that Boltzmann entropy is not a suitable (e.g. non-monotonic) measure of disorder in a multiplicative asset transfer system and suggest an asymmetric stochastic process that is equivalent to the asset transfer model.
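The underlying stochastic process is easy to simulate directly, which also makes the entropy measurement concrete. A sketch with assumed parameters; note the paper works with the Laplace-transformed master equation precisely to avoid this kind of agent-count dependence:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(f=0.3, n_agents=1000, n_steps=100_000):
    """Random giver i transfers a fraction f of its wealth to receiver j."""
    w = np.ones(n_agents)
    for _ in range(n_steps):
        i, j = rng.integers(n_agents, size=2)
        if i == j:
            continue
        amount = f * w[i]
        w[i] -= amount
        w[j] += amount
    return w

def boltzmann_entropy(w, bins=50):
    counts, _ = np.histogram(w, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

for f in (0.1, 0.5, 0.9):
    print(f, round(boltzmann_entropy(simulate(f=f)), 3))
```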
Initiation and Modification of Reaction by Energy Addition: Kinetic and Transport Phenomena
1990-10-01
The ignition-delay time ranges from about 2 to 100 ps. The results of a computer-modeling calculation of the chemical kinetics suggest that the... [Report contents: Research Objectives; Analysis; Experiment; References; Appendix I: Evaluating a Simple Model for Laminar-Flame-Propagation Rates. I. Planar Geometry; Appendix II: Evaluating a Simple Model for Laminar-Flame-Propagation Rates. II. Spherical Geometry.]
Detection of stress factors in crop and weed species using hyperspectral remote sensing reflectance
NASA Astrophysics Data System (ADS)
Henry, William Brien
The primary objective of this work was to determine if stress factors such as moisture stress or herbicide injury stress limit the ability to distinguish between weeds and crops using remotely sensed data. Additional objectives included using hyperspectral reflectance data to measure moisture content within a species, and to measure crop injury in response to drift rates of non-selective herbicides. Moisture stress did not reduce the ability to discriminate between species. Regardless of analysis technique, the trend was that as moisture stress increased, so too did the ability to distinguish between species. Signature amplitudes (SA) of the top 5 bands, discrete wavelet transforms (DWT), and multiple indices were promising analysis techniques. Discriminant models created from one year's data set and validated on additional data sets provided, on average, approximately 80% accurate classification among weeds and crop. This suggests that these models are relatively robust and could potentially be used across environmental conditions in field scenarios. Distinguishing between leaves grown at high-moisture stress and no-stress was met with limited success, primarily because there was substantial variation among samples within the treatments. Leaf water potential (LWP) was measured, and these were classified into three categories using indices. Classification accuracies were as high as 68%. The 10 bands most highly correlated to LWP were selected; however, there were no obvious trends or patterns in these top 10 bands with respect to time, species or moisture level, suggesting that LWP is an elusive parameter to quantify spectrally. In order to address herbicide injury stress and its impact on species discrimination, discriminant models were created from combinations of multiple indices. The model created from the second experimental run's data set and validated on the first experimental run's data provided an average of 97% correct classification of soybean and an overall average classification accuracy of 65% for all species. This suggests that these models are relatively robust and could potentially be used across a wide range of herbicide applications in field scenarios. From the pooled data set, a single discriminant model was created with multiple indices that discriminated soybean from weeds 88%, on average, regardless of herbicide, rate or species. Several analysis techniques including multiple indices, signature amplitude with spectral bands as features, and wavelet analysis were employed to distinguish between herbicide-treated and nontreated plants. Classification accuracy using signature amplitude (SA) analysis of paraquat injury on soybean was better than 75% for both 1/2 and 1/8X rates at 1, 4, and 7 DAA. Classification accuracy of paraquat injury on corn was better than 72% for the 1/2X rate at 1, 4, and 7 DAA. These data suggest that hyperspectral reflectance may be used to distinguish between healthy plants and injured plants to which herbicides have been applied; however, the classification accuracies remained at 75% or higher only when the higher rates of herbicide were applied. (Abstract shortened by UMI.)
Arbitrary Steady-State Solutions with the K-epsilon Model
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Pettersson Reif, B. A.; Gatski, Thomas B.
2006-01-01
Widely-used forms of the K-epsilon turbulence model are shown to yield arbitrary steady-state converged solutions that are highly dependent on numerical considerations such as initial conditions and solution procedure. These solutions contain pseudo-laminar regions of varying size. By applying a nullcline analysis to the equation set, it is possible to clearly demonstrate the reasons for the anomalous behavior. In summary, the degenerate solution acts as a stable fixed point under certain conditions, causing the numerical method to converge there. The analysis also suggests a methodology for preventing the anomalous behavior in steady-state computations.
Vector space methods of photometric analysis - Applications to O stars and interstellar reddening
NASA Technical Reports Server (NTRS)
Massa, D.; Lillie, C. F.
1978-01-01
A multivariate vector-space formulation of photometry is developed which accounts for error propagation. An analysis of uvby and H-beta photometry of O stars is presented, with attention given to observational errors, reddening, general uvby photometry, early stars, and models of O stars. The number of observable parameters in O-star continua is investigated, the way these quantities compare with model-atmosphere predictions is considered, and an interstellar reddening law is derived. It is suggested that photospheric expansion affects the formation of the continuum in at least some O stars.
Crises, noise, and tipping in the Hassell population model
NASA Astrophysics Data System (ADS)
Bashkirtseva, Irina
2018-03-01
We consider the problem of analyzing noise-induced tipping in population systems. To study this phenomenon, we use a Hassell-type system with an Allee effect as a conceptual model. A mathematical investigation of the tipping is connected with the analysis of crisis bifurcations, both boundary and interior. In the parametric study of the abrupt changes in dynamics related to noise-induced extinction and the transition from order to chaos, the stochastic sensitivity function technique and confidence domains are used. The effectiveness of the suggested approach for detecting early warnings of critical stochastic transitions is demonstrated.
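A minimal numerical companion to this kind of study: iterate a Hassell-type map with a simple Allee factor under multiplicative noise and record how often trajectories tip below an extinction threshold. Both the Allee form and all parameter values are assumptions for illustration, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(7)

def step(N, lam=6.0, a=1.0, b=2.0, A=0.3, sigma=0.0):
    """Hassell map with an Allee factor N/(N+A) and lognormal noise."""
    return lam * N**2 / ((N + A) * (1.0 + a * N)**b) * np.exp(sigma * rng.normal())

def extinction_fraction(sigma, n_runs=300, n_steps=400, N0=1.0, eps=1e-4):
    ext = 0
    for _ in range(n_runs):
        N = N0
        for _ in range(n_steps):
            N = step(N, sigma=sigma)
            if N < eps:                 # tipped into the extinction basin
                ext += 1
                break
    return ext / n_runs

for s in (0.0, 0.4, 0.8, 1.2):
    print(f"sigma = {s}: extinction fraction {extinction_fraction(s):.2f}")
```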
Shift work, night work, and the risk of prostate cancer: A meta-analysis based on 9 cohort studies.
Du, Hong-Bing; Bin, Kai-Yun; Liu, Wen-Hong; Yang, Feng-Sheng
2017-11-01
Epidemiological studies have suggested that shift work or night work may be linked to prostate cancer (PCa); the relationship, however, remains controversial. PubMed, ScienceDirect, and Embase (Ovid) databases were searched from database inception to February 4, 2017 for eligible cohort studies. We pooled the included evidence with a random- or fixed-effect model, according to the heterogeneity. A predefined subgroup analysis was conducted to see the potential discrepancy between groups. Sensitivity analysis was used to test whether our results were stable. Nine cohort studies were eligible for meta-analysis with 2,570,790 male subjects. Our meta-analysis showed that, under the fixed-effect model, the pooled relative risk (RR) of PCa was 1.05 (95% confidence interval [CI]: 1.00, 1.11; P = .06; I² = 24.00%) for men who had ever engaged in night shift work; and under the random-effect model, the pooled RR was 1.08 (0.99, 1.17; P = .08; I² = 24.00%). Subgroup analysis showed the RR of PCa among males in western countries was 1.05 (95% CI: 0.99, 1.11; P = .09; I² = 0.00%), while among Asian countries it was 2.45 (95% CI: 1.19, 5.04; P = .02; I² = 0.00%); and the RR was 1.04 (95% CI: 0.95, 1.14; P = .40; I² = 29.20%) for the high-quality group compared with 1.21 (95% CI: 1.03, 1.41; P = .02; I² = 0.00%) for the moderate/low-quality group. Sensitivity analysis showed robust results. Based on the current evidence of cohort studies, we found no obvious association between night shift work and PCa. However, our subgroup analysis suggests that night shift work may increase the risk of PCa in Asian men. Some evidence of a small study effect was observed in this meta-analysis.
Vibration test of 1/5 scale H-II launch vehicle
NASA Astrophysics Data System (ADS)
Morino, Yoshiki; Komatsu, Keiji; Sano, Masaaki; Minegishi, Masakatsu; Morita, Toshiyuki; Kohsetsu, Y.
In order to predict dynamic loads on the newly designed Japanese H-II launch vehicle, the adequacy of prediction methods has been assessed by dynamic scale model testing. A three-dimensional dynamic model was used in the analysis to express coupling effects among axial, lateral (pitch and yaw) and torsional vibrations. The liquid/tank interaction was considered by use of a boundary element method. The 1/5 scale model of the H-II launch vehicle was designed to simulate the stiffness and mass properties of important structural parts, such as core/SRB junctions, first and second stage Lox tanks and engine mount structures. Modal excitation of the test vehicle was accomplished with 100-1000 N shakers which produced random or sinusoidal vibrational forces. The vibrational response of the test vehicle was measured at various locations with accelerometers and pressure sensors. In the lower frequency range, correspondence between analysis and experiment was generally good. The basic procedures in the analysis seem to be adequate so far, but some improvements in mathematical modeling are suggested by the comparison of test and analysis.
Cao, Yuezhou; Chen, Weixian; Qian, Yun; Zeng, Yanying; Liu, Wenhua
2014-12-01
The guanosine insertion/deletion polymorphism (4G/5G) of the plasminogen activator inhibitor-1 (PAI-1) gene has been suggested as a risk factor for ischemic stroke (IS), but direct evidence from genetic association studies remains inconclusive, even in the Chinese population. Therefore, we performed a meta-analysis to evaluate this association. All of the relevant studies were identified from PubMed, Embase, the Chinese National Knowledge Infrastructure database and the Chinese Wanfang database up to September 2013. Statistical analyses were conducted with Revman 5.2 and STATA 12.0 software. Odds ratios (OR) with 95% confidence interval (CI) values were applied to evaluate the strength of the association. Heterogeneity was evaluated by the Q-test and the I² statistic. Begg's test and Egger's test were used to assess publication bias. A significant association and a borderline association between the PAI-1 4G/5G polymorphism and IS were found under the recessive model (OR = 1.639, 95% CI = 1.136-2.364) and the allelic model (OR = 1.256, 95% CI = 1.000-1.578), respectively. However, no significant association was observed under the homozygote comparison model (OR = 1.428, 95% CI = 0.914-2.233), the heterozygote comparison model (OR = 0.856, 95% CI = 0.689-1.063) or the dominant model (OR = 1.036, 95% CI = 0.846-1.270). This meta-analysis suggested that the 4G4G genotype of the PAI-1 4G/5G polymorphism might be a risk factor for IS in the Chinese population.
NASA Astrophysics Data System (ADS)
Jianjun, X.; Bingjie, Y.; Rongji, W.
2018-03-01
The purpose of this paper was to improve the level of catastrophe insurance. Firstly, earthquake predictions were carried out using a mathematical analysis method. Secondly, catastrophe insurance policies and models from other countries were compared. Thirdly, suggestions on catastrophe insurance for China were discussed. Further study should pay more attention to earthquake prediction by introducing big data.
Limits to Cloud Susceptibility
NASA Technical Reports Server (NTRS)
Coakley, James A., Jr.
2002-01-01
1-kilometer AVHRR observations of ship tracks in low-level clouds off the west coast of the U.S. were used to determine limits for the degree to which clouds might be altered by increases in anthropogenic aerosols. Hundreds of tracks were analyzed to determine whether the changes in droplet radii, visible optical depths, and cloud top altitudes that result from the influx of particles from underlying ships were consistent with expectations based on simple models for the indirect effect of aerosols. The models predict substantial increases in sunlight reflected by polluted clouds due to the increases in droplet numbers and cloud liquid water that result from the elevated particle concentrations. Contrary to the model predictions, the analysis of ship tracks revealed a 15-20% reduction in liquid water for the polluted clouds. Studies performed with a large-eddy cloud simulation model suggested that the shortfall in cloud liquid water found in the satellite observations might be attributed to the restriction that the 1-kilometer pixels be completely covered by either polluted or unpolluted cloud. The simulation model revealed that a substantial fraction of the indirect effect is caused by a horizontal redistribution of cloud water in the polluted clouds. Cloud-free gaps in polluted clouds fill in with cloud water while the cloud-free gaps in the surrounding unpolluted clouds remain cloud-free. By limiting the analysis to only overcast pixels, the current study failed to account for the gap-filling predicted by the simulation model. This finding and an analysis of the spatial variability of marine stratus suggest new ways to analyze ship tracks to determine the limit to which particle pollution will alter the amount of sunlight reflected by clouds.
Boriollo, Marcelo Fabiano Gomes; Rosa, Edvaldo Antonio Ribeiro; Gonçalves, Reginaldo Bruno; Höfling, José Francisco
2006-03-01
The typing of C. albicans by MLEE (multilocus enzyme electrophoresis) is dependent on the interpretation of enzyme electrophoretic patterns, and the study of the epidemiological relationships of these yeasts can be conducted by cluster analysis. Therefore, the aims of the present study were to first determine the discriminatory power of genetic interpretation (deduction of the allelic composition of diploid organisms) and numerical interpretation (mere determination of the presence and absence of bands) of MLEE patterns, and then to determine the concordance (Pearson product-moment correlation coefficient) and similarity (Jaccard similarity coefficient) of the groups of strains generated by three cluster analysis models, and the discriminatory power of such models as well [model A: genetic interpretation, genetic distance matrix of Nei (d(ij)) and UPGMA dendrogram; model B: genetic interpretation, Dice similarity matrix (S(D1)) and UPGMA dendrogram; model C: numerical interpretation, Dice similarity matrix (S(D2)) and UPGMA dendrogram]. MLEE was found to be a powerful and reliable tool for the typing of C. albicans due to its high discriminatory power (>0.9). Discriminatory power indicated that numerical interpretation is a method capable of discriminating a greater number of strains (47 versus 43 subtypes), but also pointed to model B as a method capable of providing a greater number of groups, suggesting its use for the typing of C. albicans by MLEE and cluster analysis. Very good agreement was only observed between the elements of the matrices S(D1) and S(D2), but a large majority of the groups generated in the three UPGMA dendrograms showed similarity S(J) between 4.8% and 75%, suggesting disparities in the conclusions obtained by the cluster assays.
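The numerical-interpretation pipeline maps directly onto standard tooling: binary band vectors, a Dice dissimilarity matrix, UPGMA (average-linkage) clustering, and a Jaccard comparison of the resulting partitions. A sketch with hypothetical banding patterns in place of real MLEE data:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
bands = rng.integers(0, 2, size=(12, 30)).astype(bool)  # strains x bands

# Model C of the study: presence/absence bands, Dice similarity, UPGMA.
dice_tree = linkage(pdist(bands, metric="dice"), method="average")
g_dice = fcluster(dice_tree, t=0.5, criterion="distance")

jac_tree = linkage(pdist(bands, metric="jaccard"), method="average")
g_jac = fcluster(jac_tree, t=0.5, criterion="distance")

def jaccard_of_partitions(g1, g2):
    """S_J over co-clustered strain pairs of two partitions."""
    n = len(g1)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    a = {p for p in pairs if g1[p[0]] == g1[p[1]]}
    b = {p for p in pairs if g2[p[0]] == g2[p[1]]}
    return len(a & b) / max(len(a | b), 1)

print("groups:", g_dice, "S_J:", round(jaccard_of_partitions(g_dice, g_jac), 3))
```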
NASA Astrophysics Data System (ADS)
Werner, Sonja; Förtsch, Christian; Boone, William; von Kotzebue, Lena; Neuhaus, Birgit J.
2017-07-01
To obtain a general understanding of science, model use as part of the National Education Standards is important for instruction. Model use can be characterized by three aspects: (1) the characteristics of the model, (2) the integration of the model into instruction, and (3) the use of models to foster scientific reasoning. However, there have been no empirical results describing the implementation of the National Education Standards in science instruction concerning the use of models. Therefore, the present study investigated the implementation of different aspects of model use in German biology instruction. Two biology lessons on the topic of neurobiology in grade nine of 32 biology teachers were videotaped (N = 64 videos). These lessons were analysed using an event-based coding manual according to the three aspects of model use described above. Rasch analysis of the coded categories was conducted and showed reliable measurement. In the first analysis, we identified 68 lessons in which a total of 112 different models were used. The in-depth analysis showed that aspects of elaborate model use corresponding to several categories of scientific reasoning were rarely implemented in biology instruction. A critical reflection on the model used (N = 25 models; 22.3%) and the use of models to demonstrate scientific reasoning (N = 26 models; 23.2%) were seldom observed. Our findings suggest that pre-service biology teacher education and professional development initiatives in Germany have to focus on both aspects.
Marc-Andre Parisien; Sean A. Parks; Meg A. Krawchuk; John M. Little; Mike D. Flannigan; Lynn M. Gowman; Max A. Moritz
2014-01-01
Fire regimes of the Canadian boreal forest are driven by certain environmental factors that are highly variable from year to year (e.g., temperature, precipitation) and others that are relatively stable (e.g., land cover, topography). Studies examining the relative influence of these environmental drivers on fire activity suggest that models making explicit use of...
Flow Reactor Studies with Nanosecond Pulsed Discharges at Atmospheric Pressure and Higher
2013-10-01
Experiment and model analysis of low temperature C2H4/N2/O2/Ar mixtures suggest intermediate formation of nitromethane. A large amount of nitromethane (CH3NO2) forms within the plasma region by CH3 + NO2 (+M) = CH3NO2 (+M); downstream, CH3NO2 then decomposes.
ERIC Educational Resources Information Center
Spolsky, Bernard; And Others
The model attempts to map all relevant factors onto a single integrated structure and to suggest some of the interaction lines. Based on a hexagonal figure, each side represents a set of factors which may have a bearing on, or be affected by, the bilingual program's operation in a particular situation--psychological, sociological, economic,…
Nonlinear Poisson equation for heterogeneous media.
Hu, Langhua; Wei, Guo-Wei
2012-08-22
The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration hyperpolarization effects due to intense charges and possibly nonlinear, anisotropic, and heterogeneous media. A variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation to solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and by the solvation analysis of a set of 17 small molecules for which experimental measurements are also available for comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical predictions, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
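The computational core can be illustrated in one dimension: a Poisson problem whose permittivity depends on the solution, handled by Picard (fixed-point) iteration over a standard finite difference discretization. The permittivity law and charge profile here are assumptions for illustration, not the functional forms of the paper:

```python
import numpy as np

n = 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
rho = np.where(np.abs(x - 0.5) < 0.05, 100.0, 0.0)  # localized charge

def eps(u):
    return 1.0 + 0.5 / (1.0 + u**2)    # assumed field-dependent permittivity

u = np.zeros(n)
for it in range(200):                  # Picard iteration: freeze eps, solve
    e = eps(u)
    e_half = 0.5 * (e[:-1] + e[1:])    # permittivity at cell faces
    A = np.zeros((n, n))
    b = rho.copy()
    A[0, 0] = A[-1, -1] = 1.0          # Dirichlet: u(0) = u(1) = 0
    b[0] = b[-1] = 0.0
    for i in range(1, n - 1):
        A[i, i - 1] = -e_half[i - 1] / h**2
        A[i, i] = (e_half[i - 1] + e_half[i]) / h**2
        A[i, i + 1] = -e_half[i] / h**2
    u_new = np.linalg.solve(A, b)
    if np.max(np.abs(u_new - u)) < 1e-10:
        u = u_new
        break
    u = u_new
print("iterations:", it + 1, "potential at center:", round(u[n // 2], 4))
```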
Using Persuasion Models to Identify Givers.
ERIC Educational Resources Information Center
Ferguson, Mary Ann; And Others
1986-01-01
Assesses the feasibility of and suggests using W. J. McGuire's information processing theory and cognitive response analysis theory in research studies to identify "givers"--those who are likely to contribute money and resources to charities or volunteer to aid philanthropic organizations. (SRT)
Li, Zhi; Milutinović, Dejan; Rosen, Jacob
2017-05-01
Reach-to-grasp arm postures differ from those in pure reaching because they are affected by the grasp position/orientation, rather than by simple transport to a position during a reaching motion. This paper investigates this difference via an analysis of experimental data collected on reaching and reach-to-grasp motions. A seven-degree-of-freedom (DOF) kinematic arm model with the swivel angle is used for the motion analysis. Compared to a widely used anatomical arm model, this model clearly distinguishes the four grasping-relevant DOFs (GR-DOFs) that are affected by the positions and orientations of the objects to be grasped. These four GR-DOFs include the swivel angle, which measures the elbow rotation about the shoulder-wrist axis, and three wrist joint angles. For each GR-DOF, we quantify a position-versus-orientation task-relevance bias that measures how much the DOF is affected by the grasping position versus orientation. The swivel angle and forearm supination have similar bias, and the analysis of their motion suggests two hypotheses regarding the synergistic coordination of the macro- and micro-structures of the human arm: (1) DOFs with similar task-relevance are synergistically coordinated; and (2) such synergy breaks when a task-relevant DOF is close to its joint limit without necessarily reaching the limit. This study provides a motion analysis method to reduce the control complexity for reach-to-grasp tasks, and suggests using dynamic coupling to coordinate the hand and arm of upper-limb exoskeletons.
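The swivel angle itself, the elbow's rotation about the shoulder-wrist axis, can be computed from three joint positions. A sketch using a global "down" vector as the zero-swivel reference, which is a common but not unique convention; the positions are made up:

```python
import numpy as np

def swivel_angle(shoulder, elbow, wrist, ref=np.array([0.0, 0.0, -1.0])):
    """Signed elbow rotation (radians) about the shoulder-wrist axis."""
    n = wrist - shoulder
    n = n / np.linalg.norm(n)
    project = lambda v: v - np.dot(v, n) * n   # component orthogonal to axis
    u = project(elbow - shoulder)
    r = project(ref)                           # degenerate if ref || axis
    u = u / np.linalg.norm(u)
    r = r / np.linalg.norm(r)
    return np.arctan2(np.dot(np.cross(r, u), n), np.dot(r, u))

shoulder = np.array([0.0, 0.0, 0.0])
elbow = np.array([0.25, -0.10, -0.20])
wrist = np.array([0.45, 0.10, -0.35])
print("swivel angle (deg):",
      round(np.degrees(swivel_angle(shoulder, elbow, wrist)), 1))
```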
Using models to manage systems subject to sustainability indicators
Hill, M.C.
2006-01-01
Mathematical and numerical models can provide insight into sustainability indicators using relevant simulated quantities, which are referred to here as predictions. To be useful, many concerns need to be considered. Four are discussed here: (a) the mathematical and numerical accuracy of the model; (b) the accuracy of the data used in model development; (c) the information observations provide to aspects of the model important to predictions of interest, as measured using sensitivity analysis; and (d) the existence of plausible alternative models for a given system. The four issues are illustrated using examples from conservative and transport modelling, and using conceptual arguments. Results suggest that ignoring these issues can produce misleading conclusions.
NASA Technical Reports Server (NTRS)
Estes, Samantha; Parker, Nelson C. (Technical Monitor)
2001-01-01
Virtual reality and simulation applications are becoming widespread in human task analysis. These programs have many benefits for the Human Factors Engineering field. Not only do creating and using virtual environments for human engineering analyses save money and time, this approach also promotes user experimentation and provides increased quality of analyses. This paper explains the human engineering task analysis performed on the Environmental Control and Life Support System (ECLSS) space station rack and its Distillation Assembly (DA) subsystem using EAI's human modeling simulation software, Jack. When installed on the International Space Station (ISS), ECLSS will provide the life and environment support needed to adequately sustain crew life. The DA is an Orbital Replaceable Unit (ORU) that provides means of wastewater (primarily urine from flight crew and experimental animals) reclamation. Jack was used to create a model of the weightless environment of the ISS Node 3, where the ECLSS is housed. Computer aided drawings of the ECLSS rack and DA system were also brought into the environment. Anthropometric models of a 95th percentile male and 5th percentile female were used to examine the human interfaces encountered during various ECLSS and DA tasks. The results of the task analyses were used in suggesting modifications to hardware and crew task procedures to improve accessibility, conserve crew time, and add convenience for the crew. This paper will address some of those suggested modifications and the method of presenting final analyses for requirements verification.
Chuang, Shu-Chun; Rota, Matteo; Gunter, Marc J; Zeleniuch-Jacquotte, Anne; Eussen, Simone J P M; Vollset, Stein Emil; Ueland, Per Magne; Norat, Teresa; Ziegler, Regina G; Vineis, Paolo
2013-10-01
Most epidemiologic studies on folate intake suggest that folate may be protective against colorectal cancer, but the results on circulating (plasma or serum) folate are mostly inconclusive. We conducted a meta-analysis of case-control studies nested within prospective studies on circulating folate and colorectal cancer risk by using flexible meta-regression models to test the linear and nonlinear dose-response relationships. A total of 8 publications (10 cohorts, representing 3,477 cases and 7,039 controls) were included in the meta-analysis. The linear and nonlinear models corresponded to relative risks of 0.96 (95% confidence interval (CI): 0.91, 1.02) and 0.99 (95% CI: 0.96, 1.02), respectively, per 10 nmol/L of circulating folate in contrast to the reference value. The pooled relative risks when comparing the highest with the lowest category were 0.80 (95% CI: 0.61, 0.99) for radioimmunoassay and 1.03 (95% CI: 0.83, 1.22) for microbiological assay. Overall, our analyses suggest a null association between circulating folate and colorectal cancer risk. The stronger association for the radioimmunoassay-based studies could reflect differences in cohorts and study designs rather than assay performance. Further investigations need to integrate more accurate measurements and flexible modeling to explore the effects of folate in the presence of genetic, lifestyle, dietary, and hormone-related factors.
NASA Astrophysics Data System (ADS)
Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.
2017-12-01
Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, (ii) implement an online correction (i.e., within the model) scheme to correct GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis Increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6-hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal and diurnal and semidiurnal model biases in GFS to reduce both systematic and random errors. As the error growth in the short-term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with GFS, correcting temperature and specific humidity online show reduction in model bias in 6-hr forecast. This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
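The correction scheme reduces to simple arithmetic on archived increments: average them, divide by the assimilation window, and add the result to the model tendency. A schematic with synthetic fields standing in for GFS output; the 6-hr window and linear-growth assumption follow the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cycles, ny, nx = 120, 10, 20
# Archive of (analysis - 6 hr forecast) fields; synthetic stand-in for GFS.
increments = 0.3 + 0.1 * rng.normal(size=(n_cycles, ny, nx))

dt_assim = 6.0 * 3600.0                               # 6 hr window in seconds
bias_tendency = increments.mean(axis=0) / dt_assim    # assumes linear error growth

def corrected_tendency(model_tendency, weight=1.0):
    """Online correction: add the increment-derived bias term to the
    model's forecast tendency at every step."""
    return model_tendency + weight * bias_tendency

physics_tendency = np.zeros((ny, nx))     # placeholder model tendency field
print("mean correction (per s):", corrected_tendency(physics_tendency).mean())
```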
Quantum scattering problem without partial-wave analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melezhik, V. S., E-mail: melezhik@theor.jinr.ru
2013-02-15
We have suggested a method for treating different quantum few-body dynamics without the traditional use of partial-wave analysis. This approach proved very efficient in the quantitative analysis of low-dimensional ultracold few-body systems arising in the confined geometry of atomic traps. Here we discuss its application to a recently suggested mechanism of resonant molecule formation in a confined two-component atomic mixture, with the energy release transferred to the center-of-mass excitation of the forming molecules. The author considers this result one of the most significant of his scientific career, which started from the model of resonant muonic molecule formation [S. I. Vinitsky et al., Sov. Phys. JETP 47, 444 (1978)], one of the most cited works of S. I. Vinitsky.
[Determination of somatotype of man in cranio-facial personality identification].
2004-01-01
On the basis of their independent research and through the analysis of published data, the authors suggest quantitative criteria for the diagnosis of a somatotype of man from the dimensional features of the face and skull. The method of M. A. Negasheva, based on discriminant analysis of 7 measurement features, was used for the individual diagnosis of a somatotype according to the V. V. Bunak scheme (somatotypes: pectoral, muscular, abdominal and indefinite). The authors suggest 2 diagnostic models based on linear and discriminant analysis of 11 and 7 measurement features of the skull. The diagnostic accuracy for the main male somatotypes is 87% and 64.4%, respectively, with canonical correlations of 0.574 and 0.292. The designed methods can be used in forensic medicine for cranio-facial and portrait expert examination.
NASA Astrophysics Data System (ADS)
Shafii, M.; Tolson, B.; Matott, L. S.
2012-04-01
Hydrologic modeling has benefited from significant developments over the past two decades. This has resulted in building of higher levels of complexity into hydrologic models, which eventually makes the model evaluation process (parameter estimation via calibration and uncertainty analysis) more challenging. In order to avoid unreasonable parameter estimates, many researchers have suggested implementation of multi-criteria calibration schemes. Furthermore, for predictive hydrologic models to be useful, proper consideration of uncertainty is essential. Consequently, recent research has emphasized comprehensive model assessment procedures in which multi-criteria parameter estimation is combined with statistically-based uncertainty analysis routines such as Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. Such a procedure relies on the use of formal likelihood functions based on statistical assumptions, and moreover, the Bayesian inference structured on MCMC samplers requires a considerably large number of simulations. Due to these issues, especially in complex non-linear hydrological models, a variety of alternative informal approaches have been proposed for uncertainty analysis in the multi-criteria context. This study aims at exploring a number of such informal uncertainty analysis techniques in multi-criteria calibration of hydrological models. The informal methods addressed in this study are (i) Pareto optimality which quantifies the parameter uncertainty using the Pareto solutions, (ii) DDS-AU which uses the weighted sum of objective functions to derive the prediction limits, and (iii) GLUE which describes the total uncertainty through identification of behavioral solutions. The main objective is to compare such methods with MCMC-based Bayesian inference with respect to factors such as computational burden, and predictive capacity, which are evaluated based on multiple comparative measures. The measures for comparison are calculated both for calibration and evaluation periods. The uncertainty analysis methodologies are applied to a simple 5-parameter rainfall-runoff model, called HYMOD.
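Of the informal methods listed, GLUE is the simplest to sketch: Monte Carlo parameter sampling, an informal likelihood with a behavioral threshold, and likelihood-weighted prediction limits. The toy two-parameter recession model below is a stand-in for HYMOD, and the threshold and bounds are assumptions:

```python
import numpy as np

rng = np.random.default_rng(11)

def toy_model(theta, t):                 # stand-in for a rainfall-runoff model
    a, b = theta
    return a * np.exp(-b * t)            # e.g., a recession curve

def nse(sim, obs):                       # Nash-Sutcliffe efficiency
    return 1.0 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)

t = np.linspace(0.0, 10.0, 50)
obs = toy_model((2.0, 0.4), t) + 0.05 * rng.normal(size=t.size)

theta = rng.uniform([0.5, 0.05], [4.0, 1.0], size=(5000, 2))
sims = np.array([toy_model(th, t) for th in theta])
scores = np.array([nse(s, obs) for s in sims])

keep = scores > 0.7                      # behavioral threshold (informal)
w = scores[keep] - 0.7
w = w / w.sum()                          # rescaled likelihood weights

# Weighted 5%/95% prediction limits at each time step.
lims = []
for k in range(t.size):
    s = sims[keep][:, k]
    order = np.argsort(s)
    cw = np.cumsum(w[order])
    lims.append((s[order][np.searchsorted(cw, 0.05)],
                 s[order][np.searchsorted(cw, 0.95)]))
print("behavioral sets:", keep.sum(), "limits at t=0:", np.round(lims[0], 3))
```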
NASA Astrophysics Data System (ADS)
Vanrolleghem, Peter A.; Mannina, Giorgio; Cosenza, Alida; Neumann, Marc B.
2015-03-01
Sensitivity analysis represents an important step in improving the understanding and use of environmental models. Indeed, by means of global sensitivity analysis (GSA), modellers may identify both important (factor prioritisation) and non-influential (factor fixing) model factors. No general rule has yet been defined for verifying the convergence of the GSA methods. In order to fill this gap, this paper presents a convergence analysis of three widely used GSA methods (SRC, Extended FAST and Morris screening) for an urban drainage stormwater quality-quantity model. After convergence was achieved, the results of each method were compared. In particular, a discussion on the peculiarities, applicability, and reliability of the three methods is presented. Moreover, a graphical Venn-diagram-based classification scheme and a precise terminology for better identifying important, interacting and non-influential factors for each method are proposed. In terms of convergence, it was shown that sensitivity indices related to factors of the quantity model achieve convergence faster. Results for the Morris screening method deviated considerably from the other methods. Factors related to the quality model require a much higher number of simulations than the number suggested in the literature for achieving convergence with this method. In fact, the results have shown that the term "screening" is improperly used, as the method may exclude important factors from further analysis. Moreover, for the presented application the convergence analysis shows more stable sensitivity coefficients for the Extended FAST method compared to SRC and Morris screening. Substantial agreement in terms of factor fixing was found between the Morris screening and Extended FAST methods. In general, the water quality related factors exhibited more important interactions than factors related to water quantity. Furthermore, in contrast to water quantity model outputs, water quality model outputs were found to be characterised by high non-linearity.
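A minimal sketch of one such convergence check, applied to the SRC method: recompute the indices on growing Monte Carlo samples and watch them stabilise. Here `model` and the sample sizes are hypothetical placeholders, not the urban drainage model or settings used in the study.

```python
import numpy as np

# Standardized Regression Coefficients (SRC): regress standardized output
# on standardized inputs; SRC_i**2 sums to R^2 for near-linear models.
def src(X, y):
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta

# Convergence check: SRC indices recomputed on nested samples of
# increasing size; stable values across sizes indicate convergence.
def src_convergence(model, bounds, sizes=(250, 500, 1000, 2000, 4000), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(max(sizes), len(lo)))
    y = np.apply_along_axis(model, 1, X)
    return {n: src(X[:n], y[:n]) for n in sizes}
```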
A Quantitative Model of Early Atherosclerotic Plaques Parameterized Using In Vitro Experiments.
Thon, Moritz P; Ford, Hugh Z; Gee, Michael W; Myerscough, Mary R
2018-01-01
There are a growing number of studies that model immunological processes in the artery wall that lead to the development of atherosclerotic plaques. However, few of these models use parameters that are obtained from experimental data even though data-driven models are vital if mathematical models are to become clinically relevant. We present the development and analysis of a quantitative mathematical model for the coupled inflammatory, lipid and macrophage dynamics in early atherosclerotic plaques. Our modeling approach is similar to the biologists' experimental approach where the bigger picture of atherosclerosis is put together from many smaller observations and findings from in vitro experiments. We first develop a series of three simpler submodels which are least-squares fitted to various in vitro experimental results from the literature. Subsequently, we use these three submodels to construct a quantitative model of the development of early atherosclerotic plaques. We perform a local sensitivity analysis of the model with respect to its parameters that identifies critical parameters and processes. Further, we present a systematic analysis of the long-term outcome of the model which produces a characterization of the stability of model plaques based on the rates of recruitment of low-density lipoproteins, high-density lipoproteins and macrophages. The analysis of the model suggests that further experimental work quantifying the different fates of macrophages as a function of cholesterol load and the balance between free cholesterol and cholesterol ester inside macrophages may give valuable insight into long-term atherosclerotic plaque outcomes. This model is an important step toward models applicable in a clinical setting.
Evaluation of a black-footed ferret resource utilization function model
Eads, D.A.; Millspaugh, J.J.; Biggins, D.E.; Jachowski, D.S.; Livieri, T.M.
2011-01-01
Resource utilization function (RUF) models permit evaluation of potential habitat for endangered species; ideally such models should be evaluated before use in management decision-making. We evaluated the predictive capabilities of a previously developed black-footed ferret (Mustela nigripes) RUF. Using the population-level RUF, generated from ferret observations at an adjacent yet distinct colony, we predicted the distribution of ferrets within a black-tailed prairie dog (Cynomys ludovicianus) colony in the Conata Basin, South Dakota, USA. We evaluated model performance, using data collected during post-breeding spotlight surveys (2007-2008), by assessing model agreement via weighted compositional analysis and count-metrics. Compositional analysis of home range use and colony-level availability, and core area use and home range availability, demonstrated ferret selection of the predicted Very high and High occurrence categories in 2007 and 2008. Simple count-metrics corroborated these findings and suggested selection of the Very high category in 2007 and the Very high and High categories in 2008. Collectively, these results suggested that the RUF was useful in predicting occurrence and intensity of space use of ferrets at our study site, the 2 objectives of the RUF. Application of this validated RUF would increase the resolution of habitat evaluations, permitting prediction of the distribution of ferrets within distinct colonies. Additional model evaluation at other sites, on other black-tailed prairie dog colonies of varying resource configuration and size, would increase understanding of influences upon model performance and the general utility of the RUF. © 2011 The Wildlife Society.
Sumi, A; Luo, T; Zhou, D; Yu, B; Kong, D; Kobayashi, N
2013-05-01
Viral hepatitis is recognized as one of the most frequently reported diseases, and in China especially, acute and chronic liver disease due to viral hepatitis has been a major public health problem. The present study aimed to analyse and predict surveillance data on infections of hepatitis A, B, C and E in Wuhan, China, by the method of time-series analysis (MemCalc, Suwa-Trast, Japan). On the basis of spectral analysis, fundamental modes explaining the underlying variation of the data for the years 2004-2008 were assigned. The model constructed from these fundamental modes reproduced the underlying variation of the data well. An extension of the model to the year 2009 predicted the data quantitatively. Our study suggests that the present method allows us to model the temporal pattern of epidemics of viral hepatitis much more effectively than the artificial neural network used previously.
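A rough sketch of the mode-fitting idea: identify dominant periodicities in the surveillance series, least-squares fit a sum of sinusoids (the "fundamental modes"), and extrapolate the fitted model forward. This plain-FFT version only approximates the MemCalc maximum-entropy implementation used in the study, and assumes uniformly sampled data.

```python
import numpy as np

# Fit an intercept plus n_modes cosine/sine pairs at the strongest
# spectral peaks, then return a callable that extrapolates the model.
def fit_modes(t, y, n_modes=3):
    freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])            # uniform sampling assumed
    power = np.abs(np.fft.rfft(y - y.mean())) ** 2
    top = freqs[np.argsort(power[1:])[::-1][:n_modes] + 1]    # skip the DC term

    def design(tt):
        cols = [np.ones_like(tt)]
        for fr in top:
            cols += [np.cos(2 * np.pi * fr * tt), np.sin(2 * np.pi * fr * tt)]
        return np.column_stack(cols)

    coef, *_ = np.linalg.lstsq(design(t), y, rcond=None)
    return lambda t_new: design(t_new) @ coef    # evaluate beyond t to predict
```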
Multivariate Longitudinal Analysis with Bivariate Correlation Test
Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory
2016-01-01
In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimensional residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions of the model's parameter estimators. These estimators can be used in the framework of multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. Using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets, of longitudinal multivariate type and multivariate multilevel type, respectively, the usefulness of the test is illustrated. PMID:27537692
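A minimal sketch of the correlation test described: a likelihood-ratio comparison of the full model (cross-correlations between the two outcomes' random effects free) against the reduced model (those correlations fixed to zero). The log-likelihood values and degrees of freedom below are hypothetical.

```python
from scipy.stats import chi2

# Twice the log-likelihood gap between nested models, referred to a
# chi-squared distribution with df equal to the number of constrained
# correlation parameters.
def lr_test(ll_full, ll_reduced, df):
    stat = 2.0 * (ll_full - ll_reduced)
    return stat, chi2.sf(stat, df)

# Example: 4 cross-correlations constrained to zero -> df = 4.
stat, p = lr_test(ll_full=-1520.3, ll_reduced=-1526.9, df=4)
# A small p favours modelling the two dependent variables jointly.
```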
Parametric sensitivity analysis of leachate transport simulations at landfills.
Bou-Zeid, E; El-Fadel, M
2004-01-01
This paper presents a case study in simulating leachate generation and transport at a 2000 ton/day landfill facility and assesses leachate migration away from the landfill in order to control associated environmental impacts, particularly on groundwater wells down gradient of the site. The site offers unique characteristics in that it is a former quarry converted to a landfill and is planned to have refuse depths that could reach 100 m, making it one of the deepest in the world. Leachate quantity and potential percolation into the subsurface are estimated using the Hydrologic Evaluation of Landfill Performance (HELP) model. A three-dimensional subsurface model (PORFLOW) was adopted to simulate ground water flow and contaminant transport away from the site. A comprehensive sensitivity analysis to leachate transport control parameters was also conducted. Sensitivity analysis suggests that changes in partition coefficient, source strength, aquifer hydraulic conductivity, and dispersivity have the most significant impact on model output indicating that these parameters should be carefully selected when similar modeling studies are performed. Copyright 2004 Elsevier Ltd.
Quantitative analysis of intra-Golgi transport shows intercisternal exchange for all cargo
Dmitrieff, Serge; Rao, Madan; Sens, Pierre
2013-01-01
The mechanisms controlling the transport of proteins through the Golgi stack of mammalian and plant cells are the subject of intense debate, with two models, cisternal progression and intercisternal exchange, emerging as major contenders. A variety of transport experiments have claimed support for each of these models. We reevaluate these experiments using a single quantitative coarse-grained framework of intra-Golgi transport that accounts for both transport models and their many variants. Our analysis makes a definitive case for the existence of intercisternal exchange both for small membrane proteins and large protein complexes; this implies that membrane structures larger than the typical protein-coated vesicles must be involved in transport. Notwithstanding, we find that current observations on protein transport cannot rule out cisternal progression as contributing significantly to the transport process. To discriminate between the different models of intra-Golgi transport, we suggest experiments and an analysis based on our extended theoretical framework that compare the dynamics of transiting and resident proteins. PMID:24019488
Money, Eric S; Barton, Lauren E; Dawson, Joseph; Reckhow, Kenneth H; Wiesner, Mark R
2014-03-01
The adaptive nature of the Forecasting the Impacts of Nanomaterials in the Environment (FINE) Bayesian network is explored. We create an updated FINE model (FINEAgNP-2) for predicting aquatic exposure concentrations of silver nanoparticles (AgNP) by combining the expert-based parameters from the baseline model established in previous work with literature data related to particle behavior, exposure, and nano-ecotoxicology via parameter learning. We validate the AgNP forecast from the updated model using mesocosm-scale field data and determine the sensitivity of several key variables to changes in environmental conditions, particle characteristics, and particle fate. Results show that the prediction accuracy of the FINEAgNP-2 model increased approximately 70% over the baseline model, with an error rate of only 20%, suggesting that FINE is a reliable tool to predict aquatic concentrations of nano-silver. Sensitivity analysis suggests that fractal dimension, particle diameter, conductivity, time, and particle fate have the most influence on aquatic exposure given the current knowledge; however, numerous knowledge gaps can be identified to suggest further research efforts that will reduce the uncertainty in subsequent exposure and risk forecasts. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Bashkirtseva, Irina; Ryashko, Lev; Ryazanova, Tatyana
2017-09-01
The problem of analysing noise-induced extinction in multidimensional population systems is considered. To investigate the conditions of extinction caused by random disturbances, a new approach based on the stochastic sensitivity function technique and confidence domains is suggested and applied to a tritrophic population model of interacting prey, predator and top predator. This approach allows us to analyse constructively the probabilistic mechanisms of the transition to noise-induced extinction from both equilibrium and oscillatory regimes of coexistence. In this analysis, a method of principal directions for reducing the dimension of confidence domains is suggested. In the dispersion of random states, the principal subspace is defined by the ratio of eigenvalues of the stochastic sensitivity matrix. A detailed analysis of two scenarios of noise-induced extinction, depending on the parameters of the considered tritrophic system, is carried out.
The Fate of Saharan Dust Across the Atlantic and Implications for a Central American Dust Barrier
NASA Technical Reports Server (NTRS)
Nowottnick, E.; Colarco, P.; da Silva, A.; Hlavka, D.; McGill, M.
2011-01-01
Saharan dust was observed over the Caribbean basin during the summer 2007 NASA Tropical Composition, Cloud, and Climate Coupling (TC4) field experiment. Airborne Cloud Physics Lidar (CPL) and satellite observations from MODIS suggest a barrier to dust transport across Central America into the eastern Pacific. We use the NASA GEOS-5 atmospheric transport model with online aerosol tracers to perform simulations of the TC4 time period in order to understand the nature of this barrier. Our simulations are driven by the Modern Era Retrospective-Analysis for Research and Applications (MERRA) meteorological analyses. We evaluate our baseline simulated dust distributions using MODIS and CALIOP satellite and ground-based AERONET sun photometer observations. GEOS-5 reproduces the observed location, magnitude, and timing of major dust events, but our baseline simulation does not develop as strong a barrier to dust transport across Central America as observations suggest. Analysis of the dust transport dynamics and loss processes suggests that while both mechanisms play a role in defining the dust transport barrier, loss processes by wet removal of dust are about twice as important as transport. Sensitivity analyses with our model showed that the dust barrier would not exist without convective scavenging over the Caribbean. The best agreement between our model and the observations was obtained when dust wet removal was parameterized to be more aggressive, treating the dust as we do hydrophilic aerosols.
Milner, Allison; Aitken, Zoe; Krnjacki, Lauren; Bentley, Rebecca; Blakely, Tony; LaMontagne, Anthony D; Kavanagh, Anne M
2015-09-01
Equity and fairness at work are associated with a range of organizational and health outcomes. Past research suggests that workers with disabilities experience inequity in the workplace. It is difficult to conclude whether the presence of disability is the reason for perceived unfair treatment due to the possible confounding of effect estimates by other demographic or socioeconomic factors. The data source was the Household, Income, and Labor Dynamics in Australia (HILDA) survey (2001-2012). Propensity for disability was calculated from logistic models including gender, age, education, country of birth, and father's occupational skill level as predictors. We then used nearest neighbor (on propensity score) matched analysis to match workers with disabilities to workers without disability. Results suggest that disability is independently associated with lower fairness of pay after controlling for confounding factors in the propensity score matched analysis; although results do suggest less than half a standard deviation difference, indicating small effects. Similar results were apparent in standard multivariable regression models and alternative propensity score analyses (stratification, covariate adjustment using the propensity score, and inverse probability of treatment weighting). Whilst neither multivariable regression nor propensity scores adjust for unmeasured confounding, and there remains the potential for other biases, similar results for the two methodological approaches to confounder adjustment provide some confidence of an independent association of disability with perceived unfairness of pay. Based on this, we suggest that the disparity in the perceived fairness of pay between people with and without disabilities may be explained by worse treatment of people with disabilities in the workplace.
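A minimal sketch of nearest-neighbour matching on the propensity score as described; `X` (covariates such as gender, age and education), `disability` and `fairness` are hypothetical arrays standing in for the HILDA variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Estimate each worker's propensity for disability from covariates, then
# pair every worker with a disability to the closest-scoring worker
# without one.
def match_on_propensity(X, disability):
    model = LogisticRegression(max_iter=1000).fit(X, disability)
    score = model.predict_proba(X)[:, 1].reshape(-1, 1)
    treated = np.where(disability == 1)[0]
    control = np.where(disability == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(score[control])
    _, idx = nn.kneighbors(score[treated])
    return treated, control[idx.ravel()]

# Matched-pair contrast in perceived fairness of pay:
# treated_idx, matched_idx = match_on_propensity(X, disability)
# effect = fairness[treated_idx].mean() - fairness[matched_idx].mean()
```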
Dynamic physiological modeling for functional diffuse optical tomography
Diamond, Solomon Gilbert; Huppert, Theodore J.; Kolehmainen, Ville; Franceschini, Maria Angela; Kaipio, Jari P.; Arridge, Simon R.; Boas, David A.
2009-01-01
Diffuse optical tomography (DOT) is a noninvasive imaging technology that is sensitive to local concentration changes in oxy- and deoxyhemoglobin. When applied to functional neuroimaging, DOT measures hemodynamics in the scalp and brain that reflect competing metabolic demands and cardiovascular dynamics. The diffuse nature of near-infrared photon migration in tissue and the multitude of physiological systems that affect hemodynamics motivate the use of anatomical and physiological models to improve estimates of the functional hemodynamic response. In this paper, we present a linear state-space model for DOT analysis that models the physiological fluctuations present in the data with either static or dynamic estimation. We demonstrate the approach by using auxiliary measurements of blood pressure variability and heart rate variability as inputs to model the background physiology in DOT data. We evaluate the improvements accorded by modeling this physiology on ten human subjects with simulated functional hemodynamic responses added to the baseline physiology. Adding physiological modeling with a static estimator significantly improved estimates of the simulated functional response, and further significant improvements were achieved with a dynamic Kalman filter estimator (paired t tests, n = 10, P < 0.05). These results suggest that physiological modeling can improve DOT analysis. The further improvement with the Kalman filter encourages continued research into dynamic linear modeling of the physiology present in DOT. Cardiovascular dynamics also affect the blood-oxygen-level-dependent (BOLD) signal in functional magnetic resonance imaging (fMRI). This state-space approach to DOT analysis could be extended to BOLD fMRI analysis, multimodal studies and real-time analysis. PMID:16242967
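A minimal linear Kalman filter of the kind used for the dynamic estimator in such a state-space model; the system matrices below are generic placeholders, not the paper's physiological regressors.

```python
import numpy as np

# States evolve as x_k = A x_{k-1} + w (cov Q); measurements are
# y_k = C x_k + v (cov R). Returns the filtered state trajectory.
def kalman_filter(y, A, C, Q, R, x0, P0):
    x, P, out = x0, P0, []
    for yk in y:
        # Predict.
        x = A @ x
        P = A @ P @ A.T + Q
        # Update with the new measurement.
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)            # Kalman gain
        x = x + K @ (yk - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        out.append(x.copy())
    return np.asarray(out)
```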
Understanding identifiability as a crucial step in uncertainty assessment
NASA Astrophysics Data System (ADS)
Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.
2016-12-01
The topic of identifiability analysis offers concepts and approaches to identify why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. Identifiability analysis typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring model identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within existing identifiability literature. It enumerates many of the various approaches discussed in the literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results which are too often overlooked because identifiability analysis is neglected.
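A toy demonstration of the "wildly different values for different initialisations" symptom: in y = a·b·x only the product a·b is identifiable, so each optimiser start returns a different (a, b) pair with essentially the same fit. The model and data are purely illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = 2.0 * x + rng.normal(0.0, 0.05, x.size)    # true product a*b = 2

def residuals(theta):
    a, b = theta
    return a * b * x - y                       # only a*b enters the fit

for start in ([1.0, 1.0], [10.0, 0.1], [0.5, 8.0]):
    fit = least_squares(residuals, start)
    print(start, "->", fit.x, "a*b =", fit.x[0] * fit.x[1])
# Each run reports different (a, b) but nearly the same product and
# residual norm: the classic signature of non-identifiability.
```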
Analysis of a Chevron Beam Thermal Actuator
NASA Astrophysics Data System (ADS)
Joshi, Amey Sanjay; Mohammed, Hussain; Kulkarni, S. M., Dr.
2018-02-01
Thermal MEMS (Micro-Electro-Mechanical Systems) actuators and sensors have a wide range of applications. Chevron-type thermal actuators show comparatively superior performance over other existing electrostatic and thermal actuators. This paper describes the design and analysis of a chevron-type thermal actuator. The standard design considered here comprises a proof mass at the center and an array of six beams with a uniform cross section of 3 × 3 microns and an initial angle of 5°. The thermal actuator was designed and analyzed using analytical and finite element methods, and the results were compared. The model was also analyzed for initial angles of 2.5° and 7.5°, and the results were compared with the FEA model. The cross section of the beam was varied, and the finite element analyses of all three models were compared to suggest the most suitable thermal actuator structure.
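A first-order kinematic estimate of chevron tip displacement under a rigid-truss assumption (apex motion of roughly L·α·ΔT/sin θ); the beam length, temperature rise and expansion coefficient below are illustrative assumptions, not the paper's analytical or FEA inputs.

```python
import math

alpha = 2.6e-6      # thermal expansion of silicon, 1/K (assumed material)
dT = 100.0          # temperature rise, K (illustrative)
L = 200e-6          # beam length, m (assumed)

for deg in (2.5, 5.0, 7.5):                     # the paper's three angles
    th = math.radians(deg)
    y0 = L * math.sin(th)
    # Exact truss geometry: anchors fixed, beam lengthened by alpha*dT.
    y1 = math.sqrt((L * (1 + alpha * dT)) ** 2 - (L * math.cos(th)) ** 2)
    approx = L * alpha * dT / math.sin(th)      # first-order estimate
    print(f"{deg:4.1f} deg: exact {1e6 * (y1 - y0):.3f} um, "
          f"first-order {1e6 * approx:.3f} um")
# Smaller pre-bend angles amplify displacement, the trend the three
# analyzed models probe.
```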
TWO-STAGE FRAGMENTATION FOR CLUSTER FORMATION: ANALYTICAL MODEL AND OBSERVATIONAL CONSIDERATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, Nicole D.; Basu, Shantanu, E-mail: nwityk@uwo.ca, E-mail: basu@uwo.ca
2012-12-10
Linear analysis of the formation of protostellar cores in planar magnetic interstellar clouds shows that molecular clouds exhibit a preferred length scale for collapse that depends on the mass-to-flux ratio and neutral-ion collision time within the cloud. We extend this linear analysis to the context of clustered star formation. By combining the results of the linear analysis with a realistic ionization profile for the cloud, we find that a molecular cloud may evolve through two fragmentation events in the evolution toward the formation of stars. Our model suggests that the initial fragmentation into clumps occurs for a transcritical cloud on parsec scales while the second fragmentation can occur for transcritical and supercritical cores on subparsec scales. Comparison of our results with several star-forming regions (Perseus, Taurus, Pipe Nebula) shows support for a two-stage fragmentation model.
Value flow mapping: Using networks to inform stakeholder analysis
NASA Astrophysics Data System (ADS)
Cameron, Bruce G.; Crawley, Edward F.; Loureiro, Geilson; Rebentisch, Eric S.
2008-02-01
Stakeholder theory has garnered significant interest from the corporate community, but has proved difficult to apply to large government programs. A detailed value flow exercise was conducted to identify the value delivery mechanisms among stakeholders for the current Vision for Space Exploration. We propose a method for capturing stakeholder needs that explicitly recognizes the outcomes required of the value creating organization. The captured stakeholder needs are then translated into input-output models for each stakeholder, which are then aggregated into a network model. Analysis of this network suggests that benefits are infrequently linked to the root provider of value. Furthermore, it is noted that requirements should not only be written to influence the organization's outputs, but also to influence the propagation of benefit further along the value chain. A number of future applications of this model to systems architecture and requirement analysis are discussed.
Haitsma, Jack J.; Furmli, Suleiman; Masoom, Hussain; Liu, Mingyao; Imai, Yumiko; Slutsky, Arthur S.; Beyene, Joseph; Greenwood, Celia M. T.; dos Santos, Claudia
2012-01-01
Objectives To perform a meta-analysis of gene expression microarray data from animal studies of lung injury, and to identify an injury-specific gene expression signature capable of predicting the development of lung injury in humans. Methods We performed a microarray meta-analysis using 77 microarray chips across six platforms, two species and different animal lung injury models exposed to lung injury with or without mechanical ventilation. Individual gene chips were classified and grouped based on the strategy used to induce lung injury. Effect size (change in gene expression) was calculated between non-injurious and injurious conditions, comparing two main strategies to pool chips: (1) one-hit and (2) two-hit lung injury models. A random effects model was used to integrate individual effect sizes calculated from each experiment. Classification models were built using the gene expression signatures generated by the meta-analysis to predict the development of lung injury in human lung transplant recipients. Results Two injury-specific lists of differentially expressed genes generated from our meta-analysis of lung injury models were validated using external data sets and prospective data from animal models of ventilator-induced lung injury (VILI). Pathway analysis of gene sets revealed that both new and previously implicated VILI-related pathways are enriched with differentially regulated genes. A classification model based on gene expression signatures identified in animal models of lung injury predicted development of primary graft failure (PGF) in lung transplant recipients with greater than 80% accuracy based upon injury profiles from transplant donors. We also found that better classifier performance can be achieved by using meta-analysis to identify differentially expressed genes than by using single study-based differential analysis. Conclusion Taken together, our data suggest that microarray analysis of gene expression data allows for the detection of "injury" gene predictors that can classify lung injury samples and identify patients at risk for clinically relevant lung injury complications. PMID:23071521
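A minimal sketch of the random-effects pooling used to integrate per-study effect sizes, in the common DerSimonian-Laird form; the per-study effects and variances below are hypothetical values for a single gene.

```python
import numpy as np

# DerSimonian-Laird random-effects meta-analysis of one gene's effect
# sizes across experiments.
def random_effects(effects, variances):
    e = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                  # fixed-effect weights
    mu_fe = np.sum(w * e) / np.sum(w)
    q = np.sum(w * (e - mu_fe) ** 2)             # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(e) - 1)) / c)      # between-study variance
    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * e) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu, se, tau2

mu, se, tau2 = random_effects([1.2, 0.8, 1.5, 0.4], [0.10, 0.15, 0.20, 0.12])
z = mu / se      # basis for calling the gene differentially expressed
```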
Conformational analysis of a covalently cross-linked Watson-Crick base pair model.
Jensen, Erik A; Allen, Benjamin D; Kishi, Yoshito; O'Leary, Daniel J
2008-11-15
Low-temperature NMR experiments and molecular modeling have been used to characterize the conformational behavior of a covalently cross-linked DNA base pair model. The data suggest that Watson-Crick or reverse Watson-Crick hydrogen bonding geometries have similar energies and can interconvert at low temperatures. This low-temperature process involves rotation about the crosslink CH(2)C(5') (psi) carbon-carbon bond, which is energetically preferred over the alternate CH(2)N(3) (phi) carbon-nitrogen bond rotation.
Trojan War displayed as a full annihilation-diffusion-reaction model
NASA Astrophysics Data System (ADS)
Flores, J. C.
2017-02-01
The diffusive pair annihilation model with embedded topological domains and archaeological data is applied in an analysis of the hypothetical Trojan-Greek war during the late Bronze Age. Parameter estimates are made explicitly for the critical dynamics of the model. In particular, the 8-metre walls of Troy can be viewed as the effective shield that provided the technological difference between the two armies. Suggestively, the numbers in The Iliad are quite sound, being in accord with Lanchester's laws of warfare.
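For reference, a minimal sketch of Lanchester's square law, the benchmark the abstract invokes; the force sizes and per-capita lethalities are illustrative only, not estimates from the paper.

```python
# dx/dt = -a*y, dy/dt = -b*x: each side's losses scale with the other's
# strength. A shield such as a wall acts like lowering the attacker's a.
def lanchester(x0, y0, a, b, dt=0.01, t_max=100.0):
    x, y, t = float(x0), float(y0), 0.0
    while x > 0 and y > 0 and t < t_max:
        x, y = x - a * y * dt, y - b * x * dt   # simultaneous Euler step
        t += dt
    return x, y, t

# Larger but less protected force vs a smaller, wall-shielded one.
print(lanchester(x0=50000, y0=30000, a=0.008, b=0.01))
```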
NASA Astrophysics Data System (ADS)
Burger, Liesl; Forbes, Andrew
2007-09-01
A simple model of a Porro prism laser resonator has been found to correctly predict the formation of the "petal" mode patterns typical of these resonators. A geometrical analysis of the petals suggests that these petals are the lowest-order modes of this type of resonator. Further use of the model reveals the formation of more complex beam patterns, and the nature of these patterns is investigated. Also, the output of stable and unstable resonator modes is presented.
Software for occupational health and safety risk analysis based on a fuzzy model.
Stefanovic, Miladin; Tadic, Danijela; Djapan, Marko; Macuzic, Ivan
2012-01-01
Risk and safety management are very important issues in healthcare systems. These are complex systems with many entities, hazards and uncertainties. In such an environment, it is very hard to introduce a system for evaluating and simulating significant hazards. In this paper, we analyze different types of hazards in healthcare systems and introduce a new fuzzy model for evaluating and ranking hazards. Finally, we present a software solution, developed on the basis of the suggested fuzzy model, for evaluating and monitoring risk.
Analysis of the free-fall behavior of liquid-metal drops in a gaseous atmosphere
NASA Technical Reports Server (NTRS)
Mccoy, J. Kevin; Markworth, Alan J.; Collings, E. W.; Brodkey, Robert S.
1987-01-01
The free-fall of a liquid-metal drop and heat transfer from the drop to its environment are described for both a gaseous atmosphere and vacuum. A simple model, in which the drop is assumed to fall rectilinearly with behavior like that of a rigid particle, is developed first, then possible causes of deviation from this behavior are discussed. The model is applied to describe solidification of drops in a drop tube. Possible future developments of the model are suggested.
Validation of the Integrated Medical Model Using Historical Space Flight Data
NASA Technical Reports Server (NTRS)
Kerstman, Eric L.; Minard, Charles G.; FreiredeCarvalho, Mary H.; Walton, Marlei E.; Myers, Jerry G., Jr.; Saile, Lynn G.; Lopez, Vilma; Butler, Douglas J.; Johnson-Throop, Kathy A.
2010-01-01
The Integrated Medical Model (IMM) utilizes Monte Carlo methodologies to predict the occurrence of medical events, utilization of resources, and clinical outcomes during space flight. Real-world data may be used to demonstrate the accuracy of the model. For this analysis, IMM predictions were compared to data from historical shuttle missions, not yet included as model source input. Initial goodness-of-fit testing on International Space Station data suggests that the IMM may overestimate the number of occurrences for three of the 83 medical conditions in the model. The IMM did not underestimate the occurrence of any medical condition. Initial comparisons with shuttle data demonstrate the importance of understanding crew preference (i.e., preferred analgesic) for accurately predicting the utilization of resources. The initial analysis demonstrates the validity of the IMM for its intended use and highlights areas for improvement.
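A minimal sketch of the Monte Carlo idea behind such a comparison: sample per-condition event counts for a mission from incidence rates and check whether observed counts fall inside the simulated interval. The rates and mission length below are hypothetical, not IMM inputs.

```python
import numpy as np

# Simulate occurrences of each medical condition over many mission
# trials and summarise with a mean and a 95% interval per condition.
def simulate_missions(rates_per_day, mission_days, n_trials=10000, seed=0):
    rng = np.random.default_rng(seed)
    lam = np.asarray(rates_per_day) * mission_days
    counts = rng.poisson(lam, size=(n_trials, len(lam)))
    return counts.mean(axis=0), np.percentile(counts, [2.5, 97.5], axis=0)

mean, ci = simulate_missions([0.001, 0.0004, 0.002], mission_days=12)
# A condition is "overestimated" if observed counts sit below ci[0].
```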
Implicit theories of a desire for fame.
Maltby, John; Day, Liz; Giles, David; Gillett, Raphael; Quick, Marianne; Langcaster-James, Honey; Linley, P Alex
2008-05-01
The aim of the present studies was to generate implicit theories of a desire for fame among the general population. In Study 1, we developed a factor-analytic model of conceptions of the desire to be famous that initially comprised nine separate factors: ambition, meaning derived through comparison with others, psychological vulnerability, attention seeking, conceitedness, social access, altruism, positive affect, and glamour. Analysis that sought to examine replicability among these factors suggested that three factors (altruism, positive affect, and glamour) display neither factor congruence nor adequate internal reliability. A second study examined the validity of these factors in predicting profiles of individuals who may desire fame. The findings from this study suggested that two of the nine factors (positive affect and altruism) could not be considered strong factors within the model. Overall, the findings suggest that implicit theories of a desire for fame comprise six factors. The discussion focuses on how an implicit model of a desire for fame might progress into formal theories of a desire for fame.
2012-08-08
Research, Fort Sam Houston, San Antonio, Texas, United States of America. Introduction: The recent literature suggests that chronic wound biofilms often consist of multiple bacterial species. However ... The management and treatment of chronic wounds continues to be a significant burden on the healthcare system [1-6]. The importance of bacterial ...
NASA Astrophysics Data System (ADS)
Coletta, Vincent P.; Evans, Jonathan
2008-10-01
We analyze the motion of a gravity powered model race car on a downhill track of variable slope. Using a simple algebraic function to approximate the height of the track as a function of the distance along the track, and taking account of the rotational energy of the wheels, rolling friction, and air resistance, we obtain analytic expressions for the velocity and time of the car as functions of the distance traveled along the track. Photogates are used to measure the time at selected points along the track, and the measured values are in excellent agreement with the values predicted from theory. The design and analysis of model race cars provides a good application of principles of mechanics and suggests interesting projects for classes in introductory and intermediate mechanics.
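A numerical counterpart to the analytic treatment, assuming a hypothetical smooth height profile h(s): integrate the energy balance along the track with an effective mass that includes wheel rotational inertia, plus rolling friction and quadratic air drag. All parameter values are illustrative.

```python
import numpy as np

g, m, I, r = 9.81, 0.05, 1e-6, 0.01       # gravity, car mass, wheel inertia, wheel radius
mu, rho, cd, area = 0.01, 1.2, 0.8, 1e-3  # rolling friction, air density, drag coeff, frontal area
m_eff = m + I / r**2                      # wheel rotation adds effective mass

def slope(s, h=lambda s: 0.5 * np.exp(-2.0 * s)):   # assumed height profile
    ds = 1e-5
    return np.arctan((h(s + ds) - h(s)) / ds)

# March u = v^2 along the track: m_eff * d(u/2)/ds = drive - friction - drag.
s = np.linspace(0.0, 2.0, 2001)
u = np.zeros_like(s)
for i in range(1, len(s)):
    th = slope(s[i - 1])
    du = 2.0 * (m * g * (-np.sin(th) - mu * np.cos(th))
                - 0.5 * rho * cd * area * u[i - 1]) / m_eff
    u[i] = max(0.0, u[i - 1] + du * (s[i] - s[i - 1]))
v = np.sqrt(u)    # speed along the track; travel time is the integral of ds/v
```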
SPITZER IRAC OBSERVATIONS OF IR EXCESS IN HOLMBERG IX X-1: A CIRCUMBINARY DISK OR A VARIABLE JET?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dudik, R. P.; Berghea, C. T.; Roberts, T. P.
2016-11-01
We present Spitzer Infrared Array Camera photometric observations of the ultraluminous X-ray source (ULX, X-1) in Holmberg IX. We construct a spectral energy distribution (SED) for Holmberg IX X-1 based on published optical, UV, and X-ray data combined with the IR data from this analysis. We modeled the X-ray and optical data with disk and stellar models; however, we find a clear IR excess in the ULX SED that cannot be explained by fits or extrapolations of any of these models. Instead, further analysis suggests that the IR excess results from dust emission, possibly a circumbinary disk, or a variable jet.
Problem-Oriented Corporate Knowledge Base Models on the Case-Based Reasoning Approach Basis
NASA Astrophysics Data System (ADS)
Gluhih, I. N.; Akhmadulin, R. K.
2017-07-01
One of the urgent directions for enhancing the efficiency of production processes and enterprise activity management is the creation and use of corporate knowledge bases. The article suggests a concept of problem-oriented corporate knowledge bases (PO CKB), in which knowledge is arranged around possible problem situations and represents a tool for making and implementing decisions in such situations. For knowledge representation in PO CKB, a case-based reasoning approach is proposed. Under this approach, the content of a case as a knowledge base component has been defined; based on the situation tree, a PO CKB knowledge model has been developed, in which knowledge about typical situations as well as specific examples of situations and solutions is represented. A generalized problem-oriented corporate knowledge base structural chart and possible modes of its operation have been suggested. The obtained models allow corporate knowledge bases to be created and used to support decision making and implementation, training, staff skill upgrading and analysis of the decisions taken. The universal interpretation of the terms "situation" and "solution" adopted in the work allows the suggested models to be used to develop problem-oriented corporate knowledge bases in different subject domains. It has been suggested to use the developed models for building corporate knowledge bases for enterprises that operate engineering systems and networks at large production facilities.
Forest-fire model as a supercritical dynamic model in financial systems
NASA Astrophysics Data System (ADS)
Lee, Deokjae; Kim, Jae-Young; Lee, Jeho; Kahng, B.
2015-02-01
Recently, large-scale cascading failures in complex systems have garnered substantial attention. Such extreme events have been treated as an integral part of self-organized criticality (SOC). Recent empirical work has suggested that some extreme events systematically deviate from the SOC paradigm, requiring a different theoretical framework. We shed additional theoretical light on this possibility by studying financial crises. We build our model of financial crisis on the well-known forest fire model in scale-free networks. Our analysis shows a nontrivial scaling feature indicating supercritical behavior, which is independent of system size. Extreme events in the supercritical state result from the bursting of a fat bubble, the seeds of which are sown by a protracted period of a benign financial environment with few shocks. Our findings suggest that policymakers can control the magnitude of financial meltdowns by keeping the economy operating within a reasonable duration of a benign environment.
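A minimal sketch of forest-fire dynamics on a scale-free network in the spirit of the model described: sites fill ("trees grow"), rare shocks ignite a site, and the fire consumes the whole occupied cluster around it. All rules and parameters are illustrative simplifications of the paper's model.

```python
import random
import networkx as nx

def forest_fire(n=2000, m=2, p_grow=0.05, p_shock=0.005, steps=2000, seed=0):
    random.seed(seed)
    g = nx.barabasi_albert_graph(n, m, seed=seed)    # scale-free substrate
    occupied, cascades = set(), []
    for _ in range(steps):
        for node in g:                               # growth: empty sites fill
            if node not in occupied and random.random() < p_grow:
                occupied.add(node)
        if occupied and random.random() < p_shock:   # exogenous shock
            start = random.choice(tuple(occupied))
            frontier, burned = [start], {start}
            while frontier:                          # burn the occupied cluster
                u = frontier.pop()
                for v in g[u]:
                    if v in occupied and v not in burned:
                        burned.add(v)
                        frontier.append(v)
            occupied -= burned
            cascades.append(len(burned))
    return cascades   # a fat-tailed size distribution signals supercriticality
```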
Ghaffarzadegan, Navid; Hawley, Joshua; Desai, Anand
2014-03-01
The US government has been increasingly supporting postdoctoral training in biomedical sciences to develop the domestic research workforce. However, current trends suggest that mostly international researchers benefit from the funding, many of whom might leave the USA after training. In this paper, we describe a model used to analyse the flow of national versus international researchers into and out of postdoctoral training. We calibrate our model in the case of the USA and successfully replicate the data. We use the model to conduct simulation-based analyses of effects of different policies on the diversity of postdoctoral researchers. Our model shows that capping the duration of postdoctoral careers, a policy proposed previously, favours international postdoctoral researchers. The analysis suggests that the leverage point to help the growth of domestic research workforce is in the pregraduate education area, and many policies implemented at the postgraduate level have minimal or unintended effects on diversity.
Shin, S M; Kim, Y-I; Choi, Y-S; Yamaguchi, T; Maki, K; Cho, B-H; Park, S-B
2015-01-01
To evaluate axial cervical vertebral (ACV) shape quantitatively and to build a prediction model for skeletal maturation level using statistical shape analysis for Japanese individuals. The sample included 24 female and 19 male patients with hand-wrist radiographs and CBCT images. Through generalized Procrustes analysis and principal components (PCs) analysis, the meaningful PCs were extracted from each ACV shape and analysed for the estimation regression model. Each ACV shape had meaningful PCs, except for the second axial cervical vertebra. Based on these models, the smallest prediction intervals (PIs) were from the combination of the shape space PCs, age and gender. Overall, the PIs of the male group were smaller than those of the female group. There was no significant correlation between centroid size as a size factor and skeletal maturation level. Our findings suggest that the ACV maturation method, which was applied by statistical shape analysis, could confirm information about skeletal maturation in Japanese individuals as an available quantifier of skeletal maturation and could be as useful a quantitative method as the skeletal maturation index.
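A minimal sketch of the shape-analysis pipeline described: Procrustes alignment of landmark configurations followed by principal components of the aligned shapes. `landmarks` is a hypothetical (subjects × points × 2) array, and this one-pass alignment to a reference only approximates full generalized Procrustes analysis.

```python
import numpy as np
from scipy.spatial import procrustes

# Align every configuration to the first, then run PCA (via SVD) on the
# flattened, centred shape coordinates.
def shape_pcs(landmarks, n_pcs=3):
    ref = landmarks[0]
    aligned = []
    for shape in landmarks:
        _, mtx, _ = procrustes(ref, shape)    # scaled/rotated copy of shape
        aligned.append(mtx.ravel())
    A = np.asarray(aligned)
    A -= A.mean(axis=0)
    _, sv, vt = np.linalg.svd(A, full_matrices=False)
    scores = A @ vt[:n_pcs].T                 # PC scores per subject
    explained = sv[:n_pcs] ** 2 / np.sum(sv ** 2)
    return scores, explained

# The PC scores, together with age and gender, would feed the regression
# that predicts skeletal maturation level.
```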
NASA Astrophysics Data System (ADS)
Wang, Xin; Li, Yan; Chen, Tongjun; Yan, Qiuyan; Ma, Li
2017-04-01
The thickness of tectonically deformed coal (TDC) is positively correlated with gas outbursts. In order to predict the TDC thickness of coal beds, we propose a new quantitative prediction method using an extreme learning machine (ELM) algorithm, a principal component analysis (PCA) algorithm, and seismic attributes. First, we build an ELM prediction model using the PCA attributes of a synthetic seismic section. The results suggest that the ELM model can produce a reliable and accurate prediction of the TDC thickness for synthetic data, with a sigmoid activation function and 20 hidden nodes preferred. Then, we analyze the applicability of the ELM model to thickness prediction of the TDC with real application data. Through cross validation of near-well traces, the results suggest that the ELM model can produce a reliable and accurate prediction of the TDC. After that, we use 250 near-well traces from 10 wells to build an ELM prediction model and use the model to forecast the TDC thickness of the No. 15 coal in the study area, using the PCA attributes as the inputs. Comparing the predicted results, we note that the trained ELM model with two selected PCA attributes yields better prediction results than those from the other combinations of the attributes. Finally, the trained ELM model with real seismic data has a different number of hidden nodes (10) than the trained ELM model with synthetic seismic data. In summary, it is feasible to use an ELM model to predict the TDC thickness using the calculated PCA attributes as the inputs. However, the input attributes, the activation function and the number of hidden nodes in the ELM model should be selected and tested carefully for each individual application.
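A minimal extreme learning machine matching the description: random input weights, a sigmoid hidden layer, and output weights solved in one shot by least squares. In the paper's setting the inputs would be the selected PCA attributes and the target the TDC thickness; the class below is a generic sketch.

```python
import numpy as np

class ELM:
    def __init__(self, n_hidden=20, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Random, untrained hidden layer.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid activations
        self.beta = np.linalg.pinv(H) @ y                  # least-squares output weights
        return self

    def predict(self, X):
        H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))
        return H @ self.beta
```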
Restricted N-glycan Conformational Space in the PDB and Its Implication in Glycan Structure Modeling
Jo, Sunhwan; Lee, Hui Sun; Skolnick, Jeffrey; Im, Wonpil
2013-01-01
Understanding glycan structure and dynamics is central to understanding protein-carbohydrate recognition and its role in protein-protein interactions. Given the difficulties in obtaining the glycan's crystal structure in glycoconjugates due to its flexibility and heterogeneity, computational modeling could play an important role in providing glycosylated protein structure models. To address if glycan structures available in the PDB can be used as templates or fragments for glycan modeling, we present a survey of the N-glycan structures of 35 different sequences in the PDB. Our statistical analysis shows that the N-glycan structures found on homologous glycoproteins are significantly conserved compared to the random background, suggesting that N-glycan chains can be confidently modeled with template glycan structures whose parent glycoproteins share sequence similarity. On the other hand, N-glycan structures found on non-homologous glycoproteins do not show significant global structural similarity. Nonetheless, the internal substructures of these N-glycans, particularly, the substructures that are closer to the protein, show significantly similar structures, suggesting that such substructures can be used as fragments in glycan modeling. Increased interactions with protein might be responsible for the restricted conformational space of N-glycan chains. Our results suggest that structure prediction/modeling of N-glycans of glycoconjugates using structure database could be effective and different modeling approaches would be needed depending on the availability of template structures. PMID:23516343
Nikoloulopoulos, Aristidis K
2017-10-01
A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we call on trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement on the trivariate generalized linear mixed model in fit to data and makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite being three-dimensional.
Implications for New Physics from Fine-Tuning Arguments: II. Little Higgs Models
NASA Astrophysics Data System (ADS)
Casas, J. A.; Espinosa, J. R.; Hidalgo, I.
2005-03-01
We examine the fine-tuning associated to electroweak breaking in Little Higgs scenarios and find it to be always substantial and, generically, much higher than suggested by the rough estimates usually made. This is due to implicit tunings between parameters that can be overlooked at first glance but show up in a more systematic analysis. Focusing on four popular and representative Little Higgs scenarios, we find that the fine-tuning is essentially comparable to that of the Little Hierarchy problem of the Standard Model (which these scenarios attempt to solve) and higher than in supersymmetric models. This does not demonstrate that all Little Higgs models are fine-tuned, but stresses the need of a careful analysis of this issue in model-building before claiming that a particular model is not fine-tuned. In this respect we identify the main sources of potential fine-tuning that should be watched out for, in order to construct a successful Little Higgs model, which seems to be a non-trivial goal.
NASA Astrophysics Data System (ADS)
Everaers, Ralf
2012-08-01
We show that the front factor appearing in the shear modulus of a phantom network, G_ph = (1 - 2/f) ρ k_B T / N_s, also controls the ratio of the strand length, N_s, and the number of monomers per Kuhn length of the primitive paths, N_ph^{PP,Kuhn}, characterizing the average network conformation. In particular, N_ph^{PP,Kuhn} = N_s / (1 - 2/f) and G_ph = ρ k_B T / N_ph^{PP,Kuhn}. Neglecting the difference between cross-links and slip-links, these results can be transferred to entangled systems and the interpretation of primitive path analysis data. In agreement with the tube model, the analogy to phantom networks suggests that the rheological entanglement length, N_e^rheo = ρ k_B T / G_e, should equal N_e^{PP,Kuhn}. Assuming binary entanglements with f = 4 functional junctions, we expect that N_e^rheo should be twice as large as the topological entanglement length, N_e^topo. These results are in good agreement with reported primitive path analysis results for model systems and a wide range of polymeric materials. Implications for tube and slip-link models are discussed.
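For readability, the relations quoted above can be restated in display form:

```latex
G_{\mathrm{ph}} = \Bigl(1 - \tfrac{2}{f}\Bigr)\,\frac{\rho k_B T}{N_s},
\qquad
N_{\mathrm{ph}}^{\mathrm{PP,Kuhn}} = \frac{N_s}{1 - 2/f},
\qquad
G_{\mathrm{ph}} = \frac{\rho k_B T}{N_{\mathrm{ph}}^{\mathrm{PP,Kuhn}}},
\qquad
N_e^{\mathrm{rheo}} = \frac{\rho k_B T}{G_e},
```

so that, for binary entanglements with f = 4, the expectation N_e^rheo = 2 N_e^topo follows.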
Minervini, Giovanni; Panizzoni, Elisabetta; Giollo, Manuel; Masiero, Alessandro; Ferrari, Carlo; Tosatto, Silvio C. E.
2014-01-01
Von Hippel-Lindau (VHL) syndrome is a hereditary condition predisposing to the development of different cancer forms, related to germline inactivation of the homonymous tumor suppressor pVHL. The best characterized function of pVHL is the ubiquitination dependent degradation of Hypoxia Inducible Factor (HIF) via the proteasome. It is also involved in several cellular pathways acting as a molecular hub and interacting with more than 200 different proteins. Molecular details of pVHL plasticity remain in large part unknown. Here, we present a novel manually curated Petri Net (PN) model of the main pVHL functional pathways. The model was built using functional information derived from the literature. It includes all major pVHL functions and is able to credibly reproduce VHL syndrome at the molecular level. The reliability of the PN model also allowed in silico knockout experiments, driven by previous model analysis. Interestingly, PN analysis suggests that the variability of different VHL manifestations is correlated with the concomitant inactivation of different metabolic pathways. PMID:24886840
Hybrid modeling and empirical analysis of automobile supply chain network
NASA Astrophysics Data System (ADS)
Sun, Jun-yan; Tang, Jian-ming; Fu, Wei-ping; Wu, Bing-ying
2017-05-01
Based on the connection mechanism of nodes, which automatically select upstream and downstream agents, a simulation model for the dynamic evolutionary process of a consumer-driven automobile supply chain is established by integrating ABM and discrete modeling on a GIS-based map. First, the model's rationality is verified by analyzing the consistency of sales and changes in various agent parameters between the simulation model and a real automobile supply chain. Second, through complex network theory, hierarchical structures of the model and relationships of networks at different levels are analyzed to calculate characteristic parameters such as mean distance, mean clustering coefficients, and degree distributions. By doing so, it is verified that the model is a typical scale-free and small-world network. Finally, the motion law of the model is analyzed from the perspective of complex self-adaptive systems. The chaotic state of the simulation system is verified, which suggests that the system has typical nonlinear characteristics. This model not only macroscopically illustrates the dynamic evolution of complex networks in an automobile supply chain but also microcosmically reflects the business process of each agent. Moreover, constructing and simulating the system by combining CAS theory and complex networks supplies a novel method for supply chain analysis, as well as theoretical foundations and practical experience for supply chain analysis at auto companies.
Reassessing hypoxia forecasts for the Gulf of Mexico.
Scavia, Donald; Donnelly, Kristina A
2007-12-01
Gulf of Mexico hypoxia has received considerable scientific and policy attention because of its potential ecological and economic impacts and implications for agriculture within its massive watershed. A 2000 assessment concluded that increased nitrate load to the Gulf since the 1950s was the primary cause of large-scale hypoxia areas. More recently, models have suggested that large-scale hypoxia did not start until the mid-1970s, and that a 40-45% nitrogen load reduction may be needed to reach the hypoxia area goal of the Hypoxia Action Plan. Recently, USGS revised nutrient load estimates to the Gulf, and the Action Plan reassessment has questioned the role of phosphorus versus nitrogen in controlling hypoxia. In this paper, we re-evaluate model simulations, hindcasts, and forecasts using revised nitrogen loads, and test the ability of a phosphorus-driven version of the model to reproduce hypoxia trends. Our analysis suggests that, if phosphorus is limiting now, it became so because of relative increases in nitrogen loads during the 1970s and 1980s. While our model suggests nitrogen load reductions of 37-45% or phosphorus load reductions of 40-50% below the 1980-1996 average are needed, we caution that a phosphorus-only strategy is potentially dangerous, and suggest it would be prudent to reduce both.
Nonsyndromic cleft lip with or without cleft palate: New BCL3 information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amos, C.; Hecht, J.T.; Gasser, D.
1996-09-01
We did not previously provide LOD scores for linkage assuming heterogeneity, as suggested by Ott, for the linkage analysis of cleft lip with or without cleft palate (CL/P) and BCL3, ApoC2, and D19S178 in the paper by Stein et al. The results from analysis using the HOMOG program, allowing for heterogeneity under the reduced penetrance model, gave a maximum LOD score of 1.85 for ApoC2, 0.41 for BCL3, 0.03 for D19S178, and 1.72 for multipoint analysis in the interval. For the affecteds-only model, the values are 1.96 for ApoC2, 0.41 for BCL3, 0.01 for D19S178, and 1.44 for the multipoint analysis. 8 refs.
A meta-analysis of aneurysm formation in laser assisted vascular anastomosis (LAVA)
NASA Astrophysics Data System (ADS)
Chen, Chen; Peng, Fei; Xu, Dahai; Cheng, Qinghua
2009-08-01
Laser assisted vascular anastomosis (LAVA) is regarded as a particularly promising non-suture method for the future. However, aneurysm formation is one of the main reasons delaying the clinical application of LAVA. Several groups have investigated the incidence of aneurysms in animal models. To systematically analyze the literature on the reported incidence of aneurysm formation in LAVA therapy, we performed a meta-analysis comparing LAVA with conventional suture anastomosis (CSA) in animal models. Data were systematically retrieved and selected from PUBMED. In total, 23 studies were retrieved; 18 studies were excluded, and 5 studies involving 647 animals were included. The analysis suggested no statistically significant difference between LAVA and CSA (OR 1.24, 95% CI 0.66-2.32, P=0.51). The result of the meta-analysis shows that the technology of LAVA is very close to clinical application.
NASA Astrophysics Data System (ADS)
Eom, Hyun-Jeong; Liu, Yuedan; Kwak, Gyu-Suk; Heo, Muyoung; Song, Kyung Seuk; Chung, Yun Doo; Chon, Tae-Soo; Choi, Jinhee
2017-06-01
We conducted an inhalation toxicity test on the alternative animal model, Drosophila melanogaster, to investigate potential hazards of indoor air pollution. The inhalation toxicity of toluene and formaldehyde was investigated using comprehensive transcriptomics and computational behavior analyses. The ingenuity pathway analysis (IPA) based on microarray data suggests the involvement of pathways related to immune response, stress response, and metabolism in formaldehyde and toluene exposure based on hub molecules. We conducted a toxicity test using mutants of the representative genes in these pathways to explore the toxicological consequences of alterations of these pathways. Furthermore, extensive computational behavior analysis showed that exposure to either toluene or formaldehyde reduced most of the behavioral parameters of both wild-type and mutants. Interestingly, behavioral alteration caused by toluene or formaldehyde exposure was most severe in the p38b mutant, suggesting that the defects in the p38 pathway underlie behavioral alteration. Overall, the results indicate that exposure to toluene and formaldehyde via inhalation causes severe toxicity in Drosophila, by inducing significant alterations in gene expression and behavior, suggesting that Drosophila can be used as a potential alternative model in inhalation toxicity screening.
Gravitational Radiation Characteristics of Nonspinning Black-Hole Binaries
NASA Technical Reports Server (NTRS)
Kelly, B. J.; Baker, J. G.; Boggs, W. D.; Centrella, J. M.; vanMeter, J. R.; McWilliams, S. T.
2008-01-01
We present a detailed descriptive analysis of the gravitational radiation from binary mergers of non-spinning black holes, based on numerical relativity simulations of systems varying from equal-mass to a 6:1 mass ratio. Our analysis covers amplitude and phase characteristics of the radiation, suggesting a unified picture of the waveforms' dominant features in terms of an implicit rotating source, applying uniformly to the full wavetrain, from inspiral through ringdown. We construct a model of the late-stage frequency evolution that fits the l = m modes, and identify late-time relationships between waveform frequency and amplitude. These relationships allow us to construct a predictive model for the late-time waveforms, an alternative to the common practice of modelling by a sum of quasinormal mode overtones. We demonstrate an application of this in a new effective-one-body-based analytic waveform model.
Life cycle cost analysis of a stand-alone PV system in rural Kenya
NASA Astrophysics Data System (ADS)
Daly, Emma
The purpose of this quantitative research study was to determine the economic feasibility of a stand-alone PV system to electrify a rural area in Kenya. The research conducted involved a comprehensive review of all the relevant literature associated with the study. Methodologies were extrapolated from this extensive literature to develop a model for the complete design and economic analysis of a stand-alone PV system. A women's center in rural Kenya was used as a worked example to demonstrate the workings of the model. The results suggest that electrifying the center using a stand-alone PV system is an economically viable option which is encouraging for the surrounding area. This model can be used as a business model to determine the economic feasibility of a stand-alone PV system in alternative sites in Kenya.
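The following is a minimal life-cycle-cost sketch of the kind of economic analysis described above, under assumed (hypothetical) costs, lifetimes, and discount rate; it is not the thesis model itself.

```python
# Minimal sketch: life cycle cost (LCC) and levelized cost of energy (LCOE)
# for a stand-alone PV system, with illustrative numbers.
def present_value(cost, rate, year):
    """Discount a future cost back to year 0."""
    return cost / (1 + rate) ** year

capital = 12_000.0          # PV array, battery bank, controller, inverter (USD)
annual_om = 150.0           # yearly operation & maintenance
battery_cost = 2_500.0      # battery replacement every 8 years
rate, lifetime = 0.06, 25   # discount rate and project horizon (years)

lcc = capital
lcc += sum(present_value(annual_om, rate, y) for y in range(1, lifetime + 1))
lcc += sum(present_value(battery_cost, rate, y) for y in range(8, lifetime, 8))

annual_energy_kwh = 2_000.0  # assumed delivered energy per year
discounted_energy = sum(present_value(annual_energy_kwh, rate, y)
                        for y in range(1, lifetime + 1))
print(f"life cycle cost: ${lcc:,.0f};  LCOE: ${lcc / discounted_energy:.3f}/kWh")
```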
Intelligence: Real or artificial?
Schlinger, Henry D.
1992-01-01
Throughout the history of the artificial intelligence movement, researchers have strived to create computers that could simulate general human intelligence. This paper argues that workers in artificial intelligence have failed to achieve this goal because they adopted the wrong model of human behavior and intelligence, namely a cognitive essentialist model with origins in the traditional philosophies of natural intelligence. An analysis of the word “intelligence” suggests that it originally referred to behavior-environment relations and not to inferred internal structures and processes. It is concluded that if workers in artificial intelligence are to succeed in their general goal, then they must design machines that are adaptive, that is, that can learn. Thus, artificial intelligence researchers must discard their essentialist model of natural intelligence and adopt a selectionist model instead. Such a strategic change should lead them to the science of behavior analysis. PMID:22477051
Optimized production planning model for a multi-plant cultivation system under uncertainty
NASA Astrophysics Data System (ADS)
Ke, Shunkui; Guo, Doudou; Niu, Qingliang; Huang, Danfeng
2015-02-01
An inexact multi-constraint programming model under uncertainty was developed by incorporating a production plan algorithm into the crop production optimization framework under the multi-plant collaborative cultivation system. In the production plan, orders from the customers are assigned to a suitable plant under the constraints of plant capabilities and uncertainty parameters to maximize profit and achieve customer satisfaction. The developed model and solution method were applied to a case study of a multi-plant collaborative cultivation system to verify its applicability. As determined in the case analysis involving different orders from customers, the period of plant production planning and the interval between orders can significantly affect system benefits. Through the analysis of uncertain parameters, reliable and practical decisions can be generated using the suggested model of a multi-plant collaborative cultivation system.
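The order-assignment subproblem described above (orders assigned to plants under capacity constraints to maximize profit) can be illustrated with a simple LP relaxation; the sketch below uses made-up numbers and is not the paper's inexact multi-constraint programming model.

```python
# Minimal sketch: assign orders to plants under capacity limits, max profit.
import numpy as np
from scipy.optimize import linprog

profit = np.array([[8, 6, 9],      # profit[i, j]: order i produced at plant j
                   [5, 7, 4],
                   [6, 5, 8]])
capacity = np.array([1.5, 1.0, 1.0])   # plant capacities (in orders' worth)

n_orders, n_plants = profit.shape
c = -profit.ravel()                     # linprog minimizes, so negate profit

# Each order fully assigned: sum_j x[i, j] == 1
A_eq = np.zeros((n_orders, n_orders * n_plants))
for i in range(n_orders):
    A_eq[i, i * n_plants:(i + 1) * n_plants] = 1
b_eq = np.ones(n_orders)

# Plant capacity: sum_i x[i, j] <= capacity[j]
A_ub = np.zeros((n_plants, n_orders * n_plants))
for j in range(n_plants):
    A_ub[j, j::n_plants] = 1

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print("assignment fractions:\n", res.x.reshape(n_orders, n_plants).round(2))
print("max profit:", -res.fun)
```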
Trends in Mediation Analysis in Nursing Research: Improving Current Practice.
Hertzog, Melody
2018-06-01
The purpose of this study was to describe common approaches used by nursing researchers to test mediation models and evaluate them within the context of current methodological advances. MEDLINE was used to locate studies testing a mediation model and published from 2004 to 2015 in nursing journals. Design (experimental/correlation, cross-sectional/longitudinal, model complexity) and analysis (method, inclusion of test of mediated effect, violations/discussion of assumptions, sample size/power) characteristics were coded for 456 studies. General trends were identified using descriptive statistics. Consistent with findings of reviews in other disciplines, evidence was found that nursing researchers may not be aware of the strong assumptions and serious limitations of their analyses. Suggestions for strengthening the rigor of such studies and an overview of current methods for testing more complex models, including longitudinal mediation processes, are presented.
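For concreteness, the following is a minimal sketch of the product-of-coefficients mediation test with a percentile bootstrap interval, the kind of "test of the mediated effect" the review coded for; the data are simulated, not drawn from any reviewed study.

```python
# Minimal sketch: indirect effect a*b with a percentile bootstrap CI.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)                       # predictor
m = 0.5 * x + rng.normal(size=n)             # mediator (true a = 0.5)
y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # outcome  (true b = 0.4)

def ab_path(x, m, y):
    a = np.polyfit(x, m, 1)[0]                   # slope of m ~ x
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]  # slope of y ~ m, adjusting x
    return a * b                                 # mediated (indirect) effect

boot = np.empty(2000)
for k in range(boot.size):
    idx = rng.integers(0, n, n)                  # resample cases with replacement
    boot[k] = ab_path(x[idx], m[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {ab_path(x, m, y):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```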
Brousselle, Astrid; Lamothe, Lise; Mercier, Céline; Perreault, Michel
2007-02-01
The co-occurrence of mental health and substance use disorders is becoming increasingly recognized as a single problem, and professionals recognize that both should be addressed at the same time. Medical best practices recommend integrated treatment. However, criticisms have arisen, particularly concerning the difficulty of implementing integrated teams in specific health-care contexts and the appropriateness of the proposed model for certain populations. Using logic analysis, we identify the key clinical and organizational factors that contribute to successful implementation. Building on both the professional and organizational literatures on integrated services, we propose a conceptual model that makes it possible to analyze integration processes and places integrated treatment within an interpretative framework. Using this model, it becomes possible to identify key factors necessary to support service integration, and suggest new models of practice adapted to particular contexts.
Analysis of obsidian from moho cay, belize: new evidence on classic maya trade routes.
Healy, P F; McKillop, H I; Walsh, B
1984-07-27
Trace element analysis of obsidian artifacts from Moho Cay, Belize, reveals that the obsidian derives primarily from the El Chayal outcrop in highland Guatemala and not from the Ixtepeque source. This is contrary to the widely accepted obsidian trade route model for Classic Maya civilization and suggests that Classic Maya obsidian trade was a more complex economic phenomenon than has been recognized.
ERIC Educational Resources Information Center
Suen, Hoi K.; And Others
The applicability of the Bayesian random-effect analysis of variance (ANOVA) model developed by G. C. Tiao and W. Y. Tan (1966), and of a method suggested by H. K. Suen and P. S. Lee (1987), is explored for the generalizability analysis of autocorrelated data. According to Tiao and Tan, if time series data could be described as a first-order…
Nonlinear behavior of solar gravity modes driven by He-3 in the core. I - Bifurcation analysis
NASA Technical Reports Server (NTRS)
Merryfield, William J.; Gough, Douglas; Toomre, Juri
1990-01-01
The nonlinear development of solar gravity modes driven by He-3 burning in the solar core is investigated by means of an idealized dynamical model. Possible outcomes that have been suggested in the literature include the triggering of subcritical direct convection, leading to core mixing, and the saturation of the excitation processes, leading to sustained finite-amplitude oscillations. The present simple model suggests that the latter is the more likely. The limiting amplitude of the oscillations is estimated, ignoring possible resonances with other gravity modes, to be of order 10 km/s at the solar surface. Such oscillations would be easily observable. That large-amplitude gravity modes have not been observed suggests either that these modes are not unstable in the present era or that they are limited to much smaller amplitudes by resonant coupling.
Shah, Anup D; Inder, Kerry L; Shah, Alok K; Cristino, Alexandre S; McKie, Arthur B; Gabra, Hani; Davis, Melissa J; Hill, Michelle M
2016-10-07
Lipid rafts are dynamic membrane microdomains that orchestrate molecular interactions and are implicated in cancer development. To understand the functions of lipid rafts in cancer, we performed an integrated analysis of quantitative lipid raft proteomics data sets modeling progression in breast cancer, melanoma, and renal cell carcinoma. This analysis revealed that cancer development is associated with increased membrane raft-cytoskeleton interactions, with ∼40% of elevated lipid raft proteins being cytoskeletal components. Previous studies suggest a potential functional role for the raft-cytoskeleton in the action of the putative tumor suppressors PTRF/Cavin-1 and Merlin. To extend the observation, we examined lipid raft proteome modulation by an unrelated tumor suppressor opioid binding protein cell-adhesion molecule (OPCML) in ovarian cancer SKOV3 cells. In agreement with the other model systems, quantitative proteomics revealed that 39% of OPCML-depleted lipid raft proteins are cytoskeletal components, with microfilaments and intermediate filaments specifically down-regulated. Furthermore, protein-protein interaction network and simulation analysis showed significantly higher interactions among cancer raft proteins compared with general human raft proteins. Collectively, these results suggest increased cytoskeleton-mediated stabilization of lipid raft domains with greater molecular interactions as a common, functional, and reversible feature of cancer cells.
Mechanistic modelling of drug release from a polymer matrix using magnetic resonance microimaging.
Kaunisto, Erik; Tajarobi, Farhad; Abrahmsen-Alami, Susanna; Larsson, Anette; Nilsson, Bernt; Axelsson, Anders
2013-03-12
In this paper a new model describing drug release from a polymer matrix tablet is presented. The utilization of the model is described as a two step process where, initially, polymer parameters are obtained from a previously published pure polymer dissolution model. The results are then combined with drug parameters obtained from literature data in the new model to predict solvent and drug concentration profiles and polymer and drug release profiles. The modelling approach was applied to the case of a HPMC matrix highly loaded with mannitol (model drug). The results showed that the drug release rate can be successfully predicted, using the suggested modelling approach. However, the model was not able to accurately predict the polymer release profile, possibly due to the sparse amount of usable pure polymer dissolution data. In addition to the case study, a sensitivity analysis of model parameters relevant to drug release was performed. The analysis revealed important information that can be useful in the drug formulation process. Copyright © 2013 Elsevier B.V. All rights reserved.
An anatomy of the projected North Atlantic warming hole in CMIP5 models
NASA Astrophysics Data System (ADS)
Menary, Matthew B.; Wood, Richard A.
2018-04-01
Global mean surface air temperature has increased over the past century and climate models project this trend to continue. However, the pattern of change is not homogeneous. Of particular interest is the subpolar North Atlantic, which has cooled in recent years and is projected to continue to warm less rapidly than the global mean. This is often termed the North Atlantic warming hole (WH). In climate model projections, the development of the WH is concomitant with a weakening of the Atlantic meridional overturning circulation (AMOC). Here, we further investigate the possible link between the AMOC and WH and the competing drivers of vertical mixing and surface heat fluxes. Across a large ensemble of 41 climate models we find that the spatial structure of the WH varies considerably from model to model but is generally upstream of the simulated deep water formation regions. A heat budget analysis suggests the formation of the WH is related to changes in ocean heat transport. Although the models display a plethora of AMOC mean states, they generally predict a weakening and shallowing of the AMOC, also consistent with the evolving depth structure of the WH. A lagged regression analysis during the WH onset phase suggests that reductions in wintertime mixing lead a weakening of the AMOC by 5 years, which in turn leads the initiation of the WH by a further 5 years. Inter-model differences in the evolution and structure of the WH are likely to lead to somewhat different projected climate impacts in nearby Europe and North America.
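The lagged-regression idea used in the WH onset analysis can be illustrated with a minimal sketch: correlate one index against another across a range of lags and look for the lag of peak correlation. The series below are synthetic, with a built-in 5-year lag.

```python
# Minimal sketch: lead-lag correlation between two annual-mean indices.
import numpy as np

rng = np.random.default_rng(1)
n = 120                                    # e.g., years of annual-mean indices
mixing = rng.normal(size=n)
amoc = np.roll(mixing, 5) + 0.5 * rng.normal(size=n)   # AMOC lags mixing by 5

def lagged_corr(lead, lag, max_lag=15):
    """Correlation of `lead` at time t with `lag` at time t+k, k = 0..max_lag."""
    out = []
    for k in range(max_lag + 1):
        out.append(np.corrcoef(lead[: n - k], lag[k:])[0, 1])
    return np.array(out)

r = lagged_corr(mixing, amoc)
print("peak correlation at lag", r.argmax(), "years (expected ~5)")
```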
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tselioudis, George
2016-03-04
From its location on the subtropics-midlatitude boundary, the Azores is influenced by both the subtropical high pressure and the midlatitude baroclinic storm regimes, and therefore experiences a wide range of cloud structures, from fair-weather scenes to stratocumulus sheets to deep convective systems. This project combined three types of data sets to study cloud variability in the Azores: a satellite analysis of cloud regimes, a reanalysis characterization of storminess, and a 19-month field campaign that occurred on Graciosa Island. Combined analysis of the three data sets provides a detailed picture of cloud variability and the respective dynamic influences, with emphasis on low clouds that constitute a major uncertainty source in climate model simulations. The satellite cloud regime analysis shows that the Azores cloud distribution is similar to the mean global distribution and can therefore be used to evaluate cloud simulation in global models. Regime analysis of low clouds shows that stratocumulus decks occur under the influence of the Azores high-pressure system, while shallow cumulus clouds are sustained by cold-air outbreaks, as revealed by their preference for post-frontal environments and northwesterly flows. An evaluation of CMIP5 climate model cloud regimes over the Azores shows that all models severely underpredict shallow cumulus clouds, while most models also underpredict the occurrence of stratocumulus cloud decks. It is demonstrated that carefully selected case studies can be related through regime analysis to climatological cloud distributions, and a methodology is suggested utilizing process-resolving model simulations of individual cases to better understand cloud-dynamics interactions and attempt to explain and correct climate model cloud deficiencies.
ERIC Educational Resources Information Center
Li, Wenjing; Denson, Linley A.; Dorstyn, Diana S.
2017-01-01
This study investigated help-seeking intentions and use of mental health services within a sample of 1128 Mainland Chinese college students (630 males and 498 females; mean age = 20.01 years, SD = 1.48). Results of structural equation modeling and logistic regression analysis suggested that social-cognitive variables had significant effects both…
A model for amalgamation in group decision making
NASA Technical Reports Server (NTRS)
Cutello, Vincenzo; Montero, Javier
1992-01-01
In this paper we present a generalization of the model proposed by Montero, by allowing non-complete fuzzy binary relations for individuals. A degree of unsatisfaction can be defined in this case, suggesting that any democratic aggregation rule should take into account not only ethical conditions or some degree of rationality in the amalgamating procedure, but also a minimum support for the set of alternatives subject to the group analysis.
Spectral characteristics of background error covariance and multiscale data assimilation
Li, Zhijin; Cheng, Xiaoping; Gustafson, Jr., William I.; ...
2016-05-17
Spatial resolutions of numerical atmospheric and oceanic circulation models have steadily increased over the past decades. Horizontal grid spacing down to the order of 1 km is now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporally intermittent. Difficulties of current data assimilation algorithms for such fine resolution models are numerically and theoretically examined. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and is larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into the aforementioned fine resolution models. Lastly, within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and challenges are discussed.
Herath, Mahesha B; Creager, Stephen E; Kitaygorodskiy, Alex; DesMarteau, Darryl D
2010-09-10
A study of proton-transport rates and mechanisms under anhydrous conditions using a series of acid model compounds, analogous to comb-branch perfluorinated ionomers functionalized with phosphonic, phosphinic, sulfonic, and carboxylic acid protogenic groups, is reported. Model compounds are characterized with respect to proton conductivity, viscosity, proton and anion (conjugate-base) self-diffusion coefficients, and Hammett acidity. The highest conductivities, and also the highest viscosities, are observed for the phosphonic and phosphinic acid model compounds. Arrhenius analysis of conductivity and viscosity for these two acids reveals much lower activation energies for ion transport than for viscous flow. Additionally, the proton self-diffusion coefficients are much higher than the conjugate-base self-diffusion coefficients for these two acids. Taken together, these data suggest that anhydrous proton transport in the phosphonic and phosphinic acid model compounds occurs primarily by a structure-diffusion, hopping-based mechanism rather than a vehicle mechanism. Further analysis of ionic conductivity and ion self-diffusion rates by using the Nernst-Einstein equation reveals that the phosphonic and phosphinic acid model compounds are relatively highly dissociated even under anhydrous conditions. In contrast, sulfonic and carboxylic acid-based systems exhibit relatively low degrees of dissociation under anhydrous conditions. These findings suggest that fluoroalkyl phosphonic and phosphinic acids are good candidates for further development as anhydrous, high-temperature proton conductors.
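The Nernst-Einstein comparison described above can be sketched as follows: the conductivity predicted from the proton and conjugate-base self-diffusion coefficients is compared with the measured value, and their ratio serves as a proxy for the degree of dissociation. All numbers below are illustrative assumptions, not the paper's measurements.

```python
# Minimal sketch: Nernst-Einstein conductivity for a 1:1 electrolyte,
# sigma_NE = F^2 * c * (D+ + D-) / (R * T).
F = 96485.0        # Faraday constant, C/mol
R = 8.314          # gas constant, J/(mol K)
T = 393.15         # temperature, K (120 C, a typical anhydrous target; assumed)

D_H = 5.0e-10      # proton self-diffusion coefficient, m^2/s (assumed)
D_A = 1.0e-10      # conjugate-base self-diffusion coefficient, m^2/s (assumed)
c = 2000.0         # acid-group concentration, mol/m^3 (assumed)

sigma_NE = F**2 * c * (D_H + D_A) / (R * T)      # ideal conductivity, S/m
sigma_measured = 0.8 * sigma_NE                  # placeholder measurement
dissociation = sigma_measured / sigma_NE         # degree-of-dissociation proxy
print(f"sigma_NE = {sigma_NE:.3f} S/m, apparent dissociation = {dissociation:.2f}")

# D_H >> D_A together with high apparent dissociation is the signature of
# structure diffusion (Grotthuss-type hopping) rather than vehicular transport.
```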
Penn, Alexandra S.; Knight, Christopher J. K.; Lloyd, David J. B.; Avitabile, Daniele; Kok, Kasper; Schiller, Frank; Woodward, Amy; Druckman, Angela; Basson, Lauren
2013-01-01
Fuzzy Cognitive Mapping (FCM) is a widely used participatory modelling methodology in which stakeholders collaboratively develop a ‘cognitive map’ (a weighted, directed graph), representing the perceived causal structure of their system. This can be directly transformed by a workshop facilitator into simple mathematical models to be interrogated by participants by the end of the session. Such simple models provide thinking tools which can be used for discussion and exploration of complex issues, as well as sense checking the implications of suggested causal links. They increase stakeholder motivation and understanding of whole systems approaches, but cannot be separated from an intersubjective participatory context. Standard FCM methodologies make simplifying assumptions, which may strongly influence results, presenting particular challenges and opportunities. We report on a participatory process, involving local companies and organisations, focussing on the development of a bio-based economy in the Humber region. The initial cognitive map generated consisted of factors considered key for the development of the regional bio-based economy and their directional, weighted, causal interconnections. A verification and scenario generation procedure, to check the structure of the map and suggest modifications, was carried out with a second session. Participants agreed on updates to the original map and described two alternate potential causal structures. In a novel analysis all map structures were tested using two standard methodologies usually used independently: linear and sigmoidal FCMs, demonstrating some significantly different results alongside some broad similarities. We suggest a development of FCM methodology involving a sensitivity analysis with different mappings and discuss the use of this technique in the context of our case study. Using the results and analysis of our process, we discuss the limitations and benefits of the FCM methodology in this case and in general. We conclude by proposing an extended FCM methodology, including multiple functional mappings within one participant-constructed graph. PMID:24244303
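The two standard FCM update rules compared in the paper can be sketched compactly: iterate x_{t+1} = f(W x_t) with a linear or sigmoidal squashing function f. The 3-node weight matrix below is a toy example, not the Humber-region map.

```python
# Minimal sketch: linear vs. sigmoidal Fuzzy Cognitive Map iteration.
import numpy as np

W = np.array([[0.0, 0.6, -0.4],    # W[i, j]: influence of concept j on concept i
              [0.5, 0.0,  0.3],
              [0.0, 0.7,  0.0]])

def run_fcm(W, x0, mapping="sigmoid", steps=50, lam=1.0):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        z = W @ x
        if mapping == "sigmoid":
            x = 1.0 / (1.0 + np.exp(-lam * z))   # squash activations to (0, 1)
        else:                                    # "linear": clip to stay bounded
            x = np.clip(z, 0.0, 1.0)
    return x

x0 = [0.5, 0.5, 0.5]
print("sigmoid fixed point:", run_fcm(W, x0, "sigmoid").round(3))
print("linear  fixed point:", run_fcm(W, x0, "linear").round(3))
```

As the paper's sensitivity analysis suggests, the two mappings can settle to quite different states from the same map, which is why testing both matters.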
Symbolic healing of early psychosis: psychoeducation and sociocultural processes of recovery.
Larsen, John Aggergaard
2007-09-01
This article analyzes sociocultural processes of recovery in a Danish mental health service providing two years of integrated biopsychosocial treatment following first-episode psychosis. The study is based on ethnographic research in the service and person-centered involvement with 15 clients. The analysis applies Dow's [1986 American Anthropologist 88:56-69] model of universal components of symbolic healing to elucidate sociocultural aspects of therapeutic efficacy that are alternatively disregarded as placebo or nonspecific effects. It is demonstrated how staff engaged with clients to deliver "psychoeducation" that provided scientific and biomedical theories about mental illness, constituting a shared "mythic world" that was accepted as an experiential truth and used to explain clients' illness experiences. The analysis highlights the need to supplement attention in Dow's model to the healing procedure with consideration of variability in the healing process. Depending on individual responses to the intervention, the staff's professional backgrounds and staff-client relationships different recovery models were applied. One suggested "episodic psychosis" and full recovery, and the other suggested "chronic schizophrenia" and the necessity of comprehensive life adjustments to the mental illness. The recovery models influenced clients' perspectives on illness and self as they engaged in identity work, negotiating future plans and individual life projects by including also alternative systems of explanation from the wider cultural repertoire.
Íbias, Javier; Pellón, Ricardo; Sanabria, Federico
2014-01-01
Recent research has suggested that frequent short bursts of activity characterize hyperactivity associated with attention deficit hyperactivity disorder (ADHD). This study determined whether such a pattern is also visible in schedule-induced polydipsia (SIP) in the spontaneously hypertensive rat (SHR), an animal model of ADHD. Male SHR, Wistar Kyoto (WKY) and Wistar rats were exposed to 40 sessions of SIP using a multiple fixed-time (FT) schedule of food delivery with FT 30-s and FT 90-s components. Stable performance was analyzed to determine the extent to which SIP-associated drinking is organized in bouts. The Bi-Exponential Refractory Model (BERM) of free-operant performance was applied to schedule-induced licks. A model comparison analysis supported BERM as a description of SIP episodes: licks were not produced at a constant rate but organized into bouts within drinking episodes. FT 30-s induced similar overall licking rates, latencies to first licks and episode durations across strains; FT 90-s induced longer episode durations in SHRs and reduced licking rate in WKY and Wistar rats to nearly baseline levels. Across schedules, SHRs made more and shorter bouts when compared to the other strains. These results suggest an incentive-induced hyperactivity in SHR that has been observed in operant behavior and in children with ADHD. PMID:25447297
Sullivan, Kristynn J; Shadish, William R; Steiner, Peter M
2015-03-01
Single-case designs (SCDs) are short time series that assess intervention effects by measuring units repeatedly over time in both the presence and absence of treatment. This article introduces a statistical technique for analyzing SCD data that has not been much used in psychological and educational research: generalized additive models (GAMs). In parametric regression, the researcher must choose a functional form to impose on the data, for example, that trend over time is linear. GAMs reverse this process by letting the data inform the choice of functional form. In this article we review the problem that trend poses in SCDs, discuss how current SCD analytic methods approach trend, describe GAMs as a possible solution, suggest a GAM model testing procedure for examining the presence of trend in SCDs, present a small simulation to show the statistical properties of GAMs, and illustrate the procedure on 3 examples of different lengths. Results suggest that GAMs may be very useful both as a form of sensitivity analysis for checking the plausibility of assumptions about trend and as a primary data analysis strategy for testing treatment effects. We conclude with a discussion of some problems with GAMs and some future directions for research on the application of GAMs to SCDs. (c) 2015 APA, all rights reserved.
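The following is a minimal sketch of a GAM applied to single-case-design data, assuming statsmodels' GLMGam API: a smooth of session number absorbs trend while a phase indicator estimates the treatment effect. The data are simulated, not the article's examples.

```python
# Minimal sketch: GAM with a spline trend plus a treatment-phase indicator.
import numpy as np
from statsmodels.gam.api import GLMGam, BSplines

rng = np.random.default_rng(2)
t = np.arange(30, dtype=float)                  # session number
phase = (t >= 15).astype(float)                 # 0 = baseline, 1 = treatment
y = 0.05 * t + 2.0 * phase + rng.normal(0, 0.5, t.size)  # trend + effect

bs = BSplines(t[:, None], df=[6], degree=[3])   # smooth term for trend
exog = np.column_stack([np.ones_like(t), phase])
res = GLMGam(y, exog=exog, smoother=bs, alpha=1.0).fit()
# Parametric terms are assumed to come first in res.params:
print("intercept, treatment effect (~2.0):", res.params[:2].round(2))
```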
Prasad, Kumar Suranjit; Amin, Yesha; Selvaraj, Kaliaperumal
2014-07-15
The present study reports a novel approach for the synthesis of Zr nanoparticles using an aqueous extract of Aloe vera. The resulting nanoparticles were embedded into chitosan biopolymer, forming what is termed the CNZr composite. The composite was subjected to detailed adsorption studies for removal of fluoride from aqueous solution. The synthesized Zr nanoparticles showed a UV-vis absorption peak at 420 nm. TEM results showed the formation of polydispersed nanoparticles ranging from 18 nm to 42 nm. SAED and XRD analysis suggested fcc (face-centered cubic) Zr crystallites. EDAX analysis suggested that Zr was an integral component of the synthesized nanoparticles. FT-IR study indicated that functional groups such as N-H, C-O, C-N, and C-C were involved in particle formation. The adsorption of fluoride onto the CNZr composite worked well at pH 7.0, where ~99% of fluoride was found to be adsorbed on the adsorbent. The Langmuir isotherm model best fitted the equilibrium data since it presented a higher R² value than the Freundlich model. In comparison to the pseudo-first order kinetic model, the pseudo-second order model could explain the adsorption kinetic behavior of F⁻ onto the CNZr composite satisfactorily with a good correlation coefficient. The present study revealed that the CNZr composite may work as an effective tool for removal of fluoride from contaminated water. Copyright © 2014 Elsevier B.V. All rights reserved.
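The Langmuir-versus-Freundlich comparison above follows a standard curve-fitting recipe, sketched below with scipy; the (Ce, qe) points are synthetic placeholders, not the study's data.

```python
# Minimal sketch: fit Langmuir and Freundlich isotherms and compare R^2.
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])     # equilibrium conc., mg/L
qe = np.array([3.1, 5.2, 7.9, 10.4, 12.1, 13.0])   # adsorbed amount, mg/g

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce ** (1 / n)

for name, f, p0 in [("Langmuir", langmuir, (14, 0.5)),
                    ("Freundlich", freundlich, (4, 2))]:
    popt, _ = curve_fit(f, Ce, qe, p0=p0)
    ss_res = np.sum((qe - f(Ce, *popt)) ** 2)
    r2 = 1 - ss_res / np.sum((qe - qe.mean()) ** 2)
    print(f"{name:10s} params={np.round(popt, 3)}  R^2={r2:.4f}")
```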
TAD-free analysis of architectural proteins and insulators.
Mourad, Raphaël; Cuvier, Olivier
2018-03-16
The three-dimensional (3D) organization of the genome is intimately related to numerous key biological functions including gene expression and DNA replication regulations. The mechanisms by which molecular drivers functionally organize the 3D genome, such as topologically associating domains (TADs), remain to be explored. Current approaches consist in assessing the enrichments or influences of proteins at TAD borders. Here, we propose a TAD-free model to directly estimate the blocking effects of architectural proteins, insulators and DNA motifs on long-range contacts, making the model intuitive and biologically meaningful. In addition, the model allows analyzing the whole Hi-C information content (2D information) instead of only focusing on TAD borders (1D information). The model outperforms multiple logistic regression at TAD borders in terms of parameter estimation accuracy and is validated by enhancer-blocking assays. In Drosophila, the results support the insulating role of simple sequence repeats and suggest that the blocking effects depend on the number of repeats. Motif analysis uncovered the roles of the transcriptional factors pannier and tramtrack in blocking long-range contacts. In human, the results suggest that the blocking effects of the well-known architectural proteins CTCF, cohesin and ZNF143 depend on the distance between loci, where each protein may participate at different scales of the 3D chromatin organization.
Thermal Analysis of the PediaFlow pediatric ventricular assist device.
Gardiner, Jeffrey M; Wu, Jingchun; Noh, Myounggyu D; Antaki, James F; Snyder, Trevor A; Paden, David B; Paden, Brad E
2007-01-01
Accurate modeling of heat dissipation in pediatric intracorporeal devices is crucial in avoiding tissue and blood thermotrauma. Thermal models of new Maglev ventricular assist device (VAD) concepts for the PediaFlow VAD are developed by incorporating empirical heat transfer equations with thermal finite element analysis (FEA). The models assume three main sources of waste heat generation: copper motor windings, active magnetic thrust bearing windings, and eddy currents generated within the titanium housing due to the two-pole motor. Waste heat leaves the pump by convection into blood passing through the pump and conduction through surrounding tissue. Coefficients of convection are calculated and assigned locally along fluid path surfaces of the three-dimensional pump housing model. FEA thermal analysis yields a three-dimensional temperature distribution for each of the three candidate pump models. Thermal impedances from the motor and thrust bearing windings to tissue and blood contacting surfaces are estimated based on maximum temperature rise at respective surfaces. A new updated model for the chosen pump topology is created incorporating computational fluid dynamics with empirical fluid and heat transfer equations. This model represents the final geometry of the first generation prototype, incorporates eddy current heating, and has 60 discrete convection regions. Thermal analysis is performed at nominal and maximum flow rates, and temperature distributions are plotted. Results suggest that the pump will not exceed a temperature rise of 2 degrees C during normal operation.
Segregation analysis of cryptogenic epilepsy and an empirical test of the validity of the results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ottman, R.; Hauser, W.A.; Barker-Cummings, C.
1997-03-01
We used POINTER to perform segregation analysis of cryptogenic epilepsy in 1,557 three-generation families (probands and their parents, siblings, and offspring) ascertained from voluntary organizations. Analysis of the full data set indicated that the data were most consistent with an autosomal dominant (AD) model with 61% penetrance of the susceptibility gene. However, subsequent analyses revealed that the patterns of familial aggregation differed markedly between siblings and offspring of the probands. Risks in siblings were consistent with an autosomal recessive (AR) model and inconsistent with an AD model, whereas risks in offspring were inconsistent with an AR model and more consistent with an AD model. As a further test of the validity of the AD model, we used sequential ascertainment to extend the family history information in the subset of families judged likely to carry the putative susceptibility gene because they contained at least three affected individuals. Prevalence of idiopathic/cryptogenic epilepsy was only 3.7% in newly identified relatives expected to have a 50% probability of carrying the susceptibility gene under an AD model. Approximately 30% (i.e., 50% X 61%) were expected to be affected under the AD model resulting from the segregation analysis. These results suggest that the familial distribution of cryptogenic epilepsy is inconsistent with any conventional genetic model. The differences between siblings and offspring in the patterns of familial risk are intriguing and should be investigated further. 28 refs., 6 tabs.
Troughs in Ice Sheets and Other Icy Deposits on Mars: Analysis of Their Radiative Balance
NASA Technical Reports Server (NTRS)
Fountain, A.; Kargel, J.; Lewis, K.; MacAyeal, D.; Pfeffer, T.; Zwally, H. J.
2000-01-01
It has long been known that groove-like structures in glaciers and ice sheets can trap more incoming solar radiation than is the case for a 'normal' flat, smooth surface. In this presentation, we shall describe the radiative regimes of typical scarps and troughs on icy surfaces of Mars, and suggest how these features originate and evolve through time. The basis of our analysis is the radiation balance model presented by Pfeffer and Bretherton. Their model considers the visible band radiation regime of a V-shaped groove on a terrestrial ice surface, and shows that absorbed energy can be enhanced by up to 50 percent for grooves with small opening angles and with typical polar values of the solar zenith angle. Our work extends this model by considering: (a) departures from V-shaped geometry, (b) both englacial and surficial dust and debris, and (c) the infrared spectrum. We apply the extended model to various features on the Martian surface, including the spiral-like scarps on the Northern and Southern ice sheets, the large-scale chasms (e.g., Chasm Borealis), and groove-like lineations on valley floors thought to be filled with mixtures of dust and icy substances. In conjunction with study of valley-closure experiments, we suggest that spiral-like scarps and chasms are stable features of the Martian climate regime. We also suggest that further study of scarps and chasms may shed light on the composition (i.e., relative proportions of water ice, carbon-dioxide ice and dust) of the Martian ice sheets and valley fills.
He, Meirong; Shu, Jingcheng; Huang, Xing; Tang, Hui
2015-02-01
Genetic factors are important in the pathogenesis of premature ovarian failure (POF). Notably, estrogen receptor-α (ESR1) has been suggested as a possible candidate gene for POF; however, published studies of ESR1 gene polymorphisms have been hampered by small sample sizes and inconclusive or ambiguous results. The aim of this meta-analysis is to investigate the associations between two novel common ESR1 polymorphisms (intron 1 polymorphisms PvuII rs2234693 T>C and XbaI rs9340799 A>G) and POF. A comprehensive search was conducted to identify all studies on the association of ESR1 gene polymorphisms with POF up to August 2014. Pooled odds ratios (OR) and corresponding 95% confidence intervals (CI) were calculated using fixed- or random-effects models in the meta-analysis. Three studies covering 1396 subjects were identified. Pooled data showed a significant association between the ESR1 gene PvuII polymorphism and risk of POF [allele model: C vs. T, OR = 0.735, 95% CI: 0.624-0.865, p = 0.001; co-dominant models: CC vs. TT, OR = 0.540, 95% CI: 0.382-0.764, p = 0.001; CT vs. TT, OR = 0.735, 95% CI: 0.555-0.972, p = 0.031; dominant model: CT+CC vs. TT, OR = 0.618, 95% CI: 0.396-0.966, p = 0.035; recessive model: CC vs. TT+CT, OR = 0.659, 95% CI: 0.502-0.864, p = 0.003]. Subgroup analyses showed a significant association in all models in the Asian population, but no significant association in any model in the European population. For the XbaI polymorphism, overall, no significant association was observed under any genetic model; however, under the dominant model, the ESR1 gene XbaI polymorphism was significantly associated with risk of POF in the Asian population. The present meta-analysis suggests that the ESR1 gene PvuII polymorphism is significantly associated with an increased risk of POF, and that the ESR1 gene XbaI polymorphism is not associated with risk of POF overall, although under the dominant model it is significantly associated with risk of POF in the Asian population. Further large and well-designed studies are needed to confirm the association.
Plioutsias, Anastasios; Karanikas, Nektarios; Chatzimihailidou, Maria Mikela
2018-03-01
Currently, published risk analyses for drones refer mainly to commercial systems, use data from civil aviation, and are based on probabilistic approaches without suggesting an inclusive list of hazards and respective requirements. Within this context, this article presents: (1) a set of safety requirements generated from the application of the systems theoretic process analysis (STPA) technique on a generic small drone system; (2) a gap analysis between the set of safety requirements and the ones met by 19 popular drone models; (3) the extent of the differences between those models, their manufacturers, and the countries of origin; and (4) the association of drone prices with the extent they meet the requirements derived by STPA. The application of STPA resulted in 70 safety requirements distributed across the authority, manufacturer, end user, or drone automation levels. A gap analysis showed high dissimilarities regarding the extent to which the 19 drones meet the same safety requirements. Statistical results suggested a positive correlation between drone prices and the extent that the 19 drones studied herein met the safety requirements generated by STPA, and significant differences were identified among the manufacturers. This work complements the existing risk assessment frameworks for small drones, and contributes to the establishment of a commonly endorsed international risk analysis framework. Such a framework will support the development of a holistic and methodologically justified standardization scheme for small drone flights. © 2017 Society for Risk Analysis.
Adaptive Immunity Restricts Replication of Novel Murine Astroviruses
Yokoyama, Christine C.; Loh, Joy; Zhao, Guoyan; Stappenbeck, Thaddeus S.; Wang, David; Huang, Henry V.
2012-01-01
The mechanisms of astrovirus pathogenesis are largely unknown, in part due to a lack of a small-animal model of disease. Using shotgun sequencing and a custom analysis pipeline, we identified two novel astroviruses capable of infecting research mice, murine astrovirus (MuAstV) STL1 and STL2. Subsequent analysis revealed the presence of at least two additional viruses (MuAstV STL3 and STL4), suggestive of a diverse population of murine astroviruses in research mice. Complete genomic characterization and subsequent phylogenetic analysis showed that MuAstV STL1 to STL4 are members of the mamastrovirus genus and are likely members of a new mamastrovirus genogroup. Using Rag1−/− mice deficient in B and T cells, we demonstrate that adaptive immunity is required to control MuAstV infection. Furthermore, using Stat1−/− mice deficient in innate signaling, we demonstrate a role for the innate immune response in the control of MuAstV replication. Our results demonstrate that MuAstV STL permits the study of the mechanisms of astrovirus infection and host-pathogen interactions in a genetically manipulable small-animal model. Finally, we detected MuAstV in commercially available mice, suggesting that these viruses may be present in academic and commercial research mouse facilities, with possible implications for interpretation of data generated in current mouse models of disease. PMID:22951832
Statistical Analysis of Notational AFL Data Using Continuous Time Markov Chains
Meyer, Denny; Forbes, Don; Clarke, Stephen R.
2006-01-01
Animal biologists commonly use continuous time Markov chain models to describe patterns of animal behaviour. In this paper we consider the use of these models for describing AFL football. In particular we test the assumptions for continuous time Markov chain models (CTMCs), with time, distance and speed values associated with each transition. Using a simple event categorisation it is found that a semi-Markov chain model is appropriate for this data. This validates the use of Markov Chains for future studies in which the outcomes of AFL matches are simulated. Key Points A comparison of four AFL matches suggests similarity in terms of transition probabilities for events and the mean times, distances and speeds associated with each transition. The Markov assumption appears to be valid. However, the speed, time and distance distributions associated with each transition are not exponential suggesting that semi-Markov model can be used to model and simulate play. Team identified events and directions associated with transitions are required to develop the model into a tool for the prediction of match outcomes. PMID:24357946
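The continuous-time Markov chain machinery the paper tests can be sketched briefly: exponential holding times plus an embedded jump matrix. The states and rates below are illustrative, not values estimated from AFL data.

```python
# Minimal sketch: simulate a continuous-time Markov chain of match events.
import numpy as np

states = ["kick", "handball", "mark", "turnover"]
# Q[i, j]: transition rate i -> j (per second); each row sums to zero.
Q = np.array([[-0.50, 0.20, 0.20, 0.10],
              [ 0.30, -0.60, 0.20, 0.10],
              [ 0.40, 0.10, -0.60, 0.10],
              [ 0.25, 0.15, 0.10, -0.50]])

rng = np.random.default_rng(3)
state, t, path = 0, 0.0, []
for _ in range(10):
    rate = -Q[state, state]
    dwell = rng.exponential(1 / rate)      # exponential holding time
    probs = Q[state].copy()
    probs[state] = 0.0
    probs /= rate                          # embedded jump probabilities
    path.append((round(t, 1), states[state]))
    t += dwell
    state = rng.choice(len(states), p=probs)
print(path)

# The paper's key diagnostic: if empirical dwell times are NOT exponential,
# a semi-Markov model (arbitrary dwell distributions) is needed instead.
```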
On the dimensionality of the stress-related growth scale: one, three, or seven factors?
Roesch, Scott C; Rowley, Anthony A; Vaughn, Allison A
2004-06-01
We examined the factorial validity and dimensionality of the Stress-Related Growth Scale (SRGS; Park, Cohen, & Murch, 1996) using a large multiethnic sample (n = 1,070). Exploratory and confirmatory factor analyses suggested that a multidimensional representation of the SRGS fit better than a unidimensional representation. Specifically, we cross-validated both a 3-factor model and a 7-factor model using confirmatory factor analysis; both were shown to be invariant across gender and ethnic groups. The 3-factor model was represented by global dimensions of growth that included rational/mature thinking, affective/emotional growth, and religious/spiritual growth. We replicated the 7-factor model of Armeli, Gunthert, and Cohen (2001), which represented more specific components of growth such as Self-Understanding and Treatment of Others. However, some factors of the 7-factor model had questionable internal consistency and were strongly intercorrelated, suggesting redundancy. The findings support the notion that the factor structures of both the original 1-factor and revised 7-factor models are unstable and that the 3-factor model developed in this research has more reliable psychometric properties and structure.
A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities
Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.
1999-01-01
A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to the failure threshold, as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M −0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is approximately 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
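The BPT density is an inverse-Gaussian distribution with mean μ and shape μ/α², which is available in scipy; the sketch below shows the conditional-probability and hazard calculations. The abstract's generic α = 0.5 is used, while μ and the elapsed time are illustrative assumptions.

```python
# Minimal sketch: BPT (inverse Gaussian) recurrence probabilities and hazard.
from scipy.stats import invgauss

mu, alpha = 25.0, 0.5                  # mean recurrence (yr), aperiodicity
lam = mu / alpha**2                    # inverse-Gaussian shape parameter
dist = invgauss(mu / lam, scale=lam)   # scipy parameterization: mean = mu

elapsed, window = 20.0, 30.0           # years since last event; forecast window
# Conditional probability of an event in the next `window` years:
p = (dist.cdf(elapsed + window) - dist.cdf(elapsed)) / dist.sf(elapsed)
print(f"P(event in next {window:.0f} yr | {elapsed:.0f} yr elapsed) = {p:.2f}")

# Hazard rate h(t) = pdf/sf; for alpha = 0.5 it approaches ~2/mu at large t.
t = 4 * mu
print(f"h({t:.0f}) = {dist.pdf(t) / dist.sf(t):.4f}  vs  2/mu = {2/mu:.4f}")
```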
Mixed model approaches for diallel analysis based on a bio-model.
Zhu, J; Weir, B S
1996-12-01
A MINQUE(1) procedure, which is the minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML), and MINQUE(θ), which uses the parameter values as the prior values. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jack-knife procedure is suggested for estimation of sampling variances of estimated variance and covariance components and of predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.
Skog, Alexander; Peyre, Sarah E; Pozner, Charles N; Thorndike, Mary; Hicks, Gloria; Dellaripa, Paul F
2012-01-01
The situational leadership model suggests that an effective leader adapts leadership style depending on the followers' level of competency. We assessed the applicability and reliability of the situational leadership model when observing residents in simulated hospital floor-based scenarios. Resident teams engaged in simulated clinical scenarios. Video recordings were divided into clips based on Emergency Severity Index v4 acuity scores. Situational leadership styles were identified in clips by two physicians. Interrater reliability was determined through descriptive statistical data analysis. There were 114 participants recorded in 20 sessions, and 109 clips were reviewed and scored. There was a high level of interrater reliability (weighted kappa r = .81), supporting the situational leadership model's applicability to medical teams. A suggestive correlation was found between the frequency of changes in leadership style and the ability to effectively lead a medical team. The situational leadership model represents a unique tool to assess medical leadership performance in the context of acuity changes.
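The interrater statistic reported above can be computed in one line; the sketch below uses made-up style codes, not the study's clips.

```python
# Minimal sketch: weighted Cohen's kappa between two raters' style codes.
from sklearn.metrics import cohen_kappa_score

# Situational leadership styles coded per clip: S1-S4 mapped to 1-4 (hypothetical).
rater_a = [1, 2, 2, 3, 4, 1, 3, 3, 2, 4, 1, 2]
rater_b = [1, 2, 3, 3, 4, 1, 3, 2, 2, 4, 1, 2]

kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(f"linearly weighted kappa = {kappa:.2f}")
```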
Dynamics modeling and loads analysis of an offshore floating wind turbine
NASA Astrophysics Data System (ADS)
Jonkman, Jason Mark
The vast deepwater wind resource represents a potential to use offshore floating wind turbines to power much of the world with renewable energy. Many floating wind turbine concepts have been proposed, but dynamics models, which account for the wind inflow, aerodynamics, elasticity, and controls of the wind turbine, along with the incident waves, sea current, hydrodynamics, and platform and mooring dynamics of the floater, were needed to determine their technical and economic feasibility. This work presents the development of a comprehensive simulation tool for modeling the coupled dynamic response of offshore floating wind turbines, the verification of the simulation tool through model-to-model comparisons, and the application of the simulation tool to an integrated loads analysis for one of the promising system concepts. A fully coupled aero-hydro-servo-elastic simulation tool was developed with enough sophistication to address the limitations of previous frequency- and time-domain studies and to have the features required to perform loads analyses for a variety of wind turbine, support platform, and mooring system configurations. The simulation capability was tested using model-to-model comparisons. The favorable results of all of the verification exercises provided confidence to perform more thorough analyses. The simulation tool was then applied in a preliminary loads analysis of a wind turbine supported by a barge with catenary moorings. A barge platform was chosen because of its simplicity in design, fabrication, and installation. The loads analysis aimed to characterize the dynamic response and to identify potential loads and instabilities resulting from the dynamic couplings between the turbine and the floating barge in the presence of combined wind and wave excitation. The coupling between the wind turbine response and the barge-pitch motion, in particular, produced larger extreme loads in the floating turbine than experienced by an equivalent land-based turbine. Instabilities were also found in the system. The influence of conventional wind turbine blade-pitch control actions on the pitch damping of the floating turbine was also assessed. Design modifications for reducing the platform motions, improving the turbine response, and eliminating the instabilities are suggested. These suggestions are aimed at obtaining cost-effective designs that achieve favorable performance while maintaining structural integrity.
Lamont, Andrea E.; Vermunt, Jeroen K.; Van Horn, M. Lee
2016-01-01
Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we test the effects of violating an implicit assumption often made in these models – i.e., independent variables in the model are not directly related to latent classes. Results indicated that the major risk of failing to model the relationship between predictor and latent class was an increase in the probability of selecting additional latent classes and biased class proportions. Additionally, this study tests whether regression mixture models can detect a piecewise relationship between a predictor and outcome. Results suggest that these models are able to detect piecewise relations, but only when the relationship between the latent class and the predictor is included in model estimation. We illustrate the implications of making this assumption through a re-analysis of applied data examining heterogeneity in the effects of family resources on academic achievement. We compare previous results (which assumed no relation between independent variables and latent class) to the model where this assumption is lifted. Implications and analytic suggestions for conducting regression mixture based on these findings are noted. PMID:26881956
Margolin, Adam A.; Bilal, Erhan; Huang, Erich; Norman, Thea C.; Ottestad, Lars; Mecham, Brigham H.; Sauerwine, Ben; Kellen, Michael R.; Mangravite, Lara M.; Furia, Matthew D.; Vollan, Hans Kristian Moen; Rueda, Oscar M.; Guinney, Justin; Deflaux, Nicole A.; Hoff, Bruce; Schildwachter, Xavier; Russnes, Hege G.; Park, Daehoon; Vang, Veronica O.; Pirtle, Tyler; Youseff, Lamia; Citro, Craig; Curtis, Christina; Kristensen, Vessela N.; Hellerstein, Joseph; Friend, Stephen H.; Stolovitzky, Gustavo; Aparicio, Samuel; Caldas, Carlos; Børresen-Dale, Anne-Lise
2013-01-01
Although molecular prognostics in breast cancer are among the most successful examples of translating genomic analysis to clinical applications, optimal approaches to breast cancer clinical risk prediction remain controversial. The Sage Bionetworks–DREAM Breast Cancer Prognosis Challenge (BCC) is a crowdsourced research study for breast cancer prognostic modeling using genome-scale data. The BCC provided a community of data analysts with a common platform for data access and blinded evaluation of model accuracy in predicting breast cancer survival on the basis of gene expression data, copy number data, and clinical covariates. This approach offered the opportunity to assess whether a crowdsourced community Challenge would generate models of breast cancer prognosis commensurate with or exceeding current best-in-class approaches. The BCC comprised multiple rounds of blinded evaluations on held-out portions of data on 1981 patients, resulting in more than 1400 models submitted as open source code. Participants then retrained their models on the full data set of 1981 samples and submitted up to five models for validation in a newly generated data set of 184 breast cancer patients. Analysis of the BCC results suggests that the best-performing modeling strategy outperformed previously reported methods in blinded evaluations; model performance was consistent across several independent evaluations; and aggregating community-developed models achieved performance on par with the best-performing individual models. PMID:23596205
Donaldson, Gary W; Chapman, C Richard; Nakamura, Yoshi; Bradshaw, David H; Jacobson, Robert C; Chapman, Christopher N
2003-03-01
The defense response theory implies that individuals should respond to increasing levels of painful stimulation with correlated increases in affectively mediated psychophysiological responses. This paper employs structural equation modeling to infer the latent processes responsible for correlated growth in the pain report, evoked potential amplitudes, pupil dilation, and skin conductance of 92 normal volunteers who experienced 144 trials of three levels of increasingly painful electrical stimulation. The analysis assumed a two-level model of latent growth as a function of stimulus level. The first level of analysis formulated a nonlinear growth model for each response measure, and allowed intercorrelations among the parameters of these models across individuals. The second level of analysis posited latent process factors to account for these intercorrelations. The best-fitting parsimonious model suggests that two latent processes account for the correlations. One of these latent factors, the activation threshold, determines the initial threshold response, while the other, the response gradient, indicates the magnitude of the coherent increase in response with stimulus level. Collectively, these two second-order factors define the defense response, a broad construct comprising both subjective pain evaluation and physiological mechanisms.
NASA Astrophysics Data System (ADS)
Wang, Chi-Jen; Liu, Da-Jiang; Evans, James W.
2015-04-01
Threshold versions of Schloegl's model on a lattice, which involve autocatalytic creation and spontaneous annihilation of particles, can provide a simple prototype for discontinuous non-equilibrium phase transitions. These models are equivalent to so-called threshold contact processes. A discontinuous transition between populated and vacuum states can occur when a threshold of N ≥ 2 is selected for the minimum number, N, of neighboring particles enabling autocatalytic creation at an empty site. Fundamental open questions remain given the lack of a thermodynamic framework for analysis. For a square lattice with N = 2, we show that phase coexistence occurs not at a unique value but for a finite range of particle annihilation rate (the natural control parameter). This generic two-phase coexistence also persists when perturbing the model to allow spontaneous particle creation. Such behavior contrasts with both the Gibbs phase rule for thermodynamic systems and previous analysis of this model. We find metastability near the transition corresponding to a non-zero effective line tension, also contrasting previously suggested critical behavior. Mean-field type analysis, extended to treat spatially heterogeneous states, further elucidates model behavior.
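The lattice dynamics described above are straightforward to prototype. The sketch below simulates an N ≥ 2 threshold contact process on a periodic square lattice with synchronous small-time-step updates; the lattice size, rates, and update scheme are illustrative assumptions, not the authors' simulation setup.

```python
# Threshold (N >= 2) contact process on a periodic square lattice:
# an empty site fills at rate 1 only if at least two nearest neighbors
# are occupied; occupied sites empty at annihilation rate p. Synchronous
# small-step updates approximate the continuous-time dynamics.
import numpy as np

rng = np.random.default_rng(0)
L, p, dt, steps = 64, 0.3, 0.1, 2000
grid = rng.random((L, L)) < 0.7           # start from a well-populated state

for _ in range(steps):
    # occupied nearest-neighbor counts with periodic boundaries
    nn = sum(np.roll(grid, s, axis=a) for a in (0, 1) for s in (1, -1))
    birth = (~grid) & (nn >= 2) & (rng.random((L, L)) < 1.0 * dt)
    death = grid & (rng.random((L, L)) < p * dt)
    grid = (grid | birth) & ~death

print("particle density:", grid.mean())
```

Sweeping p and recording the long-time density from populated versus vacuum initial states is one way to observe the two-phase coexistence window discussed above.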
The Human-Computer Interface and Information Literacy: Some Basics and Beyond.
ERIC Educational Resources Information Center
Church, Gary M.
1999-01-01
Discusses human/computer interaction research, human/computer interface, and their relationships to information literacy. Highlights include communication models; cognitive perspectives; task analysis; theory of action; problem solving; instructional design considerations; and a suggestion that human/information interface may be a more appropriate…
Interfaces for End-User Information Seeking.
ERIC Educational Resources Information Center
Marchionini, Gary
1992-01-01
Discusses essential features of interfaces to support end-user information seeking. Highlights include cognitive engineering; task models and task analysis; the problem-solving nature of information seeking; examples of systems for end-users, including online public access catalogs (OPACs), hypertext, and help systems; and suggested research…
Climate Science: Tropical Expansion by Ocean Swing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Jian
The tropical belt has become wider over the past decades, but climate models fall short of capturing the full rate of the expansion. The latest analysis of the climate simulations suggests that a long-term swing of the Pacific Decadal Oscillation is the main missing cause.
Competent Systems: Effective, Efficient, Deliverable.
ERIC Educational Resources Information Center
Abramson, Bruce
Recent developments in artificial intelligence and decision analysis suggest reassessing the approaches commonly taken to the design of knowledge-based systems. Competent systems are based on models known as influence diagrams, which graphically capture a domain's basic objects and their interrelationships. Among the benefits offered by influence…
A selection model for accounting for publication bias in a full network meta-analysis.
Mavridis, Dimitris; Welton, Nicky J; Sutton, Alex; Salanti, Georgia
2014-12-30
Copas and Shi suggested a selection model to explore the potential impact of publication bias via sensitivity analysis based on assumptions for the probability of publication of trials conditional on the precision of their results. Chootrakool et al. extended this model to three-arm trials but did not fully account for the implications of the consistency assumption, and their model is difficult to generalize for complex network structures with more than three treatments. Fitting these selection models within a frequentist setting requires maximization of a complex likelihood function, and identification problems are common. We have previously presented a Bayesian implementation of the selection model when multiple treatments are compared with a common reference treatment. We now present a general model suitable for complex, full network meta-analysis that accounts for consistency when adjusting results for publication bias. We developed a design-by-treatment selection model to describe the mechanism by which studies with different designs (sets of treatments compared in a trial) and precision may be selected for publication. We fit the model in a Bayesian setting because it avoids the numerical problems encountered in the frequentist setting, it is generalizable with respect to the number of treatments and study arms, and it provides a flexible framework for sensitivity analysis using external knowledge. Our model accounts for the additional uncertainty arising from publication bias more successfully compared to the standard Copas model or its previous extensions. We illustrate the methodology using a published triangular network for the failure of vascular graft or arterial patency. Copyright © 2014 John Wiley & Sons, Ltd.
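For orientation, the Copas-Shi selection mechanism that this design-by-treatment model generalizes is commonly written as a latent-propensity model; a standard form (notation assumed here, not quoted from the paper) is:

```latex
% Study i reports effect y_i with standard error s_i; it is published
% only when a latent propensity z_i exceeds zero.
\begin{align}
  y_i &= \theta_i + \sigma_i \varepsilon_i, \\
  z_i &= \gamma_0 + \gamma_1 / s_i + \delta_i, \qquad
         \operatorname{corr}(\varepsilon_i, \delta_i) = \rho, \\
  \Pr(\mathrm{published} \mid s_i) &= \Phi\left(\gamma_0 + \gamma_1 / s_i\right).
\end{align}
```

Small studies (large s_i) thus have a lower publication probability, and the correlation ρ induces the bias that the sensitivity analysis explores.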
Tang, Jingchun; Lv, Honghong; Gong, Yanyan; Huang, Yao
2015-11-01
A graphene/biochar composite (G/BC) was synthesized via slow pyrolysis of graphene (G)-pretreated wheat straw and tested for the sorption characteristics and mechanisms of representative aqueous contaminants (phenanthrene and mercury). Structure and morphology analysis showed that G was coated on the surface of biochar (BC) mainly through π-π interactions, resulting in a larger surface area, more functional groups, greater thermal stability, and higher removal efficiency of phenanthrene and mercury compared to BC. A pseudo-second-order model adequately simulated the sorption kinetics, and the sorption isotherms of phenanthrene and mercury were simulated well by dual-mode and BET models, respectively. FTIR and SEM analysis suggested that partitioning and surface sorption were the dominant mechanisms for phenanthrene sorption, and that surface complexation between mercury and C-O, C=C, -OH, and O=C-O functional groups was responsible for mercury removal. The results suggested that the G/BC composite is an efficient, economic, and environmentally friendly multifunctional adsorbent for environmental remediation. Copyright © 2015 Elsevier Ltd. All rights reserved.
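The kinetic model named above has a simple closed form, q(t) = qe²kt / (1 + qe·k·t), which can be fitted directly; the sketch below uses invented data points, not values from this study.

```python
# Fit the pseudo-second-order sorption kinetics model to (t, q) data.
import numpy as np
from scipy.optimize import curve_fit

def pso(t, qe, k):
    """Integrated pseudo-second-order form: q(t) = qe^2 k t / (1 + qe k t)."""
    return qe**2 * k * t / (1 + qe * k * t)

t = np.array([5, 15, 30, 60, 120, 240], dtype=float)   # contact time (min)
q = np.array([3.1, 6.0, 8.2, 9.9, 10.8, 11.2])          # sorbed amount (mg/g)

(qe, k), _ = curve_fit(pso, t, q, p0=(q.max(), 0.01))
print(f"qe = {qe:.1f} mg/g, k = {k:.4f} g/(mg min)")
```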
Johnson, Jessica L; Chauvin, Sheila
2016-12-25
Objective. To examine the extent to which reflective essays written by graduating pharmacy students revealed professional identity formation and self-authorship development. Design. Following a six-week advanced pharmacy practice experience (APPE) grounded in Baxter-Magolda's Learning Partnerships Model of self-authorship development, students completed a culminating reflective essay on their rotation experiences and professional identity formation. Assessment. Thematic and categorical analysis of 41 de-identified essays revealed nine themes and evidence of all Baxter-Magolda's domains and phases of self-authorship. Analysis also suggested relationships between self-authorship and pharmacist professional identity formation. Conclusion. Results suggest that purposeful structuring of learning experiences can facilitate professional identity formation. Further, Baxter-Magolda's framework for self-authorship and use of the Learning Partnership Model seem to align well with pharmacist professional identity formation. Results of this study could be used by pharmacy faculty members when considering how to fill gaps in professional identity formation in future course and curriculum development.
Proteomic profiling of halloysite clay nanotube exposure in intestinal cell co-culture
Lai, Xianyin; Agarwal, Mangilal; Lvov, Yuri M.; Pachpande, Chetan; Varahramyan, Kody; Witzmann, Frank A.
2013-01-01
Halloysite is an aluminosilicate clay with a hollow tubular structure and nanoscale internal and external diameters. Assessment of halloysite biocompatibility has gained importance in view of its potential application in oral drug delivery. To investigate the effect of halloysite nanotubes on an in vitro model of the large intestine, Caco-2/HT29-MTX cells in monolayer co-culture were exposed to nanotubes for toxicity tests and proteomic analysis. Results indicate that halloysite exhibits a high degree of biocompatibility characterized by an absence of cytotoxicity, in spite of elevated pro-inflammatory cytokine release. Exposure-specific changes in expression were observed among 4081 proteins analyzed. Bioinformatic analysis of differentially expressed protein profiles suggests that halloysite stimulates processes related to cell growth and proliferation, subtle responses to cell infection, irritation and injury, enhanced antioxidant capability, and an overall adaptive response to exposure. These potentially relevant functional effects warrant further investigation in in vivo models and suggest that chronic or bolus occupational exposure to halloysite nanotubes may have unintended outcomes. PMID:23606564
Implementation of Teacher Consultation and Coaching in Urban Schools: A Mixed Method Study
Cappella, Elise; Jackson, Daisy R.; Kim, Ha Yeon; Bilal, Caroline; Holland, Sibyl; Atkins, Marc S.
2015-01-01
Guided by implementation science scholarship and school mental health research, the current study uses qualitative and quantitative data to illuminate the barriers, opportunities, and processes underlying the implementation of a teacher consultation and coaching model (BRIDGE) in urban elementary schools. Data come from five public elementary schools, 12 school mental health staff (BRIDGE consultants), and 18 teachers participating in a classroom-randomized trial of BRIDGE. Findings from directed content analysis of teacher focus group and interview data suggest that aspects of the BRIDGE intervention model, school organization and classroom contexts, and teachers/consultants and their relationship were relevant as implementation facilitators or barriers. In addition, case study analysis of intervention materials and fidelity tools from classrooms with moderate-to-high dosage and adherence suggest variation in consultation and coaching by initial level of observed classroom need. Results illuminate the need for implementation research to extend beyond simple indicators of fidelity to the multiple systems and variation in processes at play across levels of the implementation context. PMID:27293490
Shahid, Rizwan; Bertazzon, Stefania
2015-01-01
Body weight is an important indicator of current and future health and it is even more critical in children, who are tomorrow's adults. This paper analyzes the relationship between childhood obesity and neighbourhood walkability in Calgary, Canada. A multivariate analytical framework recognizes that childhood obesity is also associated with many factors, including socioeconomic status, foodscapes, and environmental factors, as well as less measurable factors, such as individual preferences, that could not be included in this analysis. In contrast with more conventional global analysis, this research employs localized analysis and assesses need-based interventions. The one-size-fits-all strategy may not effectively control obesity rates, since each neighbourhood has unique characteristics that need to be addressed individually. This paper presents an innovative framework combining local analysis with simulation modeling to analyze childhood obesity. Spatial models generally do not deal with simulation over time, making it cumbersome for health planners and policy makers to effectively design and implement interventions and to quantify their impact over time. This research fills this gap by integrating geographically weighted regression (GWR), which identifies vulnerable neighbourhoods and critical factors for childhood obesity, with simulation modeling, which evaluates the impact of the suggested interventions on the targeted neighbourhoods. Neighbourhood walkability was chosen as a potential target for localized interventions, owing to the crucial role of walking in developing a healthy lifestyle, as well as because increasing walkability is relatively more feasible and less expensive than modifying other factors, such as income. Simulation results suggest that local walkability interventions can achieve measurable declines in childhood obesity rates. The results are encouraging, as improvements are likely to compound over time. The results demonstrate that the integration of GWR and simulation modeling is effective, and the proposed framework can assist in designing local interventions to control and prevent childhood obesity.
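The core of GWR is an ordinary least-squares fit repeated at every location with distance-decayed weights, which is what lets a coefficient (for example, the effect of walkability) vary across neighbourhoods. A minimal sketch follows; the variable names and the Gaussian kernel bandwidth are assumptions, and a production analysis would use a dedicated package such as mgwr.

```python
# Minimal geographically weighted regression (GWR) sketch.
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """Fit a locally weighted least-squares model at every location.

    coords : (n, 2) array of neighbourhood centroids
    X      : (n, p) design matrix (e.g. walkability, income)
    y      : (n,)  childhood obesity rate
    """
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])          # add intercept
    betas = np.empty((n, Xd.shape[1]))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)    # Gaussian spatial kernel
        W = np.diag(w)
        betas[i] = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return betas  # one coefficient vector per neighbourhood

rng = np.random.default_rng(3)
coords = rng.uniform(0, 10, (50, 2))
X = rng.normal(size=(50, 2))
y = 0.2 - 0.5 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.1, size=50)
print(gwr_coefficients(coords, X, y, bandwidth=2.0)[:3])
```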
Frank, Till D.; Carmody, Aimée M.; Kholodenko, Boris N.
2012-01-01
We derive a statistical model of transcriptional activation using equilibrium thermodynamics of chemical reactions. We examine to what extent this statistical model predicts synergy effects of cooperative activation of gene expression. We determine parameter domains in which greater-than-additive and less-than-additive effects are predicted for cooperative regulation by two activators. We show that the statistical approach can be used to identify different causes of synergistic greater-than-additive effects: nonlinearities of the thermostatistical transcriptional machinery and three-body interactions between RNA polymerase and two activators. In particular, our model-based analysis suggests that at low transcription factor concentrations cooperative activation cannot yield synergistic greater-than-additive effects, i.e., DNA transcription can only exhibit less-than-additive effects. Accordingly, transcriptional activity turns from synergistic greater-than-additive responses at relatively high transcription factor concentrations into less-than-additive responses at relatively low concentrations. In addition, two types of re-entrant phenomena are predicted. First, our analysis predicts that under particular circumstances transcriptional activity will feature a sequence of less-than-additive, greater-than-additive, and eventually less-than-additive effects when for fixed activator concentrations the regulatory impact of activators on the binding of RNA polymerase to the promoter increases from weak, to moderate, to strong. Second, for appropriate promoter conditions when activator concentrations are increased then the aforementioned re-entrant sequence of less-than-additive, greater-than-additive, and less-than-additive effects is predicted as well. Finally, our model-based analysis suggests that even for weak activators that individually induce only negligible increases in promoter activity, promoter activity can exhibit greater-than-additive responses when transcription factors and RNA polymerase interact by means of three-body interactions. Overall, we show that versatility of transcriptional activation is brought about by nonlinearities of transcriptional response functions and interactions between transcription factors, RNA polymerase and DNA. PMID:22506020
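Equilibrium-thermodynamic models of this kind typically score promoter output by Boltzmann-weighted occupancy states. A generic Shea-Ackers-style parameterization for two activators (an assumed form for illustration, not necessarily the authors' exact statistical model) is:

```latex
% a_1, a_2: Boltzmann-weighted activator concentrations; p: RNA polymerase;
% omega_{12}, omega_{p1}, omega_{p2}: pairwise interaction factors;
% omega_3: three-body (polymerase-activator-activator) term.
\begin{align}
  Z &= 1 + a_1 + a_2 + a_1 a_2 \omega_{12}
     + p \left( 1 + a_1 \omega_{p1} + a_2 \omega_{p2}
     + a_1 a_2 \omega_{12} \omega_{p1} \omega_{p2} \omega_{3} \right), \\
  P_{\mathrm{bound}} &= \frac{p \left( 1 + a_1 \omega_{p1} + a_2 \omega_{p2}
     + a_1 a_2 \omega_{12} \omega_{p1} \omega_{p2} \omega_{3} \right)}{Z}.
\end{align}
```

In such a form, greater-than-additive responses can arise either from the nonlinearity of P_bound in the activator concentrations or from a three-body factor ω₃ ≠ 1, mirroring the two causes of synergy distinguished above.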
NASA Astrophysics Data System (ADS)
McKeen, S. A.; Angevine, W. M.; Ahmadov, R.; Frost, G. J.; Kim, S. W.; Cui, Y.; McDonald, B.; Trainer, M.; Holloway, J. S.; Ryerson, T. B.; Peischl, J.; Gambacorta, A.; Barnet, C. D.; Smith, N.; Pierce, R. B.
2016-12-01
This study presents preliminary comparisons of satellite, aircraft, and model variance spectra for meteorological, thermodynamic and gas-phase species collected during the 2013 Southeastern Nexus Air Quality Experiment (SENEX). Fourier analysis of 8 constituents collected at 1 Hz by the NOAA W-P3 aircraft in the 25 to 200 km length-scale range exhibits properties consistent with previous scale dependence studies: when spectra are averaged over several 500 mb flight legs, very linear dependence is found on log-log plots of spectral density versus inverse length-scale. Derived slopes for wind speed, temperature, H2O, CO, CO2, CH4, NOy and O3 all fall within ±30% and close to the slope of -5/3 predicted from dimensional scaling theory of isotropic turbulence. Qualitative differences are seen when a similar analysis, without quality control, is applied to a preliminary set of NUCAPS satellite retrievals over the continental U.S. during SENEX. While 500 mb water vapor and column integrated water show slopes close to the -5/3 value in the 200 to 1000 km length-scale range, other quantities show significantly shallower slopes, suggesting the need for rigorous quality control. Results from WRF-Chem regional air quality model simulations at 500 mb show the model is unable to account for variance on length-scales less than 6ΔX, where ΔX is the model horizontal resolution (12 km). Comparisons with satellite data in the 200 to 1000 km range show slopes consistent with the -5/3 power law for species such as CO, CH4 and CO2 that do not undergo reinitialization, suggesting potential for future application.
NASA Astrophysics Data System (ADS)
Hallez, Hans; Staelens, Steven; Lemahieu, Ignace
2009-10-01
EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some tissues of the head, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we determine the dipole estimation errors introduced by neglecting the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies in which the dipole location and orientation errors caused by neglecting anisotropic conductivities of the skull and brain tissue are calculated. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10°. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.
Random forest meteorological normalisation models for Swiss PM10 trend analysis
NASA Astrophysics Data System (ADS)
Grange, Stuart K.; Carslaw, David C.; Lewis, Alastair C.; Boleti, Eirini; Hueglin, Christoph
2018-05-01
Meteorological normalisation is a technique which accounts for changes in meteorology over time in an air quality time series. Controlling for such changes helps support robust trend analysis because there is more certainty that the observed trends are due to changes in emissions or chemistry, not changes in meteorology. Predictive random forest models (RF; a decision tree machine learning technique) were grown for 31 air quality monitoring sites in Switzerland using surface meteorological, synoptic scale, boundary layer height, and time variables to explain daily PM10 concentrations. The RF models were used to calculate meteorologically normalised trends which were formally tested and evaluated using the Theil-Sen estimator. Between 1997 and 2016, significantly decreasing normalised PM10 trends ranged between -0.09 and -1.16 µg m-3 yr-1 with urban traffic sites experiencing the greatest mean decrease in PM10 concentrations at -0.77 µg m-3 yr-1. Similar magnitudes have been reported for normalised PM10 trends for earlier time periods in Switzerland which indicates PM10 concentrations are continuing to decrease at similar rates as in the past. The ability for RF models to be interpreted was leveraged using partial dependence plots to explain the observed trends and relevant physical and chemical processes influencing PM10 concentrations. Notably, two regimes were suggested by the models which cause elevated PM10 concentrations in Switzerland: one related to poor dispersion conditions and a second resulting from high rates of secondary PM generation in deep, photochemically active boundary layers. The RF meteorological normalisation process was found to be robust, user friendly and simple to implement, and readily interpretable which suggests the technique could be useful in many air quality exploratory data analysis situations.
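The normalisation procedure is compact to implement: grow a random forest that predicts daily PM10 from meteorological and time variables, then repeatedly resample the meteorological inputs and average the predictions so that weather variability is integrated out. The sketch below assumes hypothetical column names and hyperparameters; the paper's exact predictor set and settings differ by site.

```python
# Random-forest meteorological normalisation sketch for a daily data frame
# with a "pm10" column and meteorological/time covariates.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

predictors = ["ws", "wd", "temp", "rh", "blh", "weekday", "julian_day"]
met_cols = ["ws", "wd", "temp", "rh", "blh"]

def normalised_series(df, n_resamples=200):
    rf = RandomForestRegressor(n_estimators=300, min_samples_leaf=3)
    rf.fit(df[predictors], df["pm10"])
    # Repeatedly shuffle the meteorological covariates across the whole
    # record and average the predictions: what remains is the variation
    # attributable to emissions/chemistry rather than weather.
    preds = np.zeros(len(df))
    for _ in range(n_resamples):
        sample = df[predictors].copy()
        sample[met_cols] = df[met_cols].sample(frac=1, replace=True).to_numpy()
        preds += rf.predict(sample)
    return pd.Series(preds / n_resamples, index=df.index)
```

The Theil-Sen slope of the resulting normalised series (for example via scipy.stats.theilslopes) then gives a trend estimate analogous to the ones reported above.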
Statistical theory and methodology for remote sensing data analysis
NASA Technical Reports Server (NTRS)
Odell, P. L.
1974-01-01
A model is developed for the evaluation of acreages (proportions) of different crop-types over a geographical area using a classification approach, and methods for estimating the crop acreages are given. In estimating the acreage of a specific crop-type such as wheat, it is suggested to treat the problem as a two-crop problem, wheat vs. nonwheat, since this simplifies the estimation problem considerably. The error analysis and the sample size problem are investigated for the two-crop approach. Certain numerical results for sample sizes are given for a JSC-ERTS-1 data example on wheat identification performance in Hill County, Montana and Burke County, North Dakota. Lastly, for a large-area crop acreage inventory, a sampling scheme is suggested for acquiring sample data, and the problem of crop acreage estimation and the associated error analysis is discussed.
The association between COMT Val158Met polymorphism and migraine risk: A meta-analysis.
Liao, Yao-Jun; Jiang, Jing-Ru; Jin, San-Qing
2017-05-01
Background: The COMT Val158Met polymorphism has long been regarded as a risk factor for migraine. The possible association between COMT Val158Met polymorphism and migraine has been evaluated in several studies, but the results are not consistent. Therefore, we conduct this meta-analysis to address these issues. Methods: The Web of Science and EMBASE databases were searched for eligible studies. The odds ratio (OR) with the corresponding 95% confidence interval (CI) was calculated to estimate the strength of the association between COMT Val158Met polymorphism and migraine. Results: Five studies with 979 cases and 1870 controls were ultimately included in the present meta-analysis. The overall data showed no significant association between COMT Val158Met polymorphism and migraine in the multiplicative model (OR = 0.97, 95% CI: 0.78-1.21, p = 0.805) and dominant model (OR = 1.05, 95% CI: 0.75-1.48, p = 0.773), neither in the additive model (OR = 0.97, 95% CI: 0.77-1.23, p = 0.817) nor in the recessive model (OR = 0.88, 95% CI: 0.71-1.09, p = 0.246). In subgroup analysis, both for Caucasian and Asian populations, no statistically significant associations were observed in any genetic models. Conclusions: Our meta-analysis suggested that the COMT Val158Met polymorphism was not associated with migraine risk.
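The pooled estimates reported for each genetic model come from inverse-variance, fixed-effect weighting of study-level log odds ratios; a minimal sketch of that calculation (with made-up inputs, not the five included studies) is:

```python
# Fixed-effect (inverse-variance) pooling of odds ratios.
import numpy as np
from scipy import stats

def pooled_or(ors, ci_lows, ci_highs):
    log_or = np.log(ors)
    # Recover each study's standard error from its 95% CI width.
    se = (np.log(ci_highs) - np.log(ci_lows)) / (2 * 1.96)
    w = 1 / se**2
    pooled = np.sum(w * log_or) / np.sum(w)
    pooled_se = np.sqrt(1 / np.sum(w))
    p = 2 * stats.norm.sf(abs(pooled / pooled_se))
    ci = np.exp(pooled + np.array([-1.96, 1.96]) * pooled_se)
    return np.exp(pooled), ci, p

print(pooled_or(np.array([0.9, 1.1, 0.8]),
                np.array([0.6, 0.8, 0.5]),
                np.array([1.35, 1.51, 1.28])))
```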
NASA Technical Reports Server (NTRS)
Jouzel, Jean; Koster, R. D.; Suozzo, R. J.; Russell, G. L.; White, J. W. C.
1991-01-01
Incorporating the full geochemical cycles of stable water isotopes (HDO and H2O-18) into an atmospheric general circulation model (GCM) allows an improved understanding of global delta-D and delta-O-18 distributions and might even allow an analysis of the GCM's hydrological cycle. A detailed sensitivity analysis using the NASA/Goddard Institute for Space Studies (GISS) model II GCM is presented that examines the nature of isotope modeling. The tests indicate that delta-D and delta-O-18 values in nonpolar regions are not strongly sensitive to details in the model precipitation parameterizations. This result, while implying that isotope modeling has limited potential use in the calibration of GCM convection schemes, also suggests that certain necessarily arbitrary aspects of these schemes are adequate for many isotope studies. Deuterium excess, a second-order variable, does show some sensitivity to precipitation parameterization and thus may be more useful for GCM calibration.
Theoretical Assessment of the Impact of Climatic Factors in a Vibrio Cholerae Model.
Kolaye, G; Damakoa, I; Bowong, S; Houe, R; Békollè, D
2018-05-04
A mathematical model for Vibrio cholerae (V. cholerae) in a closed environment is considered, with the aim of investigating the impact of climatic factors, which exert a direct influence on the bacterial metabolism and on the bacterial reservoir capacity. We first propose a V. cholerae mathematical model in a closed environment. A sensitivity analysis using the eFAST method was performed to identify the most important parameters of the model. We then extend this V. cholerae model by taking into account climatic factors that influence the bacterial reservoir capacity. We present the theoretical analysis of the model. More precisely, we compute the equilibria and study their stability. The stability of the equilibria was investigated using the theory of periodic cooperative systems with a concave nonlinearity. Theoretical results are supported by numerical simulations, which further suggest the necessity of implementing sanitation campaigns in aquatic environments by using suitable products against the bacteria during the periods of growth of aquatic reservoirs.
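As a rough illustration of how a climate-driven reservoir capacity enters such a model, the sketch below couples a susceptible-infected pair to a bacterial compartment whose logistic carrying capacity oscillates seasonally. The structure, parameter names, and values are assumptions for illustration, not the authors' calibrated system.

```python
# Minimal S-I-B cholera sketch with a seasonally forced reservoir capacity.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, beta=0.2, gamma=0.1, xi=10.0, mu=0.3, r=0.5, K0=1e6, eps=0.5):
    S, I, B = y
    K = K0 * (1 + eps * np.sin(2 * np.pi * t / 365))  # climate-driven capacity
    lam = beta * B / (B + K)              # ingestion of environmental vibrios
    dS = -lam * S
    dI = lam * S - gamma * I
    dB = xi * I + r * B * (1 - B / K) - mu * B  # shedding + logistic growth
    return [dS, dI, dB]

sol = solve_ivp(rhs, (0, 730), [1e4, 1.0, 1e3], max_step=1.0)
print("final state (S, I, B):", sol.y[:, -1])
```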
Clark, D Angus; Bowles, Ryan P
2018-04-23
In exploratory item factor analysis (IFA), researchers may use model fit statistics and commonly invoked fit thresholds to help determine the dimensionality of an assessment. However, these indices and thresholds may mislead as they were developed in a confirmatory framework for models with continuous, not categorical, indicators. The present study used Monte Carlo simulation methods to investigate the ability of popular model fit statistics (chi-square, root mean square error of approximation, the comparative fit index, and the Tucker-Lewis index) and their standard cutoff values to detect the optimal number of latent dimensions underlying sets of dichotomous items. Models were fit to data generated from three-factor population structures that varied in factor loading magnitude, factor intercorrelation magnitude, number of indicators, and whether cross loadings or minor factors were included. The effectiveness of the thresholds varied across fit statistics, and was conditional on many features of the underlying model. Together, results suggest that conventional fit thresholds offer questionable utility in the context of IFA.
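For reference, the indices studied are simple functions of the fitted and baseline model chi-squares; the standard formulas (as commonly applied, not code from the study) are sketched below.

```python
# RMSEA, CFI and TLI from target-model and baseline-model chi-squares.
import numpy as np

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """chi2_m, df_m: fitted model; chi2_b, df_b: baseline model; n: sample size."""
    rmsea = np.sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))
    denom = max(chi2_m - df_m, chi2_b - df_b, 1e-9)
    cfi = 1.0 - max(chi2_m - df_m, 0.0) / denom
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)
    return rmsea, cfi, tli

print(fit_indices(chi2_m=85.2, df_m=60, chi2_b=950.0, df_b=78, n=500))
```

The commonly cited cutoffs under scrutiny (for example RMSEA ≤ 0.06, CFI/TLI ≥ 0.95) were derived for continuous-indicator models, which is why their behavior with dichotomous items needed dedicated simulation.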
Integrated wetland management: an analysis with group model building based on system dynamics model.
Chen, Hsin; Chang, Yang-Chi; Chen, Kung-Chen
2014-12-15
The wetland system possesses diverse functions such as preserving water sources, mitigating flooding, providing habitats for wildlife and stabilizing coastlines. Nonetheless, rapid economic growth and the increasing population have significantly deteriorated the wetland environment. To secure the sustainability of the wetland, it is essential to introduce integrated and systematic management. This paper examines the resource management of the Jiading Wetland by applying group model building (GMB) and system dynamics (SD). We systematically identify local stakeholders' mental models regarding the impact brought by the yacht industry, and further establish an SD model to simulate the dynamic wetland environment. The GMB process improves the stakeholders' understanding about the interaction between the wetland environment and management policies. Differences between the stakeholders' perceptions and the behaviors shown by the SD model also suggest that our analysis would facilitate the stakeholders to broaden their horizons and achieve consensus on the wetland resource management. Copyright © 2014 Elsevier Ltd. All rights reserved.
Negative Stress Margins - Are They Real?
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.; Lee, Darlene S.; Mohaghegh, Michael
2011-01-01
Advances in modeling and simulation, new finite element software, modeling engines, and powerful computers are providing opportunities to interrogate designs in a very different manner and in more detail than ever before. Margins of safety for various design concepts and design parameters are also often evaluated quickly from local stresses once analysis models are defined and developed. This paper suggests that not all of the negative margins of safety so evaluated are real. Negative margins are frequently encountered near stress concentrations, point loads, and load discontinuities; near locations of stress singularities; in areas having large gradients but insufficient mesh density; in areas with modeling issues and modeling errors; and in areas with connections and interfaces, two-dimensional (2D) to three-dimensional (3D) transitions, bolts and bolt modeling, and boundary conditions. Now, more than ever, structural analysts need to examine and interrogate their analysis results and perform basic sanity checks to determine whether these negative margins are real.
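For context, a stress-based margin of safety is conventionally computed as shown below; at a stress singularity the computed stress grows without bound as the mesh is refined, so a negative margin there need not indicate a real structural deficiency.

```latex
% FS: required factor of safety applied to the computed stress.
\begin{equation}
  MS = \frac{\sigma_{\mathrm{allowable}}}{FS \times \sigma_{\mathrm{computed}}} - 1
\end{equation}
```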
Population genetic analysis of a global collection of Fragaria vesca using microsatellite markers.
Hilmarsson, Hrannar Smári; Hytönen, Timo; Isobe, Sachiko; Göransson, Magnus; Toivainen, Tuomas; Hallsson, Jón Hallsteinn
2017-01-01
The woodland strawberry, Fragaria vesca, holds great promise as a model organism. It not only represents the important Rosaceae family that includes economically important species such as apples, pears, peaches and roses, but it also complements the well-known model organism Arabidopsis thaliana in key areas such as perennial life cycle and the development of fleshy fruit. Analysis of wild populations of A. thaliana has shed light on several important developmental pathways controlling, for example, flowering time and plant growth, suggesting that a similar approach using F. vesca might add to our understanding on the development of rosaceous species and perennials in general. As a first step, 298 F. vesca plants were analyzed using microsatellite markers with the primary aim of analyzing population structure and distribution of genetic diversity. Of the 68 markers tested, 56 were polymorphic, with an average of 4.46 alleles per locus. Our analysis partly confirms previous classification of F. vesca subspecies in North America and suggests two groups within the subsp. bracteata. In addition, F. vesca subsp. vesca forms a single global population with evidence that the Icelandic group is a separate cluster from the main Eurasian population.
NASA Astrophysics Data System (ADS)
Wu, J.; Clark, C. J.; Pletsch, H. J.; Guillemot, L.; Johnson, T. J.; Torne, P.; Champion, D. J.; Deneva, J.; Ray, P. S.; Salvetti, D.; Kramer, M.; Aulbert, C.; Beer, C.; Bhattacharyya, B.; Bock, O.; Camilo, F.; Cognard, I.; Cuéllar, A.; Eggenstein, H. B.; Fehrmann, H.; Ferrara, E. C.; Kerr, M.; Machenschalk, B.; Ransom, S. M.; Sanpa-Arsa, S.; Wood, K.
2018-02-01
We report on the analysis of 13 gamma-ray pulsars discovered in the Einstein@Home blind search survey using Fermi Large Area Telescope (LAT) Pass 8 data. The 13 new gamma-ray pulsars were discovered by searching 118 unassociated LAT sources from the third LAT source catalog (3FGL), selected using the Gaussian Mixture Model machine-learning algorithm on the basis of their gamma-ray emission properties being suggestive of pulsar magnetospheric emission. The new gamma-ray pulsars have pulse profiles and spectral properties similar to those of previously detected young gamma-ray pulsars. Follow-up radio observations have revealed faint radio pulsations from two of the newly discovered pulsars and enabled us to derive upper limits on the radio emission from the others, demonstrating that they are likely radio-quiet gamma-ray pulsars. We also present results from modeling the gamma-ray pulse profiles and radio profiles, if available, using different geometric emission models of pulsars. The high discovery rate of this survey, despite the increasing difficulty of blind pulsar searches in gamma rays, suggests that new systematic surveys such as presented in this article should be continued when new LAT source catalogs become available.
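A Gaussian Mixture Model selection of the kind described ranks unassociated sources by their likelihood under the gamma-ray properties of known pulsars. The sketch below uses synthetic two-dimensional features; the actual survey used catalog spectral and variability properties.

```python
# Rank unassociated catalog sources by likelihood under a GMM fitted to
# the feature distribution of known pulsars (features here are synthetic,
# e.g. spectral curvature and fractional variability).
import numpy as np
from sklearn.mixture import GaussianMixture

X_known_pulsars = np.random.default_rng(0).normal([2.0, 0.1], 0.3, (200, 2))
X_unassociated = np.random.default_rng(1).normal([1.0, 0.5], 0.8, (1000, 2))

gmm = GaussianMixture(n_components=2).fit(X_known_pulsars)
scores = gmm.score_samples(X_unassociated)   # log-likelihood under pulsar model
candidates = X_unassociated[scores > np.quantile(scores, 0.9)]
print(f"{len(candidates)} sources ranked as pulsar-like")
```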
Ecosystem Modeling Applied to Nutrient Criteria Development in Rivers
NASA Astrophysics Data System (ADS)
Carleton, James N.; Park, Richard A.; Clough, Jonathan S.
2009-09-01
Threshold concentrations for biological impairment by nutrients are difficult to quantify in lotic systems, yet States and Tribes in the United States are charged with developing water quality criteria to protect these ecosystems from excessive enrichment. The analysis described in this article explores the use of the ecosystem model AQUATOX to investigate impairment thresholds keyed to biological indexes that can be simulated. The indexes selected for this exercise include percentage cyanobacterial biomass of sestonic algae, and benthic chlorophyll a. The calibrated model was used to analyze responses of these indexes to concurrent reductions in phosphorus, nitrogen, and suspended sediment in an enriched upper Midwestern river. Results suggest that the indexes would respond strongly to changes in phosphorus and suspended sediment, and less strongly to changes in nitrogen concentration. Using simulated concurrent reductions in all three water quality constituents, a total phosphorus concentration of 0.1 mg/l was identified as a threshold concentration, and therefore a hypothetical water quality criterion, for prevention of both excessive periphyton growth and sestonic cyanobacterial blooms. This kind of analysis is suggested as a way to evaluate multiple contrasting impacts of hypothetical nutrient and sediment reductions and to define nutrient criteria or target concentrations that balance multiple management objectives concurrently.
Calabrese, Evan; Du, Fu; Garman, Robert H.; Johnson, G. Allan; Riccio, Cory; Tong, Lawrence C.
2014-01-01
Abstract Blast-induced traumatic brain injury (bTBI) is one of the most common combat-related injuries seen in U.S. military personnel, yet relatively little is known about the underlying mechanisms of injury. In particular, the effects of the primary blast pressure wave are poorly understood. Animal models have proven invaluable for the study of primary bTBI, because it rarely occurs in isolation in human subjects. Even less is known about the effects of repeated primary blast wave exposure, but existing data suggest cumulative increases in brain damage with a second blast. MRI and, in particular, diffusion tensor imaging (DTI), have become important tools for assessing bTBI in both clinical and preclinical settings. Computational statistical methods such as voxelwise analysis have shown promise in localizing and quantifying bTBI throughout the brain. In this study, we use voxelwise analysis of DTI to quantify white matter injury in a rat model of repetitive primary blast exposure. Our results show a significant increase in microstructural damage with a second blast exposure, suggesting that primary bTBI may sensitize the brain to subsequent injury. PMID:24392843
The deployment and training of teachers for remote rural schools in less-developed countries
NASA Astrophysics Data System (ADS)
Ankrah-Dove, Linda
1982-03-01
In less-developed countries schools in remote rural areas are likely to be poor in quality. One important aspect of this in certain contexts is the comparatively low quality of teachers and the high rate of teacher turnover in rural schools in these areas. It is likely that contributory factors are the ways in which posting and transfer procedures operate, inadequate preparation and support for teachers, and their own characteristics, values and interests. For purposes of analysis, two models are suggested which illuminate the policy assumptions behind different strategies used to try to remedy the situation. The rural deficit model tends to encourage the use of compulsory posting and incentives while the rural challenge model searches for better ways of preparing teachers for service in remote rural schools. From analysis of the literature, the author suggests that there are four inter-related features of contemporary teacher-education programmes which have potential and should be developed if good teachers are to be attracted to and retained in remote rural schools. These are field-based preparation, teamwork in training, community support of training and the recruitment and preparation of local teachers. A few examples of schemes employing these principles are described briefly.
Moving beyond qualitative evaluations of Bayesian models of cognition.
Hemmer, Pernille; Tauber, Sean; Steyvers, Mark
2015-06-01
Bayesian models of cognition provide a powerful way to understand the behavior and goals of individuals from a computational point of view. Much of the focus in the Bayesian cognitive modeling approach has been on qualitative model evaluations, where predictions from the models are compared to data that is often averaged over individuals. In many cognitive tasks, however, there are pervasive individual differences. We introduce an approach to directly infer individual differences related to subjective mental representations within the framework of Bayesian models of cognition. In this approach, Bayesian data analysis methods are used to estimate cognitive parameters and motivate the inference process within a Bayesian cognitive model. We illustrate this integrative Bayesian approach on a model of memory. We apply the model to behavioral data from a memory experiment involving the recall of heights of people. A cross-validation analysis shows that the Bayesian memory model with inferred subjective priors predicts withheld data better than a Bayesian model where the priors are based on environmental statistics. In addition, the model with inferred priors at the individual subject level led to the best overall generalization performance, suggesting that individual differences are important to consider in Bayesian models of cognition.
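The essence of the memory model can be seen in a conjugate-Gaussian sketch: recall is a posterior mean that shrinks a noisy memory trace toward the prior, and fitting mu0 and tau0 per subject is the spirit of "inferring subjective priors." All numbers below are illustrative assumptions, not the experiment's estimates.

```python
# Recall of a person's height as the posterior mean under a Gaussian
# subjective prior and Gaussian encoding noise.
def recalled_height(observed, mu0=170.0, tau0=8.0, sigma=12.0):
    """observed: noisy trace (cm); mu0, tau0: prior mean/sd; sigma: noise sd."""
    w = tau0**2 / (tau0**2 + sigma**2)     # weight given to the observation
    return w * observed + (1 - w) * mu0

# Extreme values regress toward the prior mean, as recall data typically show.
print(recalled_height(195.0))
```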
Mathematical Modeling of Intravascular Blood Coagulation under Wall Shear Stress
Rukhlenko, Oleksii S.; Dudchenko, Olga A.; Zlobina, Ksenia E.; Guria, Georgy Th.
2015-01-01
Increased shear stress such as observed at local stenosis may cause drastic changes in the permeability of the vessel wall to procoagulants and thus initiate intravascular blood coagulation. In this paper we suggest a mathematical model to investigate how shear stress-induced permeability influences the thrombogenic potential of atherosclerotic plaques. Numerical analysis of the model reveals the existence of two hydrodynamic thresholds for activation of blood coagulation in the system and unveils typical scenarios of thrombus formation. The dependence of blood coagulation development on the intensity of blood flow, as well as on geometrical parameters of atherosclerotic plaque is described. Relevant parametric diagrams are drawn. The results suggest a previously unrecognized role of relatively small plaques (resulting in less than 50% of the lumen area reduction) in atherothrombosis and have important implications for the existing stenting guidelines. PMID:26222505
Stable water isotope behavior during the last glacial maximum: A general circulation model analysis
NASA Technical Reports Server (NTRS)
Jouzel, Jean; Koster, Randal D.; Suozzo, Robert J.; Russell, Gary L.
1994-01-01
Global water isotope geochemistry during the last glacial maximum (LGM) is simulated with an 8 deg x 10 deg atmospheric general circulation model (GCM). The simulation results suggest that the spatial delta O-18/temperature relationships observed for the present day and LGM climates are very similar. Furthermore, the temporal delta O-18/temperature relationship is similar to the present-day spatial relationship in regions for which the LGM/present-day temperature change is significant. This helps justify the standard practice of applying the latter to the interpretation of paleodata, despite the possible influence of other factors, such as changes in the evaporative sources of precipitation or in the seasonality of precipitation. The model suggests, for example, that temperature shifts inferred from ice core data may differ from the true shifts by only about 30%.
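For readers outside the field, the delta values discussed are isotope ratios expressed in per mil relative to a standard (V-SMOW for water); for example:

```latex
\begin{equation}
  \delta^{18}\mathrm{O} = \left(
    \frac{\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\mathrm{sample}}}
         {\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\mathrm{standard}}}
    - 1 \right) \times 10^{3},
\end{equation}
```

and the spatial and temporal delta/temperature relationships compared above are, in practice, linear regressions of such values against local temperature.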
Comparative analysis of stress in a new proposal of dental implants.
Valente, Mariana Lima da Costa; de Castro, Denise Tornavoi; Macedo, Ana Paula; Shimano, Antonio Carlos; Dos Reis, Andréa Cândido
2017-08-01
The purpose of this study was to compare, through photoelastic analysis, the stress distribution around conventional and modified external hexagon (EH) and morse taper (MT) dental implant connections. Four photoelastic models were prepared (n=1): Model 1 - conventional EH cylindrical implant (Ø 4.0 mm × 11 mm - Neodent®), Model 2 - modified EH cylindrical implant, Model 3 - conventional MT conical implant (Ø 4.3 mm × 10 mm - Neodent®) and Model 4 - modified MT conical implant. Axial and oblique (30° tilt) loads of 100 and 150 N were applied to the devices coupled to the implants. A plane transmission polariscope was used in the analysis of fringes, and each position of interest was recorded by a digital camera. The Tardy method was used to quantify the fringe order (n), from which the maximum shear stress (τ) value at each selected point was calculated. The results showed a lower stress concentration in the modified cylindrical implant (EH) compared to the conventional model, with application of 150 N axial and 100 N oblique loads. Lower stress was observed for the modified conical (MT) implant with the application of 100 and 150 N oblique loads, which was not observed for the conventional implant model. The comparative analysis of the models showed that the new design proposal generates good stress distribution, especially in the cervical third, suggesting the preservation of bone tissue in the bone crest region. Copyright © 2017 Elsevier B.V. All rights reserved.
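The quantification step rests on the stress-optic law, with the Tardy method supplying the fractional part of the fringe order n; in standard form (not quoted from the paper):

```latex
% n: (fractional) fringe order from Tardy compensation, n = N + \theta/180;
% f_sigma: material fringe constant; t: photoelastic model thickness.
\begin{equation}
  \tau_{\max} = \frac{\sigma_1 - \sigma_2}{2} = \frac{n f_{\sigma}}{2 t}
\end{equation}
```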
A recessive genetic model and runs of homozygosity in major depressive disorder
Power, Robert A.; Keller, Matthew C.; Ripke, Stephan; Abdellaoui, Abdel; Wray, Naomi R.; Sullivan, Patrick F; Breen, Gerome
2014-01-01
Genome-wide association studies (GWASs) of major depressive disorder (MDD) have yet to identify variants that surpass the threshold for genome-wide significance. A recent study reported that runs of homozygosity (ROH) are associated with schizophrenia, reflecting a novel genetic risk factor resulting from increased parental relatedness and recessive genetic effects. Here we undertake an analysis of ROH for MDD using the 9,238 MDD cases and 9,521 controls reported in a recent mega-analysis of 9 GWAS. Since evidence for association with ROH could reflect a recessive mode of action at loci, we also conducted a genome-wide association analysis under a recessive model. The genome-wide association analysis using a recessive model found no significant associations. Our analysis of ROH suggested that there was significant heterogeneity of effect across studies (p = 0.001), which was associated with genotyping platform and country of origin. The results of the ROH analysis show that differences across studies can lead to conflicting systematic genome-wide differences between cases and controls that are unaccounted for by traditional covariates. They highlight the sensitivity of the ROH method to spurious associations and the need to carefully control for potential confounds in such analyses. We found no strong evidence for a recessive model underlying MDD. PMID:24482242
NASA Astrophysics Data System (ADS)
Maslovskaya, A. G.; Barabash, T. K.
2018-03-01
The paper presents the results of fractal and multifractal analysis of polarization switching currents in ferroelectrics under electron irradiation, which allows statistical memory effects in the dynamics of the domain structure to be estimated. A mathematical model of the formation of electron-beam-induced polarization current in ferroelectrics was proposed, taking into account the fractal nature of domain structure dynamics. To implement the model, a computational scheme was constructed using a numerical approximation of the governing fractional differential equation. Signatures of the electron-beam-induced polarization switching process in ferroelectrics were characterized as the control parameters of the model were varied.
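A common way to discretise the fractional derivative in such models is the Grünwald-Letnikov approximation; the sketch below is a generic implementation (the paper's exact scheme is not specified here).

```python
# Grünwald-Letnikov approximation of the fractional derivative D^alpha.
import numpy as np

def gl_weights(alpha, n):
    """Recursive Grünwald-Letnikov coefficients w_k = (-1)^k C(alpha, k)."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1 - (alpha + 1) / k)
    return w

def fractional_derivative(f, alpha, dt):
    """Approximate D^alpha f on a uniform grid with spacing dt."""
    w = gl_weights(alpha, len(f))
    return np.array([np.dot(w[:k + 1], f[k::-1])
                     for k in range(len(f))]) / dt**alpha

t = np.linspace(0, 1, 200)
# D^0.5 of t^2 at t=1 is Gamma(3)/Gamma(2.5) * t^1.5, roughly 1.50.
print(fractional_derivative(t**2, alpha=0.5, dt=t[1] - t[0])[-1])
```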
NASA Technical Reports Server (NTRS)
Morris, R. E.
1973-01-01
An experimental plastic strain measurement system is presented for use on the surface of high velocity impact test models. The system was used on a hollow sphere tested in impact against a reinforced concrete block. True strains, deviatoric stresses, and true stresses were calculated from experimental measurements. The maximum strain measured in the model was small compared to the true failure strain obtained from static tensile tests of model material. This fact suggests that a much greater impact velocity would be required to cause failure of the model shell structure.
Embodied Agents, E-SQ and Stickiness: Improving Existing Cognitive and Affective Models
NASA Astrophysics Data System (ADS)
de Diesbach, Pablo Brice
This paper synthesizes results from two previous studies of embodied virtual agents on commercial websites. We analyze and critique the proposed models and discuss the limits of the experimental findings. Results from other important research in the literature are integrated, as are concepts from more business-oriented analyses of the mechanisms of rhetoric in marketing and communication and of the possible role of E-SQ in human-agent interaction. Finally, we suggest a refined model of the impact of these agents on website users and comment on the limits of the improved model.
Landslide risk models for decision making.
Bonachea, Jaime; Remondo, Juan; de Terán, José Ramón Díaz; González-Díez, Alberto; Cendrero, Antonio
2009-11-01
This contribution presents a quantitative procedure for landslide risk analysis and zoning considering hazard, exposure (or value of elements at risk), and vulnerability. The method provides the means to obtain landslide risk models (expressing expected damage due to landslides on material elements and economic activities in monetary terms, according to different scenarios and periods) useful to identify areas where mitigation efforts will be most cost effective. It allows identifying priority areas for the implementation of actions to reduce vulnerability (elements) or hazard (processes). The procedure proposed can also be used as a preventive tool, through its application to strategic environmental impact analysis (SEIA) of land-use plans. The underlying hypothesis is that reliable predictions about hazard and risk can be made using models based on a detailed analysis of past landslide occurrences in connection with conditioning factors and data on past damage. The results show that the approach proposed and the hypothesis formulated are essentially correct, providing estimates of the order of magnitude of expected losses for a given time period. Uncertainties, strengths, and shortcomings of the procedure and results obtained are discussed and potential lines of research to improve the models are indicated. Finally, comments and suggestions are provided to generalize this type of analysis.
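The risk model's structure, expected monetary loss as the product of hazard, exposure, and vulnerability per map unit, reduces to a few array operations; the values below are illustrative, not from the study area.

```python
# Expected annual monetary loss per map cell = hazard x exposure x vulnerability.
import numpy as np

hazard = np.array([[0.01, 0.03], [0.002, 0.08]])     # P(landslide) per year
exposure = np.array([[2e6, 5e5], [1e7, 3e5]])        # value of elements at risk
vulnerability = np.array([[0.1, 0.4], [0.05, 0.7]])  # expected damage fraction

risk = hazard * exposure * vulnerability             # expected loss per year
priority = np.unravel_index(np.argmax(risk), risk.shape)
print(risk, "highest-priority cell:", priority)
```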
Chen, Hou-Jen; Wright, Graham A
2017-01-01
To characterize and interpret arterial spin labeling (ASL) reactive hyperemia of calf muscles for a better understanding of the microcirculation in peripheral arterial disease (PAD), we present a physiological model incorporating oxygen transport, tissue metabolism, and vascular regulation mechanisms. The model demonstrated distinct effects between arterial stenoses and microvascular dysfunction on reactive hyperemia, and indicated a higher sensitivity of 2-minute thigh cuffing to microvascular dysfunction than 5-minute cuffing. The recorded perfusion responses in PAD patients (n = 9) were better differentiated from the normal subjects (n = 7) using the model-based analysis rather than characterization using the apparent peak and time-to-peak of the responses. The analysis results suggested different amounts of microvascular disease within the patient group. Overall, this work demonstrates a novel analysis method and facilitates understanding of the physiology involved in ASL reactive hyperemia. ASL reactive hyperemia with model-based analysis may be used as a noninvasive microvascular assessment in the presence of arterial stenoses, allowing us to look beyond the macrovascular disease in PAD. A subgroup who will have a poor prognosis after revascularization in the patients with critical limb ischemia may be associated with more severe microvascular diseases, which may potentially be identified using ASL reactive hyperemia.
Nakamura, Keiko; Tajima, Kiyoshi; Chen, Ker-Kong; Nagamatsu, Yuki; Kakigawa, Hiroshi; Masumi, Shin-ich
2013-12-01
This study focused on the application of novel finite-element analysis software for constructing a finite-element model from the computed tomography data of a human dentulous mandible. The finite-element model is necessary for evaluating the mechanical response of the alveolar part of the mandible, resulting from occlusal force applied to the teeth during biting. Commercially available patient-specific general computed tomography-based finite-element analysis software was solely applied to the finite-element analysis for the extraction of computed tomography data. The mandibular bone with teeth was extracted from the original images. Both the enamel and the dentin were extracted after image processing, and the periodontal ligament was created from the segmented dentin. The constructed finite-element model was reasonably accurate using a total of 234,644 nodes and 1,268,784 tetrahedral and 40,665 shell elements. The elastic moduli of the heterogeneous mandibular bone were determined from the bone density data of the computed tomography images. The results suggested that the software applied in this study is both useful and powerful for creating a more accurate three-dimensional finite-element model of a dentulous mandible from the computed tomography data without the need for any other software.
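Density-based material assignment of the kind described maps each element's CT value to an apparent density and then to an elastic modulus through a power law; the calibration and power-law constants below are placeholder literature-style values (exponents of roughly 1-3 are reported), not the software's actual settings.

```python
# Map CT Hounsfield units to heterogeneous elastic moduli via a power law.
import numpy as np

def hounsfield_to_modulus(hu, cal_slope=0.0008, cal_offset=0.0,
                          power_coeff=6850.0, power_exp=1.49):
    """hu -> apparent density (g/cm^3, assumed calibration) -> E (MPa)."""
    rho = np.clip(cal_slope * hu + cal_offset, 0.01, None)
    return power_coeff * rho**power_exp

element_hu = np.array([250.0, 800.0, 1400.0])   # mean HU per element
print(hounsfield_to_modulus(element_hu))
```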
Razi, Bahman; Anani Sarab, Gholamreza; Omidkhoda, Azadeh; Alizadeh, Shahab
2018-03-01
Several studies have evaluated the association between the multidrug resistance 1 (MDR1) polymorphism (rs1045642 C > T) and multiple myeloma (MM). However, the results were not consistent. Therefore, to reach a comprehensive and reliable answer we determined the association of the MDR1 (rs1045642 C > T) polymorphism and MM in the context of meta-analysis. All eligible studies published in EMBASE, PubMed, and Web of Science databases before July 2017 were reviewed. Subsequently, to assess the strength of association in the dominant model, recessive model, allelic model, homozygotes contrast, and heterozygotes contrast, pooled odds ratios and 95% confidence intervals (CIs) were calculated by the fixed effects model. A total of four case-control studies with 395 MM cases and 418 healthy controls were included in the meta-analysis. The overall results showed no significant association between the MDR1 (rs1045642 C > T) polymorphism and the risk of MM in genetic models (dominant model: OR = 1.04, 95% CI = 0.78-1.38; recessive model: OR = 0.74, 95% CI = 0.52-1.06; allelic model: OR = 0.90, 95% CI = 0.73-1.11; TT vs. CC: OR = 0.80, 95% CI = 0.51-1.25; and CT vs. CC: OR = 1.12, 95% CI = 0.77-1.62). No evidence of publication bias was detected except for the analysis of the recessive model. This meta-analysis suggests that the MDR1 C > T polymorphism was not associated with the risk of MM. To confirm these findings, further comprehensive and well-designed studies are needed.
Sakashita, Tetsuya; Hamada, Nobuyuki; Kawaguchi, Isao; Hara, Takamitsu; Kobayashi, Yasuhiko; Saito, Kimiaki
2014-05-01
A single cell can form a colony, and ionizing irradiation has long been known to reduce such a cellular clonogenic potential. Analysis of abortive colonies unable to continue to grow should provide important information on the reproductive cell death (RCD) following irradiation. Our previous analysis with a branching process model showed that the RCD in normal human fibroblasts can persist over 16 generations following irradiation with low linear energy transfer (LET) γ-rays. Here we further set out to evaluate the RCD persistency in abortive colonies arising from normal human fibroblasts exposed to high-LET carbon ions (18.3 MeV/u, 108 keV/µm). We found that the abortive colony size distribution determined by biological experiments follows a linear relationship on the log-log plot, and that the Monte Carlo simulation using the RCD probability estimated from such a linear relationship well simulates the experimentally determined surviving fraction and the relative biological effectiveness (RBE). We identified the short-term phase and long-term phase for the persistent RCD following carbon-ion irradiation, which were similar to those previously identified following γ-irradiation. Taken together, our results suggest that subsequent secondary or tertiary colony formation would be invaluable for understanding the long-lasting RCD. All together, our framework for analysis with a branching process model and a colony formation assay is applicable to determination of cellular responses to low- and high-LET radiation, and suggests that the long-lasting RCD is a pivotal determinant of the surviving fraction and the RBE.
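A branching-process view of colony growth is easy to simulate: each generation, every cell either undergoes reproductive cell death with probability p_rcd or divides. Scoring colonies below 50 cells as abortive mirrors the usual clonogenic criterion; p_rcd here is a single illustrative constant, whereas the paper's model lets the death probability vary across generations.

```python
# Monte Carlo branching-process sketch of colony formation with RCD.
import numpy as np

rng = np.random.default_rng(1)

def grow_colony(p_rcd=0.15, generations=16):
    cells = 1
    for _ in range(generations):
        survivors = rng.binomial(cells, 1 - p_rcd)  # cells escaping RCD
        if survivors == 0:
            return 0
        cells = 2 * survivors                       # survivors divide
    return cells

sizes = [grow_colony() for _ in range(2000)]
surviving = sum(s >= 50 for s in sizes) / len(sizes)
print(f"fraction forming colonies of >= 50 cells: {surviving:.2f}")
```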
Anastasio, Thomas J
2013-01-01
Fear conditioning, in which a cue is conditioned to elicit a fear response, and extinction, in which a previously conditioned cue no longer elicits a fear response, depend on neural plasticity occurring within the amygdala. Projection neurons in the basolateral amygdala (BLA) learn to respond to the cue during fear conditioning, and they mediate fear responding by transferring cue signals to the output stage of the amygdala. Some BLA projection neurons retain their cue responses after extinction. Recent work shows that activation of the endocannabinoid system is necessary for extinction, and it leads to long-term depression (LTD) of the GABAergic synapses that inhibitory interneurons make onto BLA projection neurons. Such GABAergic LTD would enhance the responses of the BLA projection neurons that mediate fear responding, so it would seem to oppose, rather than promote, extinction. To address this paradox, a computational analysis of two well-known conceptual models of amygdaloid plasticity was undertaken. The analysis employed exhaustive state-space search conducted within a declarative programming environment. The analysis reveals that GABAergic LTD actually increases the number of synaptic strength configurations that achieve extinction while preserving the cue responses of some BLA projection neurons in both models. The results suggest that GABAergic LTD helps the amygdala retain cue memory during extinction even as the amygdala learns to suppress the previously conditioned response. The analysis also reveals which features of both models are essential for their ability to achieve extinction with some cue memory preservation, and suggests experimental tests of those features.
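Exhaustive state-space search of the kind described can be sketched by discretizing synaptic strengths and enumerating every configuration. The toy circuit, thresholds, and success criteria below are illustrative stand-ins for the paper's two amygdala models, not reproductions of them.

```python
from itertools import product

# Discretized synaptic strengths; enumerate every configuration of two
# cue inputs and two GABAergic inhibitory weights onto projection neurons.
LEVELS = (0.0, 0.5, 1.0)

def output(cue_w, inhib_w):
    """Net drive of a projection neuron: cue excitation minus inhibition."""
    return max(0.0, cue_w - inhib_w)

configs_ok = 0
for cue1, cue2, inh1, inh2 in product(LEVELS, repeat=4):
    fear_out = output(cue1, inh1) + output(cue2, inh2)
    extinction = fear_out < 0.5               # conditioned response suppressed
    cue_memory = cue1 > 0.0 or cue2 > 0.0     # some neuron still cue-responsive
    if extinction and cue_memory:
        configs_ok += 1

print(f"{configs_ok} of {len(LEVELS) ** 4} configurations achieve "
      "extinction while preserving cue memory")
```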
Kinematic analysis and simulation of a substation inspection robot guided by magnetic sensor
NASA Astrophysics Data System (ADS)
Xiao, Peng; Luan, Yiqing; Wang, Haipeng; Li, Li; Li, Jianxiang
2017-01-01
To improve the performance of the magnetic navigation system used by a substation inspection robot, the kinematic characteristics are analyzed on the basis of a simplified model of the magnetic guiding system, and a simulation is then performed to verify the soundness of the analysis procedure. Finally, some suggestions are drawn from the results that should help guide the design of future inspection robot systems.
Band-to-Band Tunnel Transistor Design and Modeling for Low Power Applications
2012-05-10
[Abstract not recoverable; only report-form boilerplate and table-of-contents fragments survive extraction. Recoverable section headings: Fabrication; 4.3 Analysis of the Silicided Source TFET; 4.4 Subthreshold Swing Data Quality Analysis; 4.5 Selective Silicide Using Germanium. Also recoverable: a citation to a 2007 International Electron Devices Meeting (IEDM) short course and to W. Y. Choi, B.-K. Park, J. D. Lee, and T.-J. King Liu, "Tunneling Field-Effect...".]
Universality of qT resummation for electroweak boson production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konychev, Anton V.; Nadolsky, Pavel M.
We perform a global analysis of transverse momentum distributions in Drell-Yan pair and Z boson production in order to investigate the universality of nonperturbative contributions to the Collins-Soper-Sterman resummed form factor. Our fit, made in an improved nonperturbative model, suggests that the nonperturbative contributions follow a universal, nearly linear dependence on the logarithm of the heavy boson invariant mass Q, which closely agrees with an estimate from infrared renormalon analysis.
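The reported nearly linear ln Q dependence amounts to fitting the nonperturbative smearing coefficient against the logarithm of the boson mass. A minimal sketch follows, with made-up coefficient values standing in for the fitted ones.

```python
import numpy as np

# Illustrative check of a near-linear ln(Q) dependence of a nonperturbative
# Gaussian smearing coefficient a(Q); the values below are hypothetical,
# not the fitted results of the analysis.
Q = np.array([10.0, 30.0, 91.2])   # GeV: Drell-Yan masses and the Z mass
a = np.array([0.35, 0.55, 0.75])   # hypothetical smearing coefficients (GeV^2)

slope, intercept = np.polyfit(np.log(Q), a, 1)
print(f"a(Q) ~ {intercept:.2f} + {slope:.2f} ln Q")
```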
Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...
2015-12-04
Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase, multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field-observed values. Four sensitivity analysis (SA) approaches are investigated: analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machines. Results suggest that these approaches give consistent measurements of the impacts of the major hydrologic parameters on the response variables, but differ in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of Community Land Model parameters to improve simulations of land surface fluxes, and approximates the magnitudes by which parameter values should be adjusted during parametric model optimization.
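Two of the deviation metrics named above are simple to state. The sketch below computes the Nash-Sutcliffe coefficient and a log mean square error for hypothetical observed and simulated runoff series; the epsilon guard against zero flows is an added assumption.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the observed mean."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def log_mse(obs, sim, eps=1e-9):
    """Mean square error on log-transformed values, emphasizing low flows."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return np.mean((np.log(obs + eps) - np.log(sim + eps)) ** 2)

obs = [2.1, 3.4, 5.0, 4.2, 1.8]   # hypothetical observed runoff
sim = [2.0, 3.0, 5.5, 4.0, 2.2]   # hypothetical model-simulated runoff
print(nash_sutcliffe(obs, sim), log_mse(obs, sim))
```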
Probabilistic interpretation of Peelle's pertinent puzzle and its resolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanson, Kenneth M.; Kawano, T.; Talou, P.
2004-01-01
Peelle's Pertinent Puzzle (PPP) states a seemingly plausible set of measurements with their covariance matrix, which produce an implausible answer. To answer the PPP question, we describe a reasonable experimental situation that is consistent with the PPP solution. The confusion surrounding the PPP arises in part from its imprecise statement, which permits a variety of interpretations and resulting answers, some of which seem implausible. We emphasize the importance of basing the analysis on an unambiguous probabilistic model that reflects the experimental situation. We present several different models of how the measurements quoted in the PPP problem could be obtained, and interpret their solutions in terms of a detailed probabilistic analysis. We suggest a probabilistic approach to handling uncertainty about which model to use.
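The puzzle itself is easy to reproduce with a generalized least-squares combination. The sketch below uses the commonly quoted PPP numbers (measurements of 1.5 and 1.0, each with a 10% statistical error plus a fully correlated 20% normalization error) and recovers the notorious average of about 0.88, below both inputs.

```python
import numpy as np

# Two measurements of the same quantity with correlated normalization error.
x = np.array([1.5, 1.0])
stat = 0.10 * x    # independent statistical errors (10% of each value)
norm = 0.20        # common, fully correlated normalization error
V = np.diag(stat**2) + norm**2 * np.outer(x, x)

w = np.linalg.solve(V, np.ones(2))   # V^{-1} 1
mean = w @ x / w.sum()               # generalized least-squares estimate
err = 1.0 / np.sqrt(w.sum())
print(f"GLS average = {mean:.2f} +/- {err:.2f}")   # ~0.88, below both inputs
```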
Some Observations on the Current Status of Performing Finite Element Analyses
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.; Knight, Norman F., Jr; Shivakumar, Kunigal N.
2015-01-01
Aerospace structures are complex, high-performance structures. Advances in reliable and efficient computing and modeling tools are enabling analysts to consider complex configurations, build complex finite element models, and perform analyses rapidly. Many of today's early-career engineers are very proficient in the use of modern computers, computing engines, complex software systems, and visualization tools. These young engineers are becoming increasingly efficient at building complex 3D models of complicated aerospace components. However, current trends show a blind acceptance of finite element analysis results. This paper aims to raise awareness of this situation. Examples of common encounters are presented. To counter these trends, some guidelines and suggestions for analysts, senior engineers, and educators are offered.
Influence factors and forecast of carbon emission in China: structure adjustment for emission peak
NASA Astrophysics Data System (ADS)
Wang, B.; Cui, C. Q.; Li, Z. P.
2018-02-01
This paper introduces principal component analysis (PCA) and a multivariate linear regression model to verify long-term equilibrium relationships between carbon emissions and their impact factors. The integrated model, combining improved PCA with multivariate regression analysis, makes it possible to identify the pattern of carbon emission sources. The main empirical results indicate that, among all selected variables, the scale of energy consumption plays the largest role; GDP and population follow, and also have significant impacts on carbon emissions. The industrialization rate and the fossil fuel proportion, which reflect the economic structure and the energy structure respectively, are more important than the urbanization rate and the consumption level of urban dwellers. On this basis, some suggestions are put forward for the government to achieve the carbon emission peak.
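The two-stage integrated approach (principal components first, regression second) can be sketched in a few lines; the driver matrix and coefficients below are random placeholders rather than the Chinese emission statistics analyzed in the paper.

```python
import numpy as np

# Sketch: extract principal components from candidate drivers, then regress
# emissions on the leading components. Data are synthetic placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(25, 6))   # 25 years x 6 drivers (energy, GDP, population, ...)
y = X @ np.array([2.0, 1.5, 1.0, 0.5, 0.3, 0.2]) + rng.normal(scale=0.5, size=25)

Xc = (X - X.mean(0)) / X.std(0)                 # standardize the drivers
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                                           # retain the leading components
scores = Xc @ Vt[:k].T                          # principal component scores

coef, *_ = np.linalg.lstsq(
    np.column_stack([np.ones(len(y)), scores]), y, rcond=None)
print("regression coefficients on PCs:", coef.round(2))
```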
Source term model evaluations for the low-level waste facility performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yim, M.S.; Su, S.I.
1995-12-31
The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.
Comparing the costs of three prostate cancer follow-up strategies: a cost minimisation analysis.
Pearce, Alison M; Ryan, Fay; Drummond, Frances J; Thomas, Audrey Alforque; Timmons, Aileen; Sharp, Linda
2016-02-01
Prostate cancer follow-up is traditionally provided by clinicians in a hospital setting. Growing numbers of prostate cancer survivors mean that this model of care may not be economically sustainable, and a number of alternative approaches have been suggested. The aim of this study was to develop an economic model to compare the costs of three alternative strategies for prostate cancer follow-up in Ireland: the European Association of Urology (EAU) guidelines, the National Institute for Health and Care Excellence (NICE) guidelines, and current practice. A cost minimisation analysis was performed using a Markov model with three arms (EAU guidelines, NICE guidelines and current practice) comparing follow-up for men with prostate cancer treated with curative intent. The model took a health care payer's perspective over a 10-year time horizon. Current practice was the least cost-efficient arm of the model, the NICE guidelines were the most cost-efficient (74% of current practice costs) and the EAU guidelines were intermediate (92% of current practice costs). For the 2562 new cases of prostate cancer diagnosed in 2009, the Irish health care system could have saved €760,000 over a 10-year period if the NICE guidelines had been adopted. This is the first study investigating the costs of prostate cancer follow-up in the Irish setting. While economic models are by design a simplification of complex real-world situations, these results suggest potential for significant savings within the Irish health care system associated with the implementation of alternative models of prostate cancer follow-up care.
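A cost minimisation comparison of follow-up schedules reduces, in sketch form, to discounting expected visit costs over the model horizon. All schedules, costs, and transition probabilities below are hypothetical placeholders, not the Irish model inputs.

```python
# Minimal Markov cost-minimisation sketch: a yearly cycle in which patients
# remaining in follow-up incur visit costs that depend on the guideline's
# visit schedule. All numbers are illustrative.
visits_per_year = {"current practice": 4, "EAU": 3, "NICE": 2}
visit_cost = 150.0   # euro per follow-up visit (hypothetical)
p_stay = 0.97        # annual probability of remaining in follow-up
discount = 0.04      # annual discount rate
horizon = 10         # years

for strategy, visits in visits_per_year.items():
    alive, total = 1.0, 0.0
    for year in range(horizon):
        total += alive * visits * visit_cost / (1 + discount) ** year
        alive *= p_stay   # transition out of follow-up (e.g., death, discharge)
    print(f"{strategy}: expected 10-year cost per patient = {total:.0f} euro")
```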
Gonzalez Bernaldo de Quiros, Fernan; Dawidowski, Adriana R; Figar, Silvana
2017-02-01
In this study, we aimed 1) to conceptualize the theoretical challenges facing health information systems (HIS) in representing patients' decisions about health and medical treatments in everyday life, and 2) to suggest approaches for modeling these processes. The conceptualization of the theoretical and methodological challenges was discussed in 2015 during a series of interdisciplinary meetings attended by health informatics staff, epidemiologists and health professionals working in quality management and in primary and secondary prevention of chronic diseases at the Hospital Italiano de Buenos Aires, together with sociologists, anthropologists and e-health stakeholders. HIS face the need, and the challenge, of representing social human processes based on constructivist and complexity theories, which are the current frameworks in the human sciences for understanding human learning and socio-cultural change. Computer systems based on these theories can model processes of social construction of concrete and subjective entities and the interrelationships between them. These theories could be implemented, among other ways, through the mapping of health assets, the analysis of social impact through community trials, and the modeling of complexity with system simulation tools. This analysis suggests the need to complement the traditional linear causal explanations of disease onset (and treatment) that underpin current HIS analysis models with constructivist and complexity frameworks. Both may illuminate the complex interrelationships among patients, health services and the health system. The aim of this strategy is to clarify people's decision-making processes in order to improve the efficiency, quality and equity of health services and the health system.
Graham, Emily B.; Knelman, Joseph E.; Schindlbacher, Andreas; Siciliano, Steven; Breulmann, Marc; Yannarell, Anthony; Beman, J. M.; Abell, Guy; Philippot, Laurent; Prosser, James; Foulquier, Arnaud; Yuste, Jorge C.; Glanville, Helen C.; Jones, Davey L.; Angel, Roey; Salminen, Janne; Newton, Ryan J.; Bürgmann, Helmut; Ingram, Lachlan J.; Hamer, Ute; Siljanen, Henri M. P.; Peltoniemi, Krista; Potthast, Karin; Bañeras, Lluís; Hartmann, Martin; Banerjee, Samiran; Yu, Ri-Qing; Nogaro, Geraldine; Richter, Andreas; Koranda, Marianne; Castle, Sarah C.; Goberna, Marta; Song, Bongkeun; Chatterjee, Amitava; Nunes, Olga C.; Lopes, Ana R.; Cao, Yiping; Kaisermann, Aurore; Hallin, Sara; Strickland, Michael S.; Garcia-Pausas, Jordi; Barba, Josep; Kang, Hojeong; Isobe, Kazuo; Papaspyrou, Sokratis; Pastorelli, Roberta; Lagomarsino, Alessandra; Lindström, Eva S.; Basiliko, Nathan; Nemergut, Diana R.
2016-01-01
Microorganisms are vital in mediating the earth’s biogeochemical cycles; yet, despite our rapidly increasing ability to explore complex environmental microbial communities, the relationship between microbial community structure and ecosystem processes remains poorly understood. Here, we address a fundamental and unanswered question in microbial ecology: ‘When do we need to understand microbial community structure to accurately predict function?’ We present a statistical analysis investigating the value of environmental data and microbial community structure independently and in combination for explaining rates of carbon and nitrogen cycling processes within 82 global datasets. Environmental variables were the strongest predictors of process rates but left 44% of variation unexplained on average, suggesting the potential for microbial data to increase model accuracy. Although only 29% of our datasets were significantly improved by adding information on microbial community structure, we observed improvement in models of processes mediated by narrow phylogenetic guilds via functional gene data, and conversely, improvement in models of facultative microbial processes via community diversity metrics. Our results also suggest that microbial diversity can strengthen predictions of respiration rates beyond microbial biomass parameters, as 53% of models were improved by incorporating both sets of predictors compared to 35% by microbial biomass alone. Our analysis represents the first comprehensive analysis of research examining links between microbial community structure and ecosystem function. Taken together, our results indicate that a greater understanding of microbial communities informed by ecological principles may enhance our ability to predict ecosystem process rates relative to assessments based on environmental variables and microbial physiology. PMID:26941732
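The core comparison, whether adding microbial predictors to environmental ones improves a model of process rates, can be sketched as a change in adjusted R^2; the synthetic data below stand in for one of the 82 datasets.

```python
import numpy as np

def adj_r2(X, y):
    """Adjusted R^2 of an ordinary least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1 - resid.var() / y.var()
    n, p = X1.shape
    return 1 - (1 - r2) * (n - 1) / (n - p)

# Placeholder data standing in for one dataset: environmental drivers,
# microbial predictors (e.g., functional gene abundance), and a process rate.
rng = np.random.default_rng(42)
env = rng.normal(size=(60, 3))
microbes = rng.normal(size=(60, 2)) + 0.5 * env[:, :2]
rate = env @ [1.0, 0.5, 0.2] + microbes @ [0.8, 0.3] + rng.normal(size=60)

print("environment only:      ", round(adj_r2(env, rate), 3))
print("environment + microbes:", round(adj_r2(np.hstack([env, microbes]), rate), 3))
```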
Cost Sharing in Public Universities: A Kenyan Case Study.
ERIC Educational Resources Information Center
Rodrigues, Anthony J.; Wandiga, Shem O.
1997-01-01
Presents an analysis of government policy on higher education finance in Kenya, outlines parameters of an appropriate tuition policy (cost recovery, equity, equal access, affordability, student loan program objectives), and provides a model for simulating loan program outcomes. Suggests several proposed policy and administrative reforms concerning…
Faculty Planning and Affirmative Action
ERIC Educational Resources Information Center
Linnell, Robert H.; Gray, Paul
1977-01-01
The use of a model to examine the impact of affirmative action policies on the female/male ratio of a faculty of natural sciences is reported. Increased analysis and detailed study of suggested alternatives can lead to better resolution of the problems and potentials of affirmative action programs. (LBH)
ERIC Educational Resources Information Center
Panza, Carol M.
2001-01-01
Suggests that human performance technologists need to have an analysis approach to support the development of an appropriate set of improvement recommendations for clients and then move to an action plan to help them see results. Presents a performance improvement model and a systematic approach that considers organizational context, ownership,…
An Analysis of the Demand for and Value of Outdoor Recreation in the United States.
ERIC Educational Resources Information Center
Bergstrom, John C.; Cordell, H. Ken
1991-01-01
Results of a study of demand equations for 37 outdoor recreational activities using a multicommunity, multisite travel cost model suggest that determinants of the demand for outdoor recreation include population, residence, income, age, price, quality, and recreational opportunity substitutes. (JD)
An Analysis of Web Image Queries for Search.
ERIC Educational Resources Information Center
Pu, Hsiao-Tieh
2003-01-01
Examines the differences between Web image and textual queries, and attempts to develop an analytic model to investigate their implications for Web image retrieval systems. Provides results that give insight into Web image searching behavior and suggests implications for improvement of current Web image search engines. (AEF)
Early College High Schools: Model Policy Components. Policy Analysis
ERIC Educational Resources Information Center
Zinth, Jennifer
2016-01-01
An alarming convergence of factors--diminishing percentages of high school graduates enrolling immediately in postsecondary education, traditionally underserved students comprising a growing proportion of the overall U.S. school population, and projections that more occupations in the future will need education beyond high school--suggest that…
Affect in the "Communicative" Classroom: A Model.
ERIC Educational Resources Information Center
Acton, William
Recent research on affective variables and classroom second language learning suggests that: (1) affective variables are context-sensitive in at least two ways; (2) attitudes are contagious, and the general attitude of students can be influenced from various directions; (3) research in pragmatics, discourse analysis, and communicative functions…
Hypertension and cancer are prevalent diseases. Epidemiological studies suggest that hypertension may increase the long term risk of cancer. Identification of resistance and/or susceptibility genes using rodent models could provide important insights into the management and treat...
Self-Determination and U.S. Support of Insurgents: A Policy-Analysis Model
1988-01-01
parallels the European Convention model. The American Convention on Human Rights is the basis for an Inter-American human rights system which has its...fundamental change takes place among states and in the context of state-initiated action. Although they do not necessarily have the same capacities as...ability to adapt to the ever-changing circumstances in which conflicts arise. The global experience, Stone suggested, has proven that literal
NASA Astrophysics Data System (ADS)
Asai, Satoru; Hanyu, Isamu; Nunokawa, Mitsuji; Abe, Masayuki
1991-03-01
We studied the thermal effects in a positive photoresist during post-exposure baking (PEB). Infrared analysis and the reduced dissolution rate of the exposed resist suggest that the carboxylic acid is depleted and/or that the ECA solvent evaporates. To simulate these effects, we assume that the concentration of the alkali-soluble material (carboxylic acid) decreases equivalently. Our model explains PEB and enables its effects to be simulated.
ERIC Educational Resources Information Center
Heene, Moritz; Hilbert, Sven; Draxler, Clemens; Ziegler, Matthias; Buhner, Markus
2011-01-01
Fit indices are widely used in order to test the model fit for structural equation models. In a highly influential study, Hu and Bentler (1999) showed that certain cutoff values for these indices could be derived, which, over time, has led to the reification of these suggested thresholds as "golden rules" for establishing the fit or other aspects…
Pruchnicki, Shawn A; Wu, Lora J; Belenky, Gregory
2011-05-01
On 27 August 2006 at 0606 eastern daylight time (EDT) at Bluegrass Airport in Lexington, KY (LEX), the flight crew of Comair Flight 5191 inadvertently attempted to take off from a general aviation runway too short for their aircraft. The aircraft crashed, killing 49 of the 50 people on board. To better understand this accident and to aid in preventing similar accidents, we applied mathematical modeling to predict fatigue-related degradation in performance for the Air Traffic Controller on duty at the time of the crash. To provide the necessary input to the model, we attempted to estimate circadian phase and sleep/wake histories for the Captain, First Officer, and Air Traffic Controller. We were able to estimate the circadian phase of each with confidence. We were able to estimate the sleep/wake history of the Air Traffic Controller with confidence, but were unable to do so for the Captain and First Officer. Using the sleep/wake history estimates for the Air Traffic Controller as input, the mathematical modeling predicted moderate fatigue-related performance degradation at the time of the crash. This prediction was supported by the presence of what appeared to be fatigue-related behaviors in the Air Traffic Controller during the 30 min prior to and in the minutes after the crash. Our modeling results do not definitively establish fatigue in the Air Traffic Controller as a cause of the accident; rather, they suggest that, had he been less fatigued, he might have detected Comair Flight 5191's lining up on the wrong runway. We were not able to perform a similar analysis for the Captain and First Officer because we were not able to estimate their sleep/wake histories with confidence. Our estimates of sleep/wake history and circadian rhythm phase for the Air Traffic Controller might generalize to other air traffic controllers and to flight crew operating in the early morning hours at LEX. Relative to other times of day, the modeling results suggest an elevated risk of fatigue-related error, incident, or accident in the early morning, due to truncated sleep from the early start and adverse circadian phase from the time of day. This in turn suggests that fatigue mitigation targeted at early morning starts might reduce fatigue risk. In summary, this study suggests that mathematical models predicting performance from sleep/wake history and circadian phase are (1) useful in retrospective accident analysis, provided reliable sleep/wake histories are available for the accident personnel, and (2) useful in prospective fatigue-risk identification, mitigation, and accident prevention. Copyright © 2010 Elsevier Ltd. All rights reserved.
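Models that predict performance from sleep/wake history and circadian phase are typically built from a homeostatic term and a circadian term. The sketch below is a generic two-process illustration with textbook-style placeholder parameters; it is not the specific model or schedule reconstruction used in the study.

```python
import numpy as np

def predicted_alertness(hours_awake, clock_hour, acrophase=18.0):
    """Generic two-process alertness sketch: homeostatic pressure decays
    with time awake; a circadian term peaks near the assumed acrophase.
    All constants are illustrative placeholders."""
    homeostatic = np.exp(-hours_awake / 18.0)
    circadian = 0.2 * np.cos(2 * np.pi * (clock_hour - acrophase) / 24.0)
    return homeostatic + circadian

# Early-morning scenario: awake since 03:30 after truncated sleep, at 06:06.
print(predicted_alertness(hours_awake=2.6, clock_hour=6.1))
# Afternoon comparison: longer time awake, but a favorable circadian phase.
print(predicted_alertness(hours_awake=8.0, clock_hour=15.0))
```

Despite the shorter time awake, the early-morning case scores lower because the circadian trough dominates, which is the qualitative point the abstract makes about early starts.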
Case based reasoning in criminal intelligence using forensic case data.
Ribaux, O; Margot, P
2003-01-01
A model based on the knowledge of experienced investigators in the analysis of serial crime is suggested to bridge the gap between technology and methodology. Its purpose is to provide a solid methodology for the analysis of serial crimes, one that supports decision making in the deployment of resources, either by guiding proactive policing operations or by helping the investigative process. Formalisation has helped to derive a computerised system that efficiently supports the reasoning processes involved in the analysis of serial crime. This novel approach fully integrates forensic science data.
A Bayesian approach for temporally scaling climate for modeling ecological systems
Post van der Burg, Max; Anteau, Michael J.; McCauley, Lisa A.; Wiltermuth, Mark T.
2016-01-01
With climate change becoming more of a concern, many ecologists are including climate variables in their system and statistical models. The Standardized Precipitation Evapotranspiration Index (SPEI) is a drought index with potential advantages for modeling ecological response variables, including a flexible computation of the index over different timescales. However, little work has addressed the choice of timescale for SPEI. We developed a Bayesian modeling approach for estimating the timescale for SPEI and demonstrated its use in modeling wetland hydrologic dynamics in two different eras (i.e., historical [pre-1970] and contemporary [post-2003]). Our goal was to determine whether differences in climate between the two eras could explain changes in the amount of water in wetlands. Our results showed that wetland water surface areas tended to be larger in wetter conditions, but also changed less in response to climate fluctuations in the contemporary era. We also found that the average timescale parameter was greater in the historical period than in the contemporary period. We were not able to determine whether this shift in timescale was due to a change in the timing of wet–dry periods or to changes in the way wetlands responded to climate. Our results suggest that some interaction between climate and hydrologic response may be at work, and further analysis is needed to determine which has the stronger influence. Nonetheless, our modeling approach enabled us to estimate the relevant timescale for SPEI and make inferences from those estimates. Likewise, our approach provides a mechanism for using prior information with future data to assess whether these patterns continue over time. We suggest that ecologists consider using temporally scalable climate indices in conjunction with Bayesian analysis when assessing the role of climate in ecological systems.
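One way to cast timescale selection in Bayesian terms is to profile a likelihood over a grid of candidate aggregation windows. The sketch below does this with a simple moving-average index and synthetic data; the real SPEI computation and the wetland response model are considerably richer.

```python
import numpy as np

# Grid-posterior sketch: for each candidate timescale, aggregate the climate
# series, fit a linear response model, and weight the timescale by its
# Gaussian profile likelihood under a flat prior. Data are synthetic.
rng = np.random.default_rng(7)
months = 240
climate = rng.normal(size=months)          # monthly moisture anomaly
true_k = 12                                # timescale used to generate the data
driver = np.convolve(climate, np.ones(true_k) / true_k, mode="valid")
area = 2.0 + 1.5 * driver + rng.normal(scale=0.3, size=driver.size)

timescales = range(1, 37)
log_like = []
for k in timescales:
    agg = np.convolve(climate, np.ones(k) / k, mode="valid")
    n = min(agg.size, area.size)           # align series at their final month
    X = np.column_stack([np.ones(n), agg[-n:]])
    beta, *_ = np.linalg.lstsq(X, area[-n:], rcond=None)
    resid = area[-n:] - X @ beta
    log_like.append(-0.5 * n * np.log(resid.var()))

log_like = np.array(log_like)
post = np.exp(log_like - log_like.max())   # unnormalized posterior, flat prior
post /= post.sum()
print("posterior-mode timescale:", list(timescales)[int(np.argmax(post))], "months")
```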
Genetic and ecological insights into glacial refugia of walnut (Juglans regia L.).
Aradhya, Mallikarjuna; Velasco, Dianne; Ibrahimov, Zakir; Toktoraliev, Biimyrza; Maghradze, David; Musayev, Mirza; Bobokashvili, Zviadi; Preece, John E
2017-01-01
The distribution and survival of trees during the last glacial maximum (LGM) has been of interest to paleoecologists, biogeographers, and geneticists. Ecological niche models that associate species occurrence and abundance with climatic variables are widely used to gain ecological and evolutionary insights and to predict species distributions over space and time. The present study deals with the glacial history of walnut to address questions related to past distributions through genetic analysis and ecological modeling of the present, LGM and Last Interglacial (LIG) periods. A maximum entropy method was used to project the current walnut distribution model on to the LGM (21-18 kyr BP) and LIG (130-116 kyr BP) climatic conditions. Model tuning identified the walnut data set filtered at 10 km spatial resolution as the best for modeling the current distribution and to hindcast past (LGM and LIG) distributions of walnut. The current distribution model predicted southern Caucasus, parts of West and Central Asia extending into South Asia encompassing northern Afghanistan, Pakistan, northwestern Himalayan region, and southwestern Tibet, as the favorable climatic niche matching the modern distribution of walnut. The hindcast of distributions suggested the occurrence of walnut during LGM was somewhat limited to southern latitudes from southern Caucasus, Central and South Asian regions extending into southwestern Tibet, northeastern India, Himalayan region of Sikkim and Bhutan, and southeastern China. Both CCSM and MIROC projections overlapped, except that MIROC projected a significant presence of walnut in the Balkan Peninsula during the LGM. In contrast, genetic analysis of the current walnut distribution suggested a much narrower area in northern Pakistan and the surrounding areas of Afghanistan, northwestern India, and southern Tajikistan as a plausible hotspot of diversity where walnut may have survived glaciations. Overall, the findings suggest that walnut perhaps survived the last glaciations in several refugia across a wide geographic area between 30° and 45° North latitude. However, humans probably played a significant role in the recent history and modern distribution of walnut.
Mathematical analysis of a lymphatic filariasis model with quarantine and treatment.
Mwamtobe, Peter M; Simelane, Simphiwe M; Abelman, Shirley; Tchuenche, Jean M
2017-03-16
Lymphatic filariasis is a globally neglected tropical parasitic disease which affects individuals of all ages and leads to an altered lymphatic system and abnormal enlargement of body parts. A mathematical model of lymphatic filariasis with intervention strategies is developed and analyzed. Control of infections is analyzed within the model through medical treatment of infected-acute individuals and quarantine of infected-chronic individuals. We derive the effective reproduction number, R_e, and its analysis suggests that treatment contributes to a reduction in lymphatic filariasis cases faster than quarantine does. However, the reduction is greater when the two intervention approaches are applied concurrently. Numerical simulations are carried out to monitor the dynamics of the filariasis model sub-populations for various parameter values of the associated reproduction threshold. Lastly, a sensitivity analysis of the key parameters that drive the disease dynamics is performed in order to identify their relative importance to disease transmission.
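The qualitative finding, that treatment lowers the effective reproduction number faster than quarantine and that the two together lower it most, can be illustrated with a generic two-class expression in which each control shortens the time spent infectious. The formula and parameter values below are illustrative stand-ins, not the paper's derived R_e.

```python
# Illustrative (not the paper's) effective reproduction number for a model
# with an acute class treated at rate tau and a chronic class quarantined
# at rate q; each control shortens the infectious time in its class.
def effective_R(beta_a, beta_c, mu, delta, tau, q):
    """beta_a/beta_c: transmission from acute/chronic; mu: removal rate;
    delta: acute-to-chronic progression; tau: treatment; q: quarantine."""
    time_acute = 1.0 / (mu + delta + tau)
    time_chronic = (delta / (mu + delta + tau)) / (mu + q)
    return beta_a * time_acute + beta_c * time_chronic

base  = effective_R(0.4, 0.2, 0.02, 0.1, tau=0.0, q=0.0)
treat = effective_R(0.4, 0.2, 0.02, 0.1, tau=0.3, q=0.0)
quar  = effective_R(0.4, 0.2, 0.02, 0.1, tau=0.0, q=0.3)
both  = effective_R(0.4, 0.2, 0.02, 0.1, tau=0.3, q=0.3)
print(f"R_e: baseline {base:.2f}, treatment {treat:.2f}, "
      f"quarantine {quar:.2f}, both {both:.2f}")
```

With these placeholder rates, treatment cuts R_e more than quarantine because it both shortens the acute stage and reduces the inflow into the chronic stage, mirroring the abstract's conclusion.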
The Evolution of Human Handedness
McManus, I C; Davison, Angus; Armour, John A L
2013-01-01
Right- and left-handedness run in families, show greater concordance in monozygotic than dizygotic twins, and are well described by single-locus Mendelian models. Here we summarize a large genome-wide association study (GWAS) that finds no significant associations with handedness and is consistent with a meta-analysis of GWASs. The GWAS had 99% power to detect a single locus using the conventional criterion of P < 5 × 10^-8 for the single locus models of McManus and Annett. The strong conclusion is that handedness is not controlled by a single genetic locus. A consideration of the genetic architecture of height, primary ciliary dyskinesia, and intelligence suggests that handedness inheritance can be explained by a multilocus variant of the McManus DC model, classical effects on family and twins being barely distinguishable from the single locus model. Based on the ENGAGE meta-analysis of GWASs, we estimate at least 40 loci are involved in determining handedness. PMID:23631511
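The quoted power at P < 5 × 10^-8 corresponds to a standard noncentral chi-squared calculation for a 1-df association test. The sketch below shows that calculation for a placeholder sample size across a range of locus effect sizes; the cohort size and effect grid are assumptions, not the paper's figures.

```python
from scipy import stats

# Power of a 1-df association test at genome-wide significance, as a
# function of the phenotypic variance a single locus would explain.
alpha, n = 5e-8, 30000                    # n is a placeholder sample size
crit = stats.chi2.ppf(1 - alpha, df=1)    # genome-wide significance threshold

for var_explained in (0.0005, 0.001, 0.005, 0.01):
    ncp = n * var_explained               # noncentrality for a 1-df test
    power = stats.ncx2.sf(crit, df=1, nc=ncp)
    print(f"variance explained {var_explained:.4f}: power = {power:.3f}")
```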