Verus: A Tool for Quantitative Analysis of Finite-State Real-Time Systems.
1996-08-12
Symbolic model checking is a technique for verifying finite-state concurrent systems that has been extended to handle real-time systems. Models with...up to 10^30 states can often be verified in minutes. In this paper, we present a new tool to analyze real-time systems, based on this technique...We have designed a language, called Verus, for the description of real-time systems. Such a description is compiled into a state-transition graph and
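The abstract describes compiling a system description into a state-transition graph and verifying it by symbolic model checking. The fixpoint computation at the heart of that technique can be sketched as follows; this is a hypothetical toy with explicit state sets, not the Verus implementation, which uses symbolic (BDD-based) encodings to make 10^30-state spaces tractable.

```python
# Toy sketch of the reachability fixpoint used in model checking:
# states are labels, the transition relation is a set of (state, state)
# pairs, and the reachable set is the least fixpoint
#   R0 = initial,  R(i+1) = R(i) ∪ post(R(i)).
# Real tools represent these sets symbolically with BDDs; this explicit
# version only illustrates the algorithm, not the encoding.

def reachable(initial, transitions):
    """Compute all states reachable from `initial` under `transitions`."""
    reached = set(initial)
    frontier = set(initial)
    while frontier:
        # Post-image of the frontier under the transition relation,
        # minus states already visited.
        frontier = {t for (s, t) in transitions if s in frontier} - reached
        reached |= frontier
    return reached

# Example: a 2-bit counter that wraps around, i.e. a 4-state graph.
trans = {(i, (i + 1) % 4) for i in range(4)}
print(sorted(reachable({0}, trans)))  # prints [0, 1, 2, 3]
```

Safety properties ("no bad state is reachable") reduce to checking that the bad states are disjoint from this fixpoint.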
Measurements of postnatal growth of the skull of Pan troglodytes verus using lateral cephalograms.
Arnold, Wolfgang H; Protsch von Zieten, Reiner; Schmidt, Ekehard
2003-03-01
The postnatal growth of the viscerocranium in relation to the neurocranium of Pan troglodytes verus has been investigated using standardized lateral cephalograms. Sex and age were determined on the basis of cranial morphology and the skulls were divided into four age groups: infantile, juvenile, subadult and adult. The cephalograms were traced on transparencies and specific anatomical landmarks were identified for the measurement of lines, angles and the area of the neurocranium and viscerocranium. The results showed that the skull of Pan troglodytes verus exhibits klinorhynchy. During postnatal growth it develops towards airorhynchy, but never shows true airorhynchy. In the infantile age group the measured area of the neurocranium is larger than that of the viscerocranium. The measured area of the viscerocranium increases until adulthood and is larger than that of the neurocranium in the subadult and adult group. From the results we conclude that in Pan troglodytes verus growth of the neurocranium ceases early in juvenile individuals, whereas the viscerocranium grows until adulthood. This may reflect an adaptation to the masticatory system.
Pruetz, Jill D; LaDuke, Thomas C
2010-04-01
The use and control of fire are uniquely human traits thought to have come about fairly late in the evolution of our lineage, and they are hypothesized to correlate with an increase in intellectual complexity. Given the relatively sophisticated cognitive abilities yet small brain size of living apes compared to humans and even early hominins, observations of wild chimpanzees' reactions to naturally occurring fire can help inform hypotheses about the likely responses of early hominins to fire. We use data on the behavior of savanna chimpanzees (Pan troglodytes verus) at Fongoli, Senegal during two encounters with wildfires to illuminate the similarities between great apes and humans regarding their reaction to fire. Chimpanzees' close relatedness to our lineage makes them phylogenetically relevant to the study of hominid evolution, and the open, hot and dry environment at Fongoli, similar to the savanna mosaic thought to characterize much of hominid evolution, makes these apes ecologically important as a living primate model as well. Chimpanzees at Fongoli calmly monitor wildfires and change their behavior in anticipation of the fire's movement. The ability to conceptualize the "behavior" of fire may be a synapomorphic trait characterizing the human-chimpanzee clade. If the cognitive underpinnings of fire conceptualization are a primitive hominid trait, hypotheses concerning the origins of the control and use of fire may need revision. We argue that our findings exemplify the importance of using living chimpanzees as models for better understanding human evolution despite recently published suggestions to the contrary. (c) 2009 Wiley-Liss, Inc.
Craniofacial variation and dietary adaptations of African colobines.
Koyabu, Daisuke B; Endo, Hideki
2009-06-01
African colobine monkeys show considerable craniofacial variation among species, although the evolutionary causes of this diversity are unclear. In light of growing evidence that diet varies considerably among colobine species, we investigated whether colobine craniofacial morphology varies as a function of their diet. We compared craniofacial morphology among five African species: Colobus angolensis, C. guereza, C. polykomos, Piliocolobus badius, and P. verus. Matrix correlation analysis indicated a significant correlation between species-specific morphological distance and dietary distance matrices. The mechanical advantage of the masseter muscle was higher in seed-eaters (C. angolensis and C. polykomos) and lower in those that eat mainly young leaves (C. guereza, P. badius, and P. verus). Canonical correspondence analysis revealed that the durophagous colobines possess relatively wider bigonial breadths, anteroposteriorly shorter faces, shorter postcanine tooth rows, more medially positioned dental batteries, wider bizygomatic arches, and anteroposteriorly longer zygomatic arches. Under the constrained lever model, these morphological features suggest that durophagous colobines have the capacity to generate relatively greater maximum bite forces. However, no consistent relationship was observed between diet and variation in the mandibular corpus and symphysis, implying that robust mandibles are not necessarily adaptations for stress resistance. Factors that may influence mandibular robusticity include allometry of symphyseal curvature and canine tooth support. Finally, linear measures of mandibular robusticity may suffer from error.
Ely, John J; Dye, Brent; Frels, William I; Fritz, Jo; Gagneux, Pascal; Khun, Henry H; Switzer, William M; Lee, D Rick
2005-10-01
Chimpanzees are presently classified into three subspecies: Pan troglodytes verus from west Africa, P.t. troglodytes from central Africa, and P.t. schweinfurthii from east Africa. A fourth subspecies (P.t. vellerosus), from Cameroon and northern Nigeria, has been proposed. These taxonomic designations are based on geographical origins and are reflected in sequence variation in the first hypervariable region (HVR-I) of the mtDNA D-loop. Although advances have been made in our understanding of chimpanzee phylogenetics, little has been known regarding the subspecies composition of captive chimpanzees. We sequenced part of the mtDNA HVR-I region in 218 African-born population founders and performed a phylogenetic analysis with previously characterized African sequences of known provenance to infer subspecies affiliations. Most founders were P.t. verus (95.0%), distantly followed by the troglodytes schweinfurthii clade (4.6%), and a single P.t. vellerosus (0.4%). Pedigree-based estimates of genomic representation in the descendant population revealed that troglodytes schweinfurthii founder representation was reduced in captivity, vellerosus representation increased due to prolific breeding by a single male, and reproductive variance resulted in uneven representation among male P.t. verus founders. No increase in mortality was evident from between-subspecies interbreeding, indicating a lack of outbreeding depression. Knowledge of subspecies and their genomic representation can form the basis for phylogenetically informed genetic management of extant chimpanzees to preserve rare genetic variation for research, conservation, or possible future breeding. Copyright 2005 Wiley-Liss, Inc.
Rend Lake, Illinois (Operation and Maintenance).
1976-12-01
Lippia lanceolata...Foxtail, Bristly Setaria faberii Foxtail, Green Setaria viridis Foxtail, Yellow Setaria lutescens Galingale Cyperus strigosus Garlic...Stickseed, Virginian Hackelia virginiana Stink-Grass Eragrostis cilianensis Strawberry Fragaria virginiana Sunflower, Common Helianthus annuus Swamp
Ferreira, Zélia; Hurle, Belen; Andrés, Aida M.; Kretzschmar, Warren W.; Mullikin, James C.; Cherukuri, Praveen F.; Cruz, Pedro; Gonder, Mary Katherine; Stone, Anne C.; Tishkoff, Sarah; Swanson, Willie J.; Green, Eric D.; Clark, Andrew G.; Seixas, Susana
2013-01-01
Recent efforts have attempted to describe the population structure of common chimpanzee, focusing on four subspecies: Pan troglodytes verus, P. t. ellioti, P. t. troglodytes, and P. t. schweinfurthii. However, few studies have pursued the effects of natural selection in shaping their response to pathogens and reproduction. Whey acidic protein (WAP) four-disulfide core domain (WFDC) genes and neighboring semenogelin (SEMG) genes encode proteins with combined roles in immunity and fertility. They display a strikingly high rate of amino acid replacement (dN/dS), indicative of adaptive pressures during primate evolution. In human populations, three signals of selection at the WFDC locus were described, possibly influencing the proteolytic profile and antimicrobial activities of the male reproductive tract. To evaluate the patterns of genomic variation and selection at the WFDC locus in chimpanzees, we sequenced 17 WFDC genes and 47 autosomal pseudogenes in 68 chimpanzees (15 P. t. troglodytes, 22 P. t. verus, and 31 P. t. ellioti). We found a clear differentiation of P. t. verus and estimated the divergence of the P. t. troglodytes and P. t. ellioti subspecies at 0.173 Myr; further, at the WFDC locus we identified a signature of strong selective constraints common to the three subspecies in WFDC6—a recent paralog of the epididymal protease inhibitor EPPIN. Overall, chimpanzees and humans do not display similar footprints of selection across the WFDC locus, possibly due to different selective pressures between the two species related to immune response and reproductive biology. PMID:24356879
Pruetz, J D; Bertolani, P; Ontl, K Boyer; Lindshield, S; Shelley, M; Wessling, E G
2015-04-01
For anthropologists, meat eating by primates like chimpanzees (Pan troglodytes) warrants examination given the emphasis on hunting in human evolutionary history. As referential models, apes provide insight into the evolution of hominin hunting, given their phylogenetic relatedness and challenges reconstructing extinct hominin behaviour from palaeoanthropological evidence. Among chimpanzees, adult males are usually the main hunters, capturing vertebrate prey by hand. Savannah chimpanzees (P. t. verus) at Fongoli, Sénégal are the only known non-human population that systematically hunts vertebrate prey with tools, making them an important source for hypotheses of early hominin behaviour based on analogy. Here, we test the hypothesis that sex and age patterns in tool-assisted hunting (n=308 cases) at Fongoli occur and differ from chimpanzees elsewhere, and we compare tool-assisted hunting to the overall hunting pattern. Males accounted for 70% of all captures but hunted with tools less than expected based on their representation on hunting days. Females accounted for most tool-assisted hunting. We propose that social tolerance at Fongoli, along with the tool-assisted hunting method, permits individuals other than adult males to capture and retain control of prey, which is uncommon for chimpanzees. We assert that tool-assisted hunting could have similarly been important for early hominins.
Liégeois, Florian; Lafay, Bénédicte; Formenty, Pierre; Locatelli, Sabrina; Courgnaud, Valérie; Delaporte, Eric; Peeters, Martine
2009-01-01
Simian immunodeficiency viruses (SIVs) are found in an extensive number of African primates and humans continue to be exposed to these viruses by hunting and handling of primate bushmeat. Full-length genome sequences were obtained from SIVs derived from two Colobinae species inhabiting the Taï forest, Ivory Coast, each belonging to a different genus: SIVwrc from western red colobus (Piliocolobus badius badius) (SIVwrcPbb-98CI04 and SIVwrcPbb-97CI14) and SIVolc (SIVolc-97CI12) from olive colobus (Procolobus verus). Phylogenetic analysis showed that western red colobus are the natural hosts of SIVwrc, and SIVolc is also a distinct species-specific lineage, although distantly related to the SIVwrc lineage across the entire length of its genome. Overall, both SIVwrc and SIVolc, are also distantly related to the SIVlho/sun lineage across the whole genome. Similar to the group of SIVs (SIVsyk, SIVdeb, SIVden, SIVgsn, SIVmus, and SIVmon) infecting members of the Cercopithecus genus, SIVs derived from western red and olive colobus, L'Hoest and suntailed monkeys, and SIVmnd-1 from mandrills form a second group of viruses that cluster consistently together in phylogenetic trees. Interestingly, the divergent SIVcol lineage, from mantled guerezas (Colobus guereza) in Cameroon, is also closely related to SIVwrc, SIVolc, and the SIVlho/sun lineage in the 5′ part of Pol. Overall, these results suggest an ancestral link between these different lentiviruses and highlight once more the complexity of the natural history and evolution of primate lentiviruses. PMID:18922864
Courgnaud, Valerie; Formenty, Pierre; Akoua-Koffi, Chantal; Noe, Ronald; Boesch, Christophe; Delaporte, Eric; Peeters, Martine
2003-01-01
In order to study primate lentivirus evolution in the Colobinae subfamily, in which only one simian immunodeficiency virus (SIV) has been described to date, we screened additional species from the three different genera of African colobus monkeys for SIV infection. Blood was obtained from 13 West African colobids, and HIV cross-reactive antibodies were observed in 5 of 10 Piliocolobus badius, 1 of 2 Procolobus verus, and 0 of 1 Colobus polykomos specimens. Phylogenetic analyses of partial pol sequences revealed that the new SIVs were more closely related to each other than to the other SIVs and especially did not cluster with the previously described SIVcol from Colobus guereza. This study presents evidence that the three genera of African colobus monkeys are naturally infected with an SIV and indicates also that there was no coevolution between virus and hosts at the level of the Colobinae subfamily. PMID:12477880
Halloran, Andrew R; Cloutier, Christina T; Sesay, Papanie Bai
2013-06-01
A previously undocumented group of wild chimpanzees (Pan troglodytes verus) was recently discovered along the Pampana River in the Tonkolili District of Sierra Leone. Based on interviews from local residents (N = 6), we estimate the group size to be approximately 30 individuals. Though this population does not show up in the most recent census of chimpanzees in Sierra Leone, it concurs with findings that indicate most of the chimpanzees in Sierra Leone live scattered throughout the country alongside villages, rather than in protected areas. During a three-week observation in the area, two chimpanzees were hunted and killed. These deaths, along with other reported instances of hunting in the area, are primarily due to crop-raiding and competition for resources between chimpanzees and humans. We conclude that this is a heavily imperiled population. Based on the ecology of the area and composition of local villages, we propose a number of conservation strategies that will promote a symbiotic relationship between the chimpanzees and human populations residing in the area. © 2013 Wiley Periodicals, Inc.
Placentophagy in wild chimpanzees (Pan troglodytes verus) at Bossou, Guinea.
Fujisawa, Michiko; Hockings, Kimberley J; Soumah, Aly Gaspard; Matsuzawa, Tetsuro
2016-04-01
Despite intensive observation of nonhuman great apes during long-term field studies, observations of great ape births in the wild are rare. Research on wild chimpanzees (Pan troglodytes verus) at Bossou in the Republic of Guinea has been ongoing for 35 years, yet chimpanzee parturitions have been observed on only two occasions. Here we provide information regarding both chimpanzee births, with detailed information from the close observation of one. During this birth, the mother built a day nest in a tree before parturition. After giving birth, the mother consumed the placenta, and the other chimpanzees in her party gathered near her and her neonate. However, she did not share the placenta, and consumed it all herself. In the second observation, the mother also built a nest in a tree and subsequently gave birth. Thereafter, she shared the placenta with some individuals and consumed part of the placenta herself. Although maternal placentophagy is a ubiquitous behavior among the majority of non-human primates, observations of placenta sharing by wild primates are infrequent, and the proximate and ultimate explanations for the behavior remain unclear.
Why Are Nigeria-Cameroon Chimpanzees (Pan troglodytes ellioti) Free of SIVcpz Infection?
Locatelli, Sabrina; Harrigan, Ryan J; Sesink Clee, Paul R; Mitchell, Matthew W; McKean, Kurt A; Smith, Thomas B; Gonder, Mary Katherine
2016-01-01
Simian immunodeficiency virus (SIV) naturally infects two subspecies of chimpanzee: Pan troglodytes troglodytes from Central Africa (SIVcpzPtt) and P. t. schweinfurthii from East Africa (SIVcpzPts), but is absent in P. t. verus from West Africa and appears to be absent in P. t. ellioti inhabiting Nigeria and western Cameroon. One explanation for this pattern is that P. t. troglodytes and P. t. schweinfurthii may have acquired SIVcpz after their divergence from P. t. verus and P. t. ellioti. However, all of the subspecies, except P. t. verus, still occasionally exchange migrants, making the absence of SIVcpz in P. t. ellioti puzzling. Sampling of P. t. ellioti has been minimal to date, particularly along the banks of the Sanaga River, where its range abuts that of P. t. troglodytes. This study had three objectives. First, we extended the sampling of SIVcpz across the range of chimpanzees north of the Sanaga River to address whether under-sampling might account for the absence of evidence for SIVcpz infection in P. t. ellioti. Second, we investigated how environmental variation is associated with the spread and prevalence of SIVcpz in the two chimpanzee subspecies inhabiting Cameroon, since environmental variation has been shown to contribute to their divergence from one another. Finally, we compared the prevalence and distribution of SIVcpz with that of Simian Foamy Virus (SFV) to examine the role of ecology and behavior in shaping the distribution of diseases in wild host populations. The dataset includes previously published results on SIVcpz infection and SFVcpz as well as newly collected data, and represents over 1000 chimpanzee fecal samples from 41 locations across Cameroon. Results revealed that none of the 181 P. t. ellioti fecal samples collected across the range of P. t. ellioti tested positive for SIVcpz.
In addition, species distribution models suggest that environmental variation contributes to differences in the distribution and prevalence of SIVcpz and SFVcpz. The ecological niches of these two viruses are largely non-overlapping, although stronger statistical support for this conclusion will require more sampling. Overall this study demonstrates that SIVcpz infection is absent or very rare in P. t. ellioti, despite multiple opportunities for transmission. The reasons for its absence remain unclear, but might be explained by one or more factors, including environmental variation, viral competition, and/or local adaptation, all of which should be explored in greater detail through continued surveillance of this region. PMID:27505066
Function of the Alpha6 in Breast Carcinoma
1999-10-01
Miao, J., S. Araki, K. Kaji, and H. Hayashi. 1997. Integrin β4 is involved in apoptotic signal transduction in endothelial cells. Biochem. Biophys...cadherin expression in neoplastic versus normal thyroid tissue. J. Natl. Cancer Inst. 88:442-449. 6. Shaw, L. M., I. Rabinovitz, H. H.-F. Wang, A. Toker, and...Integrin and cAMP Metabolism. Kathleen L. O'Connor, Bao-Kim Nguyen and Arthur M. Mercurio. Department of Medicine, Beth Israel Deaconess Medical Center and
Carvalho, Joana S; Meyer, Christoph F J; Vicente, Luis; Marques, Tiago A
2015-02-01
Conversion of forests to anthropogenic land-uses increasingly subjects chimpanzee populations to habitat changes and concomitant alterations in the plant resources available to them for nesting and feeding. Based on nest count surveys conducted during the dry season, we investigated nest tree species selection and the effect of vegetation attributes on nest abundance of the western chimpanzee, Pan troglodytes verus, at Lagoas de Cufada Natural Park (LCNP), Guinea-Bissau, a forest-savannah mosaic widely disturbed by humans. Further, we assessed patterns of nest height distribution to determine support for the anti-predator hypothesis. A zero-altered generalized linear mixed model showed that nest abundance was negatively related to floristic diversity (exponential form of the Shannon index) and positively related to the availability of smaller-sized trees, reflecting characteristics of dense-canopy forest. A positive correlation between nest abundance and floristic richness (number of plant species) and composition indicated that species-rich open habitats are also important in nest site selection. Restricting this analysis to feeding trees, nest abundance was again positively associated with the availability of smaller-sized trees, further supporting the preference for nesting in food tree species from dense forest. Nest tree species selection was non-random, and oil palms were used at a much lower proportion (10%) than previously reported from other study sites in forest-savannah mosaics. While this study suggests that human disturbance may underlie the exclusively arboreal nesting at LCNP, better quantitative data are needed to determine to what extent the construction of elevated nests is in fact a response to predators able to climb trees. Given the importance of LCNP as a refuge for Pan t. verus, our findings can improve conservation decisions for the management of this important umbrella species as well as its remaining suitable habitats. © 2014 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
1978-01-01
The feasibility of the commercial manufacturing of pharmaceuticals in space is examined. The method of obtaining pharmaceutical company involvement, laboratory results of the separation of serum proteins by the continuous flow electrophoresis process, the selection and study of candidate products, and their production requirements are presented. Antihemophilic factor, beta cells, erythropoietin, epidermal growth factor, alpha-1-antitrypsin and interferon were studied. Production mass balances for antihemophilic factor, beta cells, and erythropoietin were compared for space versus ground operation.
Savanna chimpanzee (Pan troglodytes verus) nesting ecology at Bagnomba (Kedougou, Senegal).
Badji, L; Ndiaye, P I; Lindshield, S M; Ba, C T; Pruetz, J D
2018-05-01
We studied the nesting behavior of the critically endangered West African chimpanzee (Pan troglodytes verus). We assumed that the nesting data stemmed from a single, unhabituated community at the Bagnomba hill site in the savanna-woodlands of southeastern Senegal. The aim of this study was to examine chimpanzees' nesting habits in terms of the tree species utilized and sleeping nest heights. We recorded a total of 550 chimpanzee nests at Bagnomba between January 2015 and December 2015. The chimpanzees here made nests in particular tree species more often than others. The majority of nests (63%) were in two tree species: Diospyros mespiliformis and Pterocarpus erinaceus. The average height of nesting trees was 10.54 m (SD 3.91, range 0.0-29.0 m) and average nest height was 7.90 m (SD 3.62, range 0.0-25.0 m). The result of a linear regression analysis (r = 0.7874; n = 550; p < 0.05) is consistent with a preference for nesting at a particular height. Bagnomba chimpanzees rarely made ground nests (0.36% of nests), but the presence of any ground nesting was unexpected, given that at least one leopard (Panthera pardus) also occupied the hill. This knowledge will enable stakeholders involved in the protection of chimpanzees specifically and of biodiversity in general to better understand chimpanzee ecology and inform a conservation action plan in Senegal, where the survival of this species is threatened.
Fahy, Geraldine E; Boesch, Christophe; Hublin, Jean-Jacques; Richards, Michael P
2015-11-01
Changes in diet throughout hominin evolution have been linked with important evolutionary changes. Stable carbon isotope analysis of inorganic apatite carbonate is the main isotopic method used to reconstruct fossil hominin diets; to test its effectiveness as a paleodietary indicator we present bone and enamel carbonate carbon isotope data from a well-studied population of modern wild western chimpanzees (Pan troglodytes verus) of known sex and age from Taï, Côte d'Ivoire. We found a significant effect of age class on bone carbonate values, with adult chimpanzees being more ¹³C- and ¹⁸O-depleted compared to juveniles. Further, to investigate habitat effects, we compared our data to existing apatite data on eastern chimpanzees (P. troglodytes schweinfurthii) and found that the Taï chimpanzees are significantly more depleted in enamel δ¹³Cap and δ¹⁸Oap compared to their eastern counterparts. Our data are the first to present a range of tissue-specific isotope data from the same group of wild western chimpanzees and, as such, add new data to the growing number of modern non-human primate comparative isotope datasets providing valuable information for the interpretation of diet throughout hominin evolution. By comparing our data to published isotope data on fossil hominins we found that our modern chimpanzee bone and enamel data support hypotheses that the trend towards increased consumption of C4 foods after 4 Ma (millions of years ago) is unique to hominins. Copyright © 2015 Elsevier Ltd. All rights reserved.
Bessa, Joana; Sousa, Cláudia; Hockings, Kimberley J
2015-06-01
With rising conversion of "natural" habitat to other land use such as agriculture, nonhuman primates are increasingly exploiting areas influenced by people and their activities. Despite the conservation importance of understanding the ways in which primates modify their behavior to human pressures, data are lacking, even for well-studied species. Using systematically collected data (fecal samples, feeding traces, and direct observations), we examined the diet and feeding strategies of an unhabituated chimpanzee community (Pan troglodytes verus) at Caiquene-Cadique in Guinea-Bissau that inhabit a forest-savanna-mangrove-agricultural mosaic. The chimpanzees experienced marked seasonal variations in the availability of plant foods, but maintained a high proportion of ripe fruit in the diet across months. Certain wild species were identified as important to this community including oil-palm (Elaeis guineensis) fruit and flower. Honey was frequently consumed but no other insects or vertebrates were confirmed to be eaten by this community. However, we provide indirect evidence of possible smashing and consumption of giant African snails (Achatina sp.) by chimpanzees at this site. Caiquene-Cadique chimpanzees were confirmed to feed on nine different agricultural crops, which represented 13.6% of all plant species consumed. Consumption of fruit and nonfruit crops was regular, but did not increase during periods of wild fruit scarcity. Crop consumption is an increasing and potentially problematic behavior, which can impact local people's tolerance toward wildlife. To maximize the potential success of any human-wildlife coexistence strategy (e.g., to reduce primate crop feeding), knowledge of primate behavior, as well as multifaceted social dimensions of interactions, is critical. © 2015 Wiley Periodicals, Inc.
Lindshield, Stacy; Danielson, Brent J; Rothman, Jessica M; Pruetz, Jill D
2017-07-01
We evaluated risk-sensitive foraging in adult male western chimpanzees (Pan troglodytes verus) occupying a savanna environment at Fongoli, Senegal. The aim of this study was to determine how the risks of predation and heat stress influenced their behavior while feeding on a key food, fruit of the baobab tree (Adansonia digitata). Proximity of fruiting baobab trees to anthropogenic landmarks were compared to food intake, feeding rate, and behavioral indicators of fear in adult males (N = 11) at Fongoli. Additionally, we compared foraging to vegetative habitats, baobab ripe fruit nutritive quality, surface water availability, and foraging party composition. Fruit abundance increased with proximity to anthropogenic landmarks, and chimpanzees exhibited higher frequencies of antipredator behaviors as they approached these risky areas. However, predation risk did not deter adult males from visiting these fruiting trees; instead, risky foraging bouts were associated with higher food intakes and longer feeding times. Additionally, higher feeding rates were observed in open-canopy habitats, and this behavior may have minimized their risk of heat stress. Adaptations that minimize predation risk are widespread in mammalian prey species, but these traits are poorly understood in chimpanzees. Great apes encounter few nonhuman predators capable of successfully capturing and killing them; thus, such events are rarely observed. Although people rarely hunt chimpanzees in Senegal, we found that adult males perceived humans as predators and adjusted their behavior while foraging in risky habitats. From an applied perspective, risk-taking behavior is important for understanding and mitigating the problem of crop-feeding in locations where chimpanzees and humans live in sympatry. © 2017 Wiley Periodicals, Inc.
2011-01-01
Background: Simian Immunodeficiency Viruses (SIVs) are the precursors of Human Immunodeficiency Viruses (HIVs), which have led to the worldwide HIV/AIDS pandemic. By studying SIVs in wild primates we can better understand the circulation of these viruses in their natural hosts and habitat, and perhaps identify factors that influence susceptibility and transmission within and between various host species. We investigated the SIV status of wild West African chimpanzees (Pan troglodytes verus), which frequently hunt and consume the western red colobus monkey (Piliocolobus badius badius), a species known to be infected at high prevalence with its specific SIV strain (SIVwrc). Results: Blood and plasma samples from 32 wild chimpanzees were tested with the INNO-LIA HIV I/II Score kit to detect cross-reactive antibodies to HIV antigens. Twenty-three of the samples were also tested for antibodies to 43 specific SIV and HIV lineages, including SIVwrc. Tissue samples from all but two chimpanzees were tested for SIV by PCRs using generic SIV primers that detect all known primate lentiviruses, as well as primers designed to specifically detect SIVwrc. Seventeen of the chimpanzees showed varying degrees of cross-reactivity to the HIV-specific antigens in the INNO-LIA test; however, no sample had antibodies to SIV or HIV strain- and lineage-specific antigens in the Luminex test. No SIV DNA was found in any of the samples. Conclusions: We could not detect any conclusive trace of SIV infection from the red colobus monkeys in the chimpanzees, despite high exposure to this virus through frequent hunting. The results of our study raise interesting questions regarding the host-parasite relationship of SIVwrc and wild chimpanzees in their natural habitat. PMID:21284842
Does the "office nurse" level of training matter in the family medicine office?
Erickson, Rodney A; Erickson, Richard A; Targonski, Paul V; Cox, Stephen B; Deming, James R; Mold, James W
2012-01-01
The "office nurse" or clinical associate (registered nurse [RN], licensed practical nurse [LPN], or medical assistant [MA]) is a key member of the family medicine care team, but little is known about the influence of their level of training on team performance. The performance of the clinical dyad (clinician and associate) was studied in relation to the level of training of the nurse. The dyad's performance was measured by the performance indicators of diabetes scores, patient satisfaction, and productivity. Dyads with an RN scored higher in meeting all 5 of the diabetes quality indicators (27.8%) than those with an LPN (19.3%) or an MA (14.7%). For patient satisfaction, the RN dyads also scored higher than the other dyad groups (positive responses: RN, 96.8%; LPN, 95.5%; MA, 94.6%). Productivity was the same in all groups. Better diabetes performance was seen in those practices with fewer competing demands: nonrural versus rural (22.2% vs 15.1%, respectively), those not doing obstetrics versus those doing obstetrics (20.3% vs 15.1%, respectively), and physicians versus associate providers (18.8% vs 15.1%, respectively). Higher patient satisfaction was observed in those dyads that were nonrural versus rural (96.6% vs 94.1%), among those doing obstetrics (96.0% vs 94.9%), and in physicians versus associate providers (95.7% vs 93.2%). The number of years working with the same clinician was twice as high for RNs (6.63) and LPNs (6.57) as for MAs (3.29). A higher level of education of the clinical associate seems to confer skills that enhance the care team's management of chronic illness such as diabetes. This could potentially decrease the practice burden on other team members while facilitating the team's objectives in meeting quality indicators.
Bayesian Analysis of Evolutionary Divergence with Genomic Data under Diverse Demographic Models.
Chung, Yujin; Hey, Jody
2017-06-01
We present a new Bayesian method for estimating demographic and phylogenetic history using population genomic data. Several key innovations are introduced that allow the study of diverse models within an Isolation-with-Migration framework. The new method implements a 2-step analysis, with an initial Markov chain Monte Carlo (MCMC) phase that samples simple coalescent trees, followed by the calculation of the joint posterior density for the parameters of a demographic model. In step 1, the MCMC sampling phase, the method uses a reduced state space, consisting of coalescent trees without migration paths, and a simple importance sampling distribution without the demography of interest. Once obtained, a single sample of trees can be used in step 2 to calculate the joint posterior density for model parameters under multiple diverse demographic models, without having to repeat MCMC runs. Because migration paths are not included in the state space of the MCMC phase, but rather are handled by analytic integration in step 2 of the analysis, the method is scalable to a large number of loci with excellent MCMC mixing properties. With an implementation of the new method in the computer program MIST, we demonstrate the method's accuracy, scalability, and other advantages using simulated data and DNA sequences of two common chimpanzee subspecies: Pan troglodytes (P. t.) troglodytes and P. t. verus. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
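The two-step logic described above can be illustrated with a deliberately tiny toy model (all distributions and numbers here are hypothetical stand-ins, not the MIST implementation): a single MCMC run samples a latent quantity under a fixed base model, and that one sample is then reweighted by importance sampling to score alternative parameter values without any further MCMC.

```python
# Toy sketch of the 2-step idea: (1) run MCMC once to sample a latent
# quantity (here a single "coalescent time" G) under a base model, then
# (2) reuse that one sample to score any parameter value theta via
#   p(y | theta) / p(y | base) = E_{G ~ p(G | y, base)}[ p(G|theta)/p(G|base) ].
import math
import random

SIGMA = 0.3          # observation noise (hypothetical)
THETA_BASE = 1.0     # rate of the base sampling model: G ~ Exp(theta)

def log_lik(y, g):                      # log p(y | G), Normal kernel
    return -0.5 * ((y - g) / SIGMA) ** 2

def log_prior_g(g, theta):              # log p(G | theta), Exponential
    return math.log(theta) - theta * g if g > 0 else -math.inf

def mcmc_sample_g(y, n, rng):
    """Step 1: Metropolis sampling of G from p(G | y, THETA_BASE)."""
    g, samples = 1.0, []
    lp = log_lik(y, g) + log_prior_g(g, THETA_BASE)
    for _ in range(n):
        prop = g + rng.gauss(0, 0.5)
        lp_prop = log_lik(y, prop) + log_prior_g(prop, THETA_BASE)
        if math.log(rng.random()) < lp_prop - lp:
            g, lp = prop, lp_prop
        samples.append(g)
    return samples[n // 2:]             # discard burn-in half

def rel_marginal_lik(theta, gs):
    """Step 2: p(y | theta) / p(y | THETA_BASE) by importance reweighting."""
    w = [math.exp(log_prior_g(g, theta) - log_prior_g(g, THETA_BASE))
         for g in gs]
    return sum(w) / len(w)

rng = random.Random(1)
y = 2.0                                  # one observed datum (hypothetical)
gs = mcmc_sample_g(y, 20000, rng)        # single MCMC run, reused below
# Score several "demographic models" (values of theta) with no new MCMC.
for theta in (0.5, 1.0, 2.0, 4.0):
    print(f"theta={theta}: relative marginal likelihood "
          f"{rel_marginal_lik(theta, gs):.3f}")
```

The key property mirrored here is that the expensive sampling phase runs once under a simple state space, while each candidate model is evaluated afterwards by cheap reweighting of the stored sample.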
Eckhardt, Nadin; Polansky, Leo; Boesch, Christophe
2015-02-01
Group living animals can exhibit fission-fusion behavior whereby individuals temporarily separate to reduce the costs of living in large groups. Primates living in groups with fission-fusion dynamics face numerous challenges in maintaining spatial cohesion, especially in environments with limited visibility. Here we investigated the spatial cohesion of adult male chimpanzees (Pan troglodytes verus) living in Taï National Park, Côte d'Ivoire, to better understand the mechanisms by which individuals maintain group cohesion during fission-fusion events. Over a 3-year period, we simultaneously tracked the movements of 2-4 males for 4-12 hr on up to 12 consecutive days using handheld GPS devices that recorded locations at one-minute intervals. Analyses of the males' inter-individual distance (IID) showed that the maximum, median, and mean IID values across all observations were 7.2 km, 73 m, and 483 m, respectively. These males (a) had maximum daily IID values below the limits of auditory communication (<1 km) for 63% of the observation time, (b) remained out of visual range (≥100 m) for 46% of observation time, and (c) remained within auditory range for 70% of the time when they were in different parties. We compared the observed distribution of IIDs with a random distribution obtained from permutations of the individuals' travel paths using Kolmogorov-Smirnov tests. Observed IID values were significantly smaller than those generated by the permutation procedure. We conclude that these male chimpanzees actively maintain cohesion when out of sight, and that auditory communication is one likely mechanism by which they do so. We discuss mechanisms by which chimpanzees may maintain the level of cohesion observed. This study provides a first analysis of spatial group cohesion over large distances in forest chimpanzees using high-resolution tracking, and illustrates the utility of such data for quantifying socio-ecological processes in primate ecology.
© 2014 Wiley Periodicals, Inc.
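The permutation approach described above can be sketched in miniature (the tracks, the circular-shift null, and all numbers below are hypothetical simplifications of the study's path-permutation procedure): compute per-minute inter-individual distances from two GPS tracks, build a null distribution by time-shifting one track, and compare the two samples with a two-sample Kolmogorov-Smirnov statistic.

```python
# Toy sketch: are two individuals' observed inter-individual distances
# (IID) smaller than expected if they moved independently? Null model:
# circularly time-shift one track (a crude stand-in for path permutation),
# recompute IIDs, and compare samples with a two-sample KS statistic.
import math
import random

def iid_series(track_a, track_b):
    """Per-minute Euclidean distances between two (x, y) tracks."""
    return [math.dist(p, q) for p, q in zip(track_a, track_b)]

def ks_statistic(sample1, sample2):
    """Two-sample KS statistic: max gap between empirical CDFs."""
    xs = sorted(set(sample1) | set(sample2))
    n1, n2 = len(sample1), len(sample2)
    s1, s2 = sorted(sample1), sorted(sample2)
    d = 0.0
    i = j = 0
    for x in xs:
        while i < n1 and s1[i] <= x:
            i += 1
        while j < n2 and s2[j] <= x:
            j += 1
        d = max(d, abs(i / n1 - j / n2))
    return d

def null_iids(track_a, track_b, n_shifts, rng):
    """Pool IIDs over random circular time-shifts of track_b."""
    t = len(track_b)
    pooled = []
    for _ in range(n_shifts):
        k = rng.randrange(1, t)
        shifted = track_b[k:] + track_b[:k]
        pooled.extend(iid_series(track_a, shifted))
    return pooled

rng = random.Random(42)
# Hypothetical tracks (metres): individual B loosely follows individual A.
track_a = [(i * 10.0, rng.uniform(-50, 50)) for i in range(200)]
track_b = [(x + rng.uniform(-80, 80), y + rng.uniform(-80, 80))
           for x, y in track_a]

observed = iid_series(track_a, track_b)
null = null_iids(track_a, track_b, n_shifts=50, rng=rng)
print(f"median observed IID: {sorted(observed)[len(observed) // 2]:.0f} m")
print(f"KS statistic vs. null: {ks_statistic(observed, null):.2f}")
```

In practice one would use `scipy.stats.ks_2samp` for the test and a null model that preserves each track's autocorrelation structure; the sketch only shows the shape of the comparison.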
Howells, Michaela E; Pruetz, Jill; Gillespie, Thomas R
2011-02-01
The exponential decline of great apes over the past 50 years has resulted in an urgent need for data to inform population viability assessment and conservation strategies. Health monitoring of remaining ape populations is an important component of this process. In support of this effort, we examined endoparasitic and commensal prevalence and richness as proxies of population health for western chimpanzees (Pan troglodytes verus) and sympatric guinea baboons (Papio hamadryas papio) at Fongoli, Senegal, a site dominated by woodland-savanna at the northwestern extent of chimpanzees' geographic range. The small population size and extreme environmental pressures experienced by Fongoli chimpanzees make them particularly sensitive to the potential impact of pathogens. One hundred thirty-two chimpanzee and seventeen baboon fecal samples were processed using sodium nitrate flotation and fecal sedimentation to isolate helminth eggs, larvae, and protozoal cysts. Six nematodes (Physaloptera sp., Ascaris sp., Strongyloides fuelleborni, Trichuris sp., an unidentified hookworm, and an unidentified larvated nematode), one cestode (Bertiella sp.), and five protozoans (Iodamoeba buetschlii, Entamoeba coli, Troglodytella abrassarti, Troglocorys cava, and an unidentified ciliate) were detected in chimpanzee fecal samples. Four nematodes (Necator sp., S. fuelleborni, Trichuris sp., and an unidentified hookworm sp.), two trematodes (Schistosoma mansoni and an unidentified fluke), and six protozoans (Entamoeba histolytica/dispar, E. coli, Chilomastix mesnili, Balantidium coli, T. abrassarti, and T. cava) were detected in baboon fecal samples. The low prevalence of pathogenic parasite species and high prevalence of symbiotic protozoa in Fongoli chimpanzees are indicative of good overall population health.
However, the high prevalence of pathogenic parasites in baboons, which may serve as transport hosts, highlights the need for ongoing pathogen surveillance of the Fongoli chimpanzee population and points to the need for further research into the epidemiology and cross-species transmission ecology of zoonotic pathogens at this site. © 2010 Wiley-Liss, Inc.
Patterns of mandibular variation in Pan and Gorilla and implications for African ape taxonomy.
Taylor, Andrea B; Groves, Colin P
2003-05-01
Pan and Gorilla taxonomy is currently in a state of flux, with the number of existing species and subspecies of common chimpanzee and gorilla having been recently challenged. While Pan and Gorilla systematics have been evaluated on the basis of craniometric and odontometric data, only a handful of studies have evaluated multivariate craniometric variation within P. troglodytes, and none have evaluated in detail mandibular variation in either P. troglodytes or Gorilla gorilla. In this paper, we examine ontogenetic and adult mandibular variation in Pan and Gorilla. We test the hypothesis that patterns and degrees of mandibular variation in Pan and Gorilla closely correspond to those derived from previous analyses of craniometric variation. We then use these data to address some current issues surrounding Pan and Gorilla taxonomy. Specifically, we evaluate the purported distinctiveness of P.t. verus from the other two subspecies of Pan troglodytes, and the recent proposals to recognize Nigerian gorillas as a distinct subspecies, Gorilla gorilla diehli, and to acknowledge mountain and lowland gorillas as two separate species. Overall, patterns and degrees of multivariate mandibular differentiation parallel those obtained previously for the cranium and dentition. Thus, differences among the three conventionally recognized gorilla subspecies are somewhat greater than among subspecies of common chimpanzees, but differences between P. paniscus and P. troglodytes are greater than those observed between any gorilla subspecies. In this regard, the mandible does not appear to be more variable, or of less taxonomic value, than the face and other parts of the cranium. There are, however, some finer differences in the pattern and degree of morphological differentiation in Pan and Gorilla, both with respect to cranial and dental morphology, and in terms of the application and manner of size adjustment. 
Mandibular differentiation supports the conventional separation of bonobos from chimpanzees regardless of size adjustment, but size correction alters the relative alignment of taxa. Following size correction, intergroup distances are greatest between P.t. verus and all other groups, but there is considerable overlap amongst chimpanzee subspecies. Amongst gorillas, the greatest separation is between eastern and western gorillas, but adjustment relative to palatal vs. basicranial length results in a greater accuracy of group classification for G.g. gorilla and G.g. graueri, and more equivalent intergroup distances amongst all gorilla groups. We find no multivariate differentiation of the Nigerian gorillas based on mandibular morphology, suggesting that the primary difference between Nigerian and other western lowland gorillas lies in the nuchal region. Though intergroup distances are greatest between P.t. verus and other chimpanzee subspecies, the degree of overlap amongst all three groups does not indicate a markedly greater degree of distinction in mandibular, as opposed to other morphologies. Finally, mandibular differentiation corroborates previous craniodental studies indicating the greatest distinction amongst gorillas is between eastern and western groups. Thus, patterns and degrees of mandibular variation are in agreement with other kinds of data that have been used to diagnose eastern and western gorillas as separate species.
Persistent anthrax as a major driver of wildlife mortality in a tropical rainforest
NASA Astrophysics Data System (ADS)
Hoffmann, Constanze; Zimmermann, Fee; Biek, Roman; Kuehl, Hjalmar; Nowak, Kathrin; Mundry, Roger; Agbor, Anthony; Angedakin, Samuel; Arandjelovic, Mimi; Blankenburg, Anja; Brazolla, Gregory; Corogenes, Katherine; Couacy-Hymann, Emmanuel; Deschner, Tobias; Dieguez, Paula; Dierks, Karsten; Düx, Ariane; Dupke, Susann; Eshuis, Henk; Formenty, Pierre; Yuh, Yisa Ginath; Goedmakers, Annemarie; Gogarten, Jan F.; Granjon, Anne-Céline; McGraw, Scott; Grunow, Roland; Hart, John; Jones, Sorrel; Junker, Jessica; Kiang, John; Langergraber, Kevin; Lapuente, Juan; Lee, Kevin; Leendertz, Siv Aina; Léguillon, Floraine; Leinert, Vera; Löhrich, Therese; Marrocoli, Sergio; Mätz-Rensing, Kerstin; Meier, Amelia; Merkel, Kevin; Metzger, Sonja; Murai, Mizuki; Niedorf, Svenja; de Nys, Hélène; Sachse, Andreas; van Schijndel, Joost; Thiesen, Ulla; Ton, Els; Wu, Doris; Wieler, Lothar H.; Boesch, Christophe; Klee, Silke R.; Wittig, Roman M.; Calvignac-Spencer, Sébastien; Leendertz, Fabian H.
2017-08-01
Anthrax is a globally important animal disease and zoonosis. Despite this, our current knowledge of anthrax ecology is largely limited to arid ecosystems, where outbreaks are most commonly reported. Here we show that the dynamics of an anthrax-causing agent, Bacillus cereus biovar anthracis, in a tropical rainforest have severe consequences for local wildlife communities. Using data and samples collected over three decades, we show that rainforest anthrax is a persistent and widespread cause of death for a broad range of mammalian hosts. We predict that this pathogen will accelerate the decline and possibly result in the extirpation of local chimpanzee (Pan troglodytes verus) populations. We present the epidemiology of a cryptic pathogen and show that its presence has important implications for conservation.
Haas, Charles
2006-01-01
During the reign of Marcus Aurelius, the Roman Empire was struck by a long and destructive epidemic. It began in Mesopotamia in late AD 165 or early AD 166 during Verus' Parthian campaign, and quickly spread to Rome. It lasted at least until the death of Marcus Aurelius in AD 180 and likely into the early part of Commodus' reign. Its victims were "innumerable". Galen had first-hand knowledge of the disease. He was in Rome when the plague reached the city in AD 166. He was also present during an outbreak among troops stationed at Aquileia during the winter of AD 168-169. His references to the plague are scattered and brief but enough information is available to firmly identify the plague as smallpox. His description of the exanthema is fairly typical of the smallpox rash, particularly in the hemorrhagic phase of the disease.
Advance directives outside the USA: are they the best solution everywhere?
Sanchez-Gonzalez, M A
1997-09-01
This article evaluates the potential role of advance directives outside of their original North American context. In order to do this, the article first analyses the historical process which has promoted advance directives in recent years. Next, it brings to light certain presuppositions which have given them force: atomistic individualism, contractualism, consumerism and entrepreneurialism, pluralism, proceduralism, and "American moralism." The article next studies certain European cultural peculiarities which could affect advance directives: the importance of virtue versus rights, stoicism versus consumerist utilitarianism, rationalism versus empiricism, statism versus citizens' initiative, and justice versus autonomy. The article concludes by recognising that autonomy has a transcultural value, although it must be balanced with other principles. Advance directives can have a function in certain cases. But it does not seem adequate to delegate to advance directives more and more medical decisions, and to make them more binding every day. It is indispensable to develop other decision-making criteria.
Gestural acquisition in great apes: the Social Negotiation Hypothesis.
Pika, Simone; Fröhlich, Marlen
2018-01-24
Scientific interest in the acquisition of gestural signalling dates back to the heroic figure of Charles Darwin. More than a hundred years later, we still know relatively little about the underlying evolutionary and developmental pathways involved. Here, we shed new light on this topic by providing the first systematic, quantitative comparison of gestural development in two different chimpanzee (Pan troglodytes verus and Pan troglodytes schweinfurthii) subspecies and communities living in their natural environments. We conclude that the three most predominant perspectives on gestural acquisition-Phylogenetic Ritualization, Social Transmission via Imitation, and Ontogenetic Ritualization-do not satisfactorily explain our current findings on gestural interactions in chimpanzees in the wild. In contrast, we argue that the role of interactional experience and social exposure on gestural acquisition and communicative development has been strongly underestimated. We introduce the revised Social Negotiation Hypothesis and conclude with a brief set of empirical desiderata for instigating more research into this intriguing research domain.
Female reproductive strategies, paternity and community structure in wild West African chimpanzees.
Gagneux; Boesch; Woodruff
1999-01-01
Although the variability and complexity of chimpanzee behaviour frustrates generalization, it is widely believed that social evolution in this species occurs in the context of the recognizable social group or community. We used a combination of field observations and noninvasive genotyping to study the genetic structure of a habituated community of 55 wild chimpanzees, Pan troglodytes verus, in the Taï Forest, Côte d'Ivoire. Pedigree relationships in that community show that female mate choice strategies are more variable than previously supposed and that the observed social groups are not the exclusive reproductive units. Genetic evidence based on nuclear microsatellite markers and behavioural observations reveal that females in the Taï forest actively seek mating partners outside their social unit; noncommunity males accounted for half the paternities over 5 years. This female mating strategy increases male gene flow between communities despite male philopatry, and negates the predicted higher relatedness among community males. Kin selection seems unlikely to explain the frequent cooperation and sharing observed among group males in this population. Similarly, inbreeding avoidance is probably not the sole cause of permanent adolescent female dispersal as a combination of extragroup mating and avoidance of incest with home group males would allow females to avoid inbreeding without the hazards associated with immigration into a new community. Extragroup mating as part of chimpanzee females' reproductive strategy may allow them to choose from a wider variety and number of males, without losing the resources and support provided by their male social group partners. Copyright 1999 The Association for the Study of Animal Behaviour.
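The genotyping logic behind paternity assignment with microsatellite markers can be sketched as a simple exclusion test (the loci, allele sizes, and individuals below are hypothetical, and real analyses must also allow for mutation and genotyping error): at each locus the offspring carries one maternal and one paternal allele, so a candidate male is excluded if he carries no allele that could be the paternal contribution.

```python
# Toy microsatellite paternity exclusion (hypothetical genotypes).
def possible_paternal(offspring, mother):
    """Offspring alleles at one locus that could be paternally inherited,
    given the mother's genotype at the same locus."""
    a1, a2 = offspring
    out = set()
    if a2 in mother:   # a2 could be maternal, so a1 may be paternal
        out.add(a1)
    if a1 in mother:   # a1 could be maternal, so a2 may be paternal
        out.add(a2)
    return out

def excluded(candidate, offspring, mother):
    """True if the candidate male is excluded as father at any locus."""
    for locus in offspring:
        compatible = possible_paternal(offspring[locus], mother[locus])
        if not compatible & set(candidate[locus]):
            return True
    return False

# Hypothetical two-locus genotypes (allele lengths in base pairs).
mother         = {"L1": (100, 104), "L2": (200, 206)}
offspring      = {"L1": (100, 108), "L2": (206, 210)}
community_male = {"L1": (102, 106), "L2": (210, 214)}
outside_male   = {"L1": (108, 112), "L2": (204, 210)}

print("community male excluded:", excluded(community_male, offspring, mother))
print("extra-group male excluded:", excluded(outside_male, offspring, mother))
```

With many polymorphic loci, exclusion of all but one sampled male becomes highly informative; an unexcluded paternal genotype matched by no community male is the kind of evidence that points to extragroup paternity.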
Isotopic ecology and dietary profiles of Liberian chimpanzees.
Smith, Catherine C; Morgan, Michèle E; Pilbeam, David
2010-01-01
An extensive suite of isotopic data (δ13C, δ15N, and δ18O) from enamel apatite and bone collagen of adult male and female wild chimpanzees establishes baseline values for Pan troglodytes verus in a primary rainforest setting. The Ganta chimpanzee sample derives from a restricted region in northern Liberia. Diet is examined using stable light isotopes at three life stages (infant, young juvenile, and adult), and developmental differences are investigated within and between individual males and females. The isotopic data are very homogeneous with few exceptions. Juvenile females show consistent enrichment in 13C relative to infants, while juvenile males do not. These data suggest that age at weaning may be more variable for male offspring who survive to adulthood than for female offspring. Alternatively, or additionally, the weaning diet of males and females may differ, with greater consumption of technologically extracted insects and/or nuts by young females. Metabolic differences, including growth and hormone-mediated responses, may also contribute to the observed variation. The Ganta chimpanzee data offer an independent and objective line of evidence to primatologists interested in the dietary strategies of the great apes and to paleoanthropologists seeking comparative models for reconstructing early hominin subsistence patterns. Despite the high diversity of dietary items consumed by chimpanzees, isotopic signatures of chimpanzees from a primary rainforest setting exhibit narrow ranges of variation similar to chimpanzees in more open habitats.
Madsen, Elainie Alenkær; Persson, Tomas; Sayehli, Susan; Lenninger, Sara; Sonesson, Göran
2013-01-01
Contagious yawning has been reported for humans, dogs and several non-human primate species, and associated with empathy in humans and other primates. Still, the function, development and underlying mechanisms of contagious yawning remain unclear. Humans and dogs show a developmental increase in susceptibility to yawn contagion, with children showing an increase around the age of four, when empathy-related behaviours and accurate identification of others' emotions also begin to clearly emerge. Explicit tests of yawn contagion in non-human apes have only involved adult individuals and examined the existence of conspecific yawn contagion. Here we report the first study of heterospecific contagious yawning in primates, and of the ontogeny of susceptibility thereto, in chimpanzees, Pan troglodytes verus. We examined whether emotional closeness, defined as attachment history with the yawning model, affected the strength of contagion, and compared the contagiousness of yawning to nose-wiping. Thirty-three orphaned chimpanzees observed an unfamiliar and familiar human (their surrogate human mother) yawn, gape and nose-wipe. Yawning, but not nose-wiping, was contagious for juvenile chimpanzees, while infants were immune to contagion. Like humans and dogs, chimpanzees are subject to a developmental trend in susceptibility to contagious yawning, and respond to heterospecific yawn stimuli. Emotional closeness with the model did not affect contagion. The familiarity-biased social modulatory effect on yawn contagion previously found among some adult primates seems to emerge only later in development, or to be limited to interactions with conspecifics. The influence of the 'chameleon effect', targeted vs. generalised empathy, perspective-taking and visual attention on contagious yawning is discussed. PMID:24146848
Primate archaeology reveals cultural transmission in wild chimpanzees (Pan troglodytes verus).
Luncz, Lydia V; Wittig, Roman M; Boesch, Christophe
2015-11-19
Recovering evidence of past human activities enables us to recreate behaviour where direct observations are missing. Here, we apply archaeological methods to further investigate cultural transmission processes in percussive tool use among neighbouring chimpanzee communities in the Taï National Park, Côte d'Ivoire, West Africa. Differences in the selection of nut-cracking tools between neighbouring groups are maintained over time, despite frequent female transfer, which leads to persistent cultural diversity between chimpanzee groups. Through the recovery of used tools in the suggested natal territory of immigrants, we have been able to reconstruct the tool material selection of females prior to migration. In combination with direct observations of tool selection of local residents and immigrants after migration, we uncovered temporal changes in tool selection for immigrating females. After controlling for ecological differences between territories of immigrants and residents our data suggest that immigrants abandoned their previous tool preference and adopted the pattern of their new community, despite previous personal proficiency of the same foraging task. Our study adds to the growing body of knowledge on the importance of conformist tendencies in animals. © 2015 The Author(s).
Absence of Frequent Herpesvirus Transmission in a Nonhuman Primate Predator-Prey System in the Wild
Murthy, Sripriya; Couacy-Hymann, Emmanuel; Metzger, Sonja; Nowak, Kathrin; De Nys, Helene; Boesch, Christophe; Wittig, Roman; Jarvis, Michael A.; Leendertz, Fabian H.
2013-01-01
Emergence of viruses into the human population by transmission from nonhuman primates (NHPs) represents a serious potential threat to human health that is primarily associated with the increased bushmeat trade. Transmission of RNA viruses across primate species appears to be relatively frequent. In contrast, DNA viruses appear to be largely host specific, suggesting low transmission potential. Herein, we use a primate predator-prey system to study the risk of herpesvirus transmission between different primate species in the wild. The system was comprised of western chimpanzees (Pan troglodytes verus) and their primary (western red colobus, Piliocolobus badius badius) and secondary (black-and-white colobus, Colobus polykomos) prey monkey species. NHP species were frequently observed to be coinfected with multiple beta- and gammaherpesviruses (including new cytomegalo- and rhadinoviruses). However, despite frequent exposure of chimpanzees to blood, organs, and bones of their herpesvirus-infected monkey prey, there was no evidence for cross-species herpesvirus transmission. These findings suggest that interspecies transmission of NHP beta- and gammaherpesviruses is, at most, a rare event in the wild. PMID:23885068
Ahoua, Angora Rémi Constant; Konan, Amoin Georgette; Bonfoh, Bassirou; Koné, Mamidou Witabouna
2015-10-23
Due to their genetic proximity, chimpanzees share with humans several diseases, including bacterial, fungal and viral infections such as candidiasis, acquired immune deficiency syndrome (AIDS), and Ebola virus disease. However, in their natural environment, chimpanzees are tolerant of several pathogens, including simian immunodeficiency virus (SIV), a virus related to human immunodeficiency virus (HIV), which contributes to the emergence of opportunistic diseases such as microbial infections. Twenty-seven species of plants consumed by chimpanzees were evaluated for their antimicrobial potential against Escherichia coli, Pseudomonas aeruginosa, Staphylococcus aureus, Candida albicans, Candida tropicalis and Candida glabrata using the agar diffusion technique and micro-dilution in 96-well plates. In total, 132 extracts (33 dichloromethane, 33 methanol, 33 ethyl acetate and 33 aqueous) were tested. The results showed that 24 extracts (18%) showed activity against bacteria and 6 extracts (5%) were active against yeasts. The minimal inhibitory concentration (MIC) values of active extracts ranged between 23 and 750 μg/ml for bacteria and between 188 and 1500 μg/ml for yeasts. Tristemma coronatum was the most promising against the studied microorganisms, followed by Beilschmiedia mannii. The extracts of the two plants indicated by chimpanzees have potential for antimicrobial use in humans.
Dental calculus evidence of Taï Forest Chimpanzee plant consumption and life history transitions
NASA Astrophysics Data System (ADS)
Power, Robert C.; Salazar-García, Domingo C.; Wittig, Roman M.; Freiberg, Martin; Henry, Amanda G.
2015-10-01
Dental calculus (calcified dental plaque) is a source of multiple types of data on life history. Recent research has targeted the plant microremains preserved in this mineralised deposit as a source of dietary and health information for recent and past populations. However, it is unclear to what extent we can interpret behaviour from microremains. Few studies to date have directly compared the microremain record from dental calculus to dietary records, and none with long-term observation dietary records, thus limiting how we can interpret diet, food acquisition and behaviour. Here we present a high-resolution analysis of calculus microremains from wild chimpanzees (Pan troglodytes verus) of Taï National Park, Côte d’Ivoire. We test microremain assemblages against more than two decades of field behavioural observations to establish the ability of calculus to capture the composition of diet. Our results show that some microremain classes accumulate as long-lived dietary markers. Phytolith abundance in calculus can reflect the proportions of plants in the diet, yet this pattern is not true for starches. We also report microremains can record information about other dietary behaviours, such as the age of weaning and learned food processing techniques like nut-cracking.
Arroyo, Adrian; Matsuzawa, Tetsuro; de la Torre, Ignacio
2015-01-01
Stone tool use by wild chimpanzees of West Africa offers a unique opportunity to explore the evolutionary roots of technology during human evolution. However, detailed analyses of chimpanzee stone artifacts are still lacking, thus precluding comparison with the earliest archaeological record. This paper presents the first systematic study of stone tools used by wild chimpanzees to crack open nuts in Bossou (Guinea-Conakry), and applies pioneering analytical techniques to such artifacts. Automatic morphometric GIS classification enabled the creation of maps of use wear over the stone tools (anvils, hammers, and hammers/anvils), which were blind tested with GIS spatial analysis of damage patterns identified visually. Our analysis shows that chimpanzee stone tool use wear can be systematized and specific damage patterns discerned, allowing discrimination between active and passive pounders in lithic assemblages. In summary, our results demonstrate the heuristic potential of combined suites of GIS techniques for the analysis of battered artifacts, and have enabled the creation of a referential framework in which wild chimpanzee battered tools can for the first time be directly compared to the early archaeological record. PMID:25793642
Unconstrained cranial evolution in Neandertals and modern humans compared to common chimpanzees
Weaver, Timothy D.; Stringer, Chris B.
2015-01-01
A variety of lines of evidence support the idea that neutral evolutionary processes (genetic drift, mutation) have been important in generating cranial differences between Neandertals and modern humans. But how do Neandertals and modern humans compare with other species? And how do these comparisons illuminate the evolutionary processes underlying cranial diversification? To address these questions, we used 27 standard cranial measurements collected on 2524 recent modern humans, 20 Neandertals and 237 common chimpanzees to estimate split times between Neandertals and modern humans, and between Pan troglodytes verus and two other subspecies of common chimpanzee. Consistent with a neutral divergence, the Neandertal versus modern human split-time estimates based on cranial measurements are similar to those based on DNA sequences. By contrast, the common chimpanzee cranial estimates are much lower than DNA-sequence estimates. Apparently, cranial evolution has been unconstrained in Neandertals and modern humans compared with common chimpanzees. Based on these and additional analyses, it appears that cranial differentiation in common chimpanzees has been restricted by stabilizing natural selection. Alternatively, this restriction could be due to genetic and/or developmental constraints on the amount of within-group variance (relative to effective population size) available for genetic drift to act on. PMID:26468243
Hayakawa, Takashi; Sugawara, Tohru; Go, Yasuhiro; Udono, Toshifumi; Hirai, Hirohisa; Imai, Hiroo
2012-01-01
Chimpanzees (Pan troglodytes) have region-specific differences in dietary repertoires from East to West across tropical Africa. Such differences may result from different genetic backgrounds in addition to cultural variations. We analyzed the sequences of all bitter taste receptor genes (cTAS2Rs) in a total of 59 chimpanzees, including 4 putative subspecies. We identified genetic variations including single-nucleotide variations (SNVs), insertions and deletions (indels), gene-conversion variations, and copy-number variations (CNVs) in cTAS2Rs. Approximately two-thirds of all cTAS2R haplotypes in the amino acid sequence were unique to each subspecies. We analyzed the evolutionary backgrounds of natural selection behind such diversification. Our previous study concluded that diversification of cTAS2Rs in western chimpanzees (P. t. verus) may have resulted from balancing selection. In contrast, the present study found that purifying selection dominates as the evolutionary form of diversification of the so-called human cluster of cTAS2Rs in eastern chimpanzees (P. t. schweinfurthii) and that the other cTAS2Rs were under no obvious selection as a whole. Such marked diversification of cTAS2Rs with different evolutionary backgrounds among subspecies of chimpanzees probably reflects their subspecies-specific dietary repertoires.
Evidence for cultural differences between neighboring chimpanzee communities.
Luncz, Lydia V; Mundry, Roger; Boesch, Christophe
2012-05-22
The majority of evidence for cultural behavior in animals has come from comparisons between populations separated by large geographical distances that often inhabit different environments. The difficulty of excluding ecological and genetic variation as potential explanations for observed behaviors has led some researchers to challenge the idea of animal culture. Chimpanzees (Pan troglodytes verus) in the Taï National Park, Côte d'Ivoire, crack Coula edulis nuts using stone and wooden hammers and tree root anvils. In this study, we compare for the first time hammer selection for nut cracking across three neighboring chimpanzee communities that live in the same forest habitat, which reduces the likelihood of ecological variation. Furthermore, the study communities experience frequent dispersal of females at maturity, which eliminates significant genetic variation. We compared key ecological factors, such as hammer availability and nut hardness, between the three neighboring communities and found striking differences in group-specific hammer selection among communities despite similar ecological conditions. Differences were found in the selection of hammer material and hammer size in response to changes in nut resistance over time. Our findings highlight the subtleties of cultural differences in wild chimpanzees and illustrate how cultural knowledge is able to shape behavior, creating differences among neighboring social groups. Copyright © 2012 Elsevier Ltd. All rights reserved.
Stone, Anne C; Battistuzzi, Fabia U; Kubatko, Laura S; Perry, George H; Trudeau, Evan; Lin, Hsiuman; Kumar, Sudhir
2010-10-27
Here, we report the sequencing and analysis of eight complete mitochondrial genomes of chimpanzees (Pan troglodytes) from each of the three established subspecies (P. t. troglodytes, P. t. schweinfurthii and P. t. verus) and the proposed fourth subspecies (P. t. ellioti). Our population genetic analyses are consistent with neutral patterns of evolution that have been shaped by demography. The high levels of mtDNA diversity in western chimpanzees are unlike those seen at nuclear loci, which may reflect a demographic history of greater female to male effective population sizes possibly owing to the characteristics of the founding population. By using relaxed-clock methods, we have inferred a timetree of chimpanzee species and subspecies. The absolute divergence times vary based on the methods and calibration used, but relative divergence times show extensive uniformity. Overall, mtDNA produces consistently older times than those known from nuclear markers, a discrepancy that is reduced significantly by explicitly accounting for chimpanzee population structures in time estimation. Assuming the human-chimpanzee split to be between 7 and 5 Ma, chimpanzee time estimates are 2.1-1.5, 1.1-0.76 and 0.25-0.18 Ma for the chimpanzee/bonobo, western/(eastern + central) and eastern/central chimpanzee divergences, respectively.
Yamakoshi, G
1998-07-01
A 13-month ecological study was conducted at Bossou, Guinea, West Africa, to elucidate how a community of wild chimpanzees (Pan troglodytes verus) deals with the scarcity of main foods. During the study period, fruit availability fluctuated radically. The chimpanzees were confirmed to depend heavily on three "keystone resources" which were available when their main foods (fruit pulp) were scarce. These were fruits of Musanga cecropioides, oil-palm (Elaeis guineensis) nuts, and oil-palm pith. These are abundant in the chimpanzees' home range and their nutritional contents compensate for a decrease in nutritional intake from fruit pulp. The presence of these excellent backup foods may explain the high reproductive performance of Bossou chimpanzees. Here, chimpanzees consumed two of the three keystone foods using two types of tool behavior: nut-cracking for oil-palm nuts and pestle-pounding for oil-palm pith. These tool-using behaviors accounted for 31.9% of the total feeding time spent in June (the month in which the highest frequency occurred) and 10.4% in total for the year. It is suggested that the Bossou chimpanzees depend strongly on tools for their subsistence. This implies a possible function for tool technology in the evolution of our human ancestors.
The Critically Endangered western chimpanzee declines by 80%.
Kühl, Hjalmar S; Sop, Tenekwetche; Williamson, Elizabeth A; Mundry, Roger; Brugière, David; Campbell, Genevieve; Cohen, Heather; Danquah, Emmanuel; Ginn, Laura; Herbinger, Ilka; Jones, Sorrel; Junker, Jessica; Kormos, Rebecca; Kouakou, Celestin Y; N'Goran, Paul K; Normand, Emma; Shutt-Phillips, Kathryn; Tickle, Alexander; Vendras, Elleni; Welsh, Adam; Wessling, Erin G; Boesch, Christophe
2017-09-01
African large mammals are under extreme pressure from unsustainable hunting and habitat loss. Certain traits make large mammals particularly vulnerable. These include late age at first reproduction, long inter-birth intervals, and low population density. Great apes are a prime example of such vulnerability, exhibiting all of these traits. Here we assess the rate of population change for the western chimpanzee, Pan troglodytes verus, over a 24-year period. As a proxy for change in abundance, we used transect nest count data from 20 different sites archived in the IUCN SSC A.P.E.S. database, representing 25,000 of the estimated remaining 35,000 western chimpanzees. For each of the 20 sites, datasets for 2 different years were available. We estimated site-specific and global population change using Generalized Linear Models. At 12 of these sites, we detected a significant negative trend. The estimated change in the subspecies abundance, as approximated by nest encounter rate, yielded a 6% annual decline and a total decline of 80.2% over the study period from 1990 to 2014. This also resulted in a reduced geographic range of 20% (657,600 vs. 524,100 km²). Poverty, civil conflict, disease pandemics, agriculture, extractive industries, infrastructure development, and lack of law enforcement are some of the many reasons for the magnitude of threat. Our status update triggered the uplisting of the western chimpanzee to "Critically Endangered" on the IUCN Red List. In 2017, IUCN will start updating the 2003 Action Plan for western chimpanzees and will provide a consensus blueprint for what is needed to save this subspecies. We make a plea for greater commitment to conservation in West Africa across sectors. Needed especially is more robust engagement by national governments, integration of conservation priorities into the private sector and development planning across the region, and sustained financial support from donors. © 2017 Wiley Periodicals, Inc.
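The reported figures relate straightforwardly under a constant-proportional-decline model: an 80.2% total decline over the 24-year study period implies an annual decline of roughly 6.5%, which the abstract rounds to 6%. A minimal sketch of that arithmetic (the constant-rate model is our illustrative assumption, not the study's GLM):

```python
def cumulative_decline(annual_rate: float, years: int) -> float:
    """Total fractional decline after `years` at a constant annual rate."""
    return 1.0 - (1.0 - annual_rate) ** years

def annual_rate_from_total(total_decline: float, years: int) -> float:
    """Constant annual decline implied by a total decline over `years`."""
    return 1.0 - (1.0 - total_decline) ** (1.0 / years)

# An 80.2% total decline over 1990-2014 (24 years) implies ~6.5% per year.
rate = annual_rate_from_total(0.802, 24)
```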
Pruetz, J D; Fulton, S J; Marchant, L F; McGrew, W C; Schiel, M; Waller, M
2008-04-01
Chimpanzees (Pan troglodytes) make nests for resting and sleeping, which is unusual for anthropoid primates but common to all great apes. Arboreal nesting has been linked to predation pressure, but few studies have tested the adaptive nature of this behavior. We collected data at two chimpanzee study sites in southeastern Senegal that differed in predator presence to test the hypothesis that elevated sleeping platforms are adaptations for predator defense. At Assirik in the Parc National du Niokolo-Koba, chimpanzees face four species of large carnivore, whereas at Fongoli, outside national park boundaries, humans have exterminated almost all natural predators. We quantified the availability of vegetation at the two sites to test the alternative hypothesis that differences in nesting reflect differences in habitat structure. We also examined possible sex differences in nesting behavior, community demographic differences, seasonality and nest age differences as variables also potentially affecting nest characteristics and nesting behavior between the two sites. Chimpanzees at Fongoli nested at lower heights and farther apart than did chimpanzees at Assirik and sometimes made nests on the ground. The absence of predators outside of the national park may account for the differences in nest characteristics at the two sites, given the similarities in habitat structure between Fongoli and Assirik. However, Fongoli chimpanzees regularly build arboreal nests for sleeping, even under minimal predation pressure, and this requires explanation.
Koops, Kathelijne; Humle, Tatyana; Sterck, Elisabeth H M; Matsuzawa, Tetsuro
2007-04-01
The chimpanzees (Pan troglodytes verus) of the Nimba Mountains, Guinea, West Africa, commonly make both elaborate ("night") and simple ("day") nests on the ground. In this study we investigated which factors might influence ground-nesting in this population, and tested two ecological hypotheses: 1) climatic conditions, such as high wind speeds at high altitudes, may deter chimpanzees from nesting in trees; and 2) a lack of appropriate arboreal nesting opportunities may drive the chimpanzees to nest on the ground. In addition to testing these two hypotheses, we explored whether ground-nesting is a sex-linked behavior. Data were collected monthly between August 2003 and May 2004 along transects and ad libitum. To identify the sex of ground-nesting individuals, we used DNA extracted from hair samples. The results showed that the occurrence and distribution of ground nests were not affected by climatic conditions or a lack of appropriate nest trees. Support was found for the notion that ground-nesting is a sex-linked behavior, as males were responsible for building all of the elaborate ground nests and most of the simple ground nests sampled. Elaborate ground nests occurred mostly in nest groups associated with tree nests, whereas simple ground nests usually occurred without tree nests in their vicinity. These results suggest that ground-nesting may be socially, rather than ecologically, determined.
Stable isotope evidence of meat eating and hunting specialization in adult male chimpanzees
Fahy, Geraldine E.; Richards, Michael; Riedel, Julia; Hublin, Jean-Jacques; Boesch, Christophe
2013-01-01
Observations of hunting and meat eating in our closest living relatives, chimpanzees (Pan troglodytes), suggest that among primates, regular inclusion of meat in the diet is not a characteristic unique to Homo. Wild chimpanzees are known to consume vertebrate meat, but its actual dietary contribution is, depending on the study population, often either unknown or minimal. Constraints on continual direct observation throughout the entire hunting season mean that behavioral observations are limited in their ability to accurately quantify meat consumption. Here we present direct stable isotope evidence supporting behavioral observations of frequent meat eating among wild adult male chimpanzees (Pan troglodytes verus) in Taï National Park, Côte d’Ivoire. Meat eating among some of the male chimpanzees is significant enough to result in a marked isotope signal detectable on a short-term basis in their hair keratin and long-term in their bone collagen. Although both adult males and females and juveniles derive their dietary protein largely from daily fruit and seasonal nut consumption, our data indicate that some adult males also derive a large amount of dietary protein from hunted meat. Our results reinforce behavioral observations of male-dominated hunting and meat eating in adult Taï chimpanzees, suggesting that sex differences in food acquisition and consumption may have persisted throughout hominin evolution, rather than being a recent development in the human lineage. PMID:23530185
Galen and the beginnings of Western physiology.
West, John B
2014-07-15
Galen (129-c. 216 AD) was a key figure in the early development of Western physiology. His teachings incorporated much of the ancient Greek traditions including the work of Hippocrates and Aristotle. Galen himself was a well-educated Greco-Roman physician and physiologist who at one time was a physician to the gladiators in Pergamon. Later he moved to Rome, where he was associated with the Roman emperors Marcus Aurelius and Lucius Verus. The Galenical school was responsible for voluminous writings, many of which are still extant. One emphasis was on the humors of the body, which were believed to be important in disease. Another was the cardiopulmonary system, including the belief that part of the blood from the right ventricle could enter the left through the interventricular septum. An extraordinary feature of these teachings is that they dominated thinking for some 1,300 years and became accepted as dogma by both the State and Church. One of the first anatomists to challenge the Galenical teachings was Andreas Vesalius, who produced a magnificent atlas of human anatomy in 1543. At about the same time Michael Servetus described the pulmonary transit of blood, but he was burned at the stake for heresy. Finally, with William Harvey and others in the first part of the 17th century, the beginnings of modern physiology emerged with an emphasis on hypotheses and experimental data. Nevertheless, vestiges of Galen's teaching survived into the 19th century. Copyright © 2014 the American Physiological Society.
Luncz, Lydia V; Boesch, Christophe
2014-07-01
The notion of animal culture has been well established, mainly through research aimed at uncovering differences between populations. In chimpanzees (Pan troglodytes verus), cultural diversity has even been found in neighboring communities, where differences were observed despite frequent immigration of individuals. Female chimpanzees transfer at the onset of sexual maturity, at an age when the behavioral repertoire is fully formed. With immigrating females, behavioral variety enters the group. Little is known about the diversity and longevity of cultural traits within a community. This study builds on previous findings of differences in hammer selection for nut cracking between neighboring communities despite similar ecological conditions. We further investigated the diversity and maintenance of cultural traits within one chimpanzee community and were able to show high levels of uniformity in group-specific behavior. Fidelity to the behavior pattern did not vary between dispersing females and philopatric males. Furthermore, group-specific tool selection remained similar over a period of 25 years. Additionally, we present a case study of how one newly immigrated female progressively behaved more similarly to her new group, suggesting that the high level of similarity in behavior is actively adopted by group members, possibly even when they originally expressed the behavior in another form. Taken together, our data support a cultural transmission process in adult chimpanzees, which leads to persisting cultural behavior of one community over time. © 2014 Wiley Periodicals, Inc.
Luncz, Lydia V; Boesch, Christophe
2015-01-01
Chimpanzees show cultural differences among populations across Africa but also between neighboring communities. The extent of these differences among neighbors, however, remains largely unknown. Comparing three neighboring chimpanzee communities in the Taï National Park, Côte d'Ivoire, we found 27 putative cultural traits, including tool use, foraging, social interaction, communication and hunting behavior, far exceeding previously known diversity. As foraging behavior is predominantly influenced by the environment, we further compared in detail the ecological circumstances underlying insectivore feeding behavior to analyze whether foraging differences on Dorylus ants and Thoracotermes termites seen between neighboring chimpanzee communities were caused by environmental factors. Differences in the prey characteristics of Dorylus ants (aggression level, running speed, and nest structure) that could influence the behavior of chimpanzees were excluded, suggesting that the observed group-specific variation is not ecologically driven. Only one community preyed on Thoracotermes termites despite a similar abundance of termite mounds in all three territories, supporting the idea that this difference is also not shaped by the environment. Therefore, our study suggests that transmission of cultural knowledge plays a role in determining insectivory prey behavior. This behavioral plasticity, independent of ecological conditions, can lead to large numbers of cultural diversifications between neighboring chimpanzee communities. These findings not only deepen our understanding of the cultural abilities of chimpanzees in the wild but also open up possible future comparisons of the origin of cultural diversification among humans and chimpanzees. © 2014 Wiley Periodicals, Inc.
Solar Orientation of Irish Early Christian Oratories
NASA Astrophysics Data System (ADS)
Tiede, V. R.
2001-12-01
The Hiberno-Latin literary metaphor of "Xpistus sol verus" (Christ the True Sun) finds an architectural analogue in the orientation of the single eastern window of Irish monastic stone chapels or oratories. The author's field surveys in Ireland, Hebrides, Orkney and Shetlands revealed that the window of Irish rectangular dry stone oratories framed the rising solar disk on the Feast Days of selected saints of the Celtic Early Christian Church, AD 800-1100. The most frequent target skyline declinations were to sunrise on the Feast Days of St. Patrick (March 17th) and St. Aidan of Lindisfarne (August 31st). During the Early Christian period, St. Patrick's Day coincided with the Vernal Equinox, and heralded the Paschal Full Moon (i.e., Passover crucifixion) and Easter Sunday as proclaimed by Emperor Constantine at the Council of Nicaea (AD 325). St. Aidan of Lindisfarne (d. AD 651) inspired the Irish monks who, at the Synod of Whitby (AD 664), remained loyal to the Jewish 84-year cycle determining Passover and refused to replace it with the new orthodox 19-year computus for Easter adopted by the Roman Catholic Church (AD 527). Hypothetical affiliation between monastic communities whose oratories share common solar orientation, interior length/width ratios (e.g., 4:3 and 3:2) and units of measurement (e.g., Scottish ell, Coptic cubit, or Roman pes) is discussed. Grateful acknowledgement is made to the Michael D. Coe Fund and Augusta Hazard Fund of Yale University for research grant support in 1999.
Kouassi, Roland Yao Wa; McGraw, Scott William; Yao, Patrick Kouassi; Abou-Bacar, Ahmed; Brunet, Julie; Pesson, Bernard; Bonfoh, Bassirou; N’goran, Eliezer Kouakou; Candolfi, Ermanno
2015-01-01
Parasites and infectious diseases are well-known threats to primate populations. The main objective of this study was to provide baseline data on fecal parasites in the cercopithecid monkeys inhabiting Côte d’Ivoire’s Taï National Park. Seven of eight cercopithecid species present in the park were sampled: Cercopithecus diana, Cercopithecus campbelli, Cercopithecus petaurista, Procolobus badius, Procolobus verus, Colobus polykomos, and Cercocebus atys. We collected 3142 monkey stool samples between November 2009 and December 2010. Stool samples were processed by direct wet mount examination, formalin-ethyl acetate concentration, and MIF (merthiolate, iodine, formalin) concentration methods. Slides were examined under microscope and parasite identification was based on the morphology of cysts, eggs, and adult worms. A total of 23 species of parasites was recovered including 9 protozoa (Entamoeba coli, Entamoeba histolytica/dispar, Entamoeba hartmanni, Endolimax nana, Iodamoeba butschlii, Chilomastix mesnili, Giardia sp., Balantidium coli, and Blastocystis sp.), 13 nematodes (Oesophagostomum sp., Ancylostoma sp., Anatrichosoma sp., Capillariidae Gen. sp. 1, Capillariidae Gen. sp. 2, Chitwoodspirura sp., Subulura sp., spirurids [cf Protospirura muricola], Ternidens sp., Strongyloides sp., Trichostrongylus sp., and Trichuris sp.), and 1 trematode (Dicrocoelium sp.). Diversity indices and parasite richness were high for all monkey taxa, but C. diana, C. petaurista, C. atys, and C. campbelli exhibited a greater diversity of parasite species and a more equitable distribution. The parasitological data reported are the first available for these cercopithecid species within Taï National Park. PMID:25619957
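The abstract does not name which diversity indices were computed; a common choice for such parasite-community data is the Shannon index, sketched here purely as an illustration with made-up counts:

```python
import math

def shannon_index(counts: list[int]) -> float:
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxa with count > 0."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical parasite-species counts for one host taxon: higher and
# more even counts across species yield a higher H'.
h = shannon_index([10, 5, 2, 1])
```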
Histological Effects of Enamel Matrix Derivative on Exposed Dental Pulp.
Bajić, Marijana Popović; Danilović, Vesna; Prokić, Branislav; Prokić, Bogomir Bolka; Manojlović, Milica; Živković, Slavoljub
2015-01-01
Direct pulp capping procedure is a therapeutic application of a drug on exposed tooth pulp in order to ensure the closure of the pulp chamber and to allow the healing process to take place. The aim of this study was to examine the histological effects of Emdogain® on exposed tooth pulp of a Vietnamese pig (Sus scrofa verus). The study comprised 20 teeth of a Vietnamese pig. After class V preparation on the buccal surfaces of incisors, canines and first premolars, pulp was exposed. In the experimental group, the perforations were capped with Emdogain® (Straumann, Basel, Switzerland), while in the control group pulp capping was performed with MTA® (Dentsply Tulsa Dental, Johnson City, TN, USA). All cavities were restored with glass-ionomer cement (GC Fuji VIII, GC Corporation, Tokyo, Japan). The observational period was 28 days, after which the animal was sacrificed and histological preparations were made. A light microscope was used to analyze dentin bridge formation, tissue reorganization and inflammation, and the presence of bacteria in the pulp. The formation of dentin bridge was observed in the experimental and control groups. Inflammation of the pulp was mild to moderate in both groups. Angiogenesis and many odontoblast-like cells, responsible for dentin bridge formation, were observed. Necrosis was not observed in any case, nor were bacteria present in the pulp. Histological analysis indicated a favorable therapeutic effect of Emdogain® Gel in direct pulp capping of Vietnamese pigs. Pulp reaction was similar to that of MTA®.
Wild chimpanzees plan their breakfast time, type, and location
Janmaat, Karline R. L.; Polansky, Leo; Ban, Simone Dagui; Boesch, Christophe
2014-01-01
Not all tropical fruits are equally desired by rainforest foragers and some fruit trees get depleted more quickly and carry fruit for shorter periods than others. We investigated whether a ripe-fruit specialist, the chimpanzee (Pan troglodytes verus), arrived earlier at breakfast sites with very ephemeral and highly sought-after fruit, like figs, than sites with less ephemeral fruit that can be more predictably obtained throughout the entire day. We recorded when and where five adult female chimpanzees spent the night and acquired food for a total of 275 full days during three fruit-scarce periods in a West African tropical rainforest. We found that chimpanzees left their sleeping nests earlier (often before sunrise when the forest is still dark) when breakfasting on very ephemeral fruits, especially when they were farther away. Moreover, the females positioned their sleeping nests more in the direction of the next day’s breakfast sites with ephemeral fruit compared with breakfast sites with other fruit. By analyzing departure times and nest positioning as a function of fruit type and location, while controlling for more parsimonious explanations, such as temperature, we found evidence that wild chimpanzees flexibly plan their breakfast time, type, and location after weighing multiple disparate pieces of information. Our study reveals a cognitive mechanism by which large-brained primates can buffer the effects of seasonal declines in food availability and increased interspecific competition to facilitate first access to nutritious food. We discuss the implications for theories on hominoid brain-size evolution. PMID:25349399
Archaeogeophysical Surveys on Mersin, Silifke, Uzuncaburç (Diokaisareia) Zeus Olbios Temple
NASA Astrophysics Data System (ADS)
Ahmet Yüksel, Fethi; Deniz, Hazel; Şahin, Hamdi
2017-04-01
The ancient city of Diocaesarea (Uzuncaburç), located 30 km north of Silifke in Mersin, was a temple centre subject to Olba in the Hellenistic period. It was declared a free city by Tiberius in the Early Imperial period and flourished until the 5th century AD. During this period, a Tychaion to the west of the city was built in the 1st century AD by Obrimos and his son Oppius, from his wife Kyria, daughter of Leonidas. A theater was also erected in the co-reign of Marcus Aurelius and Lucius Verus, and the city gate in the west of Diocaesarea was repaired under Arcadius and Honorius (396-408 AD), financed by the dux and comes of Isauria, Leontios. In July 2011, archaeogeophysical measurements were made by magnetic methods on the columns of the temple of Zeus Olbios and on the colonnaded street of the city. The purpose of these investigations was to determine the presence of architectural remains underground at the specified points. A G-858 Cesium Gradiometer was used for the magnetic measurements, which were made on 38 profiles of 20 m length at the Zeus Olbios temple and on 13 profiles of 160 m length on the city's colonnaded street. From the acquired data, bottom-sensor, top-sensor and gradient magnetic maps were created. Linear and angular locations with high susceptibility were identified on the magnetic maps. Keywords: Magnetic, Diokaisareia (Uzuncaburç), Archaeogeophysics, Archaeology, Cesium Gradiometer
Taï chimpanzees anticipate revisiting high-valued fruit trees from further distances.
Ban, Simone D; Boesch, Christophe; Janmaat, Karline R L
2014-11-01
The use of spatio-temporal memory has been argued to increase food-finding efficiency in rainforest primates. However, the exact content of this memory is poorly known to date. This study investigated what specific information from previous feeding visits chimpanzees (Pan troglodytes verus), in Taï National Park, Côte d'Ivoire, take into account when they revisit the same feeding trees. By following five adult females for many consecutive days, we tested from what distance the females directed their travels towards previously visited feeding trees and how previous feeding experiences and fruit tree properties influenced this distance. To exclude the influence of sensory cues, the females' approach distance was measured from their last significant change in travel direction until the moment they entered the tree's maximum detection field. We found that chimpanzees travelled longer distances to trees at which they had previously made food grunts and had rejected fewer fruits compared to other trees. In addition, the results suggest that the chimpanzees were able to anticipate the amount of fruit that they would find in the trees. Overall, our findings are consistent with the hypothesis that chimpanzees act upon a retrieved memory of their last feeding experiences long before they revisit feeding trees, which would indicate a daily use of long-term prospective memory. Further, the results are consistent with the possibility that positive emotional experiences help to trigger prospective memory retrieval in forest areas that are further away and have fewer cues associated with revisited feeding trees.
Akl, Elie A; Labedi, Nawman; Terrenato, Irene; Barba, Maddalena; Sperati, Francesca; Sempos, Elena V; Muti, Paola; Cook, Deborah; Schünemann, Holger
2011-11-09
The choice of the appropriate perioperative thromboprophylaxis in patients with cancer depends on the relative benefits and harms of low molecular weight heparin (LMWH) and unfractionated heparin (UFH). To systematically review the evidence for the relative efficacy and safety of LMWH and UFH for perioperative thromboprophylaxis in patients with cancer. A comprehensive search for trials of anticoagulation in cancer patients including a February 2010 electronic search of: the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE and ISI Web of Science. Randomized controlled trials (RCTs) that enrolled cancer patients undergoing a surgical intervention and compared the effects of LMWH to UFH on mortality, deep venous thrombosis (DVT), pulmonary embolism (PE), bleeding outcomes, and thrombocytopenia. Two review authors used a standardized form to independently extract, in duplicate, data on risk of bias, participants, interventions and outcomes of interest. Where possible, we conducted meta-analyses using the random-effects model. Of 8187 identified citations, we included 16 RCTs with 11,847 patients in the meta-analyses, all using preoperative prophylactic anticoagulation. The overall quality of evidence was moderate. The meta-analysis did not conclusively rule out either a beneficial or harmful effect of LMWH compared to UFH for the following outcomes: mortality (RR = 0.90; 95% CI 0.73 to 1.10), symptomatic DVT (RR = 0.73; 95% CI 0.23 to 2.28), PE (RR = 0.59; 95% CI 0.25 to 1.41), minor bleeding (RR = 0.88; 95% CI 0.47 to 1.66) and major bleeding (RR = 0.84; 95% CI 0.52 to 1.36). LMWH was associated with a lower incidence of wound hematoma (RR = 0.60; 95% CI 0.43 to 0.84) while UFH was associated with a higher incidence of intra-operative transfusion (RR = 1.16; 95% CI 0.69 to 1.62). We found no difference between perioperative thromboprophylaxis with LMWH versus UFH in their effects on mortality and embolic outcomes in patients with cancer.
Further trials are needed to more carefully evaluate the benefits and harms of different heparin thromboprophylaxis strategies in this population.
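The pooled risk ratios above come from random-effects meta-analyses. As a minimal sketch of how such an estimate is produced, the following Python function implements DerSimonian-Laird pooling of log risk ratios; the function name and the 2x2 counts passed to it are illustrative assumptions, not data from the review:

```python
import math

def pooled_rr(studies, z=1.96):
    """DerSimonian-Laird random-effects pooling of risk ratios.

    `studies` is a list of (events_trt, n_trt, events_ctl, n_ctl) tuples;
    returns (pooled RR, lower 95% CI bound, upper 95% CI bound).
    """
    # Per-study log risk ratio and its sampling variance
    y = [math.log((a / n1) / (c / n2)) for a, n1, c, n2 in studies]
    v = [1/a - 1/n1 + 1/c - 1/n2 for a, n1, c, n2 in studies]

    # Fixed-effect (inverse-variance) weights and Cochran's Q
    w = [1 / vi for vi in v]
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))

    # Between-study variance tau^2 (DL moment estimator, truncated at 0)
    c_ = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(studies) - 1)) / c_) if c_ > 0 else 0.0

    # Random-effects weights, pooled log-RR, and its standard error
    wr = [1 / (vi + tau2) for vi in v]
    mu = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    se = math.sqrt(1 / sum(wr))
    return math.exp(mu), math.exp(mu - z * se), math.exp(mu + z * se)
```

A trial with identical event rates in both arms pools to RR = 1.0; a full review additionally assesses risk of bias and reports heterogeneity statistics (Q, I²) alongside the pooled estimate.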
Bataillon, Thomas; Duan, Jinjie; Hvilsom, Christina; Jin, Xin; Li, Yingrui; Skov, Laurits; Glemin, Sylvain; Munch, Kasper; Jiang, Tao; Qian, Yu; Hobolth, Asger; Wang, Jun; Mailund, Thomas; Siegismund, Hans R; Schierup, Mikkel H
2015-03-30
We study genome-wide nucleotide diversity in three subspecies of extant chimpanzees using exome capture. After strict filtering, Single Nucleotide Polymorphisms and indels were called and genotyped for greater than 50% of exons at a mean coverage of 35× per individual. Central chimpanzees (Pan troglodytes troglodytes) are the most polymorphic (nucleotide diversity, θw = 0.0023 per site) followed by Eastern (P. t. schweinfurthii) chimpanzees (θw = 0.0016) and Western (P. t. verus) chimpanzees (θw = 0.0008). A demographic scenario of divergence without gene flow fits the patterns of autosomal synonymous nucleotide diversity well except for a signal of recent gene flow from Western into Eastern chimpanzees. The striking contrast in X-linked versus autosomal polymorphism and divergence previously reported in Central chimpanzees is also found in Eastern and Western chimpanzees. We show that the direction of selection statistic exhibits a strong nonmonotonic relationship with the strength of purifying selection S, making it inappropriate for estimating S. We instead use counts in synonymous versus nonsynonymous frequency classes to infer the distribution of S coefficients acting on nonsynonymous mutations in each subspecies. The strength of purifying selection we infer is congruent with the differences in effective sizes of each subspecies: Central chimpanzees are undergoing the strongest purifying selection followed by Eastern and Western chimpanzees. Coding indels show stronger selection against indels changing the reading frame than observed in human populations. © The Author(s) 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
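The per-site diversity figures quoted above (θw) are Watterson's estimator, computed from the number of segregating sites. A minimal sketch of the calculation follows; the example counts are illustrative assumptions, not the study's data:

```python
def watterson_theta(num_segregating, num_sequences, num_sites):
    """Watterson's estimator of nucleotide diversity per site:
    theta_w = S / (a_n * L), with a_n = sum_{i=1}^{n-1} 1/i,
    where S is the number of segregating sites, n the number of
    sampled sequences, and L the number of sites surveyed.
    """
    a_n = sum(1.0 / i for i in range(1, num_sequences))
    return num_segregating / (a_n * num_sites)

# E.g. 5 segregating sites among 10 sequences over 1 kb of sequence:
theta = watterson_theta(5, 10, 1000)
```

Because a_n grows with sample size, the same number of segregating sites implies a lower per-site θw when more sequences are sampled.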
Kouassi, Roland Yao Wa; McGraw, Scott William; Yao, Patrick Kouassi; Abou-Bacar, Ahmed; Brunet, Julie; Pesson, Bernard; Bonfoh, Bassirou; N'goran, Eliezer Kouakou; Candolfi, Ermanno
2015-01-01
Parasites and infectious diseases are well-known threats to primate populations. The main objective of this study was to provide baseline data on fecal parasites in the cercopithecid monkeys inhabiting Côte d'Ivoire's Taï National Park. Seven of eight cercopithecid species present in the park were sampled: Cercopithecus diana, Cercopithecus campbelli, Cercopithecus petaurista, Procolobus badius, Procolobus verus, Colobus polykomos, and Cercocebus atys. We collected 3142 monkey stool samples between November 2009 and December 2010. Stool samples were processed by direct wet mount examination, formalin-ethyl acetate concentration, and MIF (merthiolate, iodine, formalin) concentration methods. Slides were examined under microscope and parasite identification was based on the morphology of cysts, eggs, and adult worms. A total of 23 species of parasites was recovered including 9 protozoa (Entamoeba coli, Entamoeba histolytica/dispar, Entamoeba hartmanni, Endolimax nana, Iodamoeba butschlii, Chilomastix mesnili, Giardia sp., Balantidium coli, and Blastocystis sp.), 13 nematodes (Oesophagostomum sp., Ancylostoma sp., Anatrichosoma sp., Capillariidae Gen. sp. 1, Capillariidae Gen. sp. 2, Chitwoodspirura sp., Subulura sp., spirurids [cf Protospirura muricola], Ternidens sp., Strongyloides sp., Trichostrongylus sp., and Trichuris sp.), and 1 trematode (Dicrocoelium sp.). Diversity indices and parasite richness were high for all monkey taxa, but C. diana, C. petaurista, C. atys, and C. campbelli exhibited a greater diversity of parasite species and a more equitable distribution. The parasitological data reported are the first available for these cercopithecid species within Taï National Park. © R.W.Y. Kouassi et al., published by EDP Sciences, 2015.
Roeder, Amy D.; Bruford, Michael W.; Noë, Ronald; Delaporte, Eric; Peeters, Martine
2013-01-01
It is now well established that the human immunodeficiency viruses, HIV-1 and HIV-2, are the results of cross-species transmissions of simian immunodeficiency viruses (SIV) naturally infecting nonhuman primates in sub-Saharan Africa. SIVs are found in many African primates, and humans continue to be exposed to these viruses by hunting and handling primate bushmeat. Sooty mangabeys (Cercocebus atys) and western red colobus (Piliocolobus badius badius) are infected with SIV at a high rate in the Taï Forest, Côte d’Ivoire. We investigated the SIV infection and prevalence in 6 other monkey species living in the Taï Forest using noninvasive methods. We collected 127 fecal samples from 2 colobus species (Colobus polykomos and Procolobus verus) and 4 guenon species (C. diana, C. campbelli, C. petaurista, and C. nictitans). We tested these samples for HIV cross-reactive antibodies and performed reverse transcriptase-polymerase chain reactions (RT-PCR) targeting the gag, pol, and env regions of the SIV genome. We screened 16 human microsatellites for use in individual discrimination and identified 4–6 informative markers per species. Serological analysis of 112 samples yielded negative (n=86) or uninterpretable (n=26) results. PCR analysis on 74 samples confirmed the negative results. These results may reflect either the limited number of individuals sampled or a low prevalence of infection. Further research is needed to improve the sensitivity of noninvasive methods for SIV detection. PMID:23950618
Wang, Li; Wu, Ya-Mei; Cao, Yong-Bin; Li, Xiao-Hong; Xu, Li-Xin; Wang, Hai-Tao; Gao, Ya-Hui; Wu, Xiao-Xiong
2016-12-01
To analyse the feasibility and compare the differences in hematopoietic reconstitution and prognosis of patients with severe aplastic anemia (SAA) after matched sibling donor (MSD) or haploidentical family donor (HFD) hematopoietic stem cell transplantation (HSCT) using the modified FC/ATG conditioning. The clinical data of 56 patients with SAA who received HSCT in the First Affiliated Hospital of Chinese PLA General Hospital from January 2011 to June 2016 were analyzed retrospectively. Hematopoietic reconstitution, graft versus host disease (GVHD), transplantation related toxicity (TRT) and prognosis after transplantation were compared. The modified FC/ATG conditioning included low-dose cyclophosphamide (total dose 100 mg/kg) and infusion of third-party donor-derived mesenchymal stem cells. All 56 patients with MSD-HSCT or HFD-HSCT achieved hematopoietic reconstitution. Among them, neither the recovery of neutrophils and platelets nor the incidences of III-IV aGVHD, extensive cGVHD and TRT differed significantly (P values 0.58, 0.61, 0.73, 0.73 and 0.67, respectively). After a follow-up of 32 (2-66) months, 48 patients were alive and well; the 1-year overall survival rates were 86% in the HFD-HSCT group and 89% in the MSD-HSCT group (P=0.58). After HSCT using the modified FC/ATG conditioning, patients with SAA achieved stable engraftment, low toxicity, mild GVHD and excellent outcomes. Furthermore, HFD-HSCT achieved outcomes comparable to MSD-HSCT and may serve as an alternative therapy for patients with SAA.
On the Way to Appropriate Model Complexity
NASA Astrophysics Data System (ADS)
Höge, M.
2016-12-01
When statistical models are used to represent natural phenomena they are often too simple or too complex - this is known. But what exactly is model complexity? Among many other definitions, the complexity of a model can be conceptualized as a measure of statistical dependence between observations and parameters (Van der Linde, 2014). However, several issues remain when working with model complexity: A unique definition for model complexity is missing. Assuming a definition is accepted, how can model complexity be quantified? And how can a quantified complexity be used to improve modeling? Generally defined, "complexity is a measure of the information needed to specify the relationships between the elements of organized systems" (Bawden & Robinson, 2015). The complexity of a system changes as the knowledge about the system changes. For models this means that complexity is not a static concept: With more data or higher spatio-temporal resolution of parameters, the complexity of a model changes. There are essentially three categories into which all commonly used complexity measures can be classified: (1) An explicit representation of model complexity as "degrees of freedom" of a model, e.g. the effective number of parameters. (2) Model complexity as code length, a.k.a. "Kolmogorov complexity": The longer the shortest model code, the higher its complexity (e.g. in bits). (3) Complexity defined via the information entropy of parametric or predictive uncertainty. Preliminary results show that Bayes' theorem allows for incorporating all parts of the non-static concept of model complexity, like data quality and quantity or parametric uncertainty. Therefore, we test how different approaches for measuring model complexity perform in comparison to a fully Bayesian model selection procedure. Ultimately, we want to find a measure that helps to assess the most appropriate model.
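For category (3), one concrete and deliberately simplified reading is the Shannon entropy of a discretized parameter distribution: the flatter the distribution, the more information is needed to pin the parameter down. A minimal Python sketch follows; the histogram counts are illustrative assumptions, not results from the abstract:

```python
import math

def shannon_entropy(bin_counts):
    """Shannon entropy in bits of a discrete distribution given as
    (unnormalized) histogram counts, e.g. of posterior parameter samples."""
    total = sum(bin_counts)
    probs = [c / total for c in bin_counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# A flat (uninformed) parameter distribution has maximal entropy,
# while a sharply peaked (well-constrained) one has much less:
flat = shannon_entropy([25, 25, 25, 25])   # 2.0 bits
peaked = shannon_entropy([97, 1, 1, 1])    # ~0.24 bits
```

Updating such a histogram with more data typically concentrates probability mass and lowers the entropy, illustrating the non-static character of complexity described above.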
Fünfstück, Tillmann; Arandjelovic, Mimi; Morgan, David B.; Sanz, Crickette; Reed, Patricia; Olson, Sarah H.; Cameron, Ken; Ondzie, Alain; Peeters, Martine; Vigilant, Linda
2015-01-01
Populations of an organism living in marked geographical or evolutionary isolation from other populations of the same species are often termed subspecies and expected to show some degree of genetic distinctiveness. The common chimpanzee (Pan troglodytes) is currently described as four geographically delimited subspecies: the western (P. t. verus), the Nigerian-Cameroonian (P. t. ellioti), the central (P. t. troglodytes) and the eastern (P. t. schweinfurthii) chimpanzees. Although these taxa would be expected to be reciprocally monophyletic, studies have not always consistently resolved the central and eastern chimpanzee taxa. Most studies, however, used data from individuals of unknown or approximate geographic provenance. Thus, genetic data from samples of known origin may shed light on the evolutionary relationship of these subspecies. We generated microsatellite genotypes from noninvasively collected fecal samples of 185 central chimpanzees that were sampled across large parts of their range and analyzed them together with 283 published eastern chimpanzee genotypes from known localities. We observed a clear signal of isolation by distance across both subspecies. Further, we found that a large proportion of comparisons between groups taken from the same subspecies showed higher genetic differentiation than the least differentiated between-subspecies comparison. This proportion decreased substantially when we simulated a more clumped sampling scheme by including fewer groups. Our results support the general concept that the distribution of the sampled individuals can dramatically affect the inference of genetic population structure. With regard to chimpanzees, our results emphasize the close relationship of equatorial chimpanzees from central and eastern equatorial Africa and the difficult nature of subspecies definitions. PMID:25330245
Fahy, Geraldine E; Richards, Michael P; Fuller, Benjamin T; Deschner, Tobias; Hublin, Jean-Jacques; Boesch, Christophe
2014-04-01
Offspring provisioning is one of the most energetically demanding aspects of reproduction for female mammals. Variation in lactation length and weaning strategies between chimpanzees (Pan troglodytes), our closest living relative, and modern human societies has been reported. When and why these changes occurred is frequently debated. Our study used stable nitrogen isotope data of tooth root dentine from wild Western chimpanzees (Pan troglodytes verus) in Taï National Park, Côte d'Ivoire, to quantify weaning in these chimpanzees and explore if infant sex plays a role in maternal investment. We analyzed serial sections of deciduous lateral incisor root dentine from four Taï chimpanzees to establish the δ15N signal of nursing infants; we then analyzed serial sections of first permanent mandibular molar root dentine from 12 Taï chimpanzees to provide quantitative δ15N data on weaning in this population. Up to 2 years of age both sexes exhibited dentine δ15N values ≈2-3‰ higher than adult female Taï chimpanzees, consistent with a nursing signal. Thereafter a steady decrease in δ15N values, consistent with the onset and progression of weaning, was visible. Sex differences were also evident: male δ15N values decreased at a significantly slower rate compared to females. Confirmation of sex differences in maternal investment among Taï chimpanzees demonstrates the viability of using isotope analysis to investigate weaning in non-human primates. Additionally, assuming that behaviors observed in the Taï chimpanzees are illustrative of the ancestral pattern, our results provide a platform to enable the trajectory of weaning in human evolution to be further explored. Copyright © 2013 Wiley Periodicals, Inc.
Bowden, Rory; MacFie, Tammie S.; Myers, Simon; Hellenthal, Garrett; Nerrienet, Eric; Bontrop, Ronald E.; Freeman, Colin; Donnelly, Peter; Mundy, Nicholas I.
2012-01-01
In spite of its evolutionary significance and conservation importance, the population structure of the common chimpanzee, Pan troglodytes, is still poorly understood. An issue of particular controversy is whether the proposed fourth subspecies of chimpanzee, Pan troglodytes ellioti, from parts of Nigeria and Cameroon, is genetically distinct. Although modern high-throughput SNP genotyping has had a major impact on our understanding of human population structure and demographic history, its application to ecological, demographic, or conservation questions in non-human species has been extremely limited. Here we apply these tools to chimpanzee population structure, using ∼700 autosomal SNPs derived from chimpanzee genomic data and a further ∼100 SNPs from targeted re-sequencing. We demonstrate conclusively the existence of P. t. ellioti as a genetically distinct subgroup. We show that there is clear differentiation between the verus, troglodytes, and ellioti populations at the SNP and haplotype level, on a scale that is greater than that separating continental human populations. Further, we show that only a small set of SNPs (10–20) is needed to successfully assign individuals to these populations. Tellingly, use of only mitochondrial DNA variation to classify individuals is erroneous in 4 of 54 cases, reinforcing the dangers of basing demographic inference on a single locus and implying that the demographic history of the species is more complicated than that suggested by analyses based solely on mtDNA. In this study we demonstrate the feasibility of developing economical and robust tests of individual chimpanzee origin as well as in-depth studies of population structure. These findings have important implications for conservation strategies and our understanding of the evolution of chimpanzees. They also act as a proof-of-principle for the use of cheap high-throughput genomic methods for ecological questions. PMID:22396655
Receiving Post-Conflict Affiliation from the Enemy's Friend Reconciles Former Opponents
Wittig, Roman M.; Boesch, Christophe
2010-01-01
The adaptive function of bystander initiated post-conflict affiliation (also: consolation & appeasement) has been debated for 30 years. Three influential hypotheses compete for the most likely explanation but have not previously been tested with a single data set. The consolation hypothesis argues that bystander affiliation calms the victim and reduces their stress levels. The self-protection hypothesis proposes that a bystander offers affiliation to either opponent to protect himself from redirected aggression by this individual. The relationship-repair hypothesis suggests a bystander can substitute for a friend to reconcile the friend with the friend's former opponent. Here, we contrast all three hypotheses and tested their predictions with data on wild chimpanzees (Pan troglodytes verus) of the Taï National Park, Côte d'Ivoire. We examined the first and second post-conflict interactions with respect to both the dyadic and triadic relationships between the bystander and the two opponents. Results showed that female bystanders offered affiliation to their aggressor friends and the victims of their friends, while male bystanders offered affiliation to their victim friends and the aggressors of their friends. For both sexes, bystander affiliation resulted in a subsequent interaction pattern that is expected for direct reconciliation. Bystander affiliation offered to the opponent's friend was more likely to lead to affiliation among opponents in their subsequent interaction. Also, tolerance levels among former opponents were reset to normal levels. In conclusion, this study provides strong evidence for the relationship-repair hypothesis, moderate evidence for the consolation hypothesis and no evidence for the self-protection hypothesis. Furthermore, that bystanders can repair a relationship on behalf of their friend indicates that recipient chimpanzees are aware of the relationships between others, even when they are not kin. 
This presents a mechanism through which chimpanzees may gain benefits from social knowledge. PMID:21085592
Chimpanzees routinely fish for algae with tools during the dry season in Bakoun, Guinea.
Boesch, Christophe; Kalan, Ammie K; Agbor, Anthony; Arandjelovic, Mimi; Dieguez, Paula; Lapeyre, Vincent; Kühl, Hjalmar S
2017-03-01
Wild chimpanzees regularly use tools, made from sticks, leaves, or stone, to find flexible solutions to the ecological challenges of their environment. Nevertheless, some studies suggest strong limitations in the tool-using capabilities of chimpanzees. In this context, we present the discovery of a newly observed tool-use behavior in a population of chimpanzees (Pan troglodytes verus) living in the Bakoun Classified Forest, Guinea, where a temporary research site was established for 15 months. Bakoun chimpanzees of every age-sex class were observed to fish for freshwater green algae, Spirogyra sp., from rivers, streams, and ponds using long sticks and twigs, ranging from 9 cm up to 4.31 m in length. Using remote camera trap footage from 11 different algae fishing sites within an 85-km² study area, we found that algae fishing occurred frequently during the dry season and was non-existent during the rainy season. Chimpanzees were observed algae fishing for as little as 1 min to just over an hour, with an average duration of 9.09 min. We estimate that 364 g of Spirogyra algae could be retrieved in this time, based on human trials in the field. Only one other chimpanzee population living in Bossou, Guinea, has been described to customarily scoop algae from the surface of the water using primarily herbaceous tools. Here, we describe the new behavior found at Bakoun and compare it to the algae scooping observed in Bossou chimpanzees and the occasional variant reported in Odzala, Republic of the Congo. As these algae are reported to be high in protein, carbohydrates, and minerals, we hypothesize that chimpanzees are obtaining a nutritional benefit from this seasonally available resource. © 2016 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Soares, Emílio Alberto Amaral; D'Apolito, Carlos; Jaramillo, Carlos; Harrington, Guy; Caputo, Mario Vicente; Barbosa, Rogério Oliveira; Bonora dos Santos, Eneas; Dino, Rodolfo; Gonçalves, Alexandra Dias
2017-11-01
The Amazonas fluvial system originates in the Andes and runs ca. 6700 km to the Atlantic Ocean, with the Negro River (the second largest in water volume) as its main tributary. The Amazonas transcontinental system has been dated to the late Miocene, but the timing of origin and the evolutionary processes of its tributaries are still poorly understood. Negro River alluvial deposits have been dated to the middle to late Pleistocene. Recently, we studied a number of boreholes drilled for the building of a bridge at the lower course of the Negro River. A thin (centimetric) sedimentary deposit was found, laterally continuous for about 1800 m, unconformably overlying middle Miocene strata and unconformably overlain by younger Quaternary deposits. This deposit consists predominantly of brownish-gray sandstones cemented by siderite, with subordinate mudstone and conglomerate beds. Palynological, granulometric, textural and mineralogical data suggest that the initial Negro River aggradation took place in a deep incised valley under anoxic conditions and subsequently along the floodplain, with efficient transport of particles of mixed origin (Andean and Amazonic). Angiosperm leaves, wood and pollen are indicative of a tropical continental palaeoenvironment. A well preserved palynoflora that includes Alnipollenites verus, Grimsdalea magnaclavata and Paleosantalaceaepites cingulatus suggests a late Pliocene to early Pleistocene (Piacenzian to Gelasian) age for this unit, an age not previously recorded in the Amazon Basin. These results indicate that by the late Pliocene-early Pleistocene, large-scale river activity was occurring in Central Amazonia, linking this region with the Andean headwaters, and is therefore incompatible with proposed Central Amazonian barriers such as the Purus arch.
Rosche, C; Hensen, I; Lachmuth, S
2018-01-01
Primary colonisation in invasive ranges most commonly occurs in disturbed habitats, where anthropogenic disturbance may cause physical damage to plants. The tolerance to such damage may differ between cytotypes and among populations as a result of differing population histories (adaptive differentiation between ruderal versus natural habitats). Moreover, founder populations often experience inbreeding depression, the effects of which may increase through physical damage due to inbreeding-environment interactions. We aimed to understand how such colonisation processes differ between diploid and tetraploid Centaurea stoebe populations, with a view to understanding why only tetraploids are invasive. We conducted a clipping experiment (frequency: zero, once or twice in the growing season) on inbred versus outbred offspring originating from 37 C. stoebe populations of varying cytotype, range and habitat type (natural versus ruderal). Aboveground biomass was harvested at the end of the vegetation period, while re-sprouting success was recorded in the following spring. Clipping reduced re-sprouting success and biomass, which was significantly more pronounced in natural than in ruderal populations. Inbreeding depression was not detected under benign conditions, but became increasingly apparent in biomass when plants were clipped. The effects of clipping and inbreeding did not differ between cytotypes. Adaptive differentiation in disturbance tolerance was higher among populations than between cytotypes, which highlights the potential of pre-adaptation in ruderal populations during early colonisation on anthropogenically disturbed sites. While the consequences of inbreeding increased through clipping-mediated stress, they were comparable between cytotypes, and consequently do not contribute to understanding the cytotype shift in the invasive range. © 2017 German Society for Plant Sciences and The Royal Botanical Society of the Netherlands.
Sawers, Andrew; Ting, Lena H
2015-02-01
The ability to quantify differences in walking balance proficiency is critical to curbing the rising health and financial costs of falls. Current laboratory-based approaches typically focus on successful recovery of balance, while clinical instruments often pose little difficulty for all but the most impaired patients. Rarely do they test motor behaviors of sufficient difficulty to evoke failures in balance control, limiting their ability to quantify balance proficiency. Our objective was to test whether a simple beam-walking task could quantify differences in walking balance proficiency across a range of sensorimotor abilities. Ten experts, ten novices, and five individuals with transtibial limb loss performed six walking trials across three different width beams. Walking balance proficiency was quantified as the ratio of distance walked to total possible distance. Balance proficiency was not significantly different between cohorts on the wide-beam, but clear differences between cohorts on the mid- and narrow-beams were identified. Experts walked a greater distance than novices on the mid-beam (average of 3.63±0.04m versus 2.70±0.21m out of 3.66m; p=0.009), and novices walked further than amputees (1.52±0.20m; p=0.03). Amputees were unable to walk on the narrow-beam, while experts walked further (3.07±0.14m) than novices (1.55±0.26m; p=0.0005). A simple beam-walking task and an easily collected measure of distance traveled detected differences in walking balance proficiency across sensorimotor abilities. This approach provides a means to safely study and evaluate successes and failures in walking balance in the clinic or lab. It may prove useful in identifying mechanisms underlying falls versus fall recoveries. Copyright © 2015 Elsevier B.V. All rights reserved.
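The proficiency measure described above is a simple ratio. The sketch below illustrates it; the 3.66 m beam length matches the mid-beam reported in the abstract, but the individual trial distances are invented for demonstration.

```python
def balance_proficiency(distance_walked_m, beam_length_m=3.66):
    """Ratio of distance walked to total possible distance
    (0 = stepped off immediately, 1 = traversed the whole beam)."""
    return distance_walked_m / beam_length_m

def cohort_mean(distances_m, beam_length_m=3.66):
    """Mean proficiency across a cohort's trials."""
    scores = [balance_proficiency(d, beam_length_m) for d in distances_m]
    return sum(scores) / len(scores)

# Invented mid-beam distances (m), loosely echoing the reported cohort gap
experts = [3.60, 3.66, 3.63]
novices = [2.55, 2.85, 2.70]
```

Because the score is bounded by the beam length, failures (stepping off early) lower the ratio directly, which is what lets the task separate cohorts.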
Pruetz, Jill D
2018-02-08
I report on the nocturnal behavior of Fongoli chimpanzees in a savanna mosaic during different seasons and lunar phases and test the hypothesis that hot daytime temperatures influence activity at night. I predicted that apes would be more active at night during periods of greater lunar illumination, given diurnal primates' lack of visual specializations for low-light conditions, and in dry season months, when water scarcity exacerbated heat stress. I observed chimpanzees for 403 hrs on 40 nights between 2007 and 2013 and categorized their activity as social, movement, or vocalization. I scored their activity as occurring after moonrise or before moonset and considered the influence of moon phase (fuller versus darker phases) as well as season on chimpanzee nocturnal behavior in the analyses. Results indicate that apes were more active after moonrise or before moonset during fuller moon phases in the dry season but not the wet season. Most night-time activity involved movement (travel or foraging), followed by social behavior and long-distance vocal communication. Animals in highly seasonal habitats often exhibit thermoregulatory adaptations but, like other primates, chimpanzees lack physiological mechanisms to combat thermal stress. This study provides evidence that they may exhibit behaviors that allow them to avoid high temperatures in a savanna environment, such as feeding and socializing at night during the hottest time of year and in the brightest moon phases. The results support theories invoking thermal stress as a selective pressure for hominins in open environments, where heat would constrain temporal foraging niches, and suggest an adaptability of sleeping patterns in response to external factors. © 2018 Wiley Periodicals, Inc.
Fünfstück, Tillmann; Arandjelovic, Mimi; Morgan, David B; Sanz, Crickette; Reed, Patricia; Olson, Sarah H; Cameron, Ken; Ondzie, Alain; Peeters, Martine; Vigilant, Linda
2015-02-01
Populations of an organism living in marked geographical or evolutionary isolation from other populations of the same species are often termed subspecies and expected to show some degree of genetic distinctiveness. The common chimpanzee (Pan troglodytes) is currently described as four geographically delimited subspecies: the western (P. t. verus), the Nigerian-Cameroonian (P. t. ellioti), the central (P. t. troglodytes) and the eastern (P. t. schweinfurthii) chimpanzees. Although these taxa would be expected to be reciprocally monophyletic, studies have not always consistently resolved the central and eastern chimpanzee taxa. Most studies, however, used data from individuals of unknown or approximate geographic provenance. Thus, genetic data from samples of known origin may shed light on the evolutionary relationship of these subspecies. We generated microsatellite genotypes from noninvasively collected fecal samples of 185 central chimpanzees that were sampled across large parts of their range and analyzed them together with 283 published eastern chimpanzee genotypes from known localities. We observed a clear signal of isolation by distance across both subspecies. Further, we found that a large proportion of comparisons between groups taken from the same subspecies showed higher genetic differentiation than the least differentiated between-subspecies comparison. This proportion decreased substantially when we simulated a more clumped sampling scheme by including fewer groups. Our results support the general concept that the distribution of the sampled individuals can dramatically affect the inference of genetic population structure. With regard to chimpanzees, our results emphasize the close relationship of equatorial chimpanzees from central and eastern equatorial Africa and the difficult nature of subspecies definitions. Copyright © 2014 Wiley Periodicals, Inc.
"Missing perikymata"--fact or fiction? A study on chimpanzee (Pan troglodytes verus) canines.
Kierdorf, Horst; Witzel, Carsten; Kierdorf, Uwe; Skinner, Matthew M; Skinner, Mark F
2015-06-01
Recently, a lower than expected number of perikymata between repetitive furrow-type hypoplastic defects has been reported in chimpanzee canines from the Fongoli site, Senegal (Skinner and Pruetz: Am J Phys Anthropol 149 (2012) 468-482). Based on an observation in a localized enamel fracture surface of a canine of a chimpanzee from the Taï Forest (Ivory Coast), these authors inferred that a nonemergence of striae of Retzius could be the cause for the "missing perikymata" phenomenon in the Fongoli chimpanzees. To check this inference, we analyzed the structure of outer enamel in three chimpanzee canines. The teeth were studied using light-microscopic and scanning-electron microscopic techniques. Our analysis of the specimen upon which Skinner and Pruetz (Am J Phys Anthropol 149 (2012) 468-482) had made their original observation does not support their hypothesis. We demonstrate that the enamel morphology described by them is not caused by a nonemergence of striae of Retzius but can be attributed to structural variations in outer enamel that result in a differential fracture behavior. Although rejecting the presumed existence of nonemergent striae of Retzius, our study provided evidence that, in furrow-type hypoplastic defects, a pronounced tapering of Retzius increments can occur, with the striae of Retzius forming acute angles with the outer enamel surface. We suggest that in such cases the outcrop of some striae of Retzius is essentially unobservable at the enamel surface, causing too low perikymata counts. The pronounced tapering of Retzius increments in outer enamel presumably reflects a mild to moderate disturbance of the function of late secretory ameloblasts. © 2015 Wiley Periodicals, Inc.
Forest chimpanzees (Pan troglodytes verus) remember the location of numerous fruit trees.
Normand, Emmanuelle; Ban, Simone Dagui; Boesch, Christophe
2009-11-01
It is assumed that spatial memory contributes crucially to animal cognition since animals' habitats entail a large number of dispersed and unpredictable food sources. Spatial memory has been investigated under controlled conditions, with performance levels varying across species and testing conditions. However, the number of food sources investigated is very low compared to what exists under natural conditions, where food resources are so abundant that it is difficult to precisely identify what is available. By using a detailed botanical map containing over 12,499 trees known to be used by the Taï chimpanzees, we created virtual maps of all productive fruit trees to simulate potential strategies used by wild chimpanzees to reach resources without spatial memory. First, we simulated different assumptions concerning the chimpanzees' preference for a particular tree species, and, second, we varied the detection field to control for the possible use of smell to detect fruiting trees. For all these assumptions, we compared simulated distance travelled, frequencies of trees visited, and revisit rates with what we actually observed in wild chimpanzees. Our results show that chimpanzees visit rare tree species more frequently, travel shorter distances to reach them, and revisit the same trees more often than if they had no spatial memory. In addition, we demonstrate that chimpanzees travel longer distances to reach resources where they will eat for longer periods of time, and revisit resources more frequently where they ate for a long period of time during their first visit. Therefore, this study shows that forest chimpanzees possess a precise spatial memory which allows them to remember the location of numerous resources and use this information to select the most attractive resources.
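The study's null models simulate foraging without spatial memory and compare travel distance and revisit rates against observations. A minimal sketch of that idea follows; the tree coordinates, detection radius, and visit count are all invented, and the real simulations used a full botanical map and several preference assumptions.

```python
import math
import random

def simulate_no_memory(trees, n_visits=20, detection_radius=50.0, seed=1):
    """Agent with no spatial memory: walk to a random tree within the
    detection field (or any tree if none is in range); tally distance
    travelled and the fraction of visits that were revisits."""
    rng = random.Random(seed)
    pos = (0.0, 0.0)
    visited, travelled = [], 0.0
    for _ in range(n_visits):
        in_range = [t for t in trees if math.dist(pos, t) <= detection_radius]
        target = rng.choice(in_range or trees)
        travelled += math.dist(pos, target)
        visited.append(target)
        pos = target
    revisit_rate = 1.0 - len(set(visited)) / len(visited)
    return travelled, revisit_rate
```

Comparing such null-strategy statistics with observed chimpanzee travel is what licenses the memory inference: animals that reach rare trees via shorter routes than any memoryless strategy allows must be using remembered locations.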
Elements of complexity in subsurface modeling, exemplified with three case studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freedman, Vicky L.; Truex, Michael J.; Rockhold, Mark
2017-04-03
There are complexity elements to consider when applying subsurface flow and transport models to support environmental analyses. Modelers balance the benefits and costs of modeling along the spectrum of complexity, taking into account the attributes of more simple models (e.g., lower cost, faster execution, easier to explain, less mechanistic) and the attributes of more complex models (higher cost, slower execution, harder to explain, more mechanistic and technically defensible). In this paper, modeling complexity is examined with respect to considering this balance. The discussion of modeling complexity is organized into three primary elements: 1) modeling approach, 2) description of process, and 3) description of heterogeneity. Three examples are used to examine these complexity elements. Two of the examples use simulations generated from a complex model to develop simpler models for efficient use in model applications. The first example is designed to support performance evaluation of soil vapor extraction remediation in terms of groundwater protection. The second example investigates the importance of simulating different categories of geochemical reactions for carbon sequestration and selecting appropriate simplifications for use in evaluating sequestration scenarios. In the third example, the modeling history for a uranium-contaminated site demonstrates that conservative parameter estimates were inadequate surrogates for complex, critical processes and there is discussion on the selection of more appropriate model complexity for this application. All three examples highlight how complexity considerations are essential to create scientifically defensible models that achieve a balance between model simplification and complexity.
Elements of complexity in subsurface modeling, exemplified with three case studies
NASA Astrophysics Data System (ADS)
Freedman, Vicky L.; Truex, Michael J.; Rockhold, Mark L.; Bacon, Diana H.; Freshley, Mark D.; Wellman, Dawn M.
2017-09-01
There are complexity elements to consider when applying subsurface flow and transport models to support environmental analyses. Modelers balance the benefits and costs of modeling along the spectrum of complexity, taking into account the attributes of more simple models (e.g., lower cost, faster execution, easier to explain, less mechanistic) and the attributes of more complex models (higher cost, slower execution, harder to explain, more mechanistic and technically defensible). In this report, modeling complexity is examined with respect to considering this balance. The discussion of modeling complexity is organized into three primary elements: (1) modeling approach, (2) description of process, and (3) description of heterogeneity. Three examples are used to examine these complexity elements. Two of the examples use simulations generated from a complex model to develop simpler models for efficient use in model applications. The first example is designed to support performance evaluation of soil-vapor-extraction remediation in terms of groundwater protection. The second example investigates the importance of simulating different categories of geochemical reactions for carbon sequestration and selecting appropriate simplifications for use in evaluating sequestration scenarios. In the third example, the modeling history for a uranium-contaminated site demonstrates that conservative parameter estimates were inadequate surrogates for complex, critical processes and there is discussion on the selection of more appropriate model complexity for this application. All three examples highlight how complexity considerations are essential to create scientifically defensible models that achieve a balance between model simplification and complexity.
Hydrological model parameter dimensionality is a weak measure of prediction uncertainty
NASA Astrophysics Data System (ADS)
Pande, S.; Arkesteijn, L.; Savenije, H.; Bastidas, L. A.
2015-04-01
This paper shows that instability of hydrological system representation in response to different pieces of information, and the associated prediction uncertainty, is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is analyzed in a step-by-step manner, by measuring differences between simulations of a model under different realizations of input forcings. Algorithms are then suggested to estimate model complexity. Model complexities of two model structures, SAC-SMA (Sacramento Soil Moisture Accounting) and its simplified version SIXPAR (Six Parameter Model), are computed on resampled input data sets from basins that span the continental US. The model complexities for SIXPAR are estimated for various parameter ranges. It is shown that the complexity of SIXPAR increases with lower storage capacity and/or higher recession coefficients. Thus a conceptually simple model structure, such as SIXPAR, can be more complex than an intuitively more complex model structure, such as SAC-SMA, for certain parameter ranges. We therefore contend that the magnitudes of feasible model parameters influence the complexity of the model selection problem just as parameter dimensionality (number of parameters) does, and that parameter dimensionality is an incomplete indicator of the stability of hydrological model selection and prediction problems.
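The core measurement idea above, quantifying complexity via differences between a model's simulations under resampled input forcings, can be sketched with a toy one-bucket model standing in for SIXPAR/SAC-SMA. The bucket structure, parameter values, and the standard-deviation spread statistic are illustrative assumptions, not the paper's algorithms.

```python
import random

def bucket_model(precip, capacity=100.0, recession=0.1):
    """Toy storage-discharge model: fill a bucket, spill above capacity,
    release a linear-recession outflow each step; return total outflow."""
    storage, total_flow = 0.0, 0.0
    for p in precip:
        storage = min(storage + p, capacity)
        q = recession * storage
        storage -= q
        total_flow += q
    return total_flow

def output_spread(capacity, recession, n_realizations=50, steps=100, seed=0):
    """Complexity proxy: spread of model output across resampled forcings."""
    rng = random.Random(seed)
    outs = []
    for _ in range(n_realizations):
        forcing = [rng.expovariate(0.2) for _ in range(steps)]  # resampled input
        outs.append(bucket_model(forcing, capacity, recession))
    mean = sum(outs) / len(outs)
    return (sum((o - mean) ** 2 for o in outs) / len(outs)) ** 0.5
```

Evaluating `output_spread` over a grid of `capacity` and `recession` values mimics the paper's finding that complexity depends on where in parameter space a structure sits, not only on how many parameters it has.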
40 CFR 80.49 - Fuels to be used in augmenting the complex emission model through vehicle testing.
Code of Federal Regulations, 2014 CFR
2014-07-01
... complex emission model through vehicle testing. 80.49 Section 80.49 Protection of Environment... Reformulated Gasoline § 80.49 Fuels to be used in augmenting the complex emission model through vehicle testing... augmenting the complex emission model with a parameter not currently included in the complex emission model...
40 CFR 80.49 - Fuels to be used in augmenting the complex emission model through vehicle testing.
Code of Federal Regulations, 2013 CFR
2013-07-01
... complex emission model through vehicle testing. 80.49 Section 80.49 Protection of Environment... Reformulated Gasoline § 80.49 Fuels to be used in augmenting the complex emission model through vehicle testing... augmenting the complex emission model with a parameter not currently included in the complex emission model...
40 CFR 80.49 - Fuels to be used in augmenting the complex emission model through vehicle testing.
Code of Federal Regulations, 2011 CFR
2011-07-01
... complex emission model through vehicle testing. 80.49 Section 80.49 Protection of Environment... Reformulated Gasoline § 80.49 Fuels to be used in augmenting the complex emission model through vehicle testing... augmenting the complex emission model with a parameter not currently included in the complex emission model...
40 CFR 80.49 - Fuels to be used in augmenting the complex emission model through vehicle testing.
Code of Federal Regulations, 2012 CFR
2012-07-01
... complex emission model through vehicle testing. 80.49 Section 80.49 Protection of Environment... Reformulated Gasoline § 80.49 Fuels to be used in augmenting the complex emission model through vehicle testing... augmenting the complex emission model with a parameter not currently included in the complex emission model...
Clinical Complexity in Medicine: A Measurement Model of Task and Patient Complexity.
Islam, R; Weir, C; Del Fiol, G
2016-01-01
Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. The objective of this paper is to develop an integrated approach to understand and measure clinical complexity by incorporating both task and patient complexity components focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Three clinical infectious disease teams were observed, audio-recorded and transcribed. Each team included an infectious diseases expert, one infectious diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial set of coding processes and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. The measurement model for complexity encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare.
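The abstract reports inter-rater reliability via Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal implementation follows; the example category labels are invented, not the study's coding scheme.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical codes:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    if p_expected == 1.0:  # degenerate case: one identical label throughout
        return 1.0
    return (p_observed - p_expected) / (1.0 - p_expected)
```

Kappa is 1 for perfect agreement, 0 when agreement is no better than chance, and negative when raters agree less often than chance would predict.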
Wenchi Jin; Hong S. He; Frank R. Thompson
2016-01-01
Process-based forest ecosystem models vary from simple physiological, to complex physiological, to hybrid empirical-physiological models. Previous studies indicate that complex models provide the best prediction at plot scale with a temporal extent of less than 10 years; however, it is largely untested whether complex models outperform the other two types of models...
A Systematic Review of Conceptual Frameworks of Medical Complexity and New Model Development.
Zullig, Leah L; Whitson, Heather E; Hastings, Susan N; Beadles, Chris; Kravchenko, Julia; Akushevich, Igor; Maciejewski, Matthew L
2016-03-01
Patient complexity is often operationalized by counting multiple chronic conditions (MCC) without considering contextual factors that can affect patient risk for adverse outcomes. Our objective was to develop a conceptual model of complexity addressing gaps identified in a review of published conceptual models. We searched for English-language MEDLINE papers published between 1 January 2004 and 16 January 2014. Two reviewers independently evaluated abstracts and all authors contributed to the development of the conceptual model in an iterative process. From 1606 identified abstracts, six conceptual models were selected. One additional model was identified through reference review. Each model had strengths, but several constructs were not fully considered: 1) contextual factors; 2) dynamics of complexity; 3) patients' preferences; 4) acute health shocks; and 5) resilience. Our Cycle of Complexity model illustrates relationships between acute shocks and medical events, healthcare access and utilization, workload and capacity, and patient preferences in the context of interpersonal, organizational, and community factors. This model may inform studies on the etiology of and changes in complexity, the relationship between complexity and patient outcomes, and intervention development to improve modifiable elements of complex patients.
What do we gain from simplicity versus complexity in species distribution models?
Merow, Cory; Smith, Matthew J.; Edwards, Thomas C.; Guisan, Antoine; McMahon, Sean M.; Normand, Signe; Thuiller, Wilfried; Wuest, Rafael O.; Zimmermann, Niklaus E.; Elith, Jane
2014-01-01
Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence–environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy, etc.), and the variety of ways that such models can be implemented, permits substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence–environment relationships and the number of parameters used to describe them, and search for insights into whether additional complexity is informative or superfluous. By building ‘under fit’ models, having insufficient flexibility to describe observed occurrence–environment relationships, we risk misunderstanding the factors shaping species distributions. By building ‘over fit’ models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on study objective, attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing under fitting with over fitting and consequently how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinions that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM building approaches best advances our knowledge of current and future species ranges.
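The underfit/overfit trade-off described above can be made concrete with a toy occurrence-environment example. Everything here is invented: a hump-shaped "suitability" curve, Gaussian noise, and k-nearest-neighbor smoothers standing in for SDM methods of varying flexibility (k = all points is the underfit global mean; k = 1 memorizes noise).

```python
import random

rng = random.Random(42)

def true_response(env):
    """Invented hump-shaped suitability over an environmental gradient."""
    return 1.0 - 4.0 * (env - 0.5) ** 2

# Synthetic noisy occurrence-environment observations
train = [(i / 20, true_response(i / 20) + rng.gauss(0, 0.1)) for i in range(21)]
test = [(i / 17, true_response(i / 17) + rng.gauss(0, 0.1)) for i in range(18)]

def knn_predict(fit, e, k):
    """Predict at environment e as the mean of the k nearest fitted points."""
    nearest = sorted(fit, key=lambda p: abs(p[0] - e))[:k]
    return sum(y for _, y in nearest) / k

def mse(data, fit, k):
    return sum((knn_predict(fit, e, k) - y) ** 2 for e, y in data) / len(data)

underfit_err = mse(test, train, len(train))  # flat model ignores environment
overfit_err = mse(train, train, 1)           # 1-NN reproduces training noise
balanced_err = mse(test, train, 5)           # moderate flexibility
```

The overfit model scores perfectly on its own training data yet carries the noise into predictions, while the underfit model misses the hump entirely; the moderate model generalizes best, mirroring the argument for constraining complexity to the signal the data can support.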
Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W
2015-01-01
Chemical synapses are composed of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating the complex nonlinear dynamics represented in the original mechanistic model, and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.
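The Volterra functional power series mentioned above expresses a system's output as a polynomial expansion over its input history. The sketch below implements a discrete second-order version (the paper goes to third order and estimates kernels from the mechanistic model's responses; the kernels here are free parameters supplied by the caller).

```python
def volterra_response(x, k0, k1, k2):
    """Discrete second-order Volterra series:
    y[n] = k0 + sum_i k1[i]*x[n-i] + sum_{i,j} k2[i][j]*x[n-i]*x[n-j],
    with memory length M = len(k1)."""
    memory = len(k1)
    y = []
    for n in range(len(x)):
        acc = k0
        for i in range(memory):
            if n - i < 0:
                continue
            acc += k1[i] * x[n - i]
            for j in range(memory):
                if n - j >= 0:
                    acc += k2[i][j] * x[n - i] * x[n - j]
        y.append(acc)
    return y
```

The second-order kernel `k2` is what lets the expansion capture interactions between input events at different lags, the nonlinear dynamics that simple spike or exponential representations miss.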
A Novel BA Complex Network Model on Color Template Matching
Han, Risheng; Yue, Guangxue; Ding, Hui
2014-01-01
A novel BA complex network model of color space is proposed based on two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of a template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have much larger effects than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based matching and SAD-based matching. Experiments show that the performance of color template matching can be improved based on the proposed algorithm. To the best of our knowledge, this is the first study of how to model the color space of images using a proper complex network model and apply that model to template matching.
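The two BA rules named above, growth and preferential attachment, admit a compact sketch: each new node links to an existing node with probability proportional to its degree, which can be sampled by picking a uniform endpoint of a uniform existing edge. This is the generic Barabási-Albert mechanism, not the paper's color-space construction; the seed graph and sizes are arbitrary.

```python
import random

def ba_network(n_nodes, seed=0):
    """Grow a scale-free network: one new edge per new node, attached
    preferentially. Returns the edge list and each node's degree."""
    rng = random.Random(seed)
    edges = [(0, 1)]                 # seed graph: a single edge
    degree = {0: 1, 1: 1}
    for new_node in range(2, n_nodes):
        # A uniform endpoint of a uniform edge appears with probability
        # proportional to its degree -- preferential attachment.
        target = rng.choice(rng.choice(edges))
        edges.append((new_node, target))
        degree[new_node] = 1
        degree[target] += 1
    return edges, degree
```

Repeated growth concentrates links on early, high-degree nodes, producing the heavy-tailed degree distribution that the authors exploit to rank "important" color pixels.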
Application of 3D Laser Scanning Technology in Complex Rock Foundation Design
NASA Astrophysics Data System (ADS)
Junjie, Ma; Dan, Lu; Zhilong, Liu
2017-12-01
Taking the complex landform of the Tanxi Mountain Landscape Bridge as an example, the application of 3D laser scanning technology to the mapping of complex rock foundations is studied in this paper. A set of 3D laser scanning techniques is developed and several key engineering problems are solved. The first is 3D laser scanning of complex landforms: 3D laser scanning is used to obtain a complete 3D point cloud data model of the complex landform, and the detailed, accurate survey results reduce both measurement time and the need for supplementary surveys. The second is 3D collaborative modeling of the complex landform: a 3D model of the complex landform is established from the 3D point cloud data model, the superstructure foundation model is introduced for 3D collaborative design, the optimal design plan is selected, and construction progress is accelerated. The last is finite-element analysis of the complex landform foundation: the 3D model of the complex landform is imported into ANSYS to build a finite-element model for calculating the anti-slide stability of the rock, providing a basis for the foundation design and construction.
NASA Astrophysics Data System (ADS)
Lute, A. C.; Luce, Charles H.
2017-11-01
The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests, with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low- to moderate-complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
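The space-for-time idea, fitting a snow metric against climate across sites and then transferring the fit to sites with different climate, can be sketched with a one-predictor regression. All site values below are synthetic placeholders, not SNOTEL data, and the real models used two predictors and non-random cross-validation.

```python
def ols_fit(xs, ys):
    """Ordinary least squares for one predictor: returns (slope, intercept)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# (mean winter temperature in degrees C, April 1 SWE in mm) -- invented sites
sites = [(-8, 900), (-6, 760), (-4, 610), (-2, 430), (0, 280), (2, 140)]
warm = [s for s in sites if s[0] >= -4]          # "train" on warm sites only
slope, intercept = ols_fit([t for t, _ in warm], [s for _, s in warm])

def predict_swe(temp_c):
    return intercept + slope * temp_c

# Transferring to the held-out cold sites requires extrapolating in
# temperature -- the situation where such models transfer worst.
extrapolation_errors = [abs(predict_swe(t) - swe) for t, swe in sites if t < -4]
```

Splitting sites by climate rather than at random is what makes this a genuine transferability test: a random split lets the model interpolate, while the climate split forces the extrapolation that ungauged-climate prediction actually demands.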
Teacher Modeling Using Complex Informational Texts
ERIC Educational Resources Information Center
Fisher, Douglas; Frey, Nancy
2015-01-01
Modeling in complex texts requires that teachers analyze the text for factors of qualitative complexity and then design lessons that introduce students to that complexity. In addition, teachers can model the disciplinary nature of content-area texts as well as word-solving and comprehension strategies. Included is a planning guide for think-alouds.
Male-Mediated Gene Flow in Patrilocal Primates
Schubert, Grit; Stoneking, Colin J.; Arandjelovic, Mimi; Boesch, Christophe; Eckhardt, Nadin; Hohmann, Gottfried; Langergraber, Kevin; Lukas, Dieter; Vigilant, Linda
2011-01-01
Background Many group-living species display strong sex biases in dispersal tendencies. However, gene flow mediated by the apparently philopatric sex may still occur and potentially alter population structure. In our closest living evolutionary relatives, dispersal of adult males appears to be precluded in chimpanzees by high levels of territoriality between males of different groups, and has been observed only once in bonobos. Still, male-mediated gene flow might occur through rare events such as extra-group matings leading to extra-group paternity (EGP) and female secondary dispersal with offspring, but the extent of this gene flow has not yet been assessed. Methodology/Principal Findings Using autosomal microsatellite genotyping of samples from multiple groups of wild western chimpanzees (Pan troglodytes verus) and bonobos (Pan paniscus), we found low genetic differentiation among groups for both males and females. Characterization of Y-chromosome microsatellites revealed levels of genetic differentiation between groups in bonobos almost as high as those reported previously in eastern chimpanzees, but lower levels of differentiation in western chimpanzees. By using simulations to evaluate the patterns of Y-chromosomal variation expected under realistic assumptions of group size, mutation rate and reproductive skew, we demonstrate that the observed presence of multiple and highly divergent Y-haplotypes within western chimpanzee and bonobo groups is best explained by successful male-mediated gene flow. Conclusions/Significance The similarity of inferred rates of male-mediated gene flow and published rates of EGP in western chimpanzees suggests this is the most likely mechanism of male-mediated gene flow in this subspecies. In bonobos, more data are needed to refine the estimated rate of gene flow. Our findings suggest that dispersal patterns in these closely related species, and particularly for the chimpanzee subspecies, are more variable than previously appreciated.
This is consistent with growing recognition of extensive behavioral variation in chimpanzees and bonobos. PMID:21747938
Cohen, Mitchell D; Sisco, Maureen; Baker, Kathy; Bowser, Darlene; Chen, Lung-Chi; Schlesinger, Richard B
2003-01-10
A health hazard to welders is development of lung cancer. It is believed that this is likely due, in part, to the presence in welding fumes of several hexavalent chromium (Cr[VI]) species, whose solubility depends primarily on which process (i.e., manual metal arc versus metal-inert gas) is used. However, inhalation of Cr alone is uncommon in this setting. Thus, an examination of potential contributions from other coinhalants in creating or enhancing conditions whereby inhaled fume-associated Cr (primarily the insoluble forms) may initiate cancer is critical to increasing our understanding and preventing this particular occupational disease. One major chemical species formed and released during welding is ozone (O3). Though implications of adverse pulmonary effects from individual exposure to Cr or O3 have been investigated, those from simultaneous exposure are unclear. To begin to address whether the carcinogenic potential of insoluble Cr[VI] agents might be enhanced in hosts inhaling mixtures of Cr and O3 versus Cr alone, analyses of total lung Cr burden, Cr retention in lung epithelium and interstitium, and potential shifts in lung cell distribution of Cr from the cytoplasm to nuclei were undertaken in F-344 rats exposed nose-only (5 h/d, 5 d/wk for up to 48 wk) to an extrapolated occupationally relevant level of Cr (360 micrograms Cr/m3 as calcium chromate) alone and in combination with 0.3 ppm O3. Overall, there was only a nominal effect from O3 on Cr retention or on distribution of Cr particles among extracellular sites and within lung cells. However, there were O3-related effects upon mechanisms for clearing the Cr from the deep lung, specifically at the levels of particle uptake and postphagocytic/endocytic processing by macrophages. This O3 exposure-related shift in normal pulmonary clearance might potentially increase the health risk in workers exposed to other insoluble or poorly soluble carcinogenic Cr compounds.
Bryson-Morrison, Nicola; Matsuzawa, Tetsuro; Humle, Tatyana
2016-12-01
Many primate populations occur outside protected areas in fragmented anthropogenic landscapes. Empirical data on the ecological characteristics that define an anthropogenic landscape are urgently required if conservation initiatives in such environments are to succeed. The main objective of our study was to determine the composition and availability of chimpanzee (Pan troglodytes verus) food resources across fine spatial scales in the anthropogenic landscape of Bossou, Guinea, West Africa. We examined food resources in all habitat types available in the chimpanzees' core area. We surveyed resource composition, structure and heterogeneity (20 m × 20 m quadrats, N = 54) and assessed temporal availability of food from phenology trails (total distance 5951 m; 1073 individual trees) over 1 year (2012-2013). Over half of Bossou consists of regenerating forest and is highly diverse in terms of chimpanzee food species; large fruit bearing trees are rare and confined to primary and riverine forest. Moraceae (mulberries and figs) was the dominant family, trees of which produce drupaceous fruits favored by chimpanzees. The oil palm occurs at high densities throughout and is the only species found in all habitat types except primary forest. Our data suggest that the high densities of oil palm and fig trees, along with abundant terrestrial herbaceous vegetation and cultivars, are able to provide the chimpanzees with widely available resources, compensating for the scarcity of large fruit trees. A significant difference was found between habitat types in stem density (stems/ha) and basal area (m²/ha) of chimpanzee food species. Secondary, young secondary, and primary forest emerged as the most important habitat types for availability of food tree species. Our study emphasizes the importance of examining ecological characteristics of an anthropogenic landscape as each available habitat type is unlikely to be equally important in terms of spatial and temporal availability of resources.
Am. J. Primatol. 78:1237-1249, 2016. © 2016 Wiley Periodicals, Inc.
Osterås, Olav; Whist, Anne Cathrine; Sølverød, Liv
2008-02-01
Milk culture results at approximately 6 d post-calving were assessed in a 2-year retrospective single-cohort study in 178 Norwegian herds. A combined teat dipping and selective antibiotic therapy trial was performed in these herds, where cows with composite milk somatic cell count (CMSCC) >100,000 cells/ml before drying-off (geometric mean of the last three CMSCC test-days) and isolation of Staphylococcus aureus or Streptococcus dysgalactiae were selected for either short-acting lactation antibiotic treatment or long-acting dry cow antibiotic treatment. Milk culture results at approximately 6 d post-calving were available from 437 cows treated before drying-off and 3061 non-treated cows, and separate multivariable logistic regression models were run for these two groups. Risk factors associated with isolation of Staph. aureus 6 d post-calving for non-treated cows were CMSCC >400,000 cells/ml before drying-off v. <400,000 cells/ml (odds ratio (OR) = 2.4) and clinical mastitis (CM) in the previous lactation v. non-treated (OR = 1.5). Risk factors associated with Staph. aureus 6 d post-calving for treated cows were CMSCC >200,000 cells/ml before drying-off v. <200,000 cells/ml (OR = 2.3) and CM in the previous lactation v. non-treated (OR = 1.7). For non-treated cows it was 1.7 times more likely to isolate Str. dysgalactiae 6 d post-calving if the CMSCC was >50,000 cells/ml compared with <50,000 cells/ml. For treated cows it was 3.7-5.8 times more likely to isolate Str. dysgalactiae 6 d post-calving if given short-acting lactation formula at quarter level compared with long-acting dry cow formula used at cow level. Regular use of iodine post-milking teat disinfection (PMTD) did not influence the isolation of Staph. aureus 6 d post-calving, but it was less likely to isolate Str. dysgalactiae 6 d post-calving if iodine PMTD was used regularly rather than irregularly. The external teat sealant had no effect on either of the two bacteria.
This study indicates that the CMSCC limit for sampling cows before drying-off can be reduced to 50,000 cells/ml in herds with a Str. dysgalactiae problem. Iodine PMTD should also be recommended in these herds. Cows with a CMSCC > 400,000 cells/ml prior to drying-off should receive long-acting dry cow formula irrespective of the milk result.
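The odds ratios reported above come from multivariable logistic regression; the underlying quantity can be sketched from a simple 2×2 table with a Wald confidence interval. The counts below are hypothetical, not the study's data:

```python
import math

def odds_ratio(exposed_pos, exposed_neg, unexposed_pos, unexposed_neg):
    """Odds ratio from a 2x2 table, with a 95% Wald confidence interval.
    Arguments are cell counts: (exposed, outcome+), (exposed, outcome-), etc."""
    or_ = (exposed_pos * unexposed_neg) / (exposed_neg * unexposed_pos)
    se = math.sqrt(1 / exposed_pos + 1 / exposed_neg
                   + 1 / unexposed_pos + 1 / unexposed_neg)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts: Staph. aureus isolation by high vs low pre-dry-off CMSCC.
or_value, ci = odds_ratio(30, 70, 15, 85)
```

A multivariable model adjusts such ratios for the other covariates, but the interpretation of an OR of, say, 2.4 is the same: the odds of isolation are 2.4 times higher in the exposed group.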
Dong, Yadong; Sun, Yongqi; Qin, Chao
2018-01-01
The existing protein complex detection methods can be broadly divided into two categories: unsupervised and supervised learning methods. Most of the unsupervised learning methods assume that protein complexes are in dense regions of protein-protein interaction (PPI) networks even though many true complexes are not dense subgraphs. Supervised learning methods utilize the informative properties of known complexes; they often extract features from existing complexes and then use the features to train a classification model. The trained model is used to guide the search process for new complexes. However, insufficient extracted features, noise in the PPI data and the incompleteness of complex data make the classification model imprecise. Consequently, the classification model is not sufficient for guiding the detection of complexes. Therefore, we propose a new robust score function that combines the classification model with local structural information. Based on the score function, we provide a search method that works both forwards and backwards. The results from experiments on six benchmark PPI datasets and three protein complex datasets show that our approach can achieve better performance compared with the state-of-the-art supervised, semi-supervised and unsupervised methods for protein complex detection, occasionally significantly outperforming such methods.
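The combined score and forward/backward search described above can be sketched on a toy PPI graph. The network, the score weights, and the stand-in classifier below are invented for illustration and are not the paper's trained model:

```python
import itertools

# Toy PPI network as an adjacency dict; edges are undirected.
PPI = {
    "a": {"b", "c", "d"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d"},
    "d": {"a", "b", "c", "e"},
    "e": {"d"},
}

def density(nodes):
    """Local structural term: fraction of possible edges present in the set."""
    nodes = list(nodes)
    if len(nodes) < 2:
        return 0.0
    pairs = list(itertools.combinations(nodes, 2))
    present = sum(1 for u, v in pairs if v in PPI[u])
    return present / len(pairs)

def classifier_score(nodes):
    """Stand-in for the trained classification model; here it simply
    rewards candidate complexes of a plausible size (3-5 proteins)."""
    return 1.0 if 3 <= len(nodes) <= 5 else 0.5

def combined_score(nodes, alpha=0.5):
    """Robust score: classifier output blended with local structure."""
    return alpha * classifier_score(nodes) + (1 - alpha) * density(nodes)

def grow(seed):
    """Forward/backward greedy search: add or drop one protein per step
    while the combined score strictly improves."""
    cluster = {seed}
    improved = True
    while improved:
        improved = False
        neighbors = set().union(*(PPI[n] for n in cluster)) - cluster
        candidates = [cluster | {n} for n in neighbors]
        candidates += [cluster - {n} for n in cluster if len(cluster) > 1]
        best = max(candidates, key=combined_score, default=cluster)
        if combined_score(best) > combined_score(cluster):
            cluster, improved = best, True
    return cluster

complex_found = grow("a")
```

On this toy graph the search settles on a dense triangle and excludes the peripheral node "e", illustrating how the structural term keeps the classifier from absorbing loosely attached proteins.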
NASA Astrophysics Data System (ADS)
Faucci, Maria Teresa; Melani, Fabrizio; Mura, Paola
2002-06-01
Molecular modeling was used to investigate factors influencing complex formation between cyclodextrins and guest molecules and predict their stability through a theoretical model based on the search for a correlation between experimental stability constants (Ks) and some theoretical parameters describing complexation (docking energy, host-guest contact surfaces, intermolecular interaction fields) calculated from complex structures at a minimum conformational energy, obtained through stochastic methods based on molecular dynamic simulations. Naproxen, ibuprofen, ketoprofen and ibuproxam were used as model drug molecules. Multiple Regression Analysis allowed identification of the significant factors for the complex stability. A mathematical model (r = 0.897) related log Ks with complex docking energy and lipophilic molecular fields of cyclodextrin and drug.
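The regression step, correlating experimental log Ks with computed descriptors, can be sketched with ordinary least squares. The descriptor values below are hypothetical placeholders, not the paper's docking results:

```python
import numpy as np

# Hypothetical descriptors for four drug/cyclodextrin complexes:
# docking energy (kcal/mol) and a lipophilic-field term; log Ks observed.
E_dock = np.array([-8.1, -7.2, -6.5, -5.9])
lipo = np.array([2.3, 1.9, 1.4, 1.1])
logKs = np.array([3.1, 2.6, 2.1, 1.8])

# Multiple linear regression: logKs ~ b0 + b1*E_dock + b2*lipo
X = np.column_stack([np.ones_like(E_dock), E_dock, lipo])
coef, *_ = np.linalg.lstsq(X, logKs, rcond=None)
pred = X @ coef
r = np.corrcoef(pred, logKs)[0, 1]  # correlation between fitted and observed
```

The paper's r = 0.897 is the analogue of `r` here, computed over its real naproxen/ibuprofen/ketoprofen/ibuproxam dataset rather than these invented numbers.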
Research on complex 3D tree modeling based on L-system
NASA Astrophysics Data System (ADS)
Gang, Chen; Bin, Chen; Yuming, Liu; Hui, Li
2018-03-01
An L-system, as a fractal iterative system, can simulate complex geometric patterns. Based on field observation data of trees and the knowledge of forestry experts, this paper extracted modeling constraint rules and obtained an L-system rule set. Using self-developed L-system modeling software, the rule set was parsed to generate complex 3D tree models. The results showed that the geometric modeling method based on L-systems can describe the morphological structure of complex trees and generate 3D tree models.
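The core L-system rewriting that such modeling software parses can be sketched in a few lines. The bracketed rule below is a textbook branching-plant rule, not the forestry-derived rule set the paper extracted:

```python
def expand(axiom, rules, iterations):
    """Iteratively rewrite an L-system string: each symbol is replaced by its
    production rule, or kept as-is if it has none."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Classic bracketed L-system for a 2D branching plant:
# F = grow forward, [ and ] = push/pop turtle state, + and - = turn.
rules = {"F": "F[+F]F[-F]F"}
result = expand("F", rules, 2)
```

A renderer then interprets the final string with turtle graphics (in 3D, with additional roll/pitch symbols) to produce the tree geometry; the string's bracket nesting encodes the branching structure.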
NASA Astrophysics Data System (ADS)
Takemura, Kazuhiro; Guo, Hao; Sakuraba, Shun; Matubayasi, Nobuyuki; Kitao, Akio
2012-12-01
We propose a method to evaluate binding free energy differences among distinct protein-protein complex model structures through all-atom molecular dynamics simulations in explicit water using the solution theory in the energy representation. Complex model structures are generated from a pair of monomeric structures using the rigid-body docking program ZDOCK. After structure refinement by side chain optimization and all-atom molecular dynamics simulations in explicit water, complex models are evaluated based on the sum of their conformational and solvation free energies, the latter calculated from the energy distribution functions obtained from relatively short molecular dynamics simulations of the complex in water and of pure water based on the solution theory in the energy representation. We examined protein-protein complex model structures of two protein-protein complex systems, bovine trypsin/CMTI-1 squash inhibitor (PDB ID: 1PPE) and RNase SA/barstar (PDB ID: 1AY7), for which both complex and monomer structures were determined experimentally. For each system, we calculated the energies for the crystal complex structure and twelve generated model structures including the model most similar to the crystal structure and very different from it. In both systems, the sum of the conformational and solvation free energies tended to be lower for the structure similar to the crystal. We concluded that our energy calculation method is useful for selecting low energy complex models similar to the crystal structure from among a set of generated models.
NASA Astrophysics Data System (ADS)
Henriot, Abel; Blavoux, Bernard; Travi, Yves; Lachassagne, Patrick; Beon, Olivier; Dewandel, Benoit; Ladouche, Bernard
2013-04-01
The Evian Natural Mineral Water (NMW) aquifer is a highly heterogeneous complex of Quaternary glacial deposits composed of three main units, from bottom to top: - The "Inferior Complex", mainly composed of basal and impermeable till lying on the Alpine rocks. It outcrops only at the higher altitudes but is known at depth through drill holes. - The "Gavot Plateau Complex", an interstratified complex of mainly basal and lateral till up to 400 m thick. It outcrops at altitudes between approximately 850 and 1200 m a.m.s.l. over a 30 km² area and is the main known recharge area of the hydromineral system. - The "Terminal Complex", from which the Evian NMW emerges at 410 m a.m.s.l. It is composed of sand and gravel Kame terraces that allow water to flow from the deep permeable layers of the "Gavot Plateau Complex" to the "Terminal Complex". A thick and impermeable terminal till caps and seals the system, so the aquifer is confined in its downstream area. Because of the heterogeneity and complexity of this hydrosystem, distributed modeling tools are difficult to implement at the whole-system scale: important hypotheses would have to be made about geometry, hydraulic properties and boundary conditions, for example, and extrapolation would no doubt lead to unacceptable errors. Consequently, a modeling strategy is being developed that also improves the conceptual model of the hydrosystem. Lumped models, mainly based on tritium time series, allow the whole hydrosystem to be modeled by combining in series an exponential model (superficial aquifers of the "Gavot Plateau Complex"), a dispersive model (Gavot Plateau interstratified complex) and a piston-flow model (sand and gravel of the Kame terraces), with mean transit times of 8, 60 and 2.5 years, respectively. These models provide insight into the governing parameters of the whole mineral aquifer. They help improve the current conceptual model and are to be refined with other environmental tracers such as CFCs and SF6.
A deterministic approach (distributed model; flow and transport) is performed at the scale of the Terminal Complex. The geometry of the system is quite well known from drill holes, and the aquifer properties from processing of hydraulic head data and pumping test interpretation. A multidisciplinary approach (hydrodynamics, hydrochemistry, geology, isotopes) in the recharge area (Gavot Plateau Complex) aims to better constrain the upstream boundary of the distributed model. Moreover, perfect-tracer modeling strongly constrains the fitting of this distributed model. The result is a high-resolution conceptual model leading to a future operational management tool for the aquifer.
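The lumped models in series can be sketched numerically: an exponential transit time distribution followed by a piston-flow delay, using the mean transit times quoted above for two of the units (the time grid and discretization are illustrative assumptions, and the dispersive unit is omitted for brevity):

```python
import numpy as np

def exponential_ttd(tau, t):
    """Exponential (well-mixed reservoir) transit time distribution."""
    return np.exp(-t / tau) / tau

def piston_shift(g, delay, dt):
    """Piston-flow unit in series: shifts the distribution by a fixed delay."""
    n = int(round(delay / dt))
    return np.concatenate([np.zeros(n), g[:len(g) - n]])

dt = 0.1                        # time step, years
t = np.arange(0, 200, dt)
g = exponential_ttd(8.0, t)     # e.g. superficial aquifers, tau = 8 yr
g = piston_shift(g, 2.5, dt)    # Kame terrace piston-flow leg, 2.5 yr

mass = float(np.sum(g) * dt)        # should integrate to ~1
mean_tt = float(np.sum(t * g) * dt) # mean transit time of the series, ~10.5 yr
```

Convolving such a composite distribution with a measured tritium input series yields the modeled output chemograph that is fitted to the spring record.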
NASA Astrophysics Data System (ADS)
Nowak, W.; Schöniger, A.; Wöhling, T.; Illman, W. A.
2016-12-01
Model-based decision support requires justifiable models with good predictive capabilities. This, in turn, calls for a fine adjustment between predictive accuracy (small systematic model bias that can be achieved with rather complex models), and predictive precision (small predictive uncertainties that can be achieved with simpler models with fewer parameters). The implied complexity/simplicity trade-off depends on the availability of informative data for calibration. If not available, additional data collection can be planned through optimal experimental design. We present a model justifiability analysis that can compare models of vastly different complexity. It rests on Bayesian model averaging (BMA) to investigate the complexity/performance trade-off dependent on data availability. Then, we disentangle the complexity component from the performance component. We achieve this by replacing actually observed data by realizations of synthetic data predicted by the models. This results in a "model confusion matrix". Based on this matrix, the modeler can identify the maximum model complexity that can be justified by the available (or planned) amount and type of data. As a side product, the matrix quantifies model (dis-)similarity. We apply this analysis to aquifer characterization via hydraulic tomography, comparing four models with a vastly different number of parameters (from a homogeneous model to geostatistical random fields). As a testing scenario, we consider hydraulic tomography data. Using subsets of these data, we determine model justifiability as a function of data set size. The test case shows that geostatistical parameterization requires a substantial amount of hydraulic tomography data to be justified, while a zonation-based model can be justified with more limited data set sizes. The actual model performance (as opposed to model justifiability), however, depends strongly on the quality of prior geological information.
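The model confusion matrix idea, scoring every model against synthetic data generated by every model, can be sketched with two toy models of different complexity. The models, noise level, and Gaussian likelihood form are illustrative assumptions, not the hydraulic tomography setup:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
sigma = 0.1  # assumed observation noise

def predict(model, x):
    """Two candidate models of different complexity (illustrative only)."""
    return np.zeros_like(x) if model == "constant" else 0.5 * x

def log_like(data, model):
    r = data - predict(model, x)
    return -0.5 * np.sum((r / sigma) ** 2)

models = ["constant", "linear"]
confusion = np.zeros((2, 2))
for i, gen in enumerate(models):
    for _ in range(20):
        # Replace real observations by synthetic data from model `gen`.
        synthetic = predict(gen, x) + rng.normal(0, sigma, x.size)
        ll = np.array([log_like(synthetic, m) for m in models])
        w = np.exp(ll - ll.max())
        w /= w.sum()            # BMA weights under equal priors
        confusion[i] += w / 20  # row i: average weights given generator i
```

A strongly diagonal matrix means the data are informative enough to tell the models apart, i.e. the more complex model is justifiable; off-diagonal mass signals model similarity or insufficient data.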
Complex systems as lenses on learning and teaching
NASA Astrophysics Data System (ADS)
Hurford, Andrew C.
From metaphors to mathematized models, the complexity sciences are changing the ways disciplines view their worlds, and ideas borrowed from complexity are increasingly being used to structure conversations and guide research on teaching and learning. The purpose of this corpus of research is to further those conversations and to extend complex systems ideas, theories, and modeling to curricula and to research on learning and teaching. A review of the literatures of learning and of complexity science and a discussion of the intersections between those disciplines are provided. The work reported represents an evolving model of learning qua complex system and that evolution is the result of iterative cycles of design research. One of the signatures of complex systems is the presence of scale invariance and this line of research furnishes empirical evidence of scale invariant behaviors in the activity of learners engaged in participatory simulations. The offered discussion of possible causes for these behaviors and chaotic phase transitions in human learning favors real-time optimization of decision-making as the means for producing such behaviors. Beyond theoretical development and modeling, this work includes the development of teaching activities intended to introduce pre-service mathematics and science teachers to complex systems. While some of the learning goals for this activity focused on the introduction of complex systems as a content area, we also used complex systems to frame perspectives on learning. Results of scoring rubrics and interview responses from students illustrate attributes of the proposed model of complex systems learning and also how these pre-service teachers made sense of the ideas. Correlations between established theories of learning and a complex adaptive systems model of learning are established and made explicit, and a means for using complex systems ideas for designing instruction is offered. 
It is a fundamental assumption of this research and researcher that complex systems ideas and understandings can be appropriated from more complexity-developed disciplines and put to use modeling and building increasingly productive understandings of learning and teaching.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, Philip LaRoche
At the end of his life, Stephen Jay Kline, longtime professor of mechanical engineering at Stanford University, completed a book on how to address complex systems. The title of the book is 'Conceptual Foundations of Multi-Disciplinary Thinking' (1995), but the topic of the book is systems. Kline first establishes certain limits that are characteristic of our conscious minds. Kline then establishes a complexity measure for systems and uses that complexity measure to develop a hierarchy of systems. Kline then argues that our minds, due to their characteristic limitations, are unable to model the complex systems in that hierarchy. Computers are of no help to us here. Our attempts at modeling these complex systems are based on the way we successfully model some simple systems, in particular, 'inert, naturally-occurring' objects and processes, such as what is the focus of physics. But complex systems overwhelm such attempts. As a result, the best we can do in working with these complex systems is to use a heuristic, what Kline calls the 'Guideline for Complex Systems.' Kline documents the problems that have developed due to 'oversimple' system models and from the inappropriate application of a system model from one domain to another. One prominent such problem is the Procrustean attempt to make the disciplines that deal with complex systems be 'physics-like.' Physics deals with simple systems, not complex ones, using Kline's complexity measure. The models that physics has developed are inappropriate for complex systems. Kline documents a number of the wasteful and dangerous fallacies of this type.
On the dangers of model complexity without ecological justification in species distribution modeling
David M. Bell; Daniel R. Schlaepfer
2016-01-01
Although biogeographic patterns are the product of complex ecological processes, the increasing complexity of correlative species distribution models (SDMs) is not always motivated by ecological theory, but by model fit. The validity of model projections, such as shifts in a species' climatic niche, becomes questionable particularly during extrapolations, such as for...
NASA Astrophysics Data System (ADS)
Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.
2014-12-01
Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through the uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty represents for the decision process, it should not be avoided, or the value and science behind the models will be undermined. These two issues, i.e., the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of the model in the face of increasing complexity would be valuable because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of model development, so as to quantify not only the uncertainty introduced by the addition of new environmental components, but also the effect that these new components have on existing components (interactions, non-linear responses).
Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping points) in the face of environmental and anthropogenic change (Perz, Muñoz-Carpena, Kiker and Holt, 2013) and, through Monte Carlo mapping of potential management activities over the most important factors or processes, to steer the system towards behavioral (desirable) outcomes (Chu-Agor, Muñoz-Carpena et al., 2012).
Everyday value conflicts and integrative complexity of thought.
Myyry, Liisa
2002-12-01
This study examined the value pluralism model in everyday value conflicts, and the effect of issue context on complexity of thought. Following the cognitive manager model, we hypothesized that respondents would attain a higher level of integrative complexity on personal issues than on professional and general issues. We also explored the relations of integrative complexity to value priorities, measured by the Schwartz Value Survey, and to emotional empathy. The value pluralism model was not supported by the data, collected from 126 university students of social science, business and technology. The cognitive manager model was partially confirmed by data from females but not from males. Concerning value priorities, more complex respondents had higher regard for self-transcendence values, and less complex respondents for self-enhancement values. Emotional empathy was also significantly related to complexity score.
Bessa, Joana; Frazão-Moreira, Amélia; Biro, Dora; Hockings, Kimberley Jane
2018-01-01
Background West African landscapes are largely characterised by complex agroforest mosaics. Although the West African forests are considered a nonhuman primate hotspot, knowledge on the distribution of many species is often lacking and out-of-date. Considering the fast-changing nature of the landscapes in this region, up-to-date information on primate occurrence is urgently needed, particularly for taxa such as colobines, which may be more sensitive to habitat modification than others. Understanding wildlife occurrence and mechanisms of persistence in these human-dominated landscapes is fundamental for developing effective conservation strategies. Methods In this paper, we aim to review current knowledge on the distribution of three threatened primates in Guinea-Bissau and neighbouring regions, highlighting research gaps and identifying priority research and conservation action. We conducted a systematic literature review of primate studies from 1976 to 2016 in Guinea-Bissau, southern Senegal and western Guinea (Boké Region). We mapped historical observation records of chimpanzee (Pan troglodytes verus), Temminck's red colobus (Piliocolobus badius temminckii) and king colobus (Colobus polykomos), including our preliminary survey data from Dulombi, a newly established National Park (NP) in Guinea-Bissau. Results We found 151 documents, including 87 journal articles, that contained field data on primates in this region. In Guinea-Bissau, nearly all studies focused south of the Corubal River, mainly in Cantanhez, Cufada and Boé NPs. In Senegal, most of the data came from Fongoli and Niokolo-Koba NP. In Boké (Guinea) studies are few, with the most recent data coming from Sangarédi. In Dulombi NP we recorded eight primate species, including chimpanzees, red colobus and king colobus. Across the selected region, chimpanzees, red colobus and king colobus were reported in eleven, twelve and seven protected areas, respectively.
Discussion Our study demonstrates large geographical research gaps particularly for the two colobines. For the first time after more than two decades, we confirm the presence of red colobus and king colobus north of the Corubal River in Guinea-Bissau. The little information available from large parts of the red colobus range raises questions regarding levels of population fragmentation in this species, particularly in Casamance and across northern Guinea-Bissau. There are still no records demonstrating the occurrence of king colobus in Senegal, and the presence of a viable population in north-eastern Guinea-Bissau remains uncertain. While the occurrence of chimpanzees in Guinea-Bissau and Senegal is well documented, data from Boké (Guinea) are sparse and out-of-date. Our approach—the mapping of data gathered from a systematic literature review—allows us to provide recommendations for selecting future geographical survey locations and planning further research and conservation strategies in this region. PMID:29844988
Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.
2014-01-01
Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
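For the exponential-TTD case, the steady-state relation between a decaying tracer and mean age has a closed form that shows how such analytical models are calibrated to tracer data. This sketch is generic (a textbook lumped-parameter result), not the study's SDM:

```python
import math

LAMBDA_H3 = math.log(2) / 12.32  # tritium decay constant (half-life 12.32 yr)

def exp_model_output(c_in, tau, lam=LAMBDA_H3):
    """Steady output concentration for an exponential TTD with mean age tau
    and constant input c_in of a tracer decaying at rate lam:
    C_out = c_in * integral (1/tau) e^(-t/tau) e^(-lam t) dt
          = c_in / (1 + lam * tau)."""
    return c_in / (1.0 + lam * tau)

def mean_age_from_tracer(c_in, c_out, lam=LAMBDA_H3):
    """Invert the relation above to estimate mean age from a measured tracer."""
    return (c_in / c_out - 1.0) / lam

# Hypothetical recharge and well concentrations (tritium units):
tau_est = mean_age_from_tracer(c_in=10.0, c_out=5.0)
```

Real calibrations use time-varying input histories and numerical convolution rather than this constant-input shortcut, which is why tracers with different input histories add information, as the abstract notes.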
Challenges in Developing Models Describing Complex Soil Systems
NASA Astrophysics Data System (ADS)
Simunek, J.; Jacques, D.
2014-12-01
Quantitative mechanistic models that consider basic physical, mechanical, chemical, and biological processes have the potential to be powerful tools for integrating our understanding of complex soil systems, and the soil science community has often called for models that include a large number of these diverse processes. However, when such models have been developed, the response from the community has not always been enthusiastic, especially once it became clear that these models are necessarily highly complex: they require a large number of parameters, not all of which can be easily (or at all) measured or identified and which are often associated with large uncertainties, and they demand from their users deep knowledge of most or all of the implemented physical, mechanical, chemical and biological processes. The real, or perceived, complexity of these models then discourages users from applying them even to relatively simple problems for which they would be perfectly adequate. Because of the nonlinear nature and chemical/biological complexity of soil systems, it is also virtually impossible to verify these types of models analytically, raising doubts about their applicability. Code inter-comparison, likely the most suitable method for assessing code capabilities and model performance, requires the existence of multiple models with similar or overlapping capabilities, which may not always exist. It is thus a challenge not only to develop models describing complex soil systems, but also to persuade the soil science community to use them. As a result, complex quantitative mechanistic models remain an underutilized tool in soil science research. We will demonstrate some of the challenges discussed above using our own efforts in developing quantitative mechanistic models (such as HP1/2) for complex soil systems.
On the maximum-entropy/autoregressive modeling of time series
NASA Technical Reports Server (NTRS)
Chao, B. F.
1984-01-01
The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (or z domain) on the one hand to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model represents a time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is nothing but a convenient, though ambiguous, visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and that the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
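Prony's relation described above can be demonstrated numerically: an AR(2) model fitted to a damped sinusoid has a z-plane pole pair whose angle gives the frequency and whose magnitude gives the damping. The sketch below uses invented signal parameters and a plain least-squares fit, not the maximum-entropy estimator:

```python
import cmath
import math

fs = 100.0            # sampling rate in Hz (assumed)
f0, damp = 5.0, 0.02  # true frequency (Hz) and damping per sample (assumed)
x = [math.exp(-damp * t) * math.cos(2 * math.pi * f0 / fs * t)
     for t in range(500)]

# Least-squares normal equations for x[t] = a1*x[t-1] + a2*x[t-2]
u, v, y = x[1:-1], x[:-2], x[2:]
s11 = sum(a * a for a in u)
s22 = sum(a * a for a in v)
s12 = sum(a * b for a, b in zip(u, v))
b1 = sum(a * b for a, b in zip(u, y))
b2 = sum(a * b for a, b in zip(v, y))
det = s11 * s22 - s12 * s12
a1 = (b1 * s22 - b2 * s12) / det
a2 = (b2 * s11 - b1 * s12) / det

# Poles are the roots of z^2 - a1*z - a2 = 0; the pole angle is the
# frequency in radians per sample, the pole magnitude the damping factor
disc = cmath.sqrt(a1 * a1 + 4.0 * a2)
pole = (a1 + disc) / 2.0
f_est = abs(cmath.phase(pole)) * fs / (2.0 * math.pi)
print(f_est)  # recovers f0 = 5.0 Hz
```

Because a damped cosine satisfies the AR(2) recurrence exactly, the recovered frequency matches the true one to numerical precision; with noisy data the pole estimate, and hence the spectral peak position, degrades gracefully.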
NASA Astrophysics Data System (ADS)
Zhang, Yali; Wang, Jun
2017-09-01
In an attempt to investigate the nonlinear complex evolution of financial dynamics, a new financial price model, the multitype range-intensity contact (MRIC) financial model, is developed based on the multitype range-intensity interacting contact system, in which the interaction and transmission of different types of investment attitudes in a stock market are simulated by virus spreading. Two new random visibility graph (VG) based analyses and Lempel-Ziv complexity (LZC) are applied to study the complex behaviors of return time series and the corresponding randomly sorted series. The VG method draws on complex network theory, and LZC is a non-parametric measure of complexity reflecting the rate at which a series generates new patterns. In this work, real stock market indices are studied comparatively with simulation data from the proposed model. The numerical empirical study shows similar complexity behaviors between the model and the real markets, confirming that the financial model is reasonable to some extent.
Rainfall runoff modelling of the Upper Ganga and Brahmaputra basins using PERSiST.
Futter, M N; Whitehead, P G; Sarkar, S; Rodda, H; Crossman, J
2015-06-01
There are ongoing discussions about the appropriate level of complexity and sources of uncertainty in rainfall-runoff models. Simulations for operational hydrology, flood forecasting or nutrient transport all warrant different levels of complexity in the modelling approach. More complex model structures are appropriate for simulations of land-cover-dependent nutrient transport, while more parsimonious model structures may be adequate for runoff simulation. The appropriate level of complexity also depends on data availability. Here, we use PERSiST, a simple, semi-distributed dynamic rainfall-runoff modelling toolkit, to simulate flows in the Upper Ganges and Brahmaputra rivers. We present two sets of simulations driven by single time series of daily precipitation and temperature using simple (A) and complex (B) model structures based on uniform and hydrochemically relevant land covers, respectively. Models were compared based on ensembles of Bayesian Information Criterion (BIC) statistics. Equifinality was observed for parameters but not for model structures. Model performance was better for the more complex (B) structural representations than for parsimonious model structures. The results show that structural uncertainty is more important than parameter uncertainty. The ensembles of BIC statistics suggested that neither structural representation was preferable in a statistical sense. The simulations presented here confirm that relatively simple models with limited data requirements can be used to credibly simulate flows and water balance components needed for nutrient flux modelling in large, data-poor basins.
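BIC-based model comparison as used above trades goodness of fit against parameter count. The Gaussian-error form of BIC below is standard, but the residuals and parameter counts are invented and unrelated to the PERSiST ensembles:

```python
import math

def bic_gaussian(residuals, k):
    """Bayesian Information Criterion for a model with k fitted
    parameters under i.i.d. Gaussian errors:
    BIC = n*ln(RSS/n) + k*ln(n). Lower is better."""
    n = len(residuals)
    rss = sum(r * r for r in residuals)
    return n * math.log(rss / n) + k * math.log(n)

# Invented residuals: the complex model fits slightly better (smaller
# residuals) but pays a penalty for its three extra parameters.
simple_bic = bic_gaussian([0.6, -0.5, 0.4, -0.6, 0.5, -0.4], k=2)
complex_bic = bic_gaussian([0.5, -0.4, 0.35, -0.5, 0.45, -0.35], k=5)
print(simple_bic < complex_bic)  # the parsimonious model wins here
```

With only a marginal fit improvement, the ln(n) penalty per extra parameter favours the simpler structure, which is the kind of trade-off the ensembles of BIC statistics in the study quantify.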
ERIC Educational Resources Information Center
Stålne, Kristian; Kjellström, Sofia; Utriainen, Jukka
2016-01-01
An important aspect of higher education is to educate students who can manage complex relationships and solve complex problems. Teachers need to be able to evaluate course content with regard to complexity, as well as evaluate students' ability to assimilate complex content and express it in the form of a learning outcome. One model for evaluating…
Multifaceted Modelling of Complex Business Enterprises
Chakraborty, Subrata; Mengersen, Kerrie; Fidge, Colin; Ma, Lin; Lassen, David
2015-01-01
We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise, which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historic data, survey data, management planning data, expert knowledge, and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control. PMID:26247591
Nonlinear complexity behaviors of agent-based 3D Potts financial dynamics with random environments
NASA Astrophysics Data System (ADS)
Xing, Yani; Wang, Jun
2018-02-01
A new microscopic 3D Potts interaction financial price model is established in this work to investigate the nonlinear complexity behaviors of stock markets. The 3D Potts model, which extends the 2D Potts model to three dimensions, is a cubic lattice model describing the interaction behavior among agents. In order to explore the complexity of real financial markets and of the 3D Potts financial model, a new random coarse-grained Lempel-Ziv complexity is proposed and applied to series such as the price returns, the price volatilities, and the random time d-returns. Then the composite multiscale entropy (CMSE) method is applied to the intrinsic mode functions (IMFs) and the corresponding shuffled data to study the complexity behaviors. The empirical results indicate that the 3D financial model is feasible.
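The Lempel-Ziv complexity underlying the abstract's random coarse-grained variant reduces, in its classic LZ76 form, to counting new phrases in a symbol sequence. A minimal sketch of plain LZ76 (not the coarse-grained variant proposed in the paper):

```python
def lz_complexity(s):
    """Classic Lempel-Ziv (LZ76) complexity: scan the sequence left to
    right, counting a new phrase each time the current substring has
    not appeared anywhere in the preceding text."""
    i, phrases, n = 0, 0, len(s)
    while i < n:
        length = 1
        # extend the phrase while it still occurs earlier in the string
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

print(lz_complexity("0001101001000101"))  # classic example: 6 phrases
```

A constant sequence scores near the minimum while irregular sequences accumulate many phrases, which is why LZC serves as a non-parametric proxy for the rate of new pattern generation in return series.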
Syntactic Complexity as an Aspect of Text Complexity
ERIC Educational Resources Information Center
Frantz, Roger S.; Starr, Laura E.; Bailey, Alison L.
2015-01-01
Students' ability to read complex texts is emphasized in the Common Core State Standards (CCSS) for English Language Arts and Literacy. The standards propose a three-part model for measuring text complexity. Although the model presents a robust means for determining text complexity based on a variety of features inherent to a text as well as…
Shippee, Nathan D; Shah, Nilay D; May, Carl R; Mair, Frances S; Montori, Victor M
2012-10-01
To design a functional, patient-centered model of patient complexity with practical applicability to analytic design and clinical practice. Existing literature on patient complexity has mainly identified its components descriptively and in isolation, lacking clarity as to their combined functions in disrupting care or as to how complexity changes over time. The authors developed a cumulative complexity model, which integrates existing literature and emphasizes how clinical and social factors accumulate and interact to complicate patient care. A narrative literature review is used to explicate the model. The model emphasizes a core, patient-level mechanism whereby complicating factors impact care and outcomes: the balance between the patient's workload of demands and the patient's capacity to address those demands. Workload encompasses the demands on the patient's time and energy, including the demands of treatment, self-care, and life in general. Capacity concerns the ability to handle work (e.g., functional morbidity, financial/social resources, literacy). Workload-capacity imbalances comprise the mechanism driving patient complexity. Treatment and illness burdens serve as feedback loops, linking negative outcomes to further imbalances, such that complexity may accumulate over time. With its components largely supported by existing literature, the model has implications for analytic design, clinical epidemiology, and clinical practice. Copyright © 2012 Elsevier Inc. All rights reserved.
Reassessing Geophysical Models of the Bushveld Complex in 3D
NASA Astrophysics Data System (ADS)
Cole, J.; Webb, S. J.; Finn, C.
2012-12-01
Conceptual geophysical models of the Bushveld Igneous Complex show three possible geometries for its mafic component: 1) separate intrusions with vertical feeders for the eastern and western lobes (Cousins, 1959); 2) separate dipping sheets for the two lobes (Du Plessis and Kleywegt, 1987); 3) a single saucer-shaped unit connected at depth in the central part between the two lobes (Cawthorn et al, 1998). Model three incorporates isostatic adjustment of the crust in response to the weight of the dense mafic material. The model was corroborated by results of a broadband seismic array over southern Africa, known as the Southern African Seismic Experiment (SASE) (Nguuri, et al, 2001; Webb et al, 2004). This new information about the crustal thickness only became available in the last decade and could not be considered in the earlier models. Nevertheless, there is still ongoing debate as to which model is correct. All of the models published up to now have been done in 2 or 2.5 dimensions, which is not well suited to modelling the complex geometry of the Bushveld intrusion. 3D modelling takes into account the effects of variations in geometry and geophysical properties of lithologies in a full three-dimensional sense and therefore affects the shape and amplitude of calculated fields. The main question is how the new knowledge of the increased crustal thickness, as well as the complexity of the Bushveld Complex, will impact the gravity fields calculated for the existing conceptual models when modelling in 3D. The three published geophysical models were remodelled using full 3D potential field modelling software, including crustal thickness obtained from the SASE. The aim was not to construct very detailed models, but to test the existing conceptual models in an equally conceptual way. Firstly, a specific 2D model was recreated in 3D, without crustal thickening, to establish the difference between 2D and 3D results. Then the thicker crust was added.
Including the less dense, thicker crust underneath the Bushveld Complex necessitates the presence of dense material in the central area between the eastern and western lobes. The simplest way to achieve this is to model the mafic component of the Bushveld Complex as a single intrusion. This is similar to what the first students of the Bushveld Complex suggested. Conceptual models are by definition simplified versions of the real situation, and the geometry of the Bushveld Complex is expected to be much more intricate. References Cawthorn, R.G., Cooper, G.R.J., Webb, S.J. (1998). Connectivity between the western and eastern limbs of the Bushveld Complex. S Afr J Geol, 101, 291-298. Cousins, C.A. (1959). The structure of the mafic portion of the Bushveld Igneous Complex. Trans Geol Soc S Afr, 62, 179-189. Du Plessis, A., Kleywegt, R.J. (1987). A dipping sheet model for the mafic lobes of the Bushveld Complex. S Afr J Geol, 90, 1-6. Nguuri, T.K., Gore, J., James, D.E., Webb, S.J., Wright, C., Zengeni, T.G., Gwavava, O., Snoke, J.A. and Kaapvaal Seismic Group. (2001). Crustal structure beneath southern Africa and its implications for the formation and evolution of the Kaapvaal and Zimbabwe cratons. Geoph Res Lett, 28, 2501-2504. Webb, S.J., Cawthorn, R.G., Nguuri, T., James, D. (2004). Gravity modelling of Bushveld Complex connectivity supported by Southern African Seismic Experiment results, S Afr J Geol, 107, 207-218.
A simple model clarifies the complicated relationships of complex networks
Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi
2014-01-01
Real-world networks such as the Internet and WWW have many common traits. Until now, hundreds of models have been proposed to characterize these traits and understand the networks. Because different models use very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many traits, including scale-free, small-world, ultra small-world, Delta-distribution, compact, fractal, regular and random networks. Moreover, by revising the proposed model, community-structure networks are generated. Through this model and its revised versions, the complicated relationships of complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method to model complex networks from the viewpoint of optimisation. PMID:25160506
Pattern-oriented modeling of agent-based complex systems: Lessons from ecology
Grimm, Volker; Revilla, Eloy; Berger, Uta; Jeltsch, Florian; Mooij, Wolf M.; Railsback, Steven F.; Thulke, Hans-Hermann; Weiner, Jacob; Wiegand, Thorsten; DeAngelis, Donald L.
2005-01-01
Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.
Epidemic threshold of the susceptible-infected-susceptible model on complex networks
NASA Astrophysics Data System (ADS)
Lee, Hyun Keun; Shim, Pyoung-Seop; Noh, Jae Dong
2013-06-01
We demonstrate that the susceptible-infected-susceptible (SIS) model on complex networks can have an inactive Griffiths phase characterized by a slow relaxation dynamics. It contrasts with the mean-field theoretical prediction that the SIS model on complex networks is active at any nonzero infection rate. The dynamic fluctuation of infected nodes, ignored in the mean field approach, is responsible for the inactive phase. It is proposed that the question whether the epidemic threshold of the SIS model on complex networks is zero or not can be resolved by the percolation threshold in a model where nodes are occupied in degree-descending order. Our arguments are supported by the numerical studies on scale-free network models.
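The SIS dynamics discussed above can be sketched as a synchronous stochastic update on an adjacency list. The ring network, rates, and step count below are toy choices for illustration, not the scale-free networks studied in the paper:

```python
import random

def sis_step(adj, infected, beta, mu, rng):
    """One synchronous SIS update on a network given as an adjacency
    list: infected nodes recover with probability mu; each susceptible
    node is infected with probability beta per infected neighbour."""
    nxt = set()
    for node, neighbours in enumerate(adj):
        if node in infected:
            if rng.random() >= mu:  # fails to recover this step
                nxt.add(node)
        else:
            for nb in neighbours:
                if nb in infected and rng.random() < beta:
                    nxt.add(node)
                    break
    return nxt

# Toy run on a ring of 100 nodes seeded with one infected node
rng = random.Random(1)
n = 100
adj = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
infected = {0}
for _ in range(200):
    infected = sis_step(adj, infected, beta=0.9, mu=0.05, rng=rng)
print(len(infected))
```

Sweeping beta downward toward the epidemic threshold makes the infected fraction decay, and near the threshold the relaxation becomes slow, which is the Griffiths-phase signature the paper analyzes on heterogeneous networks.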
Greek, Ray; Hansen, Lawrence A
2013-11-01
We surveyed the scientific literature regarding amyotrophic lateral sclerosis, the SOD1 mouse model, complex adaptive systems, evolution, drug development, animal models, and philosophy of science in an attempt to analyze the SOD1 mouse model of amyotrophic lateral sclerosis in the context of evolved complex adaptive systems. Humans and animals are examples of evolved complex adaptive systems. It is difficult to predict the outcome of perturbations to such systems because of the characteristics of complex systems. Modeling even one complex adaptive system in order to predict outcomes from perturbations is difficult. Predicting outcomes in one evolved complex adaptive system based on outcomes from a second, especially when the perturbation occurs at higher levels of organization, is even more problematic. Using animal models to predict human outcomes from perturbations such as disease and drugs should therefore have very low predictive value. We present empirical evidence confirming this and suggest a theory to explain this phenomenon. We analyze the SOD1 mouse model of amyotrophic lateral sclerosis in order to illustrate this position. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
Modelling the evolution of complex conductivity during calcite precipitation on glass beads
NASA Astrophysics Data System (ADS)
Leroy, Philippe; Li, Shuai; Jougnot, Damien; Revil, André; Wu, Yuxin
2017-04-01
When pH and alkalinity increase, calcite frequently precipitates and hence modifies the petrophysical properties of porous media. The complex conductivity method can be used to directly monitor calcite precipitation in porous media because it is sensitive to the evolution of the mineralogy, pore structure and its connectivity. We have developed a mechanistic grain polarization model considering the electrochemical polarization of the Stern and diffuse layers surrounding calcite particles. Our complex conductivity model depends on the surface charge density of the Stern layer and on the electrical potential at the onset of the diffuse layer, which are computed using a basic Stern model of the calcite/water interface. The complex conductivity measurements of Wu et al. on a column packed with glass beads where calcite precipitation occurs are reproduced by our surface complexation and complex conductivity models. The evolution of the size and shape of calcite particles during the calcite precipitation experiment is estimated by our complex conductivity model. At the early stage of the experiment, modelled particle sizes increase and calcite particles flatten with time because calcite crystals nucleate at the surface of glass beads and grow into larger calcite grains. At the later stage, modelled sizes and cementation exponents of calcite particles decrease with time because large calcite grains aggregate over multiple glass beads and only small calcite crystals polarize.
Musculoskeletal modelling of human ankle complex: Estimation of ankle joint moments.
Jamwal, Prashant K; Hussain, Shahid; Tsoi, Yun Ho; Ghayesh, Mergen H; Xie, Sheng Quan
2017-05-01
A musculoskeletal model of the ankle complex is vital in order to enhance the understanding of neuro-mechanical control of ankle motions, diagnose ankle disorders and assess subsequent treatments. Motions at the human ankle and foot, however, are complex due to simultaneous movements at two joints, namely the ankle joint and the subtalar joint. The musculoskeletal elements at the ankle complex, such as ligaments, muscles and tendons, have intricate arrangements and exhibit transient and nonlinear behaviour. This paper develops a musculoskeletal model of the ankle complex considering the biaxial ankle structure. The model provides estimates of the overall mechanical characteristics (motion and moments) of the ankle complex through consideration of forces applied along ligaments and muscle-tendon units. The dynamics of the ankle complex and its surrounding ligaments and muscle-tendon units is modelled and formulated into a state space model to facilitate simulations. A graphical user interface is also developed during this research in order to incorporate visual anatomical information by converting it into quantitative coordinate information. Validation of the ankle model was carried out by comparing its outputs with those published in the literature as well as with experimental data obtained from an existing parallel ankle rehabilitation robot. Qualitative agreement was observed between the model and measured data for both the passive and active ankle motions during trials, in terms of displacements and moments. Copyright © 2017 Elsevier Ltd. All rights reserved.
Impact of gastrectomy procedural complexity on surgical outcomes and hospital comparisons.
Mohanty, Sanjay; Paruch, Jennifer; Bilimoria, Karl Y; Cohen, Mark; Strong, Vivian E; Weber, Sharon M
2015-08-01
Most risk adjustment approaches adjust for patient comorbidities and the primary procedure. However, procedures done at the same time as the index case may increase operative risk and merit inclusion in adjustment models for fair hospital comparisons. Our objectives were to evaluate the impact of surgical complexity on postoperative outcomes and hospital comparisons in gastric cancer surgery. Patients who underwent gastric resection for cancer were identified from a large clinical dataset. Procedure complexity was characterized using secondary procedure CPT codes and work relative value units (RVUs). Regression models were developed to evaluate the association between complexity variables and outcomes. The impact of complexity adjustment on model performance and hospital comparisons was examined. Among 3,467 patients who underwent gastrectomy for adenocarcinoma, 2,171 operations were distal and 1,296 total. A secondary procedure was reported for 33% of distal gastrectomies and 59% of total gastrectomies. Six of 10 secondary procedures were associated with adverse outcomes. For example, patients who underwent a synchronous bowel resection had a higher risk of mortality (odds ratio [OR], 2.14; 95% CI, 1.07-4.29) and reoperation (OR, 2.09; 95% CI, 1.26-3.47). Model performance was slightly better for nearly all outcomes with complexity adjustment (mortality c-statistics: standard model, 0.853; secondary procedure model, 0.858; RVU model, 0.855). Hospital ranking did not change substantially after complexity adjustment. Surgical complexity variables are associated with adverse outcomes in gastrectomy, but complexity adjustment does not affect hospital rankings appreciably. Copyright © 2015 Elsevier Inc. All rights reserved.
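The c-statistic reported for the risk-adjustment models above measures discrimination: how often the model ranks a patient with the outcome above one without. A minimal pairwise implementation, with invented risks and outcomes:

```python
def c_statistic(risks, outcomes):
    """Concordance (c) statistic: the probability that a randomly
    chosen patient with the outcome (1) received a higher predicted
    risk than a randomly chosen patient without it (0). Ties count
    one half. O(n^2) pairwise version, fine for illustration."""
    pos = [r for r, y in zip(risks, outcomes) if y == 1]
    neg = [r for r, y in zip(risks, outcomes) if y == 0]
    score = 0.0
    for p in pos:
        for q in neg:
            score += 1.0 if p > q else (0.5 if p == q else 0.0)
    return score / (len(pos) * len(neg))

# Perfect ranking scores 1.0; an uninformative model scores 0.5
print(c_statistic([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # -> 1.0
```

Differences like the study's 0.853 versus 0.858 correspond to a small gain in ranking accuracy, which is why complexity adjustment improved model performance only slightly.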
Calibration of Complex Subsurface Reaction Models Using a Surrogate-Model Approach
Application of model assessment techniques to complex subsurface reaction models involves numerous difficulties, including non-trivial model selection, parameter non-uniqueness, and excessive computational burden. To overcome these difficulties, this study introduces SAMM (Simult...
Foundations for Streaming Model Transformations by Complex Event Processing.
Dávid, István; Ráth, István; Varró, Dániel
2018-01-01
Streaming model transformations represent a novel class of transformations to manipulate models whose elements are continuously produced or modified in high volume and with a rapid rate of change. Executing streaming transformations requires efficient techniques to recognize activated transformation rules over a live model and a potentially infinite stream of events. In this paper, we propose foundations of streaming model transformations by innovatively integrating incremental model query, complex event processing (CEP) and reactive (event-driven) transformation techniques. Complex event processing makes it possible to identify relevant patterns and sequences of events over an event stream. Our approach enables event streams to include model change events, which are automatically and continuously populated by incremental model queries. Furthermore, a reactive rule engine carries out transformations on identified complex event patterns. We provide an integrated domain-specific language with precise semantics for capturing complex event patterns and streaming transformations, together with an execution engine, all of which is now part of the Viatra reactive transformation framework. We demonstrate the feasibility of our approach with two case studies: one in an advanced model engineering workflow and one in the context of on-the-fly gesture recognition.
The effects of numerical-model complexity and observation type on estimated porosity values
Starn, Jeffrey; Bagtzoglou, Amvrossios C.; Green, Christopher T.
2015-01-01
The relative merits of model complexity and types of observations employed in model calibration are compared. An existing groundwater flow model of the Salt Lake Valley, Utah (USA), is coupled with an advective transport simulation, and effective porosity is adjusted until simulated tritium concentrations match concentrations in samples from wells. Two calibration approaches are used: a “complex” highly parameterized porosity field and a “simple” parsimonious model of porosity distribution. The use of an atmospheric tracer (tritium in this case) and apparent ages (from tritium/helium) in model calibration also are discussed. Of the models tested, the complex model (with tritium concentrations and tritium/helium apparent ages) performs best. Although the tritium breakthrough curves simulated by the complex and simple models are generally similar, and there is value in the simple model, the complex model is supported by a more realistic porosity distribution and a greater number of estimable parameters. Culling the best quality data did not lead to better calibration, possibly because of processes and aquifer characteristics that are not simulated. Despite many factors that contribute to shortcomings of both the models and the data, useful information is obtained from all the models evaluated. Although any particular prediction of tritium breakthrough may have large errors, overall, the models mimic observed trends.
Application of surface complexation models to anion adsorption by natural materials
USDA-ARS?s Scientific Manuscript database
Various chemical models of ion adsorption will be presented and discussed. Chemical models, such as surface complexation models, provide a molecular description of anion adsorption reactions using an equilibrium approach. Two such models, the constant capacitance model and the triple layer model w...
Modeling of protein binary complexes using structural mass spectrometry data
Kamal, J.K. Amisha; Chance, Mark R.
2008-01-01
In this article, we describe a general approach to modeling the structure of binary protein complexes using structural mass spectrometry data combined with molecular docking. In the first step, hydroxyl radical mediated oxidative protein footprinting is used to identify residues that experience conformational reorganization due to binding or participate in the binding interface. In the second step, a three-dimensional atomic structure of the complex is derived by computational modeling. Homology modeling approaches are used to define the structures of the individual proteins if footprinting detects significant conformational reorganization as a function of complex formation. A three-dimensional model of the complex is constructed from these binary partners using the ClusPro program, which is composed of docking, energy filtering, and clustering steps. Footprinting data are used to incorporate constraints—positive and/or negative—in the docking step and are also used to decide the type of energy filter—electrostatics or desolvation—in the successive energy-filtering step. By using this approach, we examine the structure of a number of binary complexes of monomeric actin and compare the results to crystallographic data. Based on docking alone, a number of competing models with widely varying structures are observed, one of which is likely to agree with crystallographic data. When the docking steps are guided by footprinting data, accurate models emerge as top scoring. We demonstrate this method with the actin/gelsolin segment-1 complex. We also provide a structural model for the actin/cofilin complex using this approach which does not have a crystal or NMR structure. PMID:18042684
Drewes, Rich; Zou, Quan; Goodman, Philip H
2009-01-01
Neuroscience modeling experiments often involve multiple complex neural network and cell model variants, complex input stimuli and input protocols, followed by complex data analysis. Coordinating all this complexity becomes a central difficulty for the experimenter. The Python programming language, along with its extensive library packages, has emerged as a leading "glue" tool for managing all sorts of complex programmatic tasks. This paper describes a toolkit called Brainlab, written in Python, that leverages Python's strengths for the task of managing the general complexity of neuroscience modeling experiments. Brainlab was also designed to overcome the major difficulties of working with the NCS (NeoCortical Simulator) environment in particular. Brainlab is an integrated model-building, experimentation, and data analysis environment for the powerful parallel spiking neural network simulator system NCS.
Stochastic simulation of multiscale complex systems with PISKaS: A rule-based approach.
Perez-Acle, Tomas; Fuenzalida, Ignacio; Martin, Alberto J M; Santibañez, Rodrigo; Avaria, Rodrigo; Bernardin, Alejandro; Bustos, Alvaro M; Garrido, Daniel; Dushoff, Jonathan; Liu, James H
2018-03-29
Computational simulation is a widely employed methodology to study the dynamic behavior of complex systems. Although common approaches are based either on ordinary differential equations or stochastic differential equations, these techniques make several assumptions which, when it comes to biological processes, could often lead to unrealistic models. Among others, model approaches based on differential equations entangle kinetics and causality, failing when complexity increases, separating knowledge from models, and assuming that the average behavior of the population encompasses any individual deviation. To overcome these limitations, simulations based on the Stochastic Simulation Algorithm (SSA) appear as a suitable approach to model complex biological systems. In this work, we review three different models executed in PISKaS: a rule-based framework to produce multiscale stochastic simulations of complex systems. These models span multiple time and spatial scales, ranging from gene regulation up to Game Theory. In the first example, we describe a model of the core regulatory network of gene expression in Escherichia coli, highlighting the continuous model improvement capacities of PISKaS. The second example describes a hypothetical outbreak of the Ebola virus occurring in a compartmentalized environment resembling cities and highways. Finally, in the last example, we illustrate a stochastic model for the prisoner's dilemma; a common approach from social sciences describing complex interactions involving trust within human populations. As a whole, these models demonstrate the capabilities of PISKaS, providing fertile scenarios in which to explore the dynamics of complex systems. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
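The SSA that the abstract builds on can be made concrete in a few lines. The following is a generic Gillespie direct-method loop in Python, not the PISKaS rule-based engine; the birth-death example and its rate constants are invented for illustration:

```python
import math
import random

def gillespie_ssa(x, rates, stoich, t_max, seed=0):
    """Direct-method SSA: x is the state vector, rates(x) returns the
    reaction propensities, stoich[j] is the state change of reaction j."""
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, list(x))]
    while t < t_max:
        a = rates(x)
        a0 = sum(a)
        if a0 <= 0.0:
            break                                     # no reaction can fire
        t += -math.log(1.0 - rng.random()) / a0       # exponential waiting time
        r = rng.random() * a0                         # pick j with prob a_j / a0
        j, acc = 0, a[0]
        while acc < r:
            j += 1
            acc += a[j]
        x = [xi + di for xi, di in zip(x, stoich[j])]
        traj.append((t, list(x)))
    return traj

# Toy birth-death process: 0 -> X at rate 5, X -> 0 at rate 0.5 per molecule
traj = gillespie_ssa(x=[10], rates=lambda x: [5.0, 0.5 * x[0]],
                     stoich=[[1], [-1]], t_max=10.0)
```

Each iteration draws an exponential waiting time from the total propensity and then selects one reaction in proportion to its propensity, so the trajectory is an exact sample of the underlying jump process.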
A multi-element cosmological model with a complex space-time topology
NASA Astrophysics Data System (ADS)
Kardashev, N. S.; Lipatova, L. N.; Novikov, I. D.; Shatskiy, A. A.
2015-02-01
Wormhole models with a complex topology having one entrance and two exits into the same space-time of another universe are considered, as well as models with two entrances from the same space-time and one exit to another universe. These models are used to build a model of a multi-sheeted universe (a multi-element model of the "Multiverse") with a complex topology. Spherical symmetry is assumed in all the models. A Reissner-Norström black-hole model having no singularity beyond the horizon is constructed. The strength of the central singularity of the black hole is analyzed.
Andrianakis, I; Vernon, I; McCreesh, N; McKinley, T J; Oakley, J E; Nsubuga, R N; Goldstein, M; White, R G
2017-08-01
Complex stochastic models are commonplace in epidemiology, but their utility depends on their calibration to empirical data. History matching is a (pre)calibration method that has been applied successfully to complex deterministic models. In this work, we adapt history matching to stochastic models, by emulating the variance in the model outputs, and therefore accounting for its dependence on the model's input values. The method proposed is applied to a real complex epidemiological model of human immunodeficiency virus in Uganda with 22 inputs and 18 outputs, and is found to increase the efficiency of history matching, requiring 70% of the time and 43% fewer simulator evaluations compared with a previous variant of the method. The insight gained into the structure of the human immunodeficiency virus model, and the constraints placed on it, are then discussed.
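History matching proceeds by discarding "implausible" inputs, where implausibility compares the emulator prediction to the observation in units of the combined uncertainty (observation, emulator and model-discrepancy variances). A minimal sketch with a toy exact "emulator" and the conventional cutoff of 3; all names and values here are invented for illustration:

```python
import math

def implausibility(z, mu, var_obs, var_emu, var_disc):
    """Distance between emulator mean mu and observation z, measured in
    combined-uncertainty units."""
    return abs(z - mu) / math.sqrt(var_obs + var_emu + var_disc)

def non_implausible(inputs, emulate, z, var_obs, var_disc, cutoff=3.0):
    """Keep inputs whose maximum implausibility over all outputs is below
    the cutoff (3 is the conventional threshold)."""
    keep = []
    for x in inputs:
        mus, var_emus = emulate(x)     # emulator mean/variance per output
        worst = max(implausibility(zi, mi, vo, ve, var_disc)
                    for zi, mi, vo, ve in zip(z, mus, var_obs, var_emus))
        if worst < cutoff:
            keep.append(x)
    return keep

# Toy screening: the "emulator" is exact for f(x) = (x, 2x), tiny variance
emulate = lambda x: ([x, 2.0 * x], [0.01, 0.01])
kept = non_implausible(inputs=[i / 10.0 for i in range(11)], emulate=emulate,
                       z=[0.5, 1.0], var_obs=[0.01, 0.01], var_disc=0.0)
```

In real applications this filter is applied in waves, refitting the emulators (including the variance emulation described above) on the surviving region each time.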
Acquisition of Complex Systemic Thinking: Mental Models of Evolution
ERIC Educational Resources Information Center
d'Apollonia, Sylvia T.; Charles, Elizabeth S.; Boyd, Gary M.
2004-01-01
We investigated the impact of introducing college students to complex adaptive systems on their subsequent mental models of evolution compared to those of students taught in the same manner but with no reference to complex systems. The students' mental models (derived from similarity ratings of 12 evolutionary terms using the pathfinder algorithm)…
Designing an Educational Game with Ten Steps to Complex Learning
ERIC Educational Resources Information Center
Enfield, Jacob
2012-01-01
Few instructional design (ID) models exist which are specific for developing educational games. Moreover, those extant ID models have not been rigorously evaluated. No ID models were found which focus on educational games with complex learning objectives. "Ten Steps to Complex Learning" (TSCL) is based on the four component instructional…
Designing novel cellulase systems through agent-based modeling and global sensitivity analysis.
Apte, Advait A; Senger, Ryan S; Fong, Stephen S
2014-01-01
Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement.
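Variance-based global sensitivity analysis of the kind used above can be sketched with a pick-freeze Monte Carlo estimator of first-order Sobol indices. The estimator and the linear toy model below are generic illustrations, not the authors' cellulase ABM:

```python
import random

def sobol_first_order(model, d, n, seed=0):
    """Pick-freeze estimator of first-order Sobol indices for a model
    with d independent U(0,1) inputs: S_i = Cov(Y_A, Y_Ci) / Var(Y)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    yA = [model(x) for x in A]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    S = []
    for i in range(d):
        # C_i takes column i from A and every other column from B
        yC = [model(B[k][:i] + [A[k][i]] + B[k][i + 1:]) for k in range(n)]
        cov = sum(yA[k] * yC[k] for k in range(n)) / n - mean * mean
        S.append(cov / var)
    return S

# Linear toy model: analytic indices are 16/21, 4/21 and 1/21
model = lambda x: 4 * x[0] + 2 * x[1] + x[2]
S = sobol_first_order(model, d=3, n=20000)
print([round(s, 2) for s in S])
```

Ranking the indices is exactly the "prioritize targets" step: inputs with large first-order indices are the ones worth engineering first.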
Tuca, Albert; Gómez-Martínez, Mónica; Prat, Aleix
2018-01-01
The model of early palliative care (PC) integrated in oncology is based on shared care from the diagnosis to the end of life and is mainly focused on patients with greater complexity. However, there are no definitions or tools to evaluate PC complexity. The objectives of the study were to identify the factors influencing level determination of complexity, propose predictive models, and build a complexity scale of PC. We performed a prospective, observational, multicenter study in a cohort of advanced cancer patients with an estimated prognosis ≤ 6 months. An ad hoc structured evaluation including socio-demographic and clinical data, symptom burden, functional and cognitive status, psychosocial problems, and existential-ethic dilemmas was recorded systematically. According to this multidimensional evaluation, the investigators classified patients as having high, medium, or low palliative complexity, associated with the need for basic or specialized PC. Logistic regression was used to identify the variables influencing determination of level of PC complexity and explore predictive models. We included 324 patients; 41% were classified as having high PC complexity and 42.9% as medium, both levels being associated with specialized PC. Variables influencing determination of PC complexity were as follows: high symptom burden (OR 3.19 95%CI: 1.72-6.17), difficult pain (OR 2.81 95%CI:1.64-4.9), functional status (OR 0.99 95%CI:0.98-0.9), and social-ethical existential risk factors (OR 3.11 95%CI:1.73-5.77). Logistic analysis of the variables allowed the construction of a complexity model and structured scales (PALCOM 1 and 2) with high predictive value (AUC ROC 76%). This study provides a new model and tools to assess complexity in palliative care, which may be very useful for managing referral to specialized PC services and agreeing on the intensity of their intervention in a model of early shared care integrated in oncology.
On Using Meta-Modeling and Multi-Modeling to Address Complex Problems
ERIC Educational Resources Information Center
Abu Jbara, Ahmed
2013-01-01
Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…
Bursting Transition Dynamics Within the Pre-Bötzinger Complex
NASA Astrophysics Data System (ADS)
Duan, Lixia; Chen, Xi; Tang, Xuhui; Su, Jianzhong
The pre-Bötzinger complex of the mammalian brain stem plays a crucial role in the generation of respiratory rhythms. Neurons within the pre-Bötzinger complex have been found experimentally to exhibit different firing activities. In this paper, we study the spiking and bursting activities related to respiratory rhythms in the pre-Bötzinger complex based on a mathematical model proposed by Butera. After deriving a one-dimensional first-recurrence map from the dynamical characteristics of the differential equations, we investigate the different bursting patterns of pre-Bötzinger complex neurons and obtain conditions for the transitions between them. These analytical results are verified through numerical simulations. We conclude that the one-dimensional map reproduces the rhythmic patterns of the Butera model and can serve as a simpler modeling tool for studying fast-slow systems such as the pre-Bötzinger complex neural circuit.
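The idea of reading rhythmic patterns off a one-dimensional map can be illustrated generically: iterate the map past its transient and measure the period of the attracting orbit. The logistic map below merely stands in for the first-recurrence map derived in the paper, which is not given here:

```python
def orbit_period(f, x0, n_transient=1000, n_max=1000, tol=1e-9):
    """Iterate a one-dimensional map, discard the transient, and return
    the period of the attracting orbit (None if no period <= n_max)."""
    x = x0
    for _ in range(n_transient):
        x = f(x)
    ref = x
    for k in range(1, n_max + 1):
        x = f(x)
        if abs(x - ref) < tol:
            return k
    return None

# Logistic map as a stand-in recurrence map: the period doubles as r grows
f = lambda x: 3.2 * x * (1.0 - x)
print(orbit_period(f, 0.4))   # prints 2 (period-2 orbit at r = 3.2)
```

Sweeping the map parameter and recording the period is the discrete analogue of tracking bursting-pattern transitions in the full fast-slow model.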
Template-based structure modeling of protein-protein interactions
Szilagyi, Andras; Zhang, Yang
2014-01-01
The structure of protein-protein complexes can be constructed by using the known structure of other protein complexes as a template. The complex structure templates are generally detected either by homology-based sequence alignments or, given the structure of monomer components, by structure-based comparisons. Critical improvements have been made in recent years by utilizing interface recognition and by recombining monomer and complex template libraries. Encouraging progress has also been witnessed in genome-wide applications of template-based modeling, with modeling accuracy comparable to high-throughput experimental data. Nevertheless, bottlenecks exist due to the incompleteness of the protein-protein complex structure library and the lack of methods for distant homologous template identification and full-length complex structure refinement. PMID:24721449
NASA Astrophysics Data System (ADS)
Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise
2017-11-01
The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge to fit empirical models based on such high dimensional data, which often contain correlated and noisy predictors. As sample sizes to train and validate empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship, fitting random noise in the data as well. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra to quantify the relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to a traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to those of the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models whose accurate predictions are valid only for the data used and which are too complex to support inferences about the underlying process.
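The core of the NOIS idea, quantifying overfitting by fitting models of increasing complexity to artificially generated spectra, can be sketched as follows. This toy version regresses a pure-noise response on the leading principal components of pure-noise "spectra", so any training R² it reports is overfitting by construction; the published NOIS procedure is more elaborate:

```python
import numpy as np

def noise_r2(n_samples, n_bands, complexity, seed=0):
    """Training R^2 obtained when a model of the given complexity is fit
    to pure noise; anything above zero is overfitting by construction."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_samples, n_bands))   # artificial 'spectra'
    y = rng.normal(size=n_samples)              # pure-noise response
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    Z = U[:, :complexity] * s[:complexity]      # leading PC scores
    yc = y - y.mean()
    beta, *_ = np.linalg.lstsq(Z, yc, rcond=None)
    resid = yc - Z @ beta
    return 1.0 - (resid @ resid) / (yc @ yc)

# The overfitting signature: noise R^2 grows with model complexity
print([round(noise_r2(50, 200, c), 2) for c in (1, 5, 20, 40)])
```

Comparing such a noise curve against the R² achieved on the real data indicates how much of the apparent fit the chosen complexity could produce from nothing.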
Laghari, Samreen; Niazi, Muaz A
2016-01-01
Computer Networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The conducted experiments demonstrated two important results: first, a CABC-based modeling approach such as Agent-based Modeling can be effective for modeling complex problems in the domain of IoT; second, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach.
Modeling and complexity of stochastic interacting Lévy type financial price dynamics
NASA Astrophysics Data System (ADS)
Wang, Yiduan; Zheng, Shenzhou; Zhang, Wei; Wang, Jun; Wang, Guochao
2018-06-01
In an attempt to reproduce and investigate the nonlinear dynamics of security markets, a novel nonlinear random interacting price dynamics, considered as a Lévy type process, is developed and investigated through the combination of lattice-oriented percolation and Potts dynamics, which concern the instinctive random fluctuation and the fluctuation caused by the spread of the investors' trading attitudes, respectively. To better understand the fluctuation complexity properties of the proposed model, complexity analyses of the random logarithmic price return and the corresponding volatility series are performed, including power-law distribution, Lempel-Ziv complexity and fractional sample entropy. To verify the rationality of the proposed model, corresponding studies of actual security market datasets are also implemented for comparison. The empirical results reveal that this financial price model can reproduce some important complexity features of actual security markets to some extent. The complexity of returns decreases as the parameters γ1 and β increase; furthermore, the volatility series exhibit lower complexity than the return series.
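Of the complexity measures listed, Lempel-Ziv complexity is the easiest to make concrete. Below is a minimal LZ76-style parser applied to a binarized return series; the return values are made up, and binarizing around the median is just one common convention:

```python
import statistics

def lempel_ziv_complexity(s):
    """LZ76-style complexity: the number of phrases in a left-to-right
    parsing where each new phrase is the shortest string not seen before."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend while s[i:i+l] already occurs earlier (overlap allowed)
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

# Binarize made-up log-returns around their median, then parse
returns = [0.3, -0.1, 0.2, -0.4, 0.1, 0.0, -0.2, 0.5]
med = statistics.median(returns)
symbols = ''.join('1' if r > med else '0' for r in returns)
print(symbols, lempel_ziv_complexity(symbols))   # 10101001 4
```

A highly regular series parses into few phrases, while a complex one parses into many, which is why the count can discriminate return series from their (smoother) volatility series.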
Regulation of the protein-conducting channel by a bound ribosome
Gumbart, James; Trabuco, Leonardo G.; Schreiner, Eduard; Villa, Elizabeth; Schulten, Klaus
2009-01-01
Summary During protein synthesis, it is often necessary for the ribosome to form a complex with a membrane-bound channel, the SecY/Sec61 complex, in order to translocate nascent proteins across a cellular membrane. Structural data on the ribosome-channel complex are currently limited to low-resolution cryo-electron microscopy maps, including one showing a bacterial ribosome bound to a monomeric SecY complex. Using that map along with available atomic-level models of the ribosome and SecY, we have determined, through molecular dynamics flexible fitting (MDFF), an atomic-resolution model of the ribosome-channel complex. We characterized computationally the sites of ribosome-SecY interaction within the complex and determined the effect of ribosome binding on the SecY channel. We also constructed a model of a ribosome in complex with a SecY dimer by adding a second copy of SecY to the MDFF-derived model. The study involved 2.7-million-atom simulations over altogether nearly 50 ns. PMID:19913480
NASA Astrophysics Data System (ADS)
Jackson-Blake, L. A.; Sample, J. E.; Wade, A. J.; Helliwell, R. C.; Skeffington, R. A.
2017-07-01
Catchment-scale water quality models are increasingly popular tools for exploring the potential effects of land management, land use change and climate change on water quality. However, the dynamic, catchment-scale nutrient models in common usage are complex, with many uncertain parameters requiring calibration, limiting their usability and robustness. A key question is whether this complexity is justified. To explore this, we developed a parsimonious phosphorus model, SimplyP, incorporating a rainfall-runoff model and a biogeochemical model able to simulate daily streamflow, suspended sediment, and particulate and dissolved phosphorus dynamics. The model's complexity was compared to that of one popular nutrient model, INCA-P, and the performance of the two models was compared in a small rural catchment in northeast Scotland. For three land use classes, fewer than six SimplyP parameters must be determined through calibration; the rest may be based on measurements, while INCA-P has around 40 unmeasurable parameters. Despite substantially simpler process-representation, SimplyP performed comparably to INCA-P in both calibration and validation and produced similar long-term projections in response to changes in land management. Results support the hypothesis that INCA-P is overly complex for the study catchment. We hope our findings will help prompt wider model comparison exercises, as well as debate among the water quality modeling community as to whether today's models are fit for purpose. Simpler models such as SimplyP have the potential to be useful management and research tools, building blocks for future model development (prototype code is freely available), or benchmarks against which more complex models could be evaluated.
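The parsimony argument is easy to appreciate with the kind of minimal rainfall-runoff building block that parsimonious models compose: a single linear reservoir with one parameter. This is a generic sketch, not SimplyP's actual formulation; the rainfall series and time constant are invented:

```python
def linear_reservoir(rain, k, s0=0.0):
    """Single linear-reservoir rainfall-runoff sketch: each day's rainfall
    is added to storage s, and flow q = s / k drains the store."""
    s, flows = s0, []
    for r in rain:
        s += r
        q = s / k
        s -= q
        flows.append(q)
    return flows

# Ten dry days after one 20 mm event: a geometric recession
print([round(q, 2) for q in linear_reservoir([20] + [0] * 9, k=4.0)])
# -> [5.0, 3.75, 2.81, ...]
```

One measurable time constant per land use class, as opposed to dozens of free parameters, is the trade-off the abstract is weighing.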
2.5D complex resistivity modeling and inversion using unstructured grids
NASA Astrophysics Data System (ADS)
Xu, Kaijun; Sun, Jie
2016-04-01
The characteristics of the complex resistivity of rocks and ores have long been recognized. The Cole-Cole model (CCM) is generally used to describe complex resistivity. It has been shown that the electrical anomaly of a geologic body can be quantitatively estimated from CCM parameters such as direct resistivity (ρ0), chargeability (m), time constant (τ) and frequency dependence (c). It is therefore very important to obtain the complex parameters of a geologic body. Complex structures and terrain are difficult to approximate with a traditional rectangular grid. To enhance the numerical accuracy and soundness of modeling and inversion, we use an adaptive finite-element algorithm for forward modeling of frequency-domain 2.5D complex resistivity and implement the conjugate gradient algorithm for its inversion. An adaptive finite-element method is applied to the 2.5D complex resistivity forward modeling of a horizontal electric dipole source. First, the CCM is introduced into Maxwell's equations to calculate the complex-resistivity electromagnetic fields. Next, a pseudo-delta function is used to distribute the electric dipole source. The electromagnetic fields are then expressed in terms of the primary fields caused by the layered structure and the secondary fields caused by the anomalous conductivity of inhomogeneities. Finally, we calculate the electromagnetic field response of complex geoelectric structures such as anticlines, synclines and faults. The modeling results show that adaptive finite-element methods can automatically improve mesh generation and simulate complex geoelectric models using unstructured grids. The 2.5D complex resistivity inversion is implemented with the conjugate gradient algorithm, which does not require forming the sensitivity matrix explicitly but instead computes products of the sensitivity matrix, or its transpose, with a vector.
In addition, the inversion target zones are segmented with fine grids while the background zones are segmented with coarse grids; this reduces the number of grid cells in the inversion and is very helpful for improving computational efficiency. The inversion results verify the validity and stability of the conjugate gradient inversion algorithm. The theoretical calculations indicate that modeling and inversion of 2.5D complex resistivity using unstructured grids are feasible. Unstructured grids improve modeling accuracy, but inversion with a large number of grid cells is extremely time-consuming, so parallel computation of the inversion is necessary. Acknowledgments: We thank the National Natural Science Foundation of China (41304094) for support.
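The Cole-Cole model referred to above has a closed form that is one line of code: rho(i*omega) = rho0 * (1 - m * (1 - 1 / (1 + (i*omega*tau)^c))). A sketch with invented parameter values, checking the two limits (|rho| approaches rho0 at low frequency and rho0 * (1 - m) at high frequency):

```python
def cole_cole(omega, rho0, m, tau, c):
    """Complex resistivity rho(i*omega) from the Cole-Cole model, with
    direct resistivity rho0, chargeability m, time constant tau and
    frequency dependence c."""
    iwt = (1j * omega * tau) ** c
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + iwt)))

# Limits: |rho| -> rho0 as omega -> 0, |rho| -> rho0*(1 - m) as omega -> inf
print(abs(cole_cole(1e-9, 100.0, 0.5, 1.0, 0.5)))   # ~100
print(abs(cole_cole(1e9, 100.0, 0.5, 1.0, 0.5)))    # ~50
```

In the forward problem described above, this frequency-dependent resistivity replaces the real-valued resistivity inside Maxwell's equations.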
NASA Astrophysics Data System (ADS)
Li, Chunguang; Maini, Philip K.
2005-10-01
The Penna bit-string model successfully encompasses many phenomena of population evolution, including inheritance, mutation, evolution, and aging. If we consider social interactions among individuals in the Penna model, the population will form a complex network. In this paper, we first modify the Verhulst factor to control only the birth rate, and introduce activity-based preferential reproduction of offspring in the Penna model. The social interactions among individuals are generated by both inheritance and activity-based preferential increase. Then we study the properties of the complex network generated by the modified Penna model. We find that the resulting complex network has a small-world effect and the assortative mixing property.
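A minimal Penna bit-string step illustrates the modification described above, in which the Verhulst factor throttles only births rather than deaths. All parameter values are invented, and the network-building part (inheritance plus activity-based preferential attachment) is omitted:

```python
import random

GENOME_BITS = 32   # bit a set = deleterious mutation expressed from age a
THRESHOLD = 3      # die once THRESHOLD mutations are expressed
REPRO_AGE = 8      # minimum reproduction age
MUT_RATE = 1       # new deleterious mutations per offspring

def step(pop, capacity, rng):
    """One Penna time step; the Verhulst factor limits only the birth rate."""
    survivors = []
    for age, genome in pop:
        age += 1
        mask = (1 << min(age, GENOME_BITS)) - 1
        expressed = bin(genome & mask).count("1")   # mutations at ages <= age
        if age < GENOME_BITS and expressed < THRESHOLD:
            survivors.append((age, genome))
    births = []
    verhulst = 1.0 - len(survivors) / capacity      # birth-limiting factor
    for age, genome in survivors:
        if age >= REPRO_AGE and rng.random() < verhulst:
            child = genome
            for _ in range(MUT_RATE):               # may hit an existing bit
                child |= 1 << rng.randrange(GENOME_BITS)
            births.append((0, child))
    return survivors + births

rng = random.Random(1)
pop = [(0, 0)] * 200
for _ in range(50):
    pop = step(pop, capacity=1000, rng=rng)
print(len(pop))
```

Because only reproduction is throttled, nobody dies from crowding; the population is still bounded because births stop whenever the survivor count reaches the carrying capacity.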
ERIC Educational Resources Information Center
Wu, Jiun-Yu; Kwok, Oi-man
2012-01-01
Both ad-hoc robust sandwich standard error estimators (design-based approach) and multilevel analysis (model-based approach) are commonly used for analyzing complex survey data with nonindependent observations. Although these 2 approaches perform equally well on analyzing complex survey data with equal between- and within-level model structures…
ERIC Educational Resources Information Center
Northrup, Jessie Bolz
2017-01-01
The present article proposes a new developmental model of how young infants adapt and respond to complex contingencies in their environment, and how this influences development. The model proposes that typically developing infants adjust to an increasingly complex environment in ways that make it easier for them to allocate limited attentional…
NASA Astrophysics Data System (ADS)
Bezruchko, Konstantin; Davidov, Albert
2009-01-01
This article describes the scientific and technical complex for the modeling, research and testing of rocket-space vehicles' power installations that was created in the Power Source Laboratory of the National Aerospace University "KhAI". The complex makes it possible to replace full-scale tests with model tests and to reduce the financial and time costs of modeling, researching and testing rocket-space vehicles' power installations. Using the complex, problems of designing and researching rocket-space vehicles' power installations can be solved efficiently, and experimental studies of physical processes and tests of the solar and chemical batteries of rocket-space complexes and space vehicles can be carried out. The complex also supports accelerated testing, diagnostics, lifetime monitoring and reconditioning of the chemical accumulators in rocket-space vehicles' power supply systems.
Food-web complexity emerging from ecological dynamics on adaptive networks.
Garcia-Domingo, Josep L; Saldaña, Joan
2007-08-21
Food webs are complex networks describing trophic interactions in ecological communities. Since Robert May's seminal work on random structured food webs, the complexity-stability debate is a central issue in ecology: does network complexity increase or decrease food-web persistence? A multi-species predator-prey model incorporating adaptive predation shows that the action of ecological dynamics on the topology of a food web (whose initial configuration is generated either by the cascade model or by the niche model) renders, when a significant fraction of adaptive predators is present, hyperbolic complexity-persistence relationships similar to those observed in empirical food webs. It is also shown that the apparent positive relation between complexity and persistence in food webs generated under the cascade model, which has been pointed out in previous papers, disappears when the final connectance is used instead of the initial one to explain species persistence.
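The niche model mentioned above is simple enough to sketch directly: each species receives a niche value on [0, 1] and consumes every species whose niche value falls inside a randomly placed feeding range whose expected width is tuned to the target connectance (the Williams-Martinez construction; the species count and connectance below are invented):

```python
import random

def niche_model(n, connectance, seed=0):
    """Niche-model food web: species i eats every species j whose niche
    value lies within i's feeding range."""
    rng = random.Random(seed)
    niche = sorted(rng.random() for _ in range(n))
    # Beta(1, b) range widths with mean 2C give expected connectance C
    b = 1.0 / (2.0 * connectance) - 1.0
    links = set()
    for i, ni in enumerate(niche):
        r = ni * rng.betavariate(1.0, b)    # feeding-range width
        c = rng.uniform(r / 2.0, ni)        # range centre, below own niche
        for j, nj in enumerate(niche):
            if c - r / 2.0 <= nj <= c + r / 2.0:
                links.add((j, i))           # i eats j
    return niche, links

niche, links = niche_model(20, 0.15)
print(len(links))
```

A web generated this way supplies the initial topology on which the adaptive-predation dynamics then act, and comparing `len(links) / n**2` before and after the dynamics is exactly the initial-versus-final-connectance distinction the abstract draws.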
NASA Astrophysics Data System (ADS)
Wang, Guanghui; Wang, Yufei; Liu, Yijun; Chi, Yuxue
2018-05-01
As the transmission of public opinion on the Internet in the “We the Media” era tends to be supraterritorial, concealed and complex, the traditional “point-to-surface” transmission of information has been transformed into “point-to-point” reciprocal transmission. A foundation for studies of the evolution of public opinion and its transmission on the Internet in the “We the Media” era can be laid by converting the massive amounts of fragmented information on public opinion that exists on “We the Media” platforms into structurally complex networks of information. This paper describes studies of structurally complex network-based modeling of public opinion on the Internet in the “We the Media” era from the perspective of the development and evolution of complex networks. The progress that has been made in research projects relevant to the structural modeling of public opinion on the Internet is comprehensively summarized. The review considers aspects such as regular grid-based modeling of the rules that describe the propagation of public opinion on the Internet in the “We the Media” era, social network modeling, dynamic network modeling, and supernetwork modeling. Moreover, an outlook for future studies that address complex network-based modeling of public opinion on the Internet is put forward as a summary from the perspective of modeling conducted using the techniques mentioned above.
Using iMCFA to Perform the CFA, Multilevel CFA, and Maximum Model for Analyzing Complex Survey Data.
Wu, Jiun-Yu; Lee, Yuan-Hsuan; Lin, John J H
2018-01-01
To construct CFA, MCFA, and maximum MCFA with LISREL v.8 and below, we provide iMCFA (integrated Multilevel Confirmatory Analysis) to examine the potential multilevel factorial structure in the complex survey data. Modeling multilevel structure for complex survey data is complicated because building a multilevel model is not an infallible statistical strategy unless the hypothesized model is close to the real data structure. Methodologists have suggested using different modeling techniques to investigate potential multilevel structure of survey data. Using iMCFA, researchers can visually set the between- and within-level factorial structure to fit MCFA, CFA and/or MAX MCFA models for complex survey data. iMCFA can then yield between- and within-level variance-covariance matrices, calculate intraclass correlations, perform the analyses and generate the outputs for respective models. The summary of the analytical outputs from LISREL is gathered and tabulated for further model comparison and interpretation. iMCFA also provides LISREL syntax of different models for researchers' future use. An empirical and a simulated multilevel dataset with complex and simple structures in the within or between level were used to illustrate the usability and the effectiveness of the iMCFA procedure on analyzing complex survey data. The analytic results of iMCFA using Muthen's limited information estimator were compared with those of Mplus using Full Information Maximum Likelihood regarding the effectiveness of different estimation methods.
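The intraclass correlation that such tools report is worth making concrete: it is the share of total variance that lies between clusters. A one-way ANOVA estimator on made-up clustered data (a generic ICC(1) sketch, not iMCFA's implementation):

```python
def intraclass_correlation(groups):
    """ICC(1) from a one-way random-effects ANOVA: the estimated
    between-cluster variance as a share of total variance."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    # ANOVA estimator of the between-cluster variance component
    n0 = (n - sum(len(g) ** 2 for g in groups) / n) / (k - 1)
    var_between = max((ms_between - ms_within) / n0, 0.0)
    return var_between / (var_between + ms_within)

# Strong cluster effect: nearly all variance lies between clusters
clustered = [[10.0, 10.1, 9.9], [20.0, 20.2, 19.8], [30.1, 29.9, 30.0]]
print(round(intraclass_correlation(clustered), 3))   # ~1.0
```

A non-trivial ICC is precisely the situation in which ignoring the multilevel structure (plain CFA instead of MCFA) misstates standard errors.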
2015-07-14
AFRL-OSR-VA-TR-2015-0202. Robust Decision Making: The Cognitive and Computational Modeling of Team Problem Solving for Decision Making under Complex and Dynamic Conditions. Grant FA9550-12-1-... ...functioning as they solve complex problems, and propose the means to improve the performance of teams under changing or adversarial conditions.
USDA-ARS?s Scientific Manuscript database
Adsorption-desorption reactions are important processes that affect the transport of contaminants in the environment. Surface complexation models are chemical models that can account for the effects of variable chemical conditions, such as pH, on adsorption reactions. These models define specific ...
Research on application of intelligent computation based LUCC model in urbanization process
NASA Astrophysics Data System (ADS)
Chen, Zemin
2007-06-01
Global change study is a comprehensive, interdisciplinary research activity conducted with international cooperation; it arose in the 1980s and has the largest scope of any such effort. The interaction between land use and cover change (LUCC), a research field at the crossing of natural and social science, has become one of the core subjects of global change study as well as its front edge and focus. It is necessary to study land use and cover change in the urbanization process and to build an analog model of urbanization that can describe, simulate, and analyze the dynamic behaviors of urban development and reveal the basic characteristics and rules of the urbanization process. This has positive practical and theoretical significance for formulating urban and regional sustainable development strategies. The effect of urbanization on land use and cover change is mainly embodied in changes to the quantity and spatial structure of urban space, and the LUCC model of the urbanization process has become an important research subject in urban geography and urban planning. In this paper, building upon previous research achievements, I systematically analyze research on land use/cover change in the urbanization process using the theories of complexity science and intelligent computation; build a model for simulating and forecasting the dynamic evolution of urban land use and cover change on the basis of the cellular automaton model of complexity science and multi-agent theory; and extend the Markov model, the traditional CA model, and the Agent model, introducing complexity science and intelligent computation theory into LUCC research to build an intelligent-computation-based LUCC model for analog research on land use and cover change in urbanization, together with case studies. The concrete contents are as follows: 1. Complexity of LUCC research in the urbanization process.
The urbanization process is analyzed in combination with the contents of complexity science and the conception of complexity features to reveal the complexity of LUCC research in the urbanization process. The urban space system is a complex economic and cultural phenomenon as well as a social process; it is the comprehensive characterization of urban society, economy, and culture, and a complex spatial system formed by society, economy, and nature. It has dissipative-structure characteristics such as openness, dynamics, self-organization, and non-equilibrium. Traditional models cannot simulate the social, economic, and natural driving forces of LUCC, including the main feedback from LUCC to those driving forces. 2. Establishment of an extended Markov model for LUCC analog research in the urbanization process. First, traditional LUCC research models are used to compute the rate of change of regional land use by calculating the dynamic degree, exploitation degree, and consumption degree of land use; fuzzy set theory is then used to rewrite the traditional Markov model, establish the structure transfer matrix of land use, and forecast and analyze the dynamic change and development trend of land use; and noticeable problems and corresponding measures in the urbanization process are presented according to the research results. 3. Application of intelligent computation and complexity science methods to the LUCC analog model of the urbanization process. After a detailed elaboration of the theory and models of LUCC research in the urbanization process, the shortcomings of existing models are analyzed (namely, the difficulty of resolving the many complex phenomena of the urban space system), and possible structures for LUCC analog research are discussed in combination with the theories of intelligent computation and complexity science.
BP artificial neural networks and genetic algorithms from intelligent computation, and the CA model and MAS technology from complexity science, are then analyzed in application; their theoretical origins and characteristics are discussed in detail, their feasibility for LUCC analog research is elaborated, and improvements for the existing problems of such models are brought forward. 4. Establishment of an LUCC analog model of the urbanization process based on the theories of intelligent computation and complexity science. Building on the above BP artificial neural network, genetic algorithms, CA model, and multi-agent technology, improvement methods and application assumptions for their extension to geography are put forward; an LUCC analog model of the urbanization process is built on the CA and Agent models, combining the learning mechanism of the BP artificial neural network with fuzzy logic reasoning so that rules can be expressed by explicit formulas and the initial rules amended through self-study; and the network structure of the LUCC analog model and the methods and procedures for its parameters are optimized with genetic algorithms. In this paper, I introduce the research theory and methods of complexity science into LUCC analog research and present an LUCC analog model based on the CA model and MAS theory. Meanwhile, I extend the traditional Markov model and introduce fuzzy set theory into the data screening and parameter amendment of the improved model to improve the accuracy and feasibility of the Markov model for research on land use/cover change.
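The Markov step at the heart of such LUCC forecasting can be sketched in a few lines: a land-use structure vector is propagated through a row-stochastic transfer matrix to forecast the trend of land-use change (the classes and transition rates below are hypothetical, not the paper's calibrated matrix):

```python
def step(shares, transition):
    """Advance a land-use share vector one period through a row-stochastic matrix."""
    n = len(shares)
    return [sum(shares[i] * transition[i][j] for i in range(n)) for j in range(n)]

# Hypothetical classes: cropland, built-up, forest (shares sum to 1)
shares = [0.6, 0.2, 0.2]
# Row i -> probability that class i converts to class j in one period
transition = [
    [0.90, 0.08, 0.02],  # cropland mostly persists; some is urbanized
    [0.00, 1.00, 0.00],  # built-up land is effectively irreversible
    [0.05, 0.02, 0.93],  # forest loses a little to cropland and built-up
]

for _ in range(10):      # forecast ten periods ahead
    shares = step(shares, transition)
```

Because each row sums to 1, the forecast shares remain a valid composition; the built-up share grows monotonically here, mimicking urbanization pressure.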
A Primer for Model Selection: The Decisive Role of Model Complexity
NASA Astrophysics Data System (ADS)
Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang
2018-03-01
Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)
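Two of the most widely used criteria in the classes described above can be computed directly from their standard definitions; the sketch below (with invented likelihood values) shows how a nonconsistent criterion (AIC) and a consistent one (BIC) can disagree on the same candidates:

```python
import math

def aic(log_lik, k):
    """Akaike Information Criterion: fit term plus a fixed penalty of 2 per parameter."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian Information Criterion: the penalty grows with sample size n."""
    return k * math.log(n) - 2 * log_lik

# Hypothetical candidates: (maximized log-likelihood, number of parameters)
candidates = {"simple": (-120.0, 3), "complex": (-112.0, 9)}
n = 200  # sample size

best_aic = min(candidates, key=lambda m: aic(*candidates[m]))
best_bic = min(candidates, key=lambda m: bic(candidates[m][0], candidates[m][1], n))
```

Here AIC prefers the complex model (its fit gain outweighs the fixed penalty), while BIC's sample-size-dependent penalty picks the simple one, which is exactly why matching the criterion to the modeling goal matters.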
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.
2014-09-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.
Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems.
Williams, Richard A; Timmis, Jon; Qwarnstrom, Eva E
2016-01-01
Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model) that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose and unambiguously define the functional requirements for the resultant computational model.
Network model of bilateral power markets based on complex networks
NASA Astrophysics Data System (ADS)
Wu, Yang; Liu, Junyong; Li, Furong; Yan, Zhanxin; Zhang, Li
2014-06-01
The bilateral power transaction (BPT) mode has become a typical market organization with the restructuring of the electric power industry, and a proper model that can capture its characteristics is urgently needed. Such a model has been lacking because of this market organization's complexity. As a promising approach to modeling complex systems, complex networks can provide a sound theoretical framework for developing a proper simulation model. In this paper, a complex network model of the BPT market is proposed. In this model, the price advantage mechanism is a precondition. Unlike general commodity transactions, both the financial layer and the physical layer are considered in the model. Through simulation analysis, the feasibility and validity of the model are verified. At the same time, some typical statistical features of the BPT network are identified: the degree distribution follows a power law, the clustering coefficient is low, and the average path length is comparatively long. Moreover, the topological stability of the BPT network is tested. The results show that the network displays topological robustness to random member failures while it is fragile against deliberate attacks, and that the network can resist cascading failure to some extent. These features are helpful for decision making and risk management in BPT markets.
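The two network statistics named in the abstract, clustering coefficient and average path length, can be computed from scratch on a toy undirected graph (a generic sketch, not the paper's BPT simulation; the graph is invented):

```python
from collections import deque

def clustering(adj, v):
    """Local clustering: fraction of pairs of v's neighbours that are connected."""
    nbrs = list(adj[v])
    d = len(nbrs)
    if d < 2:
        return 0.0
    links = sum(1 for i in range(d) for j in range(i + 1, d)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (d * (d - 1))

def avg_path_length(adj):
    """Mean shortest-path length over all ordered pairs (graph assumed connected)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:                       # breadth-first search from src
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(d for v, d in dist.items() if v != src)
        pairs += len(adj) - 1
    return total / pairs

# Toy market graph: a triangle (nodes 1-3) with a pendant node 4
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
```

On real BPT-like networks these same routines would reveal the low clustering and long average paths reported in the abstract.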
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tournassat, C.; Tinnacher, R. M.; Grangeon, S.
The prediction of U(VI) adsorption onto montmorillonite clay is confounded by the complexities of: (1) the montmorillonite structure in terms of adsorption sites on basal and edge surfaces, and the complex interactions between the electrical double layers at these surfaces, and (2) U(VI) solution speciation, which can include cationic, anionic and neutral species. Previous U(VI)-montmorillonite adsorption and modeling studies have typically expanded classical surface complexation modeling approaches, initially developed for simple oxides, to include both cation exchange and surface complexation reactions. However, previous models have not taken into account the unique characteristics of electrostatic surface potentials that occur at montmorillonite edge sites, where the electrostatic surface potential of basal plane cation exchange sites influences the surface potential of neighboring edge sites (‘spillover’ effect).
2017-10-06
NASA Astrophysics Data System (ADS)
Muhammad, Ario; Goda, Katsuichiro
2018-03-01
This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted by considering three resolutions of DEMs, i.e. 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution and hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality number are more sensitive to the complexity of earthquake source characterization than the grid resolution. Thus, the uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties of tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From tsunami risk management perspectives, this indeed creates big data, which are useful for making effective and robust decisions.
Sparkle model for AM1 calculation of lanthanide complexes: improved parameters for europium.
Rocha, Gerd B; Freire, Ricardo O; Da Costa, Nivan B; De Sá, Gilberto F; Simas, Alfredo M
2004-04-05
In the present work, we sought to improve our sparkle model for the calculation of lanthanide complexes, SMLC, in various ways: (i) inclusion of the europium atomic mass, (ii) reparametrization of the model within AM1 from a new response function including all distances of the coordination polyhedron for tris(acetylacetonate)(1,10-phenanthroline)europium(III), (iii) implementation of the model in the software package MOPAC93r2, and (iv) inclusion of spherical Gaussian functions in the expression which computes the core-core repulsion energy. The parametrization results indicate that SMLC II is superior to the previous version of the model because the Gaussian functions proved essential for a better description of the geometries of the complexes. In order to validate our parametrization, we carried out calculations on 96 europium(III) complexes selected from the Cambridge Structural Database 2003 and compared our predicted ground-state geometries with the experimental ones. Our results show that this new parametrization of the SMLC model, with the inclusion of spherical Gaussian functions in the core-core repulsion energy, is better able to predict the Eu-ligand distances than the previous version. The unsigned mean error for all Eu-L interatomic distances in all 96 complexes, which for the original SMLC is 0.3564 Å, is lowered to 0.1993 Å when the model is parametrized with the inclusion of two Gaussian functions. Our results also indicate that the model is more applicable to europium complexes with beta-diketone ligands. As such, we conclude that this improved model can be considered a powerful tool for the study of lanthanide complexes and their applications, such as the modeling of light-conversion molecular devices.
The practical use of simplicity in developing ground water models
Hill, M.C.
2006-01-01
The advantages of starting with simple models and building complexity slowly can be significant in the development of ground water models. In many circumstances, simpler models are characterized by fewer defined parameters and shorter execution times. In this work, the number of parameters is used as the primary measure of simplicity and complexity; the advantages of shorter execution times also are considered. The ideas are presented in the context of constructing ground water models but are applicable to many fields. Simplicity first is put in perspective as part of the entire modeling process using 14 guidelines for effective model calibration. It is noted that neither very simple nor very complex models generally produce the most accurate predictions and that determining the appropriate level of complexity is an ill-defined process. It is suggested that a thorough evaluation of observation errors is essential to model development. Finally, specific ways are discussed to design useful ground water models that have fewer parameters and shorter execution times.
Complexity and demographic explanations of cumulative culture.
Querbes, Adrien; Vaesen, Krist; Houkes, Wybo
2014-01-01
Formal models have linked prehistoric and historical instances of technological change (e.g., the Upper Paleolithic transition, cultural loss in Holocene Tasmania, scientific progress since the late nineteenth century) to demographic change. According to these models, cumulation of technological complexity is inhibited by decreasing population levels and favoured by increasing ones. Here we show that these findings are contingent on how complexity is defined: demography plays a much more limited role in sustaining cumulative culture when formal models deploy Herbert Simon's definition of complexity rather than the particular definitions of complexity hitherto assumed. Given that currently available empirical evidence does not allow discriminating proper from improper definitions of complexity, our robustness analyses put into question the force of recent demographic explanations of particular episodes of cultural change.
Statistical Analysis of Complexity Generators for Cost Estimation
NASA Technical Reports Server (NTRS)
Rowell, Ginger Holmes
1999-01-01
Predicting the cost of cutting edge new technologies involved with spacecraft hardware can be quite complicated. A new feature of the NASA Air Force Cost Model (NAFCOM), called the Complexity Generator, is being developed to model the complexity factors that drive the cost of space hardware. This parametric approach is also designed to account for the differences in cost, based on factors that are unique to each system and subsystem. The cost driver categories included in this model are weight, inheritance from previous missions, technical complexity, and management factors. This paper explains the Complexity Generator framework, the statistical methods used to select the best model within this framework, and the procedures used to find the region of predictability and the prediction intervals for the cost of a mission.
NASA Astrophysics Data System (ADS)
Pajusalu, Mihkel; Kunz, Ralf; Rätsep, Margus; Timpmann, Kõu; Köhler, Jürgen; Freiberg, Arvi
2015-11-01
Bacterial light-harvesting pigment-protein complexes are very efficient at converting photons into excitons and transferring them to reaction centers, where the energy is stored in a chemical form. Optical properties of the complexes are known to change significantly in time and also vary from one complex to another; therefore, a detailed understanding of the variations on the level of single complexes and how they accumulate into effects that can be seen on the macroscopic scale is required. While experimental and theoretical methods exist to study the spectral properties of light-harvesting complexes on both individual complex and bulk ensemble levels, they have been developed largely independently of each other. To fill this gap, we simultaneously analyze experimental low-temperature single-complex and bulk ensemble optical spectra of the light-harvesting complex-2 (LH2) chromoproteins from the photosynthetic bacterium Rhodopseudomonas acidophila in order to find a unique theoretical model consistent with both experimental situations. The model, which satisfies most of the observations, combines strong exciton-phonon coupling with significant disorder, characteristic of the proteins. We establish a detailed disorder model that, in addition to containing a C2-symmetrical modulation of the site energies, distinguishes between static intercomplex and slow conformational intracomplex disorders. The model evaluations also verify that, despite best efforts, the single-LH2-complex measurements performed so far may be biased toward complexes with higher Huang-Rhys factors.
Cui, Yiqian; Shi, Junyou; Wang, Zili
2015-11-01
Quantum Neural Network (QNN) models have attracted great attention since they introduce a new neural computing manner based on quantum entanglement. However, the existing QNN models are mainly based on real quantum operations, and the potential of quantum entanglement is not fully exploited. In this paper, we propose a novel quantum neuron model called the Complex Quantum Neuron (CQN) that realizes a deep quantum entanglement. A novel hybrid network model, Complex Rotation Quantum Dynamic Neural Networks (CRQDNN), is also proposed based on the CQN. CRQDNN is a three-layer model with both CQN and classical neurons. An infinite impulse response (IIR) filter is embedded in the network model to enable the memory function needed to process time-series inputs. The Levenberg-Marquardt (LM) algorithm is used for fast parameter learning. The network model is developed to conduct time-series predictions. Two application studies are presented: chaotic time series prediction and electronic remaining useful life (RUL) prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.
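The memory mechanism that an embedded IIR filter provides can be illustrated with the simplest such filter, a first-order recursion y[n] = b*x[n] + a*y[n-1] (the coefficients here are arbitrary for illustration, not those of the CRQDNN): past inputs keep influencing the output with geometrically decaying weight, which is what lets a network handle time-series inputs.

```python
def iir_first_order(xs, a=0.5, b=0.5):
    """First-order IIR filter: y[n] = b*x[n] + a*y[n-1], with y[-1] = 0.

    Each output mixes the current input with an exponentially fading
    memory of all earlier inputs, hence "infinite impulse response".
    """
    ys, y = [], 0.0
    for x in xs:
        y = b * x + a * y
        ys.append(y)
    return ys

# Impulse input: the response decays geometrically but never truncates
impulse = [1.0, 0.0, 0.0, 0.0, 0.0]
response = iir_first_order(impulse)
```

A finite-impulse-response filter would return to exactly zero after its last tap; the nonzero tail here is the persistent memory the abstract refers to.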
NASA Technical Reports Server (NTRS)
Befrui, Bizhan A.
1995-01-01
This viewgraph presentation discusses the following: STAR-CD computational features; STAR-CD turbulence models; common features of industrial complex flows; industry-specific CFD development requirements; applications and experiences of industrial complex flows, including flow in rotating disc cavities, diffusion hole film cooling, internal blade cooling, and external car aerodynamics; and conclusions on turbulence modeling needs.
Variable Complexity Optimization of Composite Structures
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
2002-01-01
The use of several levels of modeling in design has been dubbed variable complexity modeling. The work under the grant focused on developing variable complexity modeling strategies with emphasis on response surface techniques. Applications included design of stiffened composite plates for improved damage tolerance, the use of response surfaces for fitting weights obtained by structural optimization, and design against uncertainty using response surface techniques.
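The response-surface idea mentioned above can be sketched with an ordinary quadratic fit: a cheap polynomial surrogate is fitted to a handful of expensive model evaluations and then queried in place of the model (the design points and surrogate form below are invented for illustration, not the grant's actual data):

```python
def _det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def fit_quadratic(pts):
    """Fit y = c0 + c1*x + c2*x^2 exactly through three design points (Cramer's rule)."""
    (x1, y1), (x2, y2), (x3, y3) = pts
    D  = _det3([[1.0, x1, x1 * x1], [1.0, x2, x2 * x2], [1.0, x3, x3 * x3]])
    c0 = _det3([[y1, x1, x1 * x1], [y2, x2, x2 * x2], [y3, x3, x3 * x3]]) / D
    c1 = _det3([[1.0, y1, x1 * x1], [1.0, y2, x2 * x2], [1.0, y3, x3 * x3]]) / D
    c2 = _det3([[1.0, x1, y1], [1.0, x2, y2], [1.0, x3, y3]]) / D
    return lambda x: c0 + c1 * x + c2 * x * x

# Pretend these pairs came from three expensive structural-optimization runs:
# (design variable, optimized weight) -- invented numbers
surrogate = fit_quadratic([(1.0, 5.0), (2.0, 3.0), (3.0, 5.0)])
```

Once fitted, the surrogate can be minimized or evaluated thousands of times at negligible cost, which is the essence of variable-complexity design with response surfaces.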
Bim Automation: Advanced Modeling Generative Process for Complex Structures
NASA Astrophysics Data System (ADS)
Banfi, F.; Fai, S.; Brumana, R.
2017-08-01
The new paradigm of the complexity of modern and historic structures, which are characterised by complex forms and morphological and typological variables, is one of the greatest challenges for building information modelling (BIM). Generation of complex parametric models needs new scientific knowledge concerning new digital technologies. These elements help to store a vast quantity of information during the life cycle of buildings (LCB). The latest developments of parametric applications do not provide advanced tools, resulting in time-consuming work for the generation of models. This paper presents a method capable of processing and creating complex parametric Building Information Models (BIM) with Non-Uniform Rational Basis Splines (NURBS) and multiple levels of detail (Mixed and ReverseLoD) based on accurate 3D photogrammetric and laser scanning surveys. Complex 3D elements are converted into parametric BIM software and finite element applications (BIM to FEA) using specific exchange formats and new modelling tools. The proposed approach has been applied to different case studies: the BIM of the modern structure for the courtyard of the West Block on Parliament Hill in Ottawa (Ontario) and the BIM of Masegra Castel in Sondrio (Italy), encouraging the dissemination and interaction of scientific results without losing information during the generative process.
Balancing model complexity and measurements in hydrology
NASA Astrophysics Data System (ADS)
Van De Giesen, N.; Schoups, G.; Weijs, S. V.
2012-12-01
The Data Processing Inequality implies that hydrological modeling can only reduce, and never increase, the amount of information available in the original data used to formulate and calibrate hydrological models: I(X;Z(Y)) ≤ I(X;Y). Still, hydrologists around the world seem quite content building models for "their" watersheds to move our discipline forward. Hydrological models tend to have a hybrid character with respect to underlying physics. Most models make use of some well established physical principles, such as mass and energy balances. One could argue that such principles are based on many observations, and therefore add data. These physical principles, however, are applied to hydrological models that often contain concepts that have no direct counterpart in the observable physical universe, such as "buckets" or "reservoirs" that fill up and empty out over time. These not-so-physical concepts are more like the Artificial Neural Networks and Support Vector Machines of the Artificial Intelligence (AI) community. Within AI, one quickly came to the realization that by increasing model complexity, one could basically fit any dataset but that complexity should be controlled in order to be able to predict unseen events. The more data are available to train or calibrate the model, the more complex it can be. Many complexity control approaches exist in AI, with Solomonoff inductive inference being one of the first formal approaches, the Akaike Information Criterion the most popular, and Statistical Learning Theory arguably being the most comprehensive practical approach. In hydrology, complexity control has hardly been used so far. There are a number of reasons for that lack of interest, the more valid ones of which will be presented during the presentation. For starters, there are no readily available complexity measures for our models. 
Second, some unrealistic simplifications of the underlying complex physics tend to have a smoothing effect on possible model outcomes, thereby preventing the most obvious results of over-fitting. Third, dependence within and between time series poses an additional analytical problem. Finally, there are arguments to be made that the often-discussed "equifinality" in hydrological models is simply a different manifestation of the lack of complexity control. In turn, this points toward a general idea, quite popular in sciences other than hydrology, that additional data gathering is a good way to increase the information content of our descriptions of hydrological reality.
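As a toy illustration of the AIC-style complexity control mentioned above (entirely hypothetical data, not a hydrological model), one can score candidate models of increasing order and let the penalty term reject spurious complexity:

```python
import numpy as np

# Hypothetical data: the "true" process is linear with noise, but we offer
# the fitting procedure polynomial surrogates of increasing complexity.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 0.5 + rng.normal(0.0, 0.1, x.size)

def aic(order):
    """Gaussian-likelihood AIC: n*ln(RSS/n) + 2k, with k fitted parameters."""
    coef = np.polyfit(x, y, order)
    rss = float(np.sum((y - np.polyval(coef, x)) ** 2))
    n, k = x.size, order + 1
    return n * np.log(rss / n) + 2 * k

scores = {order: aic(order) for order in range(1, 8)}
best = min(scores, key=scores.get)
print(best)  # higher orders always lower the RSS, but AIC penalizes them
```

Higher-order fits always reduce the residual sum of squares, yet the 2k penalty makes the low-order model win, which is exactly the trade-off complexity control formalizes.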
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Almeida, J.C.M.
1990-01-01
A detailed analysis is made of two stereochemical models commonly used in lanthanide and actinide coordination and organometallic chemistry: Li Xing-fu's Cone Packing Model and K. N. Raymond's Ionic Model. Corrections are introduced in the first model as a basis to discuss the stability and structure of known complexes. A Steric Coordination Number is defined for the second model, based on the solid angle, to correlate metal-ligand distances in complexes with the ionic radii of the elements and to assign effective radii to the ligands, related to the donating power of the coordinating atoms. As an application of the models, the syntheses and characterizations of thorium(IV) complexes with polypyrazolylborates, (HBPz3)⁻ and (HB(3,5-Me2Pz)3)⁻, and alkoxides, aryloxides, carboxylates, amides, thiolates, alkyls and cyclopentadienyl are described and their stabilities discussed. The geometries of the complexes in the solid state and in solution are discussed, and a mechanism is proposed to explain the fluxionality in solution of the complexes with (HBPz3)⁻.
Routine Discovery of Complex Genetic Models using Genetic Algorithms
Moore, Jason H.; Hahn, Lance W.; Ritchie, Marylyn D.; Thornton, Tricia A.; White, Bill C.
2010-01-01
Simulation studies are useful in various disciplines for a number of reasons including the development and evaluation of new computational and statistical methods. This is particularly true in human genetics and genetic epidemiology where new analytical methods are needed for the detection and characterization of disease susceptibility genes whose effects are complex, nonlinear, and partially or solely dependent on the effects of other genes (i.e. epistasis or gene-gene interaction). Despite this need, the development of complex genetic models that can be used to simulate data is not always intuitive. In fact, only a few such models have been published. We have previously developed a genetic algorithm approach to discovering complex genetic models in which two single nucleotide polymorphisms (SNPs) influence disease risk solely through nonlinear interactions. In this paper, we extend this approach for the discovery of high-order epistasis models involving three to five SNPs. We demonstrate that the genetic algorithm is capable of routinely discovering interesting high-order epistasis models in which each SNP influences risk of disease only through interactions with the other SNPs in the model. This study opens the door for routine simulation of complex gene-gene interactions among SNPs for the development and evaluation of new statistical and computational approaches for identifying common, complex multifactorial disease susceptibility genes. PMID:20948983
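A minimal sketch of the kind of purely epistatic penetrance model such simulations target (the table values below are invented for illustration, not taken from the paper): two SNPs jointly determine disease risk while each SNP's marginal penetrance stays flat, so single-locus tests see nothing.

```python
# Hypothetical XOR-like penetrance table: disease risk depends only on the
# combination of the two genotypes (coded 0/1/2 minor-allele counts).
PENETRANCE = {
    (0, 0): 0.0, (0, 1): 0.1, (0, 2): 0.0,
    (1, 0): 0.1, (1, 1): 0.0, (1, 2): 0.1,
    (2, 0): 0.0, (2, 1): 0.1, (2, 2): 0.0,
}

def hardy_weinberg(maf):
    """Genotype frequencies for a biallelic SNP with minor-allele freq maf."""
    p, q = 1.0 - maf, maf
    return {0: p * p, 1: 2 * p * q, 2: q * q}

freq = hardy_weinberg(0.5)

# Marginal penetrance of SNP A: average risk over SNP B's genotypes.
marginal_a = {ga: sum(freq[gb] * PENETRANCE[(ga, gb)] for gb in range(3))
              for ga in range(3)}
print(marginal_a)  # the same value for every genotype of SNP A

# Yet the joint model is clearly non-flat:
print(max(PENETRANCE.values()) - min(PENETRANCE.values()))
```

Because the marginal risk is identical for every genotype of either SNP, only methods that examine genotype combinations can detect the effect, which is what makes such models useful test beds.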
Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches.
Walke, Russell C; Kirchner, Gerald; Xu, Shulan; Dverstorp, Björn
2015-10-01
Geological disposal facilities are the preferred option for high-level radioactive waste, due to their potential to provide isolation from the surface environment (biosphere) on very long timescales. Assessments need to strike a balance between stylised models and more complex approaches that draw more extensively on site-specific information. This paper explores the relative merits of complex versus more stylised biosphere models in the context of a site-specific assessment. The more complex biosphere modelling approach was developed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for the Forsmark candidate site for a spent nuclear fuel repository in Sweden. SKB's approach is built on a landscape development model, whereby radionuclide releases to distinct hydrological basins/sub-catchments (termed 'objects') are represented as they evolve through land rise and climate change. Each of seventeen of these objects is represented with more than 80 site-specific parameters, about 22 of which are time-dependent, resulting in over 5000 input values per object. The more stylised biosphere models developed for this study represent releases to individual ecosystems without environmental change and include the most plausible transport processes. In the context of regulatory review of the landscape modelling approach adopted in the SR-Site assessment in Sweden, the more stylised representation has helped to build understanding of the more complex modelling approaches by providing bounding results, checking the reasonableness of the more complex modelling, highlighting uncertainties introduced through conceptual assumptions and helping to quantify the conservatisms involved. The more stylised biosphere models are also shown to be capable of reproducing the results of more complex approaches. A major recommendation is that biosphere assessments need to justify the degree of complexity in modelling approaches as well as simplifying and conservative assumptions.
In light of the uncertainties concerning the biosphere on very long timescales, stylised biosphere models are shown to provide a useful point of reference in themselves and remain a valuable tool for nuclear waste disposal licencing procedures. Copyright © 2015 Elsevier Ltd. All rights reserved.
Managing Complex Interoperability Solutions using Model-Driven Architecture
2011-06-01
such as Oracle or MySQL. Each data model for a specific RDBMS is a distinct PSM. Or the system may want to exchange information with other C2 ... reduced number of transformations, e.g., from an RDBMS physical schema to the corresponding SQL script needed to instantiate the tables in a relational ... importance of models. In engineering, a model serves several purposes: 1. It presents an abstract view of a complex system or of a complex information
2016-01-01
Background: Computer networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. Purpose: It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of agent-based modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. Method: We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. Results: The conducted experiments demonstrated two important results: first, a CABC-based modeling approach such as agent-based modeling can be an effective approach to modeling complex problems in the domain of IoT; second, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach. PMID:26812235
Complexity reduction of biochemical rate expressions.
Schmidt, Henning; Madsen, Mads F; Danø, Sune; Cedersund, Gunnar
2008-03-15
The current trend in dynamical modelling of biochemical systems is to construct more and more mechanistically detailed and thus complex models. The complexity is reflected in the number of dynamic state variables and parameters, as well as in the complexity of the kinetic rate expressions. However, a greater level of complexity, or level of detail, does not necessarily imply better models, or a better understanding of the underlying processes. Data often does not contain enough information to discriminate between different model hypotheses, and such overparameterization makes it hard to establish the validity of the various parts of the model. Consequently, there is an increasing demand for model reduction methods. We present a new reduction method that reduces complex rational rate expressions, such as those often used to describe enzymatic reactions. The method is a novel term-based identifiability analysis, which is easy to use and allows for user-specified reductions of individual rate expressions in complete models. The method is one of the first methods to meet the classical engineering objective of improved parameter identifiability without losing the systems biology demand of preserved biochemical interpretation. The method has been implemented in the Systems Biology Toolbox 2 for MATLAB, which is freely available from http://www.sbtoolbox2.org. The Supplementary Material contains scripts that show how to use it by applying the method to the example models, discussed in this article.
Ellis, Alicia M.; Garcia, Andres J.; Focks, Dana A.; Morrison, Amy C.; Scott, Thomas W.
2011-01-01
Models can be useful tools for understanding the dynamics and control of mosquito-borne disease. More detailed models may be more realistic and better suited for understanding local disease dynamics; however, evaluating model suitability, accuracy, and performance becomes increasingly difficult with greater model complexity. Sensitivity analysis is a technique that permits exploration of complex models by evaluating the sensitivity of the model to changes in parameters. Here, we present results of sensitivity analyses of two interrelated complex simulation models of mosquito population dynamics and dengue transmission. We found that dengue transmission may be influenced most by survival in each life stage of the mosquito, mosquito biting behavior, and duration of the infectious period in humans. The importance of these biological processes for vector-borne disease models and the overwhelming lack of knowledge about them make acquisition of relevant field data on these biological processes a top research priority. PMID:21813844
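As a back-of-the-envelope companion to such sensitivity analyses, elasticities of the classical Ross-Macdonald basic reproduction number (a textbook expression, not the paper's simulation models; the parameter values below are assumed for illustration) already hint at why mosquito survival and biting behavior dominate:

```python
import math

# Assumed illustrative values: m = mosquitoes per human, a = bites per
# mosquito per day, b and c = transmission probabilities, p = daily
# survival, n = extrinsic incubation period (days), r = human recovery rate.
DEFAULTS = dict(m=10.0, a=0.3, b=0.5, c=0.5, p=0.9, n=10.0, r=0.1)

def r0(q):
    """Classical Ross-Macdonald basic reproduction number."""
    return (q["m"] * q["a"] ** 2 * q["b"] * q["c"] * q["p"] ** q["n"]) / (
        -q["r"] * math.log(q["p"]))

def elasticity(name, eps=1e-6):
    """Percent change in R0 per percent change in one parameter."""
    up = dict(DEFAULTS)
    up[name] *= 1.0 + eps
    return (r0(up) - r0(DEFAULTS)) / (r0(DEFAULTS) * eps)

for name in DEFAULTS:
    print(name, round(elasticity(name), 2))
# daily survival p has by far the largest elasticity, echoing the finding
# that survival in each mosquito life stage matters most
```

The biting rate enters squared (elasticity 2) and survival enters both as p^n and through expected lifespan, which is why these parameters top sensitivity rankings in far more detailed models as well.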
Simplified process model discovery based on role-oriented genetic mining.
Zhao, Weidong; Liu, Xi; Dai, Weihui
2014-01-01
Process mining is the automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, existing role-oriented process mining methods focus on the correctness and integrity of roles while ignoring the role complexity of the process model, which directly impacts the understandability and quality of the model. To address these problems, we propose a genetic programming approach to mine a simplified process model. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models. The new role complexity metric is designed from role cohesion and coupling, and applied to discover roles in process models. Moreover, the fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments, comparing with related studies, to show that the proposed method is more effective for streamlining the process.
Schryver, Jack; Nutaro, James; Shankar, Mallikarjun
2015-10-30
An agent-based simulation model hierarchy emulating disease states and behaviors critical to progression of type 2 diabetes was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. This model hierarchy, which mimics diabetes progression over an aggregated U.S. population, was disaggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic outputs of the system dynamics model. The four estimated models attempted to replicate stock counts representing disease states in the system dynamics model, while estimating impacts of an elderliness factor, an obesity factor and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior, a joint function of individual attitude and diffusion of social norms that spread over each agent's social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model, which required numerous time series inputs to make its predictions. All three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach to translating complex system dynamics models into agent-based alternatives that are both conceptually simpler and capable of capturing the main effects of complex local agent-agent interactions.
Comparing an annual and daily time-step model for predicting field-scale phosphorus loss
USDA-ARS?s Scientific Manuscript database
Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably, ranging from simple empirically-based annual time-step models to more complex process-based daily time-step models. While better accuracy is often assumed with more...
Updating the debate on model complexity
Simmons, Craig T.; Hunt, Randall J.
2012-01-01
As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”
ADAM: analysis of discrete models of biological systems using computer algebra.
Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard
2011-07-20
Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. 
ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics.
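For intuition, attractor identification can be done by brute force on a toy network (the three update rules below are invented for illustration; tools such as ADAM instead recast the rules as polynomial systems over GF(2) so that sparse models far beyond exhaustive enumeration remain tractable):

```python
from itertools import product

# Hypothetical 3-node Boolean network; each node's next value is a
# Boolean function of the current state (a, b, c).
def step(state):
    a, b, c = state
    return (b and c, a or c, not a)

def attractors():
    """Enumerate all attractors (limit cycles) by exhausting the state space."""
    found = set()
    for start in product([False, True], repeat=3):
        seen, s = [], start
        while s not in seen:               # iterate until a state repeats
            seen.append(s)
            s = step(s)
        cycle = tuple(seen[seen.index(s):])
        # canonicalize by smallest rotation so equal cycles hash equally
        found.add(min(cycle[i:] + cycle[:i] for i in range(len(cycle))))
    return found

for cyc in attractors():
    print(len(cyc), cyc)
```

Exhaustive search costs 2^n state evaluations, which is exactly the exponential wall the algebraic reformulation is designed to avoid.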
Moving alcohol prevention research forward-Part I: introducing a complex systems paradigm.
Apostolopoulos, Yorghos; Lemke, Michael K; Barry, Adam E; Lich, Kristen Hassmiller
2018-02-01
The drinking environment is a complex system consisting of a number of heterogeneous, evolving and interacting components, which exhibit circular causality and emergent properties. These characteristics reduce the efficacy of commonly used research approaches, which typically do not account for the underlying dynamic complexity of alcohol consumption and the interdependent nature of diverse factors influencing misuse over time. We use alcohol misuse among college students in the United States as an example for framing our argument for a complex systems paradigm. A complex systems paradigm, grounded in socio-ecological and complex systems theories and computational modeling and simulation, is introduced. Theoretical, conceptual, methodological and analytical underpinnings of this paradigm are described in the context of college drinking prevention research. The proposed complex systems paradigm can transcend limitations of traditional approaches, thereby fostering new directions in alcohol prevention research. By conceptualizing student alcohol misuse as a complex adaptive system, computational modeling and simulation methodologies and analytical techniques can be used. Moreover, use of participatory model-building approaches to generate simulation models can further increase stakeholder buy-in, understanding and policymaking. A complex systems paradigm for research into alcohol misuse can provide a holistic understanding of the underlying drinking environment and its long-term trajectory, which can elucidate high-leverage preventive interventions. © 2017 Society for the Study of Addiction.
NASA Astrophysics Data System (ADS)
Brown, Alexander; Eviston, Connor
2017-02-01
Multiple FEM models of complex eddy current coil geometries were created and validated to calculate the change of impedance due to the presence of a notch. Realistic simulation capability for eddy current inspections is required for model-assisted probability of detection (MAPOD) studies, inversion algorithms, experimental verification, and tailored probe design for NDE applications. An FEM solver was chosen to model complex real-world situations including varying probe dimensions and orientations along with complex probe geometries. This will also enable creation of a probe model library database with variable parameters. Verification and validation were performed using other commercially available eddy current modeling software as well as experimentally collected benchmark data. Data analysis and comparison showed that the created models were able to correctly model the probe and conductor interactions and accurately calculate the change in impedance of several experimental scenarios with acceptable error. The promising results of the models enabled the start of an eddy current probe model library to give experimenters easy access to powerful parameter-based eddy current models for alternate project applications.
Complexity and Demographic Explanations of Cumulative Culture
Querbes, Adrien; Vaesen, Krist; Houkes, Wybo
2014-01-01
Formal models have linked prehistoric and historical instances of technological change (e.g., the Upper Paleolithic transition, cultural loss in Holocene Tasmania, scientific progress since the late nineteenth century) to demographic change. According to these models, cumulation of technological complexity is inhibited by decreasing, while favoured by increasing, population levels. Here we show that these findings are contingent on how complexity is defined: demography plays a much more limited role in sustaining cumulative culture when formal models deploy Herbert Simon's definition of complexity rather than the particular definitions of complexity hitherto assumed. Given that currently available empirical evidence does not afford discriminating proper from improper definitions of complexity, our robustness analyses put into question the force of recent demographic explanations of particular episodes of cultural change. PMID:25048625
Measuring case-mix complexity of tertiary care hospitals using DRGs.
Park, Hayoung; Shin, Youngsoo
2004-02-01
The objectives of the study were to develop a model that measures and evaluates case-mix complexity of tertiary care hospitals, and to examine the characteristics of such a model. Physician panels defined three classes of case complexity and assigned disease categories represented by Adjacent Diagnosis Related Groups (ADRGs) to one of three case complexity classes. Three types of scores, indicating proportions of inpatients in each case complexity class standardized by the proportions at the national level, were defined to measure the case-mix complexity of a hospital. Discharge information for about 10% of inpatient episodes at 85 hospitals with bed size larger than 400 and their input structure and research and education activity were used to evaluate the case-mix complexity model. Results show its power to predict hospitals with the expected functions of tertiary care hospitals, i.e. resource intensive care, expensive input structure, and high levels of research and education activities.
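The standardized-proportion idea behind the scores can be sketched in a few lines (the national class shares and the hospital's discharge counts below are invented for illustration):

```python
# Hypothetical national shares of inpatient episodes per case complexity
# class, and one hospital's discharge counts in the same classes.
national_share = {"high": 0.10, "medium": 0.30, "low": 0.60}
hospital_discharges = {"high": 45, "medium": 90, "low": 165}

total = sum(hospital_discharges.values())
scores = {cls: (hospital_discharges[cls] / total) / national_share[cls]
          for cls in national_share}
print(scores)
# a score above 1.0 means the hospital treats proportionally more of that
# class than the national average: the signature of a tertiary care role
```

Standardizing by the national proportions makes hospitals of different sizes and overall volumes directly comparable on case-mix complexity.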
Theoretical Modeling and Electromagnetic Response of Complex Metamaterials
2017-03-06
AFRL-AFOSR-VA-TR-2017-0042, final report (Nov 2016), Andrea Alu, University of Texas at Austin. ... based on parity-time symmetric metasurfaces, and various advances in electromagnetic and acoustic theory and applications. Our findings have opened
NASA Astrophysics Data System (ADS)
Salman Shahid, Syed; Bikson, Marom; Salman, Humaira; Wen, Peng; Ahfock, Tony
2014-06-01
Objectives. Computational methods are increasingly used to optimize transcranial direct current stimulation (tDCS) dose strategies and yet complexities of existing approaches limit their clinical access. Since predictive modelling indicates the relevance of subject/pathology based data and hence the need for subject specific modelling, the incremental clinical value of increasingly complex modelling methods must be balanced against the computational and clinical time and costs. For example, the incorporation of multiple tissue layers and measured diffusion tensor (DTI) based conductivity estimates increase model precision but at the cost of clinical and computational resources. Costs related to such complexities aggregate when considering individual optimization and the myriad of potential montages. Here, rather than considering if additional details change current-flow prediction, we consider when added complexities influence clinical decisions. Approach. Towards developing quantitative and qualitative metrics of value/cost associated with computational model complexity, we considered field distributions generated by two 4 × 1 high-definition montages (m1 = 4 × 1 HD montage with anode at C3 and m2 = 4 × 1 HD montage with anode at C1) and a single conventional (m3 = C3-Fp2) tDCS electrode montage. We evaluated statistical methods, including residual error (RE) and relative difference measure (RDM), to consider the clinical impact and utility of increased complexities, namely the influence of skull, muscle and brain anisotropic conductivities in a volume conductor model. Main results. Anisotropy modulated current-flow in a montage and region dependent manner. However, significant statistical changes, produced within montage by anisotropy, did not change qualitative peak and topographic comparisons across montages. Thus for the examples analysed, clinical decision on which dose to select would not be altered by the omission of anisotropic brain conductivity. 
Significance. Results illustrate the need to rationally balance the role of model complexity, such as anisotropy in detailed current flow analysis versus value in clinical dose design. However, when extending our analysis to include axonal polarization, the results provide presumably clinically meaningful information. Hence the importance of model complexity may be more relevant with cellular level predictions of neuromodulation.
Quantifying uncertainty in high-resolution coupled hydrodynamic-ecosystem models
NASA Astrophysics Data System (ADS)
Allen, J. I.; Somerfield, P. J.; Gilbert, F. J.
2007-01-01
Marine ecosystem models are becoming increasingly complex and sophisticated, and are being used to estimate the effects of future changes in the earth system with a view to informing important policy decisions. Despite their potential importance, far too little attention is generally paid to model errors and the extent to which model outputs actually relate to real-world processes. With the increasing complexity of the models themselves comes an increasing complexity among model results. If we are to develop useful modelling tools for the marine environment, we need to be able to understand and quantify the uncertainties inherent in the simulations. Analysing errors within highly multivariate model outputs, and relating them to even more complex and multivariate observational data, are not trivial tasks. Here we describe the application of a series of techniques, including a 2-stage self-organising map (SOM), non-parametric multivariate analysis, and error statistics, to a complex spatio-temporal model run for the period 1988-1989 in the Southern North Sea, coinciding with the North Sea Project, which collected a wealth of observational data. We use model output, large spatio-temporally resolved data sets and a combination of methodologies (SOM, MDS, uncertainty metrics) to simplify the problem and to provide tractable information on model performance. The use of a SOM as a clustering tool allows us to simplify the dimensions of the problem, while the use of MDS on independent data grouped according to the SOM classification allows us to validate the SOM. The combination of classification and uncertainty metrics allows us to pinpoint the variables and associated processes which require attention in each region. We recommend the use of this combination of techniques for simplifying complex comparisons of model outputs with real data, and analysis of error distributions.
Evidence for complex contagion models of social contagion from observational data
Sprague, Daniel A.
2017-01-01
Social influence can lead to behavioural ‘fads’ that are briefly popular and quickly die out. Various models have been proposed for these phenomena, but empirical evidence of their accuracy as real-world predictive tools has so far been absent. Here we find that a ‘complex contagion’ model accurately describes the spread of behaviours driven by online sharing. We found that standard, ‘simple’, contagion often fails to capture both the rapid spread and the long tails of popularity seen in real fads, where our complex contagion model succeeds. Complex contagion also has predictive power: it successfully predicted the peak time and duration of the ALS Icebucket Challenge. The fast spread and longer duration of fads driven by complex contagion has important implications for activities such as publicity campaigns and charity drives. PMID:28686719
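The qualitative difference between the two regimes can be sketched with a deterministic threshold model on a ring lattice (the network and parameters are invented for illustration; the paper fits far richer stochastic models to online sharing data):

```python
# Each node adopts once at least `threshold` of its neighbours have adopted.
# threshold=1 mimics simple contagion; threshold>=2 requires the social
# reinforcement characteristic of complex contagion.
def spread(neighbours, threshold, steps):
    adopted = {0, 1}                      # seed two adjacent adopters
    history = [len(adopted)]
    for _ in range(steps):
        new = set(adopted)
        for node in range(len(neighbours)):
            if node not in adopted:
                active = sum(nb in adopted for nb in neighbours[node])
                if active >= threshold:
                    new.add(node)
        adopted = new
        history.append(len(adopted))
    return history

# Ring lattice: each of 40 nodes links to its two neighbours on each side.
n = 40
ring = [[(i - 2) % n, (i - 1) % n, (i + 1) % n, (i + 2) % n] for i in range(n)]
print(spread(ring, threshold=1, steps=10)[-1])  # fast, full spread
print(spread(ring, threshold=2, steps=10)[-1])  # slower, reinforced spread
```

On this lattice the reinforced (threshold 2) fad spreads at half the speed of the simple one, a toy version of the slower take-off and longer duration the complex contagion model captures in real fads.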
Blinov, Michael L.; Moraru, Ion I.
2011-01-01
Multi-state molecules and multi-component complexes are commonly involved in cellular signaling. Accounting for molecules that have multiple potential states, such as a protein that may be phosphorylated on multiple residues, and molecules that combine to form heterogeneous complexes located among multiple compartments, generates an effect of combinatorial complexity. Models involving relatively few signaling molecules can include thousands of distinct chemical species. Several software tools (StochSim, BioNetGen) are already available to deal with combinatorial complexity. Such tools need information standards if models are to be shared, jointly evaluated and developed. Here we discuss XML conventions that can be adopted for modeling biochemical reaction networks described by user-specified reaction rules. These could form a basis for possible future extensions of the Systems Biology Markup Language (SBML). PMID:21464833
FLAME: A platform for high performance computing of complex systems, applied for three case studies
Kiran, Mariam; Bicak, Mesude; Maleki-Dizaji, Saeedeh; ...
2011-01-01
FLAME allows complex models to be automatically parallelised on High Performance Computing (HPC) grids, enabling large numbers of agents to be simulated over short periods of time. FLAME overcomes two obstacles that hinder modellers: the complexity of porting models to parallel platforms and the time taken to run large simulations on a single machine. Three case studies from different disciplines were modelled using FLAME and are presented along with their performance results on a grid.
A Complex Systems Model Approach to Quantified Mineral Resource Appraisal
Gettings, M.E.; Bultman, M.W.; Fisher, F.S.
2004-01-01
For federal and state land management agencies, mineral resource appraisal has evolved from value-based to outcome-based procedures wherein the consequences of resource development are compared with those of other management options. Complex systems modeling is proposed as a general framework in which to build models that can evaluate outcomes. Three frequently used methods of mineral resource appraisal (subjective probabilistic estimates, weights of evidence modeling, and fuzzy logic modeling) are discussed to obtain insight into methods of incorporating complexity into mineral resource appraisal models. Fuzzy logic and weights of evidence are most easily utilized in complex systems models. A fundamental product of new appraisals is the production of reusable, accessible databases and methodologies so that appraisals can easily be repeated with new or refined data. The data are representations of complex systems and must be so regarded if all of their information content is to be utilized. The proposed generalized model framework is applicable to mineral assessment and other geoscience problems. We begin with a (fuzzy) cognitive map using (+1,0,-1) values for the links and evaluate the map for various scenarios to obtain a ranking of the importance of various links. Fieldwork and modeling studies identify important links and help identify unanticipated links. Next, the links are given membership functions in accordance with the data. Finally, processes are associated with the links; ideally, the controlling physical and chemical events and equations are found for each link. After calibration and testing, this complex systems model is used for predictions under various scenarios.
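The first stage described above, evaluating a (+1, 0, -1) cognitive map under different scenarios, can be sketched as follows. The concepts, link signs and clamped "scenario" inputs are invented for illustration and are not taken from the paper.

```python
import numpy as np

def run_fcm(W, clamp, steps=20):
    """Iterate a fuzzy cognitive map, s <- tanh(W^T s), holding the
    nonzero 'driver' concepts in `clamp` fixed at their scenario values."""
    state = clamp.astype(float).copy()
    for _ in range(steps):
        state = np.tanh(W.T @ state)
        state[clamp != 0] = clamp[clamp != 0]
    return state

# Hypothetical 4-concept map with (+1, 0, -1) links:
# concept 0 (mineral deposit) -> 1 (alteration) and 2 (geochemical anomaly);
# concepts 1 and 2 -> 3 (resource favourability)
W = np.array([
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
])

baseline = run_fcm(W, np.array([0.0, 0, 0, 0]))   # no evidence activated
scenario = run_fcm(W, np.array([1.0, 0, 0, 0]))   # scenario: deposit present
```

Comparing the settled activation of the outcome concept across scenarios gives the ranking of link importance described in the abstract; later stages would replace the integer link values with membership functions and process equations.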
Turbulence spectra in the noise source regions of the flow around complex surfaces
NASA Technical Reports Server (NTRS)
Olsen, W. A.; Boldman, D. R.
1983-01-01
The complex turbulent flow around three complex surfaces was measured in detail with a hot wire. The measured data include extensive spatial surveys of the mean velocity and turbulence intensity and measurements of the turbulence spectra and scale length at many locations. The publication of the turbulence data is completed by reporting a summary of the turbulence spectra that were measured within the noise source locations of the flow. The results suggest some useful simplifications in modeling the very complex turbulent flow around complex surfaces for aeroacoustic predictive models. The turbulence spectra also show that noise data from scale models of moderate size can be accurately scaled up to full size.
Effect of shoulder model complexity in upper-body kinematics analysis of the golf swing.
Bourgain, M; Hybois, S; Thoreux, P; Rouillon, O; Rouch, P; Sauret, C
2018-06-25
The golf swing is a complex full body movement during which the spine and shoulders are highly involved. In order to determine shoulder kinematics during this movement, multibody kinematics optimization (MKO) can be recommended to limit the effect of the soft tissue artifact and to avoid joint dislocations or bone penetration in reconstructed kinematics. Classically, in golf biomechanics research, the shoulder is represented by a 3 degrees-of-freedom model representing the glenohumeral joint. More complex and physiological models are already provided in the scientific literature. Particularly, the model used in this study was a full body model and also described motions of clavicles and scapulae. This study aimed at quantifying the effect of utilizing a more complex and physiological shoulder model when studying the golf swing. Results obtained on 20 golfers showed that a more complex and physiologically-accurate model can more efficiently track experimental markers, which resulted in differences in joint kinematics. Hence, the model with 3 degrees-of-freedom between the humerus and the thorax may be inadequate when combined with MKO and a more physiological model would be beneficial. Finally, results would also be improved through a subject-specific approach for the determination of the segment lengths. Copyright © 2018 Elsevier Ltd. All rights reserved.
Abstraction and model evaluation in category learning.
Vanpaemel, Wolf; Storms, Gert
2010-05-01
Thirty previously published data sets, from seminal category learning tasks, are reanalyzed using the varying abstraction model (VAM). Unlike a prototype-versus-exemplar analysis, which focuses on extreme levels of abstraction only, a VAM analysis also considers the possibility of partial abstraction. Whereas most data sets support no abstraction when only the extreme possibilities are considered, we show that evidence for abstraction can be provided using the broader view on abstraction provided by the VAM. The present results generalize earlier demonstrations of partial abstraction (Vanpaemel & Storms, 2008), in which only a small number of data sets was analyzed. Following the dominant modus operandi in category learning research, Vanpaemel and Storms evaluated the models on their best fit, a practice known to ignore the complexity of the models under consideration. In the present study, in contrast, model evaluation not only relies on the maximal likelihood, but also on the marginal likelihood, which is sensitive to model complexity. Finally, using a large recovery study, it is demonstrated that, across the 30 data sets, complexity differences between the models in the VAM family are small. This indicates that a (computationally challenging) complexity-sensitive model evaluation method is uncalled for, and that the use of a (computationally straightforward) complexity-insensitive model evaluation method is justified.
Li, Zhenping; Zhang, Xiang-Sun; Wang, Rui-Sheng; Liu, Hongwei; Zhang, Shihua
2013-01-01
Identification of communities in complex networks is an important topic in many fields such as sociology, biology, and computer science. Communities are often defined as groups of related nodes or links that correspond to functional subunits in the corresponding complex systems. While most conventional approaches have focused on discovering communities of nodes, some recent studies have started partitioning links to find overlapping communities directly. In this paper, we propose a new quantity function for link community identification in complex networks. Based on this quantity function we formulate the link community partition problem as an integer programming model which allows us to partition a complex network into overlapping communities. We further propose a genetic algorithm for link community detection which can partition a network into overlapping communities without knowing the number of communities. We test our model and algorithm on both artificial networks and real-world networks. The results demonstrate that the model and algorithm are efficient in detecting overlapping community structure in complex networks. PMID:24386268
The Effect of Sensor Performance on Safe Minefield Transit
2002-12-01
the results of the simpler model are not good approximations of the results obtained with the more complex model, suggesting that even greater complexity in maneuver modeling may be desirable for some purposes.
Some Approaches to Modeling Complex Information Systems.
ERIC Educational Resources Information Center
Rao, V. Venkata; Zunde, Pranas
1982-01-01
Brief discussion of state-of-the-art of modeling complex information systems distinguishes between macrolevel and microlevel modeling of such systems. Network layout and hierarchical system models, simulation, information acquisition and dissemination, databases and information storage, and operating systems are described and assessed. Thirty-four…
Development of structural model of adaptive training complex in ergatic systems for professional use
NASA Astrophysics Data System (ADS)
Obukhov, A. D.; Dedov, D. L.; Arkhipov, A. E.
2018-03-01
The article considers the structural model of the adaptive training complex (ATC), which reflects the interrelations between the hardware, the software and the mathematical model of the ATC and describes the processes in this subject area. A description of the main components of the software and hardware complex, their interaction and their functioning within the common system is given. The article also briefly describes the mathematical models of personnel activity, the technical system and external influences, whose interactions formalize the regularities of ATC functioning. Studying the main objects of training complexes and the connections between them makes practical implementation of ATC in ergatic systems for professional use possible.
Assessment of wear dependence parameters in complex model of cutting tool wear
NASA Astrophysics Data System (ADS)
Antsev, A. V.; Pasko, N. I.; Antseva, N. V.
2018-03-01
This paper addresses the wear dependence of the generic efficient life period of cutting tools, taken as an aggregate of the law of tool wear rate distribution and the dependence of this law's parameters on the cutting mode, factoring in randomness as exemplified by the complex model of wear. The complex model of wear takes into account the variance of cutting properties within one batch of tools, the variance in machinability within one batch of workpieces, and the stochastic nature of the wear process itself. A technique for assessing wear dependence parameters in a complex model of cutting tool wear is provided, supported by a numerical example.
An egalitarian network model for the emergence of simple and complex cells in visual cortex
Tao, Louis; Shelley, Michael; McLaughlin, David; Shapley, Robert
2004-01-01
We explain how simple and complex cells arise in a large-scale neuronal network model of the primary visual cortex of the macaque. Our model consists of ≈4,000 integrate-and-fire, conductance-based point neurons, representing the cells in a small, 1-mm2 patch of an input layer of the primary visual cortex. In the model the local connections are isotropic and nonspecific, and convergent input from the lateral geniculate nucleus confers cortical cells with orientation and spatial phase preference. The balance between lateral connections and lateral geniculate nucleus drive determines whether individual neurons in this recurrent circuit are simple or complex. The model reproduces qualitatively the experimentally observed distributions of both extracellular and intracellular measures of simple and complex response. PMID:14695891
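The model's building block, a conductance-based integrate-and-fire point neuron, can be sketched in a few lines. Parameter values here are generic textbook choices, not those of the 4,000-cell network model.

```python
import numpy as np

def lif_conductance(g_exc, dt=0.1, v_rest=-70.0, v_thresh=-54.0,
                    v_reset=-60.0, e_exc=0.0, tau_m=20.0):
    """Leaky integrate-and-fire neuron with an excitatory conductance input:
        tau_m * dV/dt = -(V - v_rest) - g_exc(t) * (V - e_exc)
    integrated with the forward Euler method. Returns the spike count."""
    v = v_rest
    spikes = 0
    for g in g_exc:
        dv = (-(v - v_rest) - g * (v - e_exc)) / tau_m
        v += dt * dv
        if v >= v_thresh:       # threshold crossing: emit spike, reset
            spikes += 1
            v = v_reset
    return spikes

n_steps = 2000                                    # 200 ms at dt = 0.1 ms
quiet = lif_conductance(np.zeros(n_steps))        # no synaptic drive: silent
driven = lif_conductance(np.full(n_steps, 0.5))   # sustained LGN-like drive
```

In the network model, thousands of such units are coupled through the isotropic cortical and geniculate conductances whose balance determines simple versus complex response.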
Comparing an annual and daily time-step model for predicting field-scale P loss
USDA-ARS?s Scientific Manuscript database
Several models with varying degrees of complexity are available for describing P movement through the landscape. The complexity of these models is dependent on the amount of data required by the model, the number of model parameters needed to be estimated, the theoretical rigor of the governing equa...
A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...
Modeling complexity in engineered infrastructure system: Water distribution network as an example
NASA Astrophysics Data System (ADS)
Zeng, Fang; Li, Xiang; Li, Ke
2017-02-01
The complex topology and adaptive behavior of infrastructure systems are driven by both self-organization of the demand and rigid engineering solutions. Therefore, engineering complex systems requires a method balancing holism and reductionism. To model the growth of water distribution networks, a complex network model was developed combining local optimization rules and engineering considerations. The demand node generation is dynamic and follows the scaling law of urban growth. The proposed model can generate a water distribution network (WDN) similar to reported real-world WDNs in several structural properties. Comparison with different modeling approaches indicates that a realistic demand node distribution and the co-evolution of demand nodes and network are important for the simulation of real complex networks. The simulation results indicate that the efficiency of water distribution networks is exponentially affected by the urban growth pattern. In contrast, the improvement of efficiency achievable by engineering optimization is limited and relatively insignificant. The redundancy and robustness, on the other hand, can be significantly improved through engineering methods.
Building a pseudo-atomic model of the anaphase-promoting complex.
Kulkarni, Kiran; Zhang, Ziguo; Chang, Leifu; Yang, Jing; da Fonseca, Paula C A; Barford, David
2013-11-01
The anaphase-promoting complex (APC/C) is a large E3 ubiquitin ligase that regulates progression through specific stages of the cell cycle by coordinating the ubiquitin-dependent degradation of cell-cycle regulatory proteins. Depending on the species, the active form of the APC/C consists of 14-15 different proteins that assemble into a 20-subunit complex with a mass of approximately 1.3 MDa. A hybrid approach of single-particle electron microscopy and protein crystallography of individual APC/C subunits has been applied to generate pseudo-atomic models of various functional states of the complex. Three approaches for assigning regions of the EM-derived APC/C density map to specific APC/C subunits are described. This information was used to dock atomic models of APC/C subunits, determined either by protein crystallography or homology modelling, to specific regions of the APC/C EM map, allowing the generation of a pseudo-atomic model corresponding to 80% of the entire complex.
A musculoskeletal model of the elbow joint complex
NASA Technical Reports Server (NTRS)
Gonzalez, Roger V.; Barr, Ronald E.; Abraham, Lawrence D.
1993-01-01
This paper describes a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. Musculotendon parameters and the skeletal geometry were determined for the musculoskeletal model in the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing both isometric and ballistic elbow joint complex movements. In general, the model predicted kinematic and muscle excitation patterns similar to what was experimentally measured.
Geometric modeling of subcellular structures, organelles, and multiprotein complexes
Feng, Xin; Xia, Kelin; Tong, Yiying; Wei, Guo-Wei
2013-01-01
Recently, the structure, function, stability, and dynamics of subcellular structures, organelles, and multi-protein complexes have emerged as a leading interest in structural biology. Geometric modeling not only provides visualizations of shapes for large biomolecular complexes but also fills the gap between structural information and theoretical modeling, and enables the understanding of function, stability, and dynamics. This paper introduces a suite of computational tools for volumetric data processing, information extraction, surface mesh rendering, geometric measurement, and curvature estimation of biomolecular complexes. Particular emphasis is given to the modeling of cryo-electron microscopy data. Lagrangian triangle meshes are employed for the surface representation. On the basis of this representation, algorithms are developed for surface area and surface-enclosed volume calculation, and for curvature estimation. Methods for volumetric meshing are also presented. Because technological development in computer science and mathematics has led to multiple choices at each stage of geometric modeling, we discuss the rationale behind the design and selection of the various algorithms. Analytical models are designed to test the computational accuracy and convergence of the proposed algorithms. Finally, we select a set of six cryo-electron microscopy data sets representing typical subcellular complexes to demonstrate the efficacy of the proposed algorithms in handling biomolecular surfaces and explore their capability of geometric characterization of binding targets. This paper offers a comprehensive protocol for the geometric modeling of subcellular structures, organelles, and multiprotein complexes. PMID:23212797
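The surface-area and surface-enclosed-volume calculations on a triangle mesh can be sketched as follows; the volume uses the divergence theorem over outward-oriented faces. This is a generic illustration on a unit tetrahedron, not the paper's code.

```python
import numpy as np

def mesh_area_volume(vertices, faces):
    """Surface area and enclosed volume of a closed triangle mesh.
    Volume sums the signed tetrahedra formed by each face and the origin
    (divergence theorem); faces must be consistently outward-oriented."""
    area = 0.0
    volume = 0.0
    for i, j, k in faces:
        a, b, c = vertices[i], vertices[j], vertices[k]
        cross = np.cross(b - a, c - a)
        area += 0.5 * np.linalg.norm(cross)        # triangle area
        volume += np.dot(a, np.cross(b, c)) / 6.0  # signed tet volume
    return area, abs(volume)

# Unit tetrahedron with outward-oriented faces
vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
faces = [(1, 2, 3), (0, 2, 1), (0, 1, 3), (0, 3, 2)]
area, vol = mesh_area_volume(vertices, faces)
```

For this mesh the exact values are area = 3/2 + sqrt(3)/2 and volume = 1/6, which is the kind of analytical check the paper uses to test accuracy and convergence.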
Emulator-assisted data assimilation in complex models
NASA Astrophysics Data System (ADS)
Margvelashvili, Nugzar Yu; Herzfeld, Mike; Rizwi, Farhan; Mongin, Mathieu; Baird, Mark E.; Jones, Emlyn; Schaffelke, Britta; King, Edward; Schroeder, Thomas
2016-09-01
Emulators are surrogates of complex models that run orders of magnitude faster than the original model. The utility of emulators for data assimilation into ocean models is still not well understood. The high complexity of ocean models translates into high uncertainty in the corresponding emulators, which may undermine the quality of assimilation schemes based on such emulators. Numerical experiments with a chaotic Lorenz-95 model are conducted to illustrate this point and to suggest a strategy to alleviate this problem through localization of the emulation and data assimilation procedures. Insights gained through these experiments are used to design and implement a data assimilation scenario for a 3D fine-resolution sediment transport model of the Great Barrier Reef (GBR), Australia.
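The Lorenz-95 system used in the paper's numerical experiments is compact enough to state directly. The sketch below integrates it with a fourth-order Runge-Kutta step and shows the sensitive dependence on initial conditions that makes emulator-based assimilation challenging; step sizes and run lengths are illustrative.

```python
import numpy as np

def lorenz95_step(x, dt=0.01, F=8.0):
    """One RK4 step of the Lorenz-95 model:
        dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
    with cyclic indices handled by np.roll."""
    def rhs(x):
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

    k1 = rhs(x)
    k2 = rhs(x + 0.5 * dt * k1)
    k3 = rhs(x + 0.5 * dt * k2)
    k4 = rhs(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Twin experiment: two nearly identical states diverge (chaos)
x = np.full(40, 8.0)
x[19] += 0.01              # canonical perturbed initial condition
y = x.copy()
y[0] += 1e-8               # tiny twin perturbation
for _ in range(2000):      # 20 model time units at dt = 0.01
    x = lorenz95_step(x)
    y = lorenz95_step(y)
sep = np.max(np.abs(x - y))   # grows by many orders of magnitude
```

An emulator trained on such trajectories must reproduce this error growth locally, which motivates the localization strategy the paper proposes.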
NASA Astrophysics Data System (ADS)
Svoray, Tal; Assouline, Shmuel; Katul, Gabriel
2015-11-01
The current literature provides a large number of publications about ecohydrological processes and their effect on the biota in drylands. Given the limited laboratory and field experiments in such systems, many of these publications are based on mathematical models of varying complexity. The underlying implicit assumption is that the data set used to evaluate these models covers the parameter space of conditions that characterize drylands and that the models represent the actual processes with acceptable certainty. However, a question that arises is to what extent these mathematical models remain valid when confronted with observed ecosystem complexity. This Introduction reviews the 16 papers that comprise the Special Section on Eco-hydrology of Semiarid Environments: Confronting Mathematical Models with Ecosystem Complexity. The subjects studied in these papers include rainfall regime, infiltration and preferential flow, evaporation and evapotranspiration, annual net primary production, dispersal and invasion, and vegetation greening. The findings in the papers published in this Special Section show that innovative mathematical modeling approaches can represent actual field measurements. Hence, there are strong grounds for suggesting that mathematical models can contribute to greater understanding of ecosystem complexity through characterization of space-time dynamics of biomass and water storage as well as their multiscale interactions. However, the generality of the models and their low-dimensional representation of many processes may also be a "curse" that results in failures when the particulars of an ecosystem are required. It is envisaged that the search for a unifying "general" model, while seductive, may remain elusive in the foreseeable future. It is for this reason that improving the merger between experiments and models of various degrees of complexity continues to shape the future research agenda.
Mathematical Models to Determine Stable Behavior of Complex Systems
NASA Astrophysics Data System (ADS)
Sumin, V. I.; Dushkin, A. V.; Smolentseva, T. E.
2018-05-01
The paper analyzes the possibility of predicting the functioning of a complex dynamic system with a significant amount of circulating information and a large number of random factors affecting its operation. The functioning of the complex dynamic system is described in terms of chaotic states, self-organized criticality and bifurcation. This problem may be resolved by modeling such systems as dynamic ones, without applying stochastic models, and by taking strange attractors into account.
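The bifurcation and chaotic behaviour mentioned above can be illustrated with the classic logistic map, a deterministic system that passes from a stable fixed point through period doubling to chaos as a single parameter grows. This is a generic textbook example, not a model from the paper.

```python
def logistic_orbit(r, x0=0.2, n=200, discard=100):
    """Iterate the logistic map x <- r*x*(1 - x), discard the transient,
    and return the settled orbit rounded to 6 decimals."""
    x = x0
    for _ in range(discard):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n):
        x = r * x * (1 - x)
        orbit.append(round(x, 6))
    return orbit

stable = set(logistic_orbit(2.8))    # stable fixed point: one value
period2 = set(logistic_orbit(3.2))   # after the first bifurcation: two values
chaotic = set(logistic_orbit(3.9))   # chaotic regime: many distinct values
```

The number of distinct settled values jumps from one, to two, to a large set as r crosses the bifurcation points, which is the qualitative signature of the route to chaos.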
Tracer transport in soils and shallow groundwater: model abstraction with modern tools
USDA-ARS?s Scientific Manuscript database
Vadose zone controls contaminant transport from the surface to groundwater, and modeling transport in vadose zone has become a burgeoning field. Exceedingly complex models of subsurface contaminant transport are often inefficient. Model abstraction is the methodology for reducing the complexity of a...
Fast computation of derivative based sensitivities of PSHA models via algorithmic differentiation
NASA Astrophysics Data System (ADS)
Leövey, Hernan; Molkenthin, Christian; Scherbaum, Frank; Griewank, Andreas; Kuehn, Nicolas; Stafford, Peter
2015-04-01
Probabilistic seismic hazard analysis (PSHA) is the preferred tool for estimating the potential ground-shaking hazard due to future earthquakes at a site of interest. A modern PSHA represents a complex framework which combines different models with possibly many inputs. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters and obtaining insight into the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs. Moreover, derivative based global sensitivity measures (Sobol' & Kucherenko '09) can be used in practice to detect non-essential inputs of the models, thus restricting the focus of attention to a possibly much smaller set of inputs. Nevertheless, obtaining first-order partial derivatives of complex models with traditional approaches can be very challenging, and the computational cost usually increases linearly with the number of inputs appearing in the models. In this study we show how Algorithmic Differentiation (AD) tools can be used in a complex framework such as PSHA to successfully estimate derivative based sensitivities, as is done in various other domains such as meteorology or aerodynamics, with no significant increase in the computational complexity relative to the original computations. First we demonstrate the feasibility of the AD methodology by comparing AD-derived sensitivities to analytically derived sensitivities for a basic case of PSHA using a simple ground-motion prediction equation. In a second step, we derive sensitivities via AD for a more complex PSHA study using a ground motion attenuation relation based on a stochastic method to simulate strong motion. The presented approach is general enough to accommodate more advanced PSHA studies of higher complexity.
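The idea behind forward-mode AD can be sketched with dual numbers: each value carries its derivative, and arithmetic propagates both exactly. The ground-motion expression below is a hypothetical GMPE-like form with invented coefficients, used only to show that the AD derivative matches the analytic one.

```python
import math

class Dual:
    """Forward-mode AD value a + b*eps with eps**2 = 0; `dot` carries the derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):  # product rule
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def dlog(d):
    """Natural logarithm lifted to dual numbers: d/dx log(x) = 1/x."""
    return Dual(math.log(d.val), d.dot / d.val)

# Hypothetical GMPE-like form (invented coefficients): ln PGA = c1 + c2*M + c3*ln(R)
def ln_pga(m, r, c1=-1.0, c2=0.9, c3=-1.2):
    return c1 + c2 * m + c3 * dlog(r)

# Seed the derivative direction to pick the input being differentiated
d_dm = ln_pga(Dual(6.0, 1.0), Dual(20.0, 0.0))  # sensitivity to magnitude
d_dr = ln_pga(Dual(6.0, 0.0), Dual(20.0, 1.0))  # sensitivity to distance
```

Here `d_dm.dot` recovers the analytic partial derivative c2 and `d_dr.dot` recovers c3/R, with one extra evaluation per input direction rather than a re-derivation of the model.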
Hou, Chang-Yu; Feng, Ling; Seleznev, Nikita; Freed, Denise E
2018-09-01
In this work, we establish an effective medium model to describe the low-frequency complex dielectric (conductivity) dispersion of dilute clay suspensions. We use previously obtained low-frequency polarization coefficients for a charged oblate spheroidal particle immersed in an electrolyte as the building block for the Maxwell Garnett mixing formula to model the dilute clay suspension. The complex conductivity phase dispersion exhibits a near-resonance peak when the clay grains have a narrow size distribution. The peak frequency is associated with the size distribution as well as the shape of clay grains and is often referred to as the characteristic frequency. In contrast, if the size of the clay grains has a broad distribution, the phase peak is broadened and can disappear into the background of the canonical phase response of the brine. To benchmark our model, the low-frequency dispersion of the complex conductivity of dilute clay suspensions is measured using a four-point impedance measurement, which can be reliably calibrated in the frequency range between 0.1 Hz and 10 kHz. By using a minimal number of fitting parameters when reliable information is available as input for the model and carefully examining the issue of potential over-fitting, we found that our model can be used to fit the measured dispersion of the complex conductivity with reasonable parameters. The good match between the modeled and experimental complex conductivity dispersion allows us to argue that our simplified model captures the essential physics for describing the low-frequency dispersion of the complex conductivity of dilute clay suspensions. Copyright © 2018 Elsevier Inc. All rights reserved.
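For spherical inclusions the Maxwell Garnett mixing formula has a closed form; the paper uses polarization coefficients for charged oblate spheroids instead, so the sketch below is only the simpler special case, with invented permittivity values.

```python
def maxwell_garnett(eps_m, eps_i, f):
    """Effective complex permittivity (or conductivity) of spherical
    inclusions at volume fraction f in a host medium eps_m."""
    beta = (eps_i - eps_m) / (eps_i + 2.0 * eps_m)   # Clausius-Mossotti factor
    return eps_m * (1.0 + 2.0 * f * beta) / (1.0 - f * beta)

eps_brine = 5.0 + 0.1j    # invented host (brine) value
eps_clay = 0.01 + 2.0j    # invented inclusion (clay grain) value
eff = maxwell_garnett(eps_brine, eps_clay, 0.02)  # 2% volume fraction: dilute
```

In the dilute limit the effective medium stays close to the host, with a small perturbation proportional to the volume fraction, which is the regime the paper's clay-suspension measurements probe.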
Risk Modeling of Interdependent Complex Systems of Systems: Theory and Practice.
Haimes, Yacov Y
2018-01-01
The emergence of the complexity characterizing our systems of systems (SoS) requires a reevaluation of the way we model, assess, manage, communicate, and analyze the risk thereto. Current models for risk analysis of emergent complex SoS are insufficient because too often they rely on the same risk functions and models used for single systems. These models commonly fail to incorporate the complexity derived from the networks of interdependencies and interconnectedness (I-I) characterizing SoS. There is a need to reevaluate currently practiced risk analysis to respond to this reality by examining, and thus comprehending, what makes emergent SoS complex. The key to evaluating the risk to SoS lies in understanding the genesis of characterizing I-I of systems manifested through shared states and other essential entities within and among the systems that constitute SoS. The term "essential entities" includes shared decisions, resources, functions, policies, decisionmakers, stakeholders, organizational setups, and others. This undertaking can be accomplished by building on state-space theory, which is fundamental to systems engineering and process control. This article presents a theoretical and analytical framework for modeling the risk to SoS with two case studies performed with the MITRE Corporation and demonstrates the pivotal contributions made by shared states and other essential entities to modeling and analysis of the risk to complex SoS. A third case study highlights the multifarious representations of SoS, which require harmonizing the risk analysis process currently applied to single systems when applied to complex SoS. © 2017 Society for Risk Analysis.
Modeling of Wall-Bounded Complex Flows and Free Shear Flows
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Zhu, Jiang; Lumley, John L.
1994-01-01
Various wall-bounded flows with complex geometries and free shear flows have been studied with a newly developed realizable Reynolds stress algebraic equation model. The model development is based on the invariant theory in continuum mechanics. This theory enables us to formulate a general constitutive relation for the Reynolds stresses. Pope was the first to introduce this kind of constitutive relation to turbulence modeling. In our study, realizability is imposed on the truncated constitutive relation to determine the coefficients so that, unlike the standard k-ε eddy viscosity model, the present model will not produce negative normal stresses in any situations of rapid distortion. The calculations based on the present model have shown an encouraging success in modeling complex turbulent flows.
NASA Astrophysics Data System (ADS)
McDonald, Karlie; Mika, Sarah; Kolbe, Tamara; Abbott, Ben; Ciocca, Francesco; Marruedo, Amaia; Hannah, David; Schmidt, Christian; Fleckenstein, Jan; Karuse, Stefan
2016-04-01
Sub-surface hydrologic processes are highly dynamic, varying spatially and temporally with strong links to the geomorphology and hydrogeologic properties of an area. This spatial and temporal complexity is a critical regulator of biogeochemical and ecological processes within the groundwater - surface water (GW-SW) ecohydrological interface and adjacent ecosystems. Many GW-SW models have attempted to capture this spatial and temporal complexity with varying degrees of success. The incorporation of spatial and temporal complexity within GW-SW model configuration is important for investigating interactions with transient storage and subsurface geology, infiltration and recharge, and the mass balance of exchange fluxes at the GW-SW ecohydrological interface. Additionally, characterising spatial and temporal complexity in GW-SW models is essential to derive predictions under realistic environmental conditions. In this paper we conduct a systematic Web of Science meta-analysis of conceptual, hydrodynamic, and reactive and heat transport models of the GW-SW ecohydrological interface since 2004 to explore how these models handle spatial and temporal complexity. The freshwater - groundwater ecohydrological interface was the most commonly represented in publications between 2004 and 2014, appearing in 91% of papers, followed by marine systems (6%) and estuarine systems (3%). Of the GW-SW models published since 2004, 52% have focused on hydrodynamic processes and <15% covered more than one process (e.g. heat and reactive transport). Within the hydrodynamic subset, 25% of models focused on a vertical depth of <5 m. The primary scientific and technological limitations on incorporating spatial and temporal variability into GW-SW models are identified as the inclusion of woody debris, carbon sources, subsurface geological structures and bioclogging into model parameterization.
The technological limitations influence the types of models applied, such as hydrostatic coupled models and fully intrinsic saturated and unsaturated models, and the assumptions or simplifications scientists apply to investigate the GW-SW ecohydrological interface. We investigated the type of modelling approaches applied across different scales (site, reach, catchment, nested catchments) and assessed the simplifications in environmental conditions and complexity that are commonly made in model configuration. Understanding the theoretical concepts that underpin these current modelling approaches is critical for scientists to develop measures to derive predictions from realistic environmental conditions at management relevant scales and establish best-practice modelling approaches for improving the scientific understanding and management of the GW-SW interface. Additionally, the assessment of current modelling approaches informs our proposed framework for the progress of GW-SW models in the future. The framework presented aims to increase future scientific, technological and management integration and the identification of research priorities to allow spatial and temporal complexity to be better incorporated into GW-SW models.
Application of surface complexation models to anion adsorption by natural materials.
Goldberg, Sabine
2014-10-01
Various chemical models of ion adsorption are presented and discussed. Chemical models, such as surface complexation models, provide a molecular description of anion adsorption reactions using an equilibrium approach. Two such models, the constant capacitance model and the triple layer model, are described in the present study. Characteristics common to all the surface complexation models are equilibrium constant expressions, mass and charge balances, and surface activity coefficient electrostatic potential terms. Methods for determining parameter values for surface site density, capacitances, and surface complexation constants are also discussed. Spectroscopic experimental methods of establishing ion adsorption mechanisms include vibrational spectroscopy, nuclear magnetic resonance spectroscopy, electron spin resonance spectroscopy, X-ray absorption spectroscopy, and X-ray reflectivity. Experimental determinations of point of zero charge shifts and of the ionic strength dependence of adsorption, as well as molecular modeling calculations, also can be used to deduce adsorption mechanisms. Applications of the surface complexation models to heterogeneous natural materials, such as soils, using the component additivity and the generalized composite approaches are described. Emphasis is on the generalized composite approach for predicting anion adsorption by soils. Continuing research is needed to develop consistent and realistic protocols for describing ion adsorption reactions on soil minerals and soils. The availability of standardized model parameter databases for use in chemical speciation-transport models is critical. Published 2014 Wiley Periodicals Inc. on behalf of SETAC. This article is a US Government work and as such, is in the public domain in the United States of America.
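The equilibrium-constant and mass-balance machinery common to these models can be sketched for an amphoteric surface site, ignoring the electrostatic potential term that the constant capacitance model would add. The equilibrium constants are illustrative, not fitted values.

```python
def surface_speciation(pH, logK_plus=7.0, logK_minus=-9.0):
    """Fractions of SOH2+, SOH and SO- for an amphoteric surface site,
    from equilibrium constants and a site mass balance only (the full
    constant capacitance model multiplies each K by an exponential
    electrostatic factor):
        SOH + H+ <-> SOH2+  (K+),    SOH <-> SO- + H+  (K-)
    """
    h = 10.0 ** (-pH)
    a = 10.0 ** logK_plus * h    # [SOH2+] / [SOH]
    b = 10.0 ** logK_minus / h   # [SO-]  / [SOH]
    total = a + 1.0 + b          # site mass balance in units of [SOH]
    return a / total, 1.0 / total, b / total

acid = surface_speciation(4.0)     # low pH: protonated sites dominate
pzc = surface_speciation(8.0)      # point of zero charge for these constants
basic = surface_speciation(12.0)   # high pH: deprotonated sites dominate
```

At the point of zero charge the positive and negative site fractions balance, which is the pH shift that the experimental methods listed above are used to detect.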
Filho, Manoel A. M.; Dutra, José Diogo L.; Rocha, Gerd B.; Simas, Alfredo M.; Freire, Ricardo O.
2014-01-01
Complexes of dysprosium, holmium, and erbium find many applications as single-molecule magnets, as contrast agents for magnetic resonance imaging, as anti-cancer agents, in optical telecommunications, etc. Therefore, the development of tools that can prove helpful to complex design is presently an active area of research. In this article, we advance a major improvement to the semiempirical description of lanthanide complexes: the Recife Model 1, RM1, model for the lanthanides, parameterized for the trications of Dy, Ho, and Er. By representing each such lanthanide in the RM1 calculation as a three-electron atom with a set of 5d, 6s, and 6p semiempirical orbitals, the accuracy of the previous sparkle models, mainly concentrated on lanthanide-oxygen and lanthanide-nitrogen distances, is extended to other types of bonds in the trication complexes' coordination polyhedra, such as lanthanide-carbon, lanthanide-chlorine, etc. This is even more important as, for example, lanthanide-carbon distances in the coordination polyhedra comprise about 30% of all distances for the complexes of Dy, Ho, and Er considered. Our results indicate that the average unsigned mean error for the lanthanide-carbon distances dropped from 0.30 Å, for the sparkle models, to 0.04 Å for the RM1 model for the lanthanides, over a total of 509 such distances for the set of all Dy, Ho, and Er complexes considered. A similar improvement took place for the other distances as well, such as lanthanide-chlorine, lanthanide-bromine, lanthanide-phosphorus and lanthanide-sulfur. Thus, the RM1 model for the lanthanides, advanced in this article, broadens the range of application of semiempirical models to lanthanide complexes by including many types of bonds not adequately described by the previous models. PMID:24497945
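The reported error statistic is simply the unsigned mean error over all distances of a given bond type. A minimal sketch; the distances below are made-up illustrations, not the 509-distance data set:

```python
def unsigned_mean_error(pred, ref):
    """Unsigned mean error over predicted vs reference bond distances (Å)."""
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(pred)

# toy lanthanide-carbon distances (illustrative numbers, not the real set)
ref  = [2.65, 2.70, 2.58, 2.62]
pred = [2.61, 2.73, 2.55, 2.66]
ume = unsigned_mean_error(pred, ref)
```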
Rule-based modeling and simulations of the inner kinetochore structure.
Tschernyschkow, Sergej; Herda, Sabine; Gruenert, Gerd; Döring, Volker; Görlich, Dennis; Hofmeister, Antje; Hoischen, Christian; Dittrich, Peter; Diekmann, Stephan; Ibrahim, Bashar
2013-09-01
Combinatorial complexity is a central problem when modeling biochemical reaction networks, since the association of a few components can give rise to a large variety of protein complexes. Available classical modeling approaches are often insufficient for the detailed analysis of very large and complex networks. Recently, we developed a new rule-based modeling approach that facilitates the analysis of spatially and combinatorially complex problems. Here, we explore for the first time how this approach can be applied to a specific biological system, the human kinetochore, which is a multi-protein complex involving over 100 proteins. Applying our freely available SRSim software to a large data set on kinetochore proteins in human cells, we construct a spatial rule-based simulation model of the human inner kinetochore. The model generates an estimate of the probability distribution of the inner kinetochore 3D architecture, and we show how to analyze this distribution using information theory. In our model, the formation of a bridge between CenpA and an H3-containing nucleosome only occurs efficiently at the higher protein concentrations realized during S-phase, but possibly not in G1. Above a certain nucleosome distance, the protein bridge barely formed, pointing towards the importance of chromatin structure for kinetochore complex formation. We define a metric for the distance between structures that allows us to identify structural clusters. Using this modeling technique, we explore different hypothetical chromatin layouts. Applying a rule-based network analysis to the spatial kinetochore complex geometry allowed us to integrate experimental data on kinetochore proteins, suggesting a 3D model of the human inner kinetochore architecture that is governed by a combinatorial algebraic reaction network. This reaction network can serve as a bridge between multiple scales of modeling. Our approach can be applied to other systems beyond kinetochores. Copyright © 2013 Elsevier Ltd. All rights reserved.
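The combinatorial point made above, that a handful of pairwise binding rules already generates many distinct species, can be illustrated with a toy enumeration; the component names and rules are hypothetical placeholders, not the actual kinetochore proteins or the SRSim rule set:

```python
from itertools import combinations

# hypothetical pairwise binding rules between four components
rules = {("A", "B"), ("B", "C"), ("C", "D"), ("B", "D")}

def can_bind(x, y):
    return (x, y) in rules or (y, x) in rules

def connected(members):
    """True if the members form one connected complex under the rules."""
    members = list(members)
    seen = {members[0]}
    frontier = [members[0]]
    while frontier:
        cur = frontier.pop()
        for other in members:
            if other not in seen and can_bind(cur, other):
                seen.add(other)
                frontier.append(other)
    return len(seen) == len(members)

proteins = ["A", "B", "C", "D"]
complexes = [set(c) for r in range(2, len(proteins) + 1)
             for c in combinations(proteins, r) if connected(c)]
```

Even four components with four rules yield eight distinct multi-protein species; with 100 proteins, the explosion motivates the rule-based approach.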
NASA Astrophysics Data System (ADS)
D'Urzo, Annalisa; Konijnenberg, Albert; Rossetti, Giulia; Habchi, Johnny; Li, Jinyu; Carloni, Paolo; Sobott, Frank; Longhi, Sonia; Grandori, Rita
2015-03-01
Intrinsically disordered proteins (IDPs) form biologically active complexes that can retain a high degree of conformational disorder, escaping structural characterization by conventional approaches. An example is offered by the complex between the intrinsically disordered NTAIL domain and the phosphoprotein X domain (PXD) from measles virus (MeV). Here, distinct conformers of the complex are detected by electrospray ionization-mass spectrometry (ESI-MS) and ion mobility (IM) techniques, yielding estimates for the solvent-accessible surface area (SASA) in solution and the average collision cross-section (CCS) in the gas phase. Computational modeling of the complex in solution, based on experimental constraints, provides atomic-resolution structural models featuring different levels of compactness. The resulting models indicate high structural heterogeneity. The intermolecular interactions are predominantly hydrophobic, not only in the ordered core of the complex, but also in the dynamic, disordered regions. Electrostatic interactions become involved in the more compact states. This system represents an illustrative example of a hydrophobic complex that could be directly detected in the gas phase by native mass spectrometry. This work represents the first attempt at modeling the entire NTAIL domain bound to PXD at atomic resolution.
NASA Astrophysics Data System (ADS)
Luo, Yan; Zhang, Lifeng; Li, Ming; Sridhar, Seetharaman
2018-06-01
A complex nitride of AlxMg(1-x)N was observed in silicon steels. A thermodynamic model was developed to predict the ferrite/nitride equilibrium in the Fe-Al-Mg-N alloy system, using published binary solubility products for stoichiometric phases. The model was used to estimate the solubility product of the nitride compound, the equilibrium ferrite and nitride compositions, and the amounts of each phase, as a function of steel composition and temperature. In the current model, the molar ratio Al/(Al + Mg) in the complex nitride was high due to the low dissolved magnesium in the steel. For a steel containing 0.52 wt pct Als, 10 ppm T.Mg, and 20 ppm T.N at 1100 K (827 °C), the complex nitride was expressed as Al0.99496Mg0.00504N and its solubility product was 2.95 × 10^-7. In addition, the solution temperature of the complex nitride increased with increasing nitrogen and aluminum contents in the steel. The good agreement between the predictions and the detected precipitate compositions validated the current model.
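One hedged way to combine binary solubility products into a product for the mixed nitride is to assume an ideal (AlN)x(MgN)1-x solid solution (with a hypothetical MgN end-member; the paper's actual formulation may differ). The binary constants below are placeholders, not the published values:

```python
import math

def mixed_nitride_ksp(x, ksp_aln, ksp_mgn):
    """Solubility product of an ideal (AlN)x(MgN)1-x solid solution.
    With ideal activities a_AlN = x and a_MgN = 1-x,
    [Al]^x [Mg]^(1-x) [N] = Ksp_AlN^x * Ksp_MgN^(1-x) * x^x * (1-x)^(1-x).
    The binary Ksp values used below are placeholders."""
    return (ksp_aln ** x) * (ksp_mgn ** (1.0 - x)) \
        * (x ** x) * ((1.0 - x) ** (1.0 - x))

# molar fraction taken from the abstract's Al0.99496Mg0.00504N stoichiometry
x = 0.99496
ksp = mixed_nitride_ksp(x, ksp_aln=3e-7, ksp_mgn=1e-9)
```

The mixing terms x^x (1-x)^(1-x) lower the product slightly below the Al-rich end-member value, consistent with a nearly pure AlN precipitate.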
NASA Technical Reports Server (NTRS)
Adler, David S.; Roberts, William W., Jr.
1992-01-01
Techniques which use longitude-velocity diagrams to identify molecular cloud complexes in the disk of the Galaxy are investigated by means of model Galactic disks generated from N-body cloud-particle simulations. A procedure similar to the method used to reduce the low-level emission in Galactic l-v diagrams is employed to isolate complexes of emission in the model l-v diagram (LVCs) from the 'background' clouds. The LVCs produced in this manner yield a size-line-width relationship with a slope of 0.58 and a mass spectrum with a slope of 1.55, consistent with Galactic observations. It is demonstrated that associations identified as LVCs are often chance superpositions of clouds spread out along the line of sight in the disk of the model system. This indicates that the l-v diagram cannot be used to unambiguously determine the location of molecular cloud complexes in the model Galactic disk. The modeling results also indicate that the existence of a size-line-width relationship is not a reliable indicator of the physical nature of cloud complexes, in particular, whether the complexes are gravitationally bound objects.
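A size-line-width slope such as the 0.58 quoted above is just the least-squares slope in log-log space. A sketch on synthetic clouds (the generating exponent and scatter are chosen for illustration):

```python
import math, random

def power_law_slope(sizes, widths):
    """Least-squares slope of log(width) vs log(size),
    i.e. the exponent in width ~ size^slope."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(w) for w in widths]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# synthetic cloud complexes obeying width ~ size^0.58 with lognormal scatter
random.seed(1)
sizes = [random.uniform(1.0, 100.0) for _ in range(200)]
widths = [s ** 0.58 * math.exp(random.gauss(0.0, 0.1)) for s in sizes]
slope = power_law_slope(sizes, widths)
```

As the abstract cautions, recovering such a slope says nothing by itself about whether the fitted objects are physical, bound complexes.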
Complex fuzzy soft expert sets
NASA Astrophysics Data System (ADS)
Selvachandran, Ganeshsree; Hafeed, Nisren A.; Salleh, Abdul Razak
2017-04-01
Complex fuzzy sets and their accompanying theory, although in their infancy, have proven to be superior to classical type-1 fuzzy sets, due to their ability to represent time-periodic problem parameters and to capture the seasonality of the fuzziness that exists in the elements of a set. These are important characteristics that are pervasive in most real-world problems. However, two major problems are inherent in complex fuzzy sets: they lack a sufficient parameterization tool, and they have no mechanism to validate the values assigned to the membership functions of the elements in a set. To overcome these problems, we propose the notion of complex fuzzy soft expert sets, a hybrid model of complex fuzzy sets and soft expert sets. This model incorporates the advantages of complex fuzzy sets and soft sets, besides having the added advantage of allowing users to know the opinions of all the experts in a single model without the need for any additional cumbersome operations. As such, this model effectively improves the accuracy of representation of problem parameters that are periodic in nature, besides having a higher level of computational efficiency compared to similar models in the literature.
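A complex fuzzy grade is an amplitude-phase pair r·e^(iωt): the amplitude r in [0,1] is the ordinary membership and the phase encodes the periodic context. A minimal sketch; the union convention used (max amplitude, max phase) is one of several in the literature, chosen here as an assumption:

```python
import cmath

def complex_membership(r, omega, t):
    """Complex fuzzy grade r*exp(i*omega*t): amplitude r in [0,1] is the
    ordinary membership; the phase omega*t encodes periodicity."""
    return r * cmath.exp(1j * omega * t)

def union(m1, m2):
    """One common convention: max of amplitudes, max of phases.
    (Conventions vary across the literature; this choice is an assumption.)"""
    r = max(abs(m1), abs(m2))
    phase = max(cmath.phase(m1), cmath.phase(m2))
    return r * cmath.exp(1j * phase)

a = complex_membership(0.7, omega=2.0, t=0.5)   # phase 1.0 rad
b = complex_membership(0.4, omega=2.0, t=1.2)   # phase 2.4 rad
u = union(a, b)
```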
Application of Complex Adaptive Systems in Portfolio Management
ERIC Educational Resources Information Center
Su, Zheyuan
2017-01-01
Simulation-based methods are becoming a promising research tool in financial markets. A general Complex Adaptive System can be tailored to different application scenarios. Based on the current research, we built two models that would benefit portfolio management by utilizing Complex Adaptive Systems (CAS) in an Agent-based Modeling (ABM) approach.…
Rethinking Validation in Complex High-Stakes Assessment Contexts
ERIC Educational Resources Information Center
Koch, Martha J.; DeLuca, Christopher
2012-01-01
In this article we rethink validation within the complex contexts of high-stakes assessment. We begin by considering the utility of existing models for validation and argue that these models tend to overlook some of the complexities inherent to assessment use, including the multiple interpretations of assessment purposes and the potential…
Elementary Teachers' Selection and Use of Visual Models
ERIC Educational Resources Information Center
Lee, Tammy D.; Jones, M. Gail
2018-01-01
As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service…
Developing and Modeling Complex Social Interventions: Introducing the Connecting People Intervention
ERIC Educational Resources Information Center
Webber, Martin; Reidy, Hannah; Ansari, David; Stevens, Martin; Morris, David
2016-01-01
Objectives: Modeling the processes involved in complex social interventions is important in social work practice, as it facilitates their implementation and translation into different contexts. This article reports the process of developing and modeling the connecting people intervention (CPI), a model of practice that supports people with mental…
Modeling Bivariate Longitudinal Hormone Profiles by Hierarchical State Space Models
Liu, Ziyue; Cappola, Anne R.; Crofford, Leslie J.; Guo, Wensheng
2013-01-01
The hypothalamic-pituitary-adrenal (HPA) axis is crucial in coping with stress and maintaining homeostasis. Hormones produced by the HPA axis exhibit both complex univariate longitudinal profiles and complex relationships among different hormones. Consequently, modeling these multivariate longitudinal hormone profiles is a challenging task. In this paper, we propose a bivariate hierarchical state space model, in which each hormone profile is modeled by a hierarchical state space model, with both population-average and subject-specific components. The bivariate model is constructed by concatenating the univariate models based on the hypothesized relationship. Because of the flexible framework of state space form, the resultant models not only can handle complex individual profiles, but also can incorporate complex relationships between two hormones, including both concurrent and feedback relationship. Estimation and inference are based on marginal likelihood and posterior means and variances. Computationally efficient Kalman filtering and smoothing algorithms are used for implementation. Application of the proposed method to a study of chronic fatigue syndrome and fibromyalgia reveals that the relationships between adrenocorticotropic hormone and cortisol in the patient group are weaker than in healthy controls. PMID:24729646
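The building block of such models is the state space filter. A minimal scalar local-level Kalman filter (a deliberate simplification of the bivariate hierarchical model described above) illustrates the predict-update recursion:

```python
def local_level_filter(ys, q, r, m0=0.0, p0=1.0):
    """Kalman filter for the local-level model
        state:  x_t = x_{t-1} + w_t,  w_t ~ N(0, q)
        obs:    y_t = x_t + v_t,      v_t ~ N(0, r)
    Returns the filtered state means."""
    m, p = m0, p0
    means = []
    for y in ys:
        p = p + q                    # predict: propagate state variance
        k = p / (p + r)              # Kalman gain
        m = m + k * (y - m)          # update with the new observation
        p = (1.0 - k) * p
        means.append(m)
    return means

ys = [1.0, 1.2, 0.9, 1.1, 5.0]      # toy hormone series; last point jumps
means = local_level_filter(ys, q=0.01, r=0.25)
```

The bivariate model in the paper concatenates two such recursions and couples them through the hypothesized concurrent and feedback terms; the scalar version shows only the core mechanics.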
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, J.B.; Christensen, T.H.
1999-11-01
Complexation of cadmium (Cd), nickel (Ni), and zinc (Zn) by dissolved organic carbon (DOC) in leachate-polluted groundwater was measured using a resin equilibrium method and an aquifer material sorption technique. The first method is commonly used in complexation studies, while the second method better represents aquifer conditions. The two approaches gave similar results. Metal-DOC complexation was measured over a range of DOC concentrations using the resin equilibrium method, and the results were compared to simulations made by two speciation models containing default databases on metal-DOC complexes (WHAM and MINTEQA2). The WHAM model gave reasonable estimates of Cd and Ni complexation by DOC for both leachate-polluted groundwater samples. The estimated effect of complexation differed less than 50% from the experimental values, corresponding to a deviation in the activity of the free metal ion of a factor of 2.5. The effect of DOC complexation for Zn was largely overestimated by the WHAM model, and it was found that using a binding constant of 1.7 instead of the default value of 1.3 would improve the fit between the simulations and the experimental data. The MINTEQA2 model gave reasonable predictions of the complexation of Cd and Zn by DOC, whereas deviations in the estimated activity of the free Ni^2+ ion as compared to experimental results are up to a factor of 5.
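The effect of a conditional binding constant on the free-ion activity can be sketched with a 1:1 M + L ⇌ ML mass balance; the concentrations and the direct use of log K = 1.3 vs 1.7 as simple conditional constants are illustrative assumptions, not the WHAM parameterization:

```python
def free_metal(m_total, l_total, logK, n_iter=100):
    """Free metal concentration for a 1:1 metal-ligand complex
    M + L <=> ML with conditional constant K (illustrative values only)."""
    K = 10.0 ** logK
    m = m_total
    for _ in range(n_iter):
        lig = l_total / (1.0 + K * m)     # ligand mass balance
        m = m_total / (1.0 + K * lig)     # metal mass balance
    return m

# effect of raising log K from 1.3 to 1.7 (the Zn adjustment in the abstract)
m_13 = free_metal(1e-6, 1e-4, logK=1.3)
m_17 = free_metal(1e-6, 1e-4, logK=1.7)
```

Raising the constant pulls more metal into the DOC complex and lowers the free-ion concentration, the direction of the Zn correction described above.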
Westö, Johan; May, Patrick J C
2018-05-02
Receptive field (RF) models are an important tool for deciphering neural responses to sensory stimuli. The two currently popular RF models are multi-filter linear-nonlinear (LN) models and context models. Models are, however, never correct, and they rely on assumptions to keep them simple enough to be interpretable. As a consequence, different models describe different stimulus-response mappings, which may or may not be good approximations of real neural behavior. In the current study, we take up two tasks: First, we introduce new ways to estimate context models with realistic nonlinearities, that is, with logistic and exponential functions. Second, we evaluate context models and multi-filter LN models in terms of how well they describe recorded data from complex cells in cat primary visual cortex. Our results, based on single-spike information and correlation coefficients, indicate that context models outperform corresponding multi-filter LN models of equal complexity (measured in terms of number of parameters), with the best increase in performance being achieved by the novel context models. Consequently, our results suggest that the multi-filter LN-model framework is suboptimal for describing the behavior of complex cells: the context-model framework is clearly superior while still providing interpretable quantifications of neural behavior.
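A single-filter LN model is a linear projection of the stimulus followed by a static nonlinearity. A minimal sketch with a logistic output stage; the filter and stimuli are toy numbers, not estimated receptive fields:

```python
import math

def ln_model(stimulus, filt, bias):
    """Single-filter LN model: project the stimulus onto the RF filter,
    then pass the drive through a logistic nonlinearity to get a
    spike probability."""
    drive = sum(f * s for f, s in zip(filt, stimulus)) + bias
    return 1.0 / (1.0 + math.exp(-drive))

filt = [0.5, 1.0, -0.5]          # toy RF filter (illustrative, not fitted)
p_pref = ln_model([1.0, 1.0, -1.0], filt, bias=-1.0)    # matches the filter
p_null = ln_model([-1.0, -1.0, 1.0], filt, bias=-1.0)   # anti-preferred
```

Multi-filter LN and context models extend this by pooling several filters or by letting the effective filter depend on the surrounding stimulus context.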
NASA Astrophysics Data System (ADS)
Messiaen, A. M.
1996-11-01
A new discharge regime has been observed on the pumped limiter tokamak TEXTOR-94 in the presence of strong radiation cooling and for different scenarios of additional heating. The radiated power fraction (up to 90%) is feedback controlled by the amount of Ne seeded in the edge. This regime meets many of the necessary conditions for a future fusion reactor. Energy confinement increases with increasing density (reminiscent of the Z-mode obtained at ISX-B), and confinement as good as ELM-free H-mode (enhancement factor versus ITERH93-P up to 1.2) is obtained at high densities (up to 1.2 times the Greenwald limit) with peaked density profiles showing a peaking factor of about 2 and central density values around 10^14 cm^-3. In experiments where the energy content of the discharges is kept constant with an energy feedback loop acting on the amount of ICRH power, stable and stationary discharges are obtained for intervals of more than 5 s, i.e. 100 times the energy confinement time or about equal to the resistive skin time, even with the cylindrical q_a as low as 2.8. β-values up to the β-limits of TEXTOR-94 are achieved (i.e. β_N ≈ 2 and β_p ≈ 1.5) and the figure of merit for ignition margin f_Hqa in these discharges can be as high as 0.7. No detrimental effects of the seeded impurity on the reactivity of the plasma are observed. He removal in these discharges has also been investigated. [1] Laboratoire de Physique des Plasmas-Laboratorium voor Plasmafysica, Association "EURATOM-Belgian State", Ecole Royale Militaire-Koninklijke Militaire School, Brussels, Belgium [2] Institut für Plasmaphysik, Forschungszentrum Jülich GmbH, Association "EURATOM-KFA", Jülich, Germany [3] Fusion Energy Research Program, Mechanical Engineering Division, University of California at San Diego, La Jolla, USA [4] FOM Instituut voor Plasmafysica Rijnhuizen, Associatie "FOM-EURATOM", Nieuwegein, The Netherlands [*] Researcher at NFSR, Belgium
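The Greenwald limit referenced above is n_GW = Ip/(π a²), giving the line-averaged density limit in units of 10^20 m^-3 for Ip in MA and minor radius a in m. A one-line sketch with TEXTOR-like illustrative values (not the actual discharge parameters):

```python
import math

def greenwald_density(Ip_MA, a_m):
    """Greenwald density limit n_GW = Ip / (pi a^2), in 10^20 m^-3,
    with plasma current Ip in MA and minor radius a in m."""
    return Ip_MA / (math.pi * a_m ** 2)

# illustrative TEXTOR-like values: 0.4 MA, minor radius 0.46 m
n_gw = greenwald_density(Ip_MA=0.4, a_m=0.46)
```

This puts n_GW near 0.6 × 10^20 m^-3 (6 × 10^13 cm^-3), so operating at 1.2 times the limit with a peaking factor of about 2 is consistent with the quoted central densities around 10^14 cm^-3.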
Gunz, Philipp; Ramsier, Marissa; Kuhrig, Melanie; Hublin, Jean-Jacques; Spoor, Fred
2012-01-01
The bony labyrinth in the temporal bone houses the sensory systems of balance and hearing. While the overall structure of the semicircular canals and cochlea is similar across therian mammals, their detailed morphology varies even among closely related groups. As such, the shape of the labyrinth carries valuable functional and phylogenetic information. Here we introduce a new, semilandmark-based three-dimensional geometric morphometric approach to shape analysis of the labyrinth, as a major improvement upon previous metric studies based on linear measurements and angles. We first provide a detailed, step-by-step description of the measurement protocol. Subsequently, we test our approach using a geographically diverse sample of 50 recent modern humans and 30 chimpanzee specimens belonging to Pan troglodytes troglodytes and P. t. verus. Our measurement protocol can be applied to CT scans of different spatial resolutions because it primarily quantifies the midline skeleton of the bony labyrinth. Accurately locating the lumen centre of the semicircular canals and the cochlea is not affected by the partial volume and thresholding effects that can make the comparison of the outer border problematic. After virtually extracting the bony labyrinth from CT scans of the temporal bone, we computed its midline skeleton by thinning the encased volume. On the resulting medial axes of the semicircular canals and cochlea we placed a sequence of semilandmarks. After Procrustes superimposition, the shape coordinates were analysed using multivariate statistics. We found statistically significant shape differences between humans and chimpanzees which corroborate previous analyses of the labyrinth based on traditional measurements. As the geometric relationship among the semilandmark coordinates was preserved throughout the analysis, we were able to quantify and visualize even small-scale shape differences. 
Notably, our approach made it possible to detect and visualize subtle, yet statistically significant (P = 0.009), differences between two chimpanzee subspecies in the shape of their semicircular canals. The ability to discriminate labyrinth shape at the subspecies level demonstrates that the approach presented here has great potential in future taxonomic studies of fossil specimens. PMID:22404255
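The core of Procrustes superimposition (translate to the centroid, scale to unit centroid size, rotate by the optimal angle) has a closed form in 2D. The sketch below is a simplified 2D illustration; real labyrinth semilandmarks are 3D, where the rotation is found via SVD instead:

```python
import math

def procrustes_2d(ref, lm):
    """Superimpose 2D (semi)landmarks lm onto ref: translate both to the
    centroid, scale to unit centroid size, then rotate lm by the
    closed-form optimal angle. Returns the aligned coordinates."""
    def center_scale(pts):
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        pts = [(x - cx, y - cy) for x, y in pts]
        size = math.sqrt(sum(x * x + y * y for x, y in pts))
        return [(x / size, y / size) for x, y in pts]
    a = center_scale(ref)
    b = center_scale(lm)
    # optimal rotation angle t maximizes the cross-correlation of a and R(t) b
    num = sum(ay * bx - ax * by for (ax, ay), (bx, by) in zip(a, b))
    den = sum(ax * bx + ay * by for (ax, ay), (bx, by) in zip(a, b))
    t = math.atan2(num, den)
    return [(x * math.cos(t) - y * math.sin(t),
             x * math.sin(t) + y * math.cos(t)) for x, y in b]

# demo: a square rotated by 0.5 rad, scaled by 2 and translated is recovered
ref = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
rot = [(2 * (x * math.cos(0.5) - y * math.sin(0.5)) + 3.0,
        2 * (x * math.sin(0.5) + y * math.cos(0.5)) + 5.0) for x, y in ref]
aligned = procrustes_2d(ref, rot)
```

After alignment, only shape differences remain, which is what the multivariate statistics in the study operate on.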
Using multi-criteria analysis of simulation models to understand complex biological systems
Maureen C. Kennedy; E. David Ford
2011-01-01
Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...
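Multi-criteria comparison with Pareto optimality reduces to filtering out dominated model runs. A minimal sketch with hypothetical two-criterion scores (both minimized):

```python
def dominates(q, p):
    """q dominates p if q is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(qi <= pi for qi, pi in zip(q, p)) and \
           any(qi < pi for qi, pi in zip(q, p))

def pareto_front(points):
    """Non-dominated subset of the candidate model runs."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical model runs scored on two output criteria
runs = [(0.2, 0.9), (0.4, 0.4), (0.9, 0.1), (0.5, 0.5), (0.3, 0.8)]
front = pareto_front(runs)
```

A run like (0.5, 0.5) is dropped because (0.4, 0.4) beats it on both criteria; the surviving front exposes the trade-offs between model outputs that single-objective calibration hides.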
Analysis of Immune Complex Structure by Statistical Mechanics and Light Scattering Techniques.
NASA Astrophysics Data System (ADS)
Busch, Nathan Adams
1995-01-01
The size and structure of immune complexes determine their behavior in the immune system. The chemical physics of the complex formation is not well understood; this is due in part to inadequate characterization of the proteins involved, and in part to the lack of sufficiently well developed theoretical techniques. Understanding the complex formation will permit rational design of strategies for inhibiting tissue deposition of the complexes. A statistical mechanical model of the proteins based upon the theory of associating fluids was developed. The multipole electrostatic potential for each protein used in this study was characterized for net protein charge, dipole moment magnitude, and dipole moment direction. The binding sites between the model antigen and antibodies were characterized for their net surface area, energy, and position relative to the dipole moment of the protein. The equilibrium binding graphs generated with the protein statistical mechanical model compare favorably with experimental data obtained from radioimmunoassay results. The isothermal compressibility predicted by the model agrees with results obtained from dynamic light scattering. The statistical mechanics model was used to investigate association between the model antigen and selected pairs of antibodies. It was found that, in accordance with expectations from thermodynamic arguments, the highest total binding energy yielded complex distributions skewed to larger complex sizes. From examination of the simulated formation of ring structures from linear chain complexes, and from the joint shape probability surfaces, it was found that ring configurations were formed by the 'folding' of linear chains until the ends come within binding distance. By comparing single antigen/two antibody systems which differ only in their respective binding site locations, it was found that binding site location influences complex size and shape distributions only when ring formation occurs. 
The internal potential energy of a ring complex is considerably less than that of the non-associating system; therefore the ring complexes are quite stable and show no evidence of breaking up and collapsing into smaller complexes. Ring formation will occur only in systems where the total free energy of each complex may be minimized. Thus, ring formation will occur even though entropically unfavorable conformations result, if the total free energy can be minimized by doing so.
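The chain-folding picture above, a ring closing only when the chain ends wander within binding distance, can be caricatured with a freely jointed 2D chain; the link counts and binding distance are arbitrary illustrations, not the model's actual geometry:

```python
import math, random

def ring_closure_fraction(n_links, bind_dist, trials=20000, seed=7):
    """Monte Carlo estimate of how often a freely jointed 2D chain of
    unit-length links ends within binding distance of its start, i.e.
    could 'fold' into a ring (a toy analogue of the chain-to-ring step)."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        x = y = 0.0
        for _ in range(n_links):
            t = random.uniform(0.0, 2.0 * math.pi)
            x += math.cos(t)
            y += math.sin(t)
        if math.hypot(x, y) < bind_dist:
            hits += 1
    return hits / trials

frac_short = ring_closure_fraction(4, 1.0)    # short chain closes often
frac_long = ring_closure_fraction(16, 1.0)    # long chain closes rarely
```

Shorter chains close far more readily, which is one reason complex size distributions shift once ring formation becomes possible.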
González, Janneth; Gálvez, Angela; Morales, Ludis; Barreto, George E.; Capani, Francisco; Sierra, Omar; Torres, Yolima
2013-01-01
Three-dimensional models of the alpha- and beta-1 subunits of the calcium-activated potassium channel (BK) were predicted by threading modeling. A recursive approach comprising of sequence alignment and model building based on three templates was used to build these models, with the refinement of non-conserved regions carried out using threading techniques. The complex formed by the subunits was studied by means of docking techniques, using 3D models of the two subunits, and an approach based on rigid-body structures. Structural effects of the complex were analyzed with respect to hydrogen-bond interactions and binding-energy calculations. Potential interaction sites of the complex were determined by referencing a study of the difference accessible surface area (DASA) of the protein subunits in the complex. PMID:23492851
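The DASA criterion amounts to flagging residues whose accessible surface area drops when the complex forms. A sketch with hypothetical residue names and ASA values (in Å²), not the actual BK subunit data:

```python
def interface_residues(asa_free, asa_complex, cutoff=1.0):
    """Interface residues are those losing more than `cutoff` of
    accessible surface area upon binding (the DASA criterion).
    All ASA values passed in below are illustrative."""
    return [res for res in asa_free
            if asa_free[res] - asa_complex.get(res, 0.0) > cutoff]

asa_free = {"R10": 120.0, "L42": 15.0, "E77": 60.0}     # isolated subunit
asa_complex = {"R10": 40.0, "L42": 14.5, "E77": 58.0}   # within the complex
iface = interface_residues(asa_free, asa_complex)
```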
Improving a regional model using reduced complexity and parameter estimation
Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.
2002-01-01
The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. 
Finally, a simple analytical solution was used to clarify the GFLOW model's prediction that, for a model that is properly calibrated for heads, regional drawdowns are relatively unaffected by the choice of aquifer properties, but that mine inflows are strongly affected. Paradoxically, by reducing model complexity, we have increased the understanding gained from the modeling effort.
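The parsimonious calibration idea, a small number of parameters fitted against head residuals with an analytic forward model, can be sketched with the Thiem drawdown solution and a golden-section search standing in for UCODE; all values are illustrative, not the mine model's:

```python
import math

def thiem_drawdown(r, T, Q=1000.0, R=1.0e4):
    """Steady-state drawdown at radius r for pumping rate Q and
    transmissivity T (Thiem solution; Q and the radius of influence R
    are illustrative, consistent units assumed)."""
    return Q / (2.0 * math.pi * T) * math.log(R / r)

def fit_T(radii, observed, lo=10.0, hi=1000.0, n=60):
    """One-parameter least-squares calibration of T by golden-section
    search on the sum of squared head residuals."""
    def sse(T):
        return sum((thiem_drawdown(r, T) - s) ** 2
                   for r, s in zip(radii, observed))
    g = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    for _ in range(n):
        c, d = b - g * (b - a), a + g * (b - a)
        if sse(c) < sse(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

radii = [50.0, 100.0, 500.0, 2000.0]
true_T = 250.0
observed = [thiem_drawdown(r, true_T) for r in radii]
T_hat = fit_T(radii, observed)
```

With noise-free synthetic heads the search recovers the generating transmissivity; real calibrations add observation error, more parameters, and the confidence intervals that codes like UCODE report.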
Research Area 3: Mathematics (3.1 Modeling of Complex Systems)
2017-10-31
Proposal should be directed to Dr. John Lavery. Sponsoring/monitoring agency: U.S. Army Research Office, P.O. Box 12211.
Stock, Kristin; Estrada, Marta F; Vidic, Suzana; Gjerde, Kjersti; Rudisch, Albin; Santo, Vítor E; Barbier, Michaël; Blom, Sami; Arundkar, Sharath C; Selvam, Irwin; Osswald, Annika; Stein, Yan; Gruenewald, Sylvia; Brito, Catarina; van Weerden, Wytske; Rotter, Varda; Boghaert, Erwin; Oren, Moshe; Sommergruber, Wolfgang; Chong, Yolanda; de Hoogt, Ronald; Graeser, Ralph
2016-07-01
Two-dimensional (2D) cell cultures growing on plastic do not recapitulate the three-dimensional (3D) architecture and complexity of human tumors. More representative models are required for drug discovery and validation. Here, 2D culture and 3D mono- and stromal co-culture models of increasing complexity have been established and cross-comparisons made using three standard carcinoma cell lines: MCF7, LNCaP and NCI-H1437. Fluorescence-based growth curves, 3D image analysis, immunohistochemistry and treatment responses showed that end points differed according to cell type, stromal co-culture and culture format. The adaptable methodologies described here should guide the choice of appropriate simple and complex in vitro models.
Qualitative models and experimental investigation of chaotic NOR gates and set/reset flip-flops
NASA Astrophysics Data System (ADS)
Rahman, Aminur; Jordan, Ian; Blackmore, Denis
2018-01-01
It has been observed through experiments and SPICE simulations that logical circuits based upon Chua's circuit exhibit complex dynamical behaviour. This behaviour can be used to design analogues of more complex logic families and some properties can be exploited for electronics applications. Some of these circuits have been modelled as systems of ordinary differential equations. However, as the number of components in newer circuits increases so does the complexity. This renders continuous dynamical systems models impractical and necessitates new modelling techniques. In recent years, some discrete dynamical models have been developed using various simplifying assumptions. To create a robust modelling framework for chaotic logical circuits, we developed both deterministic and stochastic discrete dynamical models, which exploit the natural recurrence behaviour, for two chaotic NOR gates and a chaotic set/reset flip-flop. This work presents a complete applied mathematical investigation of logical circuits. Experiments on our own designs of the above circuits are modelled and the models are rigorously analysed and simulated showing surprisingly close qualitative agreement with the experiments. Furthermore, the models are designed to accommodate dynamics of similarly designed circuits. This will allow researchers to develop ever more complex chaotic logical circuits with a simple modelling framework.
NASA Astrophysics Data System (ADS)
Kissinger, Alexander; Noack, Vera; Knopf, Stefan; Konrad, Wilfried; Scheer, Dirk; Class, Holger
2017-06-01
Saltwater intrusion into potential drinking water aquifers due to the injection of CO2 into deep saline aquifers is one of the hazards associated with the geological storage of CO2. Thus, in a site-specific risk assessment, models for predicting the fate of the displaced brine are required. Practical simulation of brine displacement involves decisions regarding the complexity of the model. The choice of an appropriate level of model complexity depends on multiple criteria: the target variable of interest, the relevant physical processes, the computational demand, the availability of data, and the data uncertainty. In this study, we set up a regional-scale geological model for a realistic (but not real) onshore site in the North German Basin with characteristic geological features for that region. A major aim of this work is to identify the relevant parameters controlling saltwater intrusion in a complex structural setting and to test the applicability of different model simplifications. The model that is used to identify relevant parameters fully couples flow in shallow freshwater aquifers and deep saline aquifers. This model also includes variable-density transport of salt and realistically incorporates surface boundary conditions with groundwater recharge. The complexity of this model is then reduced in several steps, by neglecting physical processes (two-phase flow near the injection well, variable-density flow) and by simplifying the complex geometry of the geological model. The results indicate that the initial salt distribution prior to the injection of CO2 is one of the key parameters controlling shallow aquifer salinization. However, determining the initial salt distribution involves large uncertainties in the regional-scale hydrogeological parameterization and requires complex and computationally demanding models (regional-scale variable-density salt transport). 
In order to evaluate strategies for minimizing leakage into shallow aquifers, other target variables can be considered, such as the volumetric leakage rate into shallow aquifers or the pressure buildup in the injection horizon. Our results show that simplified models, which neglect variable-density salt transport, can reach an acceptable agreement with more complex models.
Behavior of the gypsy moth life system model and development of synoptic model formulations
J. J. Colbert; Xu Rumei
1991-01-01
Aims of the research: The gypsy moth life system model (GMLSM) is a complex model which incorporates numerous components (both biotic and abiotic) and ecological processes. It is a detailed simulation model with considerable biological realism. However, it has not yet been tested against life system data. For such complex models, evaluation and testing cannot be adequately...
ADAM: Analysis of Discrete Models of Biological Systems Using Computer Algebra
2011-01-01
Background Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. Results We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Conclusions Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. 
ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics. PMID:21774817
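The core conversion behind tools like ADAM can be illustrated directly: Boolean update rules are rewritten as polynomials over GF(2) (AND becomes xy, OR becomes x + y + xy, NOT becomes x + 1), and steady-state attractors are exactly the solutions of x_i = f_i(x). The minimal sketch below uses an invented three-node network and a brute-force search; ADAM itself solves the corresponding polynomial system with computer-algebra methods rather than enumeration:

```python
from itertools import product

# Boolean operations expressed as polynomials over GF(2):
#   AND(x, y) = x*y,  OR(x, y) = x + y + x*y,  NOT(x) = x + 1   (all mod 2)
AND = lambda x, y: (x * y) % 2
OR = lambda x, y: (x + y + x * y) % 2
NOT = lambda x: (x + 1) % 2

# A hypothetical 3-node network: x1' = x1 AND x2, x2' = x1 OR x2, x3' = x1
def step(state):
    x1, x2, x3 = state
    return (AND(x1, x2), OR(x1, x2), x1)

def fixed_points(n=3):
    """Steady-state attractors: states with F(x) = x (exhaustive for small n)."""
    return [s for s in product((0, 1), repeat=n) if step(s) == s]

print(fixed_points())  # each listed state satisfies x_i = f_i(x) over GF(2)
```

Enumeration is exponential in the number of nodes, which is precisely why the algebraic formulation matters: solving the polynomial system scales far better on the sparse, robust networks typical of biology.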
Groundwater modelling in decision support: reflections on a unified conceptual framework
NASA Astrophysics Data System (ADS)
Doherty, John; Simmons, Craig T.
2013-11-01
Groundwater models are commonly used as a basis for environmental decision-making. There has been discussion and debate in recent times regarding the issue of model simplicity and complexity. This paper contributes to this ongoing discourse. The selection of an appropriate level of model structural and parameterization complexity is not a simple matter. Although the metrics on which such selection should be based are simple, there are many competing, and often unquantifiable, considerations which must be taken into account as these metrics are applied. A unified conceptual framework, intended to underpin groundwater modelling in decision support, is introduced and described with a direct focus on matters of model simplicity and complexity.
Modeling the chemistry of complex petroleum mixtures.
Quann, R J
1998-01-01
Determining the complete molecular composition of petroleum and its refined products is not feasible with current analytical techniques because of the astronomical number of molecular components. Modeling the composition and behavior of such complex mixtures in refinery processes has accordingly evolved along a simplifying concept called lumping. Lumping reduces the complexity of the problem to a manageable form by grouping the entire set of molecular components into a handful of lumps. This traditional approach does not have a molecular basis and therefore excludes important aspects of process chemistry and molecular property fundamentals from the model's formulation. A new approach called structure-oriented lumping has been developed to model the composition and chemistry of complex mixtures at a molecular level. The central concept is to represent an individual molecule or a set of closely related isomers as a mathematical construct of certain specific and repeating structural groups. A complex mixture such as petroleum can then be represented as thousands of distinct molecular components, each having a mathematical identity. This enables the automated construction of large complex reaction networks with tens of thousands of specific reactions for simulating the chemistry of complex mixtures. Further, the method provides a convenient framework for incorporating molecular physical property correlations, existing group contribution methods, molecular thermodynamic properties, and the structure-activity relationships of chemical kinetics in the development of models. PMID:9860903
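The bookkeeping at the heart of structure-oriented lumping, a molecule represented as a vector of structural-group counts so that properties become linear combinations over the group set, can be sketched in a few lines. The two-group subset below (standard CH3/CH2 increments) is purely illustrative; the actual method uses a far larger set of structural groups:

```python
# Each structural group contributes a fixed (C, H) atom increment; a molecule
# is a vector (here a dict) of group counts, so properties are computed as
# linear combinations over the group set.  Illustrative two-group subset only.
GROUPS = {"CH3": (1, 3), "CH2": (1, 2)}
MASS = {"C": 12.011, "H": 1.008}

def molecular_weight(group_counts):
    c = sum(n * GROUPS[g][0] for g, n in group_counts.items())
    h = sum(n * GROUPS[g][1] for g, n in group_counts.items())
    return c * MASS["C"] + h * MASS["H"]

# n-hexane = 2 x CH3 + 4 x CH2  ->  C6H14
print(round(molecular_weight({"CH3": 2, "CH2": 4}), 3))  # 86.178
```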
How rare is complex life in the Milky Way?
Bounama, Christine; von Bloh, Werner; Franck, Siegfried
2007-10-01
An integrated Earth system model was applied to calculate the number of habitable Earth-analog planets that are likely to have developed primitive (unicellular) and complex (multicellular) life in extrasolar planetary systems. The model is based on the global carbon cycle mediated by life and driven by increasing stellar luminosity and plate tectonics. We assumed that the hypothetical primitive and complex life forms differed in their temperature limits and CO(2) tolerances. Though complex life would be more vulnerable to environmental stress, its presence would amplify weathering processes on a terrestrial planet. The model allowed us to calculate the average number of Earth-analog planets that may harbor such life by using the formation rate of Earth-like planets in the Milky Way as well as the size of a habitable zone that could support primitive and complex life forms. The number of planets predicted to bear complex life was found to be approximately 2 orders of magnitude lower than the number predicted for primitive life forms. Our model predicted a maximum abundance of such planets around 1.8 Ga ago and allowed us to calculate the average distance between potentially habitable planets in the Milky Way. If the model predictions are accurate, the future missions DARWIN (up to a probability of 65%) and TPF (up to 20%) are likely to detect at least one planet with a biosphere composed of complex life.
Influence of dissolved organic matter on the complexation of mercury under sulfidic conditions.
Miller, Carrie L; Mason, Robert P; Gilmour, Cynthia C; Heyes, Andrew
2007-04-01
The complexation of Hg under sulfidic conditions influences its bioavailability for microbial methylation. Neutral dissolved Hg-sulfide complexes are readily available to Hg-methylating bacteria in culture, and thermodynamic models predict that inorganic Hg-sulfide complexes dominate dissolved Hg speciation under natural sulfidic conditions. However, these models have not been validated in the field. To examine the complexation of Hg in natural sulfidic waters, octanol/water partitioning methods were modified for use under environmentally relevant conditions, and a centrifuge ultrafiltration technique was developed. These techniques demonstrated much lower concentrations of dissolved Hg-sulfide complexes than predicted. Furthermore, the study revealed an interaction between Hg, dissolved organic matter (DOM), and sulfide that is not captured by current thermodynamic models. Whereas Hg forms strong complexes with DOM under oxic conditions, these complexes had not been expected to form in the presence of sulfide because of the stronger affinity of Hg for sulfide relative to its affinity for DOM. The observed interaction between Hg and DOM in the presence of sulfide likely involves the formation of a DOM-Hg-sulfide complex or results from the hydrophobic partitioning of neutral Hg-sulfide complexes into the higher-molecular-weight DOM. An understanding of the mechanism of this interaction and determination of complexation coefficients for the Hg-sulfide-DOM complex are needed to adequately assess how our new finding affects Hg bioavailability, sorption, and flux.
Leder, Helmut
2017-01-01
Visual complexity is relevant for many areas, ranging from improving the usability of technical displays or websites to understanding aesthetic experiences. Therefore, many attempts have been made to relate objective properties of images to perceived complexity in artworks and other images. It has been argued that visual complexity is a multidimensional construct consisting mainly of two dimensions: a quantitative dimension that increases complexity through the number of elements, and a structural dimension representing order, which is negatively related to complexity. The objective of this work is to study human perception of visual complexity utilizing two large independent sets of abstract patterns. A wide range of computational measures of complexity was calculated, further combined using linear models as well as machine learning (random forests), and compared with data from human evaluations. Our results confirm the adequacy of existing two-factor models of perceived visual complexity consisting of a quantitative and a structural factor (in our case mirror symmetry) for both of our stimulus sets. In addition, a non-linear transformation of mirror symmetry, giving more influence to small deviations from symmetry, greatly increased explained variance. Thus, we again demonstrate the multidimensional nature of human complexity perception and present comprehensive quantitative models of the visual complexity of abstract patterns, which might be useful for future experiments and applications. PMID:29099832
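The two-factor structure described above can be made concrete with a toy predictor: a quantitative term (density of elements) minus a weighted structural term (mirror symmetry). The patterns, the symmetry measure, and the weights below are invented for illustration and are not the measures used in the study:

```python
import numpy as np

def mirror_symmetry(img):
    """Fraction of cells equal to their left-right mirror image."""
    return float(np.mean(img == np.fliplr(img)))

def predicted_complexity(img, w_quant=1.0, w_struct=0.5):
    # quantitative factor: element density; structural factor: symmetry,
    # entering negatively (order reduces perceived complexity)
    return w_quant * float(np.mean(img)) - w_struct * mirror_symmetry(img)

symmetric = np.array([[1, 0, 0, 1], [0, 1, 1, 0]])    # fully mirror-symmetric
asymmetric = np.array([[1, 1, 0, 0], [1, 1, 0, 0]])   # same density, no symmetry

print(predicted_complexity(symmetric) < predicted_complexity(asymmetric))  # True
```

At equal element density the symmetric pattern scores lower, which is exactly the behavior of the two-factor models the study evaluates.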
NASA Astrophysics Data System (ADS)
Tournassat, C.; Tinnacher, R. M.; Grangeon, S.; Davis, J. A.
2018-01-01
The prediction of U(VI) adsorption onto montmorillonite clay is confounded by the complexities of: (1) the montmorillonite structure in terms of adsorption sites on basal and edge surfaces, and the complex interactions between the electrical double layers at these surfaces, and (2) U(VI) solution speciation, which can include cationic, anionic and neutral species. Previous U(VI)-montmorillonite adsorption and modeling studies have typically expanded classical surface complexation modeling approaches, initially developed for simple oxides, to include both cation exchange and surface complexation reactions. However, previous models have not taken into account the unique characteristics of electrostatic surface potentials that occur at montmorillonite edge sites, where the electrostatic surface potential of basal plane cation exchange sites influences the surface potential of neighboring edge sites ('spillover' effect). A series of U(VI)-Na-montmorillonite batch adsorption experiments was conducted as a function of pH, with variable U(VI), Ca, and dissolved carbonate concentrations. Based on the experimental data, a new type of surface complexation model (SCM) was developed for montmorillonite, one that specifically accounts for the spillover effect using the edge surface speciation model by Tournassat et al. (2016a). The SCM allows for a prediction of U(VI) adsorption under varying chemical conditions with a minimum number of fitting parameters, not only for our own experimental results, but also for a number of published data sets. The model agreed well with many of these datasets without introducing a second site type or including the formation of ternary U(VI)-carbonato surface complexes.
The model predictions were greatly impacted by utilizing analytical measurements of dissolved inorganic carbon (DIC) concentrations in individual sample solutions rather than assuming solution equilibration with a specific partial pressure of CO2, even when the gas phase was laboratory air. Because of strong aqueous U(VI)-carbonate solution complexes, the measurement of DIC concentrations was even important for systems set up in the 'absence' of CO2, due to low levels of CO2 contamination during the experiment.
NASA Astrophysics Data System (ADS)
Li, Shu-Bin; Cao, Dan-Ni; Dang, Wen-Xiu; Zhang, Lin
As a new cross-discipline, complexity science has penetrated every field of the economy and society. With the arrival of big data, research in complexity science has reached a new summit. In recent years, it has offered a new perspective on traffic control through complex networks theory. The interactions of many kinds of information in a traffic system form a huge complex system. A new mesoscopic traffic flow model incorporating variable speed limits (VSL) is proposed, and a simulation process based on complex networks theory is designed around it. This paper studies the effect of VSL on dynamic traffic flow and then analyzes the optimal VSL control strategy in different network topologies. The conclusions of this research are useful for proposing reasonable transportation plans and developing effective traffic management and control measures.
The Skilled Counselor Training Model: Skills Acquisition, Self-Assessment, and Cognitive Complexity
ERIC Educational Resources Information Center
Little, Cassandra; Packman, Jill; Smaby, Marlowe H.; Maddux, Cleborne D.
2005-01-01
The authors evaluated the effectiveness of the Skilled Counselor Training Model (SCTM; M. H. Smaby, C. D. Maddux, E. Torres-Rivera, & R. Zimmick, 1999) in teaching counseling skills and in fostering counselor cognitive complexity. Counselor trainees who completed the SCTM had better counseling skills and higher levels of cognitive complexity than…
Classrooms as Complex Adaptive Systems: A Relational Model
ERIC Educational Resources Information Center
Burns, Anne; Knox, John S.
2011-01-01
In this article, we describe and model the language classroom as a complex adaptive system (see Logan & Schumann, 2005). We argue that linear, categorical descriptions of classroom processes and interactions do not sufficiently explain the complex nature of classrooms, and cannot account for how classroom change occurs (or does not occur), over…
Small-time Scale Network Traffic Prediction Based on Complex-valued Neural Network
NASA Astrophysics Data System (ADS)
Yang, Bin
2017-07-01
Accurate models play an important role in capturing the significant characteristics of network traffic, analyzing network dynamics, and improving forecasting accuracy for system dynamics. In this study, a complex-valued neural network (CVNN) model is proposed to further improve the accuracy of small-time scale network traffic forecasting. An artificial bee colony (ABC) algorithm is used to optimize the complex-valued and real-valued parameters of the CVNN model. Small-scale traffic measurement data, namely TCP traffic data, is used to test the performance of the CVNN model. Experimental results reveal that the CVNN model forecasts the small-time scale network traffic measurement data very accurately.
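The distinguishing ingredient of a CVNN, complex-valued weights with a split activation applied separately to real and imaginary parts, can be sketched in a few lines of numpy. The layer sizes, the tanh split activation, and the random weights are illustrative assumptions; the paper's ABC-based parameter optimization is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def split_tanh(z):
    """Split activation: tanh applied separately to real and imaginary parts."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def cvnn_forward(x, W1, b1, W2, b2):
    """One hidden layer, complex-valued throughout; magnitude as real output."""
    h = split_tanh(W1 @ x + b1)
    y = W2 @ h + b2
    return np.abs(y)  # real-valued traffic forecast

# toy dimensions: 4 lagged traffic samples in, 6 hidden units, 1 output
W1 = rng.standard_normal((6, 4)) + 1j * rng.standard_normal((6, 4))
b1 = rng.standard_normal(6) + 1j * rng.standard_normal(6)
W2 = rng.standard_normal((1, 6)) + 1j * rng.standard_normal((1, 6))
b2 = rng.standard_normal(1) + 1j * rng.standard_normal(1)

x = np.array([0.2, 0.5, 0.3, 0.7]) + 0j  # lagged traffic window
print(cvnn_forward(x, W1, b1, W2, b2).shape)  # (1,)
```

In the paper's setting the real and imaginary weight components are the parameters the ABC algorithm searches over; any derivative-free optimizer could be substituted in this sketch.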
ERIC Educational Resources Information Center
Dagne, Getachew A.; Brown, C. Hendricks; Howe, George W.
2007-01-01
This article presents new methods for modeling the strength of association between multiple behaviors in a behavioral sequence, particularly those involving substantively important interaction patterns. Modeling and identifying such interaction patterns becomes more complex when behaviors are assigned to more than two categories, as is the case…
Calibration of an Unsteady Groundwater Flow Model for a Complex, Strongly Heterogeneous Aquifer
NASA Astrophysics Data System (ADS)
Curtis, Z. K.; Liao, H.; Li, S. G.; Phanikumar, M. S.; Lusch, D.
2016-12-01
Modeling of groundwater systems characterized by complex three-dimensional structure and heterogeneity remains a significant challenge. Most of today's groundwater models are developed from relatively simple conceptual representations in favor of model calibratability. As more complexities are modeled, e.g., by adding more layers and/or zones, or by introducing transient processes, more parameters have to be estimated, and issues related to ill-posed groundwater problems and non-unique calibration arise. Here, we explore the use of an alternative conceptual representation for groundwater modeling that is fully three-dimensional and can capture complex 3D heterogeneity (both systematic and "random") without over-parameterizing the aquifer system. In particular, we apply Transition Probability (TP) geostatistics to high-resolution borehole data from a water well database to characterize the complex 3D geology. Different aquifer material classes, e.g., 'AQ' (aquifer material), 'MAQ' (marginal aquifer material), 'PCM' (partially confining material), and 'CM' (confining material), are simulated, with the hydraulic properties of each material type as tuning parameters during calibration. The TP-based approach is applied to simulate unsteady groundwater flow in a large, complex, and strongly heterogeneous glacial aquifer system in Michigan across multiple spatial and temporal scales. The resulting model is calibrated to observed static water level data over a time span of 50 years. The results show that the TP-based conceptualization enables much more accurate and robust calibration/simulation than conventional deterministic layer/zone-based conceptual representations.
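In one dimension, the transition-probability idea reduces to a Markov chain over material classes: a matrix of vertical transition probabilities generates synthetic lithology columns analogous to what TP geostatistics produces in 3D. The transition matrix below is invented for illustration, not fitted to any borehole data:

```python
import numpy as np

CLASSES = ["AQ", "MAQ", "PCM", "CM"]

# Hypothetical vertical transition probabilities between material classes;
# row i gives P(next class | current class i).  Each row sums to 1.
P = np.array([
    [0.70, 0.15, 0.10, 0.05],
    [0.20, 0.60, 0.15, 0.05],
    [0.10, 0.15, 0.60, 0.15],
    [0.05, 0.05, 0.20, 0.70],
])

def simulate_column(n_cells, start="AQ", seed=0):
    """Simulate a 1D borehole column of material classes, cell by cell."""
    rng = np.random.default_rng(seed)
    idx = CLASSES.index(start)
    column = [CLASSES[idx]]
    for _ in range(n_cells - 1):
        idx = rng.choice(len(CLASSES), p=P[idx])
        column.append(CLASSES[idx])
    return column

print(simulate_column(20)[:5])
```

The diagonal dominance of the matrix controls mean unit thickness, which is the property TP geostatistics estimates from borehole data before conditional simulation.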
Dynamic pathway modeling of signal transduction networks: a domain-oriented approach.
Conzelmann, Holger; Gilles, Ernst-Dieter
2008-01-01
Mathematical models of biological processes are becoming more and more important in biology. The aim is a holistic understanding of how processes such as cellular communication, cell division, regulation, homeostasis, or adaptation work, how they are regulated, and how they react to perturbations. The great complexity of most of these processes necessitates the generation of mathematical models in order to address these questions. In this chapter we provide an introduction to basic principles of dynamic modeling and highlight both the problems and the opportunities of dynamic modeling in biology. The main focus will be on modeling of signal transduction pathways, which requires the application of a special modeling approach. A common pattern, especially in eukaryotic signaling systems, is the formation of multiprotein signaling complexes. Even for a small number of interacting proteins, the number of distinguishable molecular species can be extremely high. This combinatorial complexity is due to the great number of distinct binding domains of many receptors and scaffold proteins involved in signal transduction. However, these problems can be overcome using a new domain-oriented modeling approach, which makes it possible to handle complex and branched signaling pathways.
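The combinatorial blow-up described above, and the reduction a domain-oriented description buys, can be quantified directly: a scaffold with n independent binary sites has 2^n distinguishable micro-species, while tracking each site's occupancy level separately needs only on the order of n quantities (valid when the sites do not influence one another). The site counts below are chosen arbitrarily:

```python
def n_microspecies(n_sites, states_per_site=2):
    """Distinguishable molecular species for n independent binding domains."""
    return states_per_site ** n_sites

def n_domain_variables(n_sites, states_per_site=2):
    """Quantities tracked by a domain-oriented model: one occupancy level
    per state of each site, independent of the other sites."""
    return n_sites * states_per_site

for n in (4, 10, 20):
    print(n, n_microspecies(n), n_domain_variables(n))
# exponential vs linear growth: 20 sites already give over a million species
```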
A framework for modelling the complexities of food and water security under globalisation
NASA Astrophysics Data System (ADS)
Dermody, Brian J.; Sivapalan, Murugesu; Stehfest, Elke; van Vuuren, Detlef P.; Wassen, Martin J.; Bierkens, Marc F. P.; Dekker, Stefan C.
2018-01-01
We present a new framework for modelling the complexities of food and water security under globalisation. The framework sets out a method to capture regional and sectoral interdependencies and cross-scale feedbacks within the global food system that contribute to emergent water use patterns. The framework integrates aspects of existing models and approaches in the fields of hydrology and integrated assessment modelling. The core of the framework is a multi-agent network of city agents connected by infrastructural trade networks. Agents receive socio-economic and environmental constraint information from integrated assessment models and hydrological models respectively and simulate complex, socio-environmental dynamics that operate within those constraints. The emergent changes in food and water resources are aggregated and fed back to the original models with minimal modification of the structure of those models. It is our conviction that the framework presented can form the basis for a new wave of decision tools that capture complex socio-environmental change within our globalised world. In doing so they will contribute to illuminating pathways towards a sustainable future for humans, ecosystems and the water they share.
Complex versus simple models: ion-channel cardiac toxicity prediction.
Mistry, Hitesh B
2018-01-01
There is growing interest in applying detailed mathematical models of the heart for ion-channel-related cardiac toxicity prediction. However, there is debate as to whether such complex models are required. Here, an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data sets were extracted from the literature. Each compound was assigned a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data set for each classification scheme was assessed via leave-one-out cross-validation. Overall, the Bnet model performed as well as the leading cardiac models on two of the data sets and outperformed both cardiac models on the most recent one. These results highlight the importance of benchmarking complex versus simple models, and also encourage the development of simple models.
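The flavor of such a simple linear model can be sketched as follows: each channel's fractional block at a therapeutic concentration is computed from a Hill equation, and the score is the blocked repolarizing (outward) current minus the blocked depolarizing (inward) currents. The IC50 values, Hill coefficients, and channel weighting below are invented for illustration and are not the published Bnet parameterization:

```python
def fractional_block(conc, ic50, hill=1.0):
    """Fraction of a current blocked at a given free drug concentration."""
    return 1.0 / (1.0 + (ic50 / conc) ** hill)

def net_block_score(conc, ic50s):
    """Blocked repolarizing (outward) minus blocked depolarizing (inward)."""
    outward = fractional_block(conc, ic50s["hERG"])
    inward = (fractional_block(conc, ic50s["CaV1.2"])
              + fractional_block(conc, ic50s["NaV1.5"]))
    return outward - inward

# hypothetical compound: potent hERG blocker, weak Ca/Na blocker
ic50s = {"hERG": 0.5, "CaV1.2": 10.0, "NaV1.5": 20.0}  # same units as conc
print(net_block_score(1.0, ic50s) > 0)  # net loss of repolarizing current
```

A positive score indicates a net loss of repolarizing current, the direction associated with torsadogenic risk; the appeal of such a model is that the risk classification reduces to a threshold on a single scalar.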
On Convergence of Development Costs and Cost Models for Complex Spaceflight Instrument Electronics
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Patel, Umeshkumar D.; Kasa, Robert L.; Hestnes, Phyllis; Brown, Tammy; Vootukuru, Madhavi
2008-01-01
Development costs of a few recent spaceflight instrument electrical and electronics subsystems have diverged from respective heritage cost model predictions. The cost models used are Grass Roots, Price-H and Parametric Model. These cost models originated in the military and industry around 1970 and were successfully adopted and patched by NASA on a mission-by-mission basis for years. However, the complexity of new instruments has recently grown rapidly, by orders of magnitude. This is most obvious in the complexity of a representative spaceflight instrument electronics data system. It is now required to perform intermediate processing of digitized data apart from conventional processing of science phenomenon signals from multiple detectors. This involves on-board instrument formatting of computational operands from raw data (for example, images), multi-million operations per second on large volumes of data in reconfigurable hardware (in addition to processing on a general-purpose embedded or standalone instrument flight computer), as well as making decisions for on-board system adaptation and resource reconfiguration. The instrument data system is now tasked to perform more functions, such as forming packets and instrument-level data compression of more than one data stream, which are traditionally performed by the spacecraft command and data handling system. It is furthermore required that the electronics box for new complex instruments be developed for single-digit-watt power consumption and small size, be lightweight, and deliver super-computing capabilities. The conflict between the actual development cost of newer complex instruments and the predictions of heritage cost models for their electronics components seems irreconcilable. This conflict and an approach to its resolution are addressed in this paper by determining complexity parameters and a complexity index, and by their use in an enhanced cost model.
KRISSY: user's guide to modeling three-dimensional wind flow in complex terrain
Michael A. Fosberg; Michael L. Sestak
1986-01-01
KRISSY is a computer model for generating three-dimensional wind flows in complex terrain from data that were not, or perhaps cannot be, collected. The model is written in FORTRAN IV. This guide describes data requirements, modeling, and output from an applications viewpoint rather than that of programming or theoretical modeling. KRISSY is designed to minimize...
Ranking streamflow model performance based on Information theory metrics
NASA Astrophysics Data System (ADS)
Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas
2016-04-01
Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to determine whether information theory-based metrics can serve as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols, where different symbols corresponded to different quantiles of the probability distribution of streamflow. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series: streamflow was less random and more complex than precipitation, reflecting the fact that the watershed acts as an information filter in the hydrologic conversion from precipitation to streamflow. The Nash-Sutcliffe efficiency increased with model complexity, but in many cases several models had efficiency values that were not statistically distinguishable from each other. In such cases, ranking models by the closeness of the information theory-based metrics of simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
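The symbolization step and the first of the three metrics can be sketched directly: values are mapped to quantile symbols, and mean information gain is estimated as the conditional entropy of the next symbol given the current one (entropy of symbol pairs minus entropy of the conditioning symbols). The two-quantile alphabet and toy series are illustrative choices, not the paper's configuration:

```python
from collections import Counter
from math import log2

import numpy as np

def symbolize(x, n_symbols=2):
    """Map each value to the index of its quantile bin."""
    qs = np.quantile(x, [i / n_symbols for i in range(1, n_symbols)])
    return list(np.digitize(x, qs))

def entropy(seq):
    counts = Counter(seq)
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

def mean_information_gain(symbols):
    """H(current, next) - H(current): randomness of the symbolized signal."""
    pairs = list(zip(symbols[:-1], symbols[1:]))
    return entropy(pairs) - entropy(symbols[:-1])

periodic = symbolize([0.0, 1.0] * 50)  # perfectly predictable signal
print(round(mean_information_gain(periodic), 6))  # 0.0
```

A perfectly periodic series scores zero (the next symbol is fully determined), while a random series approaches the single-symbol entropy; streamflow, per the abstract, sits closer to the periodic end than the precipitation that drives it.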
Bhatla, Puneet; Tretter, Justin T; Ludomirsky, Achi; Argilla, Michael; Latson, Larry A; Chakravarti, Sujata; Barker, Piers C; Yoo, Shi-Joon; McElhinney, Doff B; Wake, Nicole; Mosca, Ralph S
2017-01-01
Rapid prototyping facilitates comprehension of complex cardiac anatomy. However, determining when this additional information proves instrumental in patient management remains a challenge. We describe our experience with patient-specific anatomic models created using rapid prototyping from various imaging modalities, suggesting their utility in surgical and interventional planning in congenital heart disease (CHD). Virtual and physical 3-dimensional (3D) models were generated from CT or MRI data, using commercially available software for patients with complex muscular ventricular septal defects (CMVSD) and double-outlet right ventricle (DORV). Six patients with complex anatomy and uncertainty of the optimal management strategy were included in this study. The models were subsequently used to guide management decisions, and the outcomes reviewed. 3D models clearly demonstrated the complex intra-cardiac anatomy in all six patients and were utilized to guide management decisions. In the three patients with CMVSD, one underwent successful endovascular device closure following a prior failed attempt at transcatheter closure, and the other two underwent successful primary surgical closure with the aid of 3D models. In all three cases of DORV, the models provided better anatomic delineation and additional information that altered or confirmed the surgical plan. Patient-specific 3D heart models show promise in accurately defining intra-cardiac anatomy in CHD, specifically CMVSD and DORV. We believe these models improve understanding of the complex anatomical spatial relationships in these defects and provide additional insight for pre/intra-interventional management and surgical planning.
2014-01-01
Background The Triatoma brasiliensis complex is a monophyletic group, comprising three species, one of which includes two subspecific taxa, distributed across 12 Brazilian states in the caatinga and cerrado biomes. Members of the complex are diverse in terms of epidemiological importance, morphology, biology, ecology, and genetics. Triatoma b. brasiliensis is the most epidemiologically important member of the complex, owing to its extensive distribution, broad feeding preferences, broad ecological distribution, and high rates of infection with Trypanosoma cruzi; consequently, it is considered the principal vector of Chagas disease in northeastern Brazil. Methods We used ecological niche models to estimate potential distributions of all members of the complex and evaluated the potential for suitable adjacent areas to be colonized; we also present first evaluations of the potential for climate change-mediated distributional shifts. Models were developed using the GARP and Maxent algorithms. Results Models for three members of the complex (T. b. brasiliensis, N = 332; T. b. macromelasoma, N = 35; and T. juazeirensis, N = 78) had significant distributional predictivity; however, models for T. sherlocki and T. melanica, both with very small sample sizes (N = 7), did not yield predictions that performed better than random. Model projections onto future-climate scenarios indicated little broad-scale potential for change in the potential distribution of the complex through 2050. Conclusions This study suggests that T. b. brasiliensis is the member of the complex with the greatest distributional potential to colonize new areas; overall, however, the distribution of the complex appears relatively stable. These analyses offer key information to guide proactive monitoring and remediation activities to reduce the risk of Chagas disease transmission. PMID:24886587
NASA Astrophysics Data System (ADS)
Le Maire, P.; Munschy, M.
2017-12-01
Interpretation of marine magnetic anomalies enables the construction of accurate global kinematic models. Several methods have been proposed to compute the paleo-latitude of the oceanic crust at its formation. A model of the Earth's magnetic field is used to determine a relationship between the apparent inclination of the magnetization and the paleo-latitude. Usually, the estimation of the apparent inclination is qualitative, based on the fit between magnetic data and forward models. We propose a new method using complex algebra to obtain the apparent inclination of the magnetization of the oceanic crust. For two-dimensional bodies, we rewrite Talwani's equations using complex algebra; the corresponding complex function of the complex variable, called the CMA (complex magnetic anomaly), is easier to use for forward modelling and inversion of magnetic data. This complex formulation allows the data to be visualized in the complex plane (Argand diagram) and offers a new way to interpret them: in the figure, the curves on the right (B) show this complex-plane display, the curves on the left (A) show the standard display of magnetic anomalies, and the corresponding model (C) is at the bottom. In the complex plane, the effect of the apparent inclination is to rotate the curves, whereas on the standard display the evolution of the shape of the anomaly is more complicated. This method gives the opportunity to study a set of magnetic profiles (provided by the Geological Survey of Norway) acquired in the Norwegian Sea, near the Jan Mayen fracture zone. In this area, the age of the oceanic crust ranges from 40 to 55 Ma, and the apparent inclination of the magnetization is computed.
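The key property described here, that a change of apparent inclination rotates the anomaly curve in the Argand plane, can be illustrated numerically. The sketch below is our illustration, not the authors' code: it treats a profile of complex anomaly values, applies a phase rotation, and uses the closed-form least-squares phase to recover the rotation angle.

```python
import cmath

def rotate_cma(cma_profile, delta_theta):
    """Apply a change of apparent inclination to a complex magnetic
    anomaly (CMA) profile: in the complex formulation this acts as
    multiplication by a unit complex number, i.e. a rotation."""
    phase = cmath.exp(1j * delta_theta)
    return [phase * z for z in cma_profile]

def estimate_rotation(observed, reference):
    """Least-squares phase between two complex profiles:
    minimizing sum |obs - e^(i*theta) * ref|^2 over theta gives
    theta = arg(sum obs * conj(ref))."""
    return cmath.phase(sum(o * r.conjugate()
                           for o, r in zip(observed, reference)))
```

Recovering the rotation angle between an observed profile and a reference profile is one simple way to turn the qualitative curve-fitting described above into a quantitative estimate.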
Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models
2015-09-12
AFRL-AFOSR-VA-TR-2015-0278. Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models. Principal investigator: Katya Scheinberg. Grant number: FA9550-11-1-0239. Subject terms: optimization, derivative-free optimization, statistical machine learning.
Rivas, Elena; Lang, Raymond; Eddy, Sean R
2012-02-01
The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases.
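As a concrete, minimal example of single-sequence structure prediction (far simpler than the nearest-neighbor and SCFG models discussed here, and not part of TORNADO), the classic Nussinov dynamic program maximizes the number of base pairs; grammar-based models generalize this recursion with learned probabilities or energies.

```python
def nussinov(seq, min_loop=3):
    """Maximum number of nested base pairs in an RNA sequence,
    with a minimum hairpin loop length, via dynamic programming."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]          # j left unpaired
            for k in range(i, j - min_loop):
                if (seq[k], seq[j]) in pairs:   # pair k with j
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```

A hairpin such as GGGAAACCC closes three pairs; the base-pair-maximization criterion is known to be a poor predictor in practice, which is exactly the gap the statistical and thermodynamic models above address.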
Glavatskikh, Marta; Madzhidov, Timur; Solov'ev, Vitaly; Marcou, Gilles; Horvath, Dragos; Varnek, Alexandre
2016-12-01
In this work, we report QSPR modeling of the free energy ΔG of 1:1 hydrogen-bond complexes of different H-bond acceptors and donors. The modeling was performed on a large and structurally diverse set of 3373 complexes featuring a single hydrogen bond, for which ΔG was measured at 298 K in CCl4. The models were prepared using Support Vector Machines (SVM) and Multiple Linear Regression (MLR) with ISIDA fragment descriptors. The marked-atoms strategy was applied at the fragmentation stage in order to capture the location of H-bond donor and acceptor centers. Different strategies of model validation have been suggested, including the targeted omission of individual H-bond acceptors and donors from the training set, in order to check whether the predictive ability of the model extends beyond interpolation of H-bond strength between two already encountered partners. Successfully cross-validated individual models were combined into a consensus model and challenged to predict external test sets of 629 and 12 complexes, in which donor and acceptor formed single and cooperative H-bonds, respectively. In all cases, SVM models outperform MLR. The SVM consensus model performs well both in 3-fold cross-validation (RMSE = 1.50 kJ/mol) and on the external test sets containing complexes with single (RMSE = 3.20 kJ/mol) and cooperative H-bonds (RMSE = 1.63 kJ/mol). © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
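The k-fold cross-validation protocol used to compare model families can be sketched with a toy one-descriptor regression. Ordinary least squares stands in for the MLR models; the single descriptor, fold count, and data below are all illustrative assumptions, not the ISIDA descriptors or SVM machinery of the paper.

```python
import math
import random

def fit_linear(x, y):
    """Ordinary least squares for y ~ a*x + b (single descriptor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

def kfold_rmse(x, y, k=3, seed=0):
    """k-fold cross-validated RMSE, the statistic used to compare
    QSPR models: fit on k-1 folds, score on the held-out fold."""
    idx = list(range(len(x)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    sq_errors = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        a, b = fit_linear([x[i] for i in train], [y[i] for i in train])
        sq_errors += [(y[i] - (a * x[i] + b)) ** 2 for i in fold]
    return math.sqrt(sum(sq_errors) / len(sq_errors))
```

The targeted-omission validation the authors describe replaces the random fold assignment here with folds that hold out all complexes of a given donor or acceptor.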
NASA Astrophysics Data System (ADS)
Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.
2014-07-01
Ocean biogeochemistry (OBGC) models span a wide range of complexities from highly simplified, nutrient-restoring schemes, through nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, through to models that represent a broader trophic structure by grouping organisms as plankton functional types (PFT) based on their biogeochemical role (Dynamic Green Ocean Models; DGOM) and ecosystem models which group organisms by ecological function and trait. OBGC models are now integral components of Earth System Models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here, we present an inter-comparison of six OBGC models that were candidates for implementation within the next UK Earth System Model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the Nucleus for the European Modelling of the Ocean (NEMO) ocean general circulation model (GCM), and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform or underperform all other models across all metrics. Nonetheless, the simpler models that are easier to tune are broadly closer to observations across a number of fields, and thus offer a high-efficiency option for ESMs that prioritise high resolution climate dynamics. 
However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low resolution climate dynamics and high complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.
NASA Astrophysics Data System (ADS)
Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.
2014-12-01
Ocean biogeochemistry (OBGC) models span a wide variety of complexities, including highly simplified nutrient-restoring schemes, nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, models that represent a broader trophic structure by grouping organisms as plankton functional types (PFTs) based on their biogeochemical role (dynamic green ocean models) and ecosystem models that group organisms by ecological function and trait. OBGC models are now integral components of Earth system models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here we present an intercomparison of six OBGC models that were candidates for implementation within the next UK Earth system model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the ocean general circulation model Nucleus for European Modelling of the Ocean (NEMO) and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform all other models across all metrics. Nonetheless, the simpler models are broadly closer to observations across a number of fields and thus offer a high-efficiency option for ESMs that prioritise high-resolution climate dynamics. 
However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low-resolution climate dynamics and high-complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.
NASA Astrophysics Data System (ADS)
Germer, S.; Bens, O.; Hüttl, R. F.
2008-12-01
The scepticism of non-scientific local stakeholders about results from complex physically based models is a major problem for the development and implementation of local climate change adaptation measures. This scepticism originates from the high complexity of such models: local stakeholders perceive them as black boxes, as it is impossible to grasp all underlying assumptions and mathematically formulated processes at a glance. The use of physically based models is, however, indispensable for studying complex underlying processes and predicting future environmental changes. The increase in climate change adaptation efforts following the release of the latest IPCC report indicates that communicating facts about what has already changed is an appropriate tool to trigger climate change adaptation. We therefore suggest increasing the practice of empirical data analysis in addition to modelling efforts. The analysis of time series can generate results that are easier for non-scientific stakeholders to comprehend. Temporal trends and seasonal patterns of selected hydrological parameters (precipitation, evapotranspiration, groundwater levels and river discharge) can be identified, and the dependence of trends and seasonal patterns on land use, topography and soil type can be highlighted. A discussion of lag times between the hydrological parameters can increase the awareness of local stakeholders for delayed environmental responses.
Dense power-law networks and simplicial complexes
NASA Astrophysics Data System (ADS)
Courtney, Owen T.; Bianconi, Ginestra
2018-05-01
There is increasing evidence that dense networks occur in online social networks, recommendation networks and the brain. In addition to being dense, these networks are often also scale-free, i.e., their degree distributions follow P(k) ∝ k^(−γ) with γ ∈ (1, 2]. Models of growing networks have been successfully employed to produce scale-free networks using preferential attachment; however, these models can only produce sparse networks, as the number of links and nodes added at each time step is constant. Here we present a modeling framework that produces networks that are both dense and scale-free. The mechanism by which the networks grow in this model is based on the Pitman-Yor process. Variations on the model are able to produce undirected scale-free networks with exponent γ = 2 or directed networks with a power-law out-degree distribution with tunable exponent γ ∈ (1, 2). We also extend the model to directed two-dimensional simplicial complexes. Simplicial complexes are generalizations of networks that can encode the many-body interactions between the parts of a complex system, and as such are becoming increasingly popular for characterizing data sets ranging from social interacting systems to the brain. Our model produces dense directed simplicial complexes with a power-law distribution of the generalized out-degrees of the nodes.
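A minimal sketch of Pitman-Yor-based growth follows. This is our simplification of the mechanism described in the abstract, not the authors' model: the parameter names, the undirected multigraph form, and the endpoint-sampling scheme are illustrative choices.

```python
import random

def pitman_yor_network(n_links, alpha=1.0, discount=0.5, seed=0):
    """Grow a multigraph whose link endpoints are drawn from a
    Pitman-Yor (two-parameter Chinese restaurant) process: an
    existing node i is chosen with weight (degree_i - discount),
    and a brand-new node with weight (alpha + discount * n_nodes).
    Larger discounts give heavier-tailed degree sequences."""
    rng = random.Random(seed)
    degrees = []  # degrees[i] = current degree of node i

    def draw_endpoint():
        # total mass = (sum(d) - discount*N) + (alpha + discount*N)
        r = rng.uniform(0.0, sum(degrees) + alpha)
        acc = 0.0
        for i, d in enumerate(degrees):
            acc += d - discount
            if r < acc:
                return i
        degrees.append(0)  # open a new node
        return len(degrees) - 1

    edges = []
    for _ in range(n_links):
        u = draw_endpoint(); degrees[u] += 1
        v = draw_endpoint(); degrees[v] += 1
        edges.append((u, v))
    return degrees, edges
```

Because the number of nodes grows sublinearly in the number of links under this process, the resulting graphs become dense as they grow, in contrast to constant-rate preferential attachment.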
Intelligent classifier for dynamic fault patterns based on hidden Markov model
NASA Astrophysics Data System (ADS)
Xu, Bo; Feng, Yuguang; Yu, Jinsong
2006-11-01
It is difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works without building an analytical mathematical model of the diagnostic object, so it is a practical approach to diagnostic problems in complex systems. This paper presents an intelligent fault diagnosis method: an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). The classifier consists of a dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network and a Hidden Markov Model. First, the dynamic observation vector in the measurement space is processed by DTW to obtain an error vector containing the fault features of the system under test. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the Hidden Markov Model classifier. The introduction of dynamic time warping solves the problem of extracting features from the dynamic process vectors of complex systems such as aeroengines, and makes it possible to diagnose complex systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend, and that the fault-pattern classifier is efficient and convenient for detecting and diagnosing new faults.
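The dynamic time warping step can be sketched as the standard dynamic program below; this is a generic textbook DTW over scalar sequences, not the paper's exact implementation.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two numeric sequences:
    the minimum cumulative |a_i - b_j| cost over monotone alignments,
    used to align dynamic process vectors of unequal timing."""
    INF = float("inf")
    n, m = len(a), len(b)
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the alignment by a match, insertion, or deletion
            dp[i][j] = cost + min(dp[i - 1][j], dp[i][j - 1],
                                  dp[i - 1][j - 1])
    return dp[n][m]
```

Because DTW absorbs timing differences, two runs of the same transient at different speeds yield a small residual, which is what makes the subsequent SOFM/HMM classification of dynamic processes feasible.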
NASA Astrophysics Data System (ADS)
Yasami, Yasser; Safaei, Farshad
2018-02-01
Traditional complex network theory is particularly focused on network models in which all network constituents are dealt with equivalently, while failing to consider supplementary information related to the dynamic properties of network interactions. This is a main constraint, leading to incorrect descriptions of some real-world phenomena or incomplete capture of the details of certain real-life problems. To cope with the problem, this paper addresses the multilayer aspects of dynamic complex networks by analyzing the properties of intrinsically multilayered co-authorship networks, DBLP and Astro Physics, and presenting a novel multilayer model of dynamic complex networks. The model examines layer evolution (the layer birth/death process and lifetime) throughout the network's evolution. In particular, this paper models the evolution of each node's membership in different layers by an Infinite Factorial Hidden Markov Model considering feature cascades, and thereby formulates the link generation process for intra-layer and inter-layer links. Although adjacency matrices are useful for describing traditional single-layer networks, such a representation is not sufficient to describe and analyze multilayer dynamic networks. This paper therefore also extends a generalized mathematical infrastructure to address the problems posed by multilayer complex networks. Model inference is performed using Markov Chain Monte Carlo sampling strategies, given synthetic and real complex network data. Experimental results indicate a tremendous improvement in the performance of the proposed multilayer model in terms of sensitivity, specificity, positive and negative predictive values, positive and negative likelihood ratios, F1-score, Matthews correlation coefficient, and accuracy for two important applications: missing link prediction and future link forecasting.
The experimental results also indicate the strong predictive power of the proposed model for the application of cascade prediction in terms of accuracy.
Formalizing the Role of Agent-Based Modeling in Causal Inference and Epidemiology
Marshall, Brandon D. L.; Galea, Sandro
2015-01-01
Calls for the adoption of complex systems approaches, including agent-based modeling, in the field of epidemiology have largely centered on the potential for such methods to examine complex disease etiologies, which are characterized by feedback behavior, interference, threshold dynamics, and multiple interacting causal effects. However, considerable theoretical and practical issues impede the capacity of agent-based methods to examine and evaluate causal effects and thus illuminate new areas for intervention. We build on this work by describing how agent-based models can be used to simulate counterfactual outcomes in the presence of complexity. We show that these models are of particular utility when the hypothesized causal mechanisms exhibit a high degree of interdependence between multiple causal effects and when interference (i.e., one person's exposure affects the outcome of others) is present and of intrinsic scientific interest. Although not without challenges, agent-based modeling (and complex systems methods broadly) represent a promising novel approach to identify and evaluate complex causal effects, and they are thus well suited to complement other modern epidemiologic methods of etiologic inquiry. PMID:25480821
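The counterfactual-simulation idea described here can be made concrete with a deliberately tiny agent-based sketch. Everything below is our illustration, not the authors' model: agents share a single global "coverage" term standing in for interference, and the baseline risk, treatment effect, and spillover strength are invented parameters.

```python
import random

def simulate(n, treat_fraction, spillover, seed=0):
    """Toy agent-based model: each agent's outcome (risk) depends on
    its own exposure and on overall exposure coverage (interference).
    Running the same population under two policies and differencing
    the mean outcomes yields a simulated counterfactual contrast."""
    rng = random.Random(seed)
    treated = [rng.random() < treat_fraction for _ in range(n)]
    coverage = sum(treated) / n  # interference: others' exposure matters
    outcomes = [0.3                       # baseline risk
                * (0.5 if t else 1.0)     # direct effect of own exposure
                * (1 - spillover * coverage)  # herd/spillover effect
                for t in treated]
    return sum(outcomes) / n  # population mean risk

# counterfactual contrast: same seed (same population), different policy
effect = simulate(10_000, 0.8, 0.4) - simulate(10_000, 0.0, 0.4)
```

Because untreated agents also benefit when coverage is high, the population-level effect here is larger than the direct effect alone, the kind of interference-driven result the abstract argues agent-based models are suited to exploring.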
Evolving Scale-Free Networks by Poisson Process: Modeling and Degree Distribution.
Feng, Minyu; Qu, Hong; Yi, Zhang; Xie, Xiurui; Kurths, Jurgen
2016-05-01
Since the great mathematician Leonhard Euler initiated the study of graph theory, the network has been one of the most significant research subjects across disciplines. In recent years, the proposition of the small-world and scale-free properties of complex networks in statistical physics made network science intriguing again for many researchers. One of the challenges of network science is to propose rational models for complex networks. In this paper, in order to reveal the influence of the vertex-generating mechanism of complex networks, we propose three novel models based on the homogeneous Poisson, nonhomogeneous Poisson and birth-death processes, respectively, which can be regarded as typical scale-free networks and utilized to simulate practical networks. The degree distribution and exponent are analyzed and explained mathematically by different approaches. In simulations, we display the modeling process, analyze the degree distributions of empirical data by statistical methods, and assess the reliability of the proposed networks; the results show that our models follow the features of typical complex networks. Finally, some future challenges for complex systems are discussed.
Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro
2015-07-28
In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims to solve the problems connected with generating finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used in structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.
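The core step of turning a point cloud into voxel elements can be sketched as an occupancy grid. This is a simplification of the stacked-sections procedure described above, with an arbitrary voxel size; a real pipeline would additionally close surfaces and export the occupied cells as hexahedral finite elements.

```python
def voxelize(points, voxel_size):
    """Map a 3-D point cloud onto a set of occupied voxel indices:
    each occupied cell becomes a candidate voxel finite element."""
    occupied = set()
    for x, y, z in points:
        occupied.add((int(x // voxel_size),
                      int(y // voxel_size),
                      int(z // voxel_size)))
    return occupied
```

Shrinking the voxel size refines the discretization at the cost of more elements, which is exactly the mesh-density trade-off a structural analyst tunes.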
Molecular architecture of the yeast Mediator complex
Robinson, Philip J; Trnka, Michael J; Pellarin, Riccardo; Greenberg, Charles H; Bushnell, David A; Davis, Ralph; Burlingame, Alma L; Sali, Andrej; Kornberg, Roger D
2015-01-01
The 21-subunit Mediator complex transduces regulatory information from enhancers to promoters, and performs an essential role in the initiation of transcription in all eukaryotes. Structural information on two-thirds of the complex has been limited to coarse subunit mapping onto 2-D images from electron micrographs. We have performed chemical cross-linking and mass spectrometry, and combined the results with information from X-ray crystallography, homology modeling, and cryo-electron microscopy by an integrative modeling approach to determine a 3-D model of the entire Mediator complex. The approach is validated by the use of X-ray crystal structures as internal controls and by consistency with previous results from electron microscopy and yeast two-hybrid screens. The model shows the locations and orientations of all Mediator subunits, as well as subunit interfaces and some secondary structural elements. Segments of 20–40 amino acid residues are placed with an average precision of 20 Å. The model reveals roles of individual subunits in the organization of the complex. DOI: http://dx.doi.org/10.7554/eLife.08719.001 PMID:26402457
Surface complexation modeling of americium sorption onto volcanic tuff.
Ding, M; Kelkar, S; Meijer, A
2014-10-01
Results of a surface complexation model (SCM) for americium sorption on volcanic rocks (devitrified and zeolitic tuff) are presented. The model was developed using PHREEQC and based on laboratory data for americium sorption on quartz. Available data for sorption of americium on quartz as a function of pH in dilute groundwater can be modeled with two surface reactions involving an americium sulfate and an americium carbonate complex. It was assumed in applying the model to volcanic rocks from Yucca Mountain, that the surface properties of volcanic rocks can be represented by a quartz surface. Using groundwaters compositionally representative of Yucca Mountain, americium sorption distribution coefficient (Kd, L/Kg) values were calculated as function of pH. These Kd values are close to the experimentally determined Kd values for americium sorption on volcanic rocks, decreasing with increasing pH in the pH range from 7 to 9. The surface complexation constants, derived in this study, allow prediction of sorption of americium in a natural complex system, taking into account the inherent uncertainty associated with geochemical conditions that occur along transport pathways. Published by Elsevier Ltd.
Najafpour, Mohammad Mahdi
2011-01-01
The oxygen-evolving complex in photosystem II, which induces the oxidation of water to dioxygen in plants, algae and certain bacteria, contains a cluster of one calcium and four manganese ions. It serves as a model for splitting water by sunlight. Reports on the mechanism and structure of photosystem II provide a more detailed architecture of the oxygen-evolving complex and the surrounding amino acids. One challenge in this field is the development of artificial model compounds to study the oxygen evolution reaction outside the complicated environment of the enzyme. Calcium-manganese oxides as structural and functional models for the active site of photosystem II are explained and reviewed in this paper. Because of the structural similarity between these calcium-manganese oxides and the catalytic center of the oxygen-evolving complex of photosystem II, their study may help in understanding the mechanism of oxygen evolution by the oxygen-evolving complex. Copyright © 2010 Elsevier B.V. All rights reserved.
Reliability analysis in interdependent smart grid systems
NASA Astrophysics Data System (ADS)
Peng, Hao; Kan, Zhe; Zhao, Dandan; Han, Jianmin; Lu, Jianfeng; Hu, Zhaolong
2018-06-01
Complex network theory is a useful way to study many real complex systems. In this paper, a reliability analysis model based on complex network theory is introduced for interdependent smart grid systems. We focus on understanding the structure of smart grid systems and studying the underlying network model, their interactions and relationships, and how cascading failures occur in interdependent smart grid systems. We propose a practical model for interdependent smart grid systems using complex network theory. In addition, based on percolation theory, we study the effect of cascading failures and present a detailed mathematical analysis of failure propagation in such systems. We analyze the reliability of our proposed model under random attacks or failures by calculating the size of the giant functioning component in interdependent smart grid systems. Our simulation results show that there exists a threshold for the proportion of faulty nodes, beyond which the smart grid systems collapse, and we determine the critical values for different system parameters. In this way, the reliability analysis model based on complex network theory can be effectively utilized for anti-attack and protection purposes in interdependent smart grid systems.
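The order parameter tracked in such percolation analyses, the size of the giant functioning component after node failures, can be computed directly. The sketch below handles a single network layer only; it is our simplified illustration, not the paper's interdependent-cascade model, which would iterate failures between coupled layers.

```python
def giant_component_fraction(n, edges, removed):
    """Fraction of nodes in the largest connected component after a
    set of nodes has failed -- the percolation order parameter."""
    alive = set(range(n)) - set(removed)
    adj = {v: [] for v in alive}
    for u, v in edges:
        if u in alive and v in alive:
            adj[u].append(v)
            adj[v].append(u)
    best, seen = 0, set()
    for start in alive:
        if start in seen:
            continue
        # depth-first search over one surviving component
        stack, size = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best / n if n else 0.0
```

Sweeping the fraction of removed nodes and plotting this quantity reveals the collapse threshold the abstract refers to.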
Wang, Zimeng; Lee, Sung-Woo; Catalano, Jeffrey G; Lezama-Pacheco, Juan S; Bargar, John R; Tebo, Bradley M; Giammar, Daniel E
2013-01-15
The mobility of hexavalent uranium in soil and groundwater is strongly governed by adsorption to mineral surfaces. As strong naturally occurring adsorbents, manganese oxides may significantly influence the fate and transport of uranium. Models for U(VI) adsorption over a broad range of chemical conditions can improve predictive capabilities for uranium transport in the subsurface. This study integrated batch experiments of U(VI) adsorption to synthetic and biogenic MnO2, surface complexation modeling, ζ-potential analysis, and molecular-scale characterization of adsorbed U(VI) with extended X-ray absorption fine structure (EXAFS) spectroscopy. The surface complexation model included inner-sphere monodentate and bidentate surface complexes and a ternary uranyl-carbonato surface complex, which was consistent with the EXAFS analysis. The model could successfully simulate adsorption results over a broad range of pH and dissolved inorganic carbon concentrations. U(VI) adsorption to synthetic δ-MnO2 appears to be stronger than to biogenic MnO2, and the differences in adsorption affinity and capacity are not associated with any substantial difference in U(VI) coordination.
Comparison of new generation low-complexity flood inundation mapping tools with a hydrodynamic model
NASA Astrophysics Data System (ADS)
Afshari, Shahab; Tavakoly, Ahmad A.; Rajib, Mohammad Adnan; Zheng, Xing; Follum, Michael L.; Omranian, Ehsan; Fekete, Balázs M.
2018-01-01
The objective of this study is to compare two new generation low-complexity tools, AutoRoute and Height Above the Nearest Drainage (HAND), with a two-dimensional hydrodynamic model (Hydrologic Engineering Center-River Analysis System, HEC-RAS 2D). The assessment was conducted on two hydrologically different and geographically distant test-cases in the United States, including the 16,900 km2 Cedar River (CR) watershed in Iowa and a 62 km2 domain along the Black Warrior River (BWR) in Alabama. For BWR, twelve different configurations were set up for each of the models, including four different terrain setups (e.g. with and without channel bathymetry and a levee), and three flooding conditions representing moderate to extreme hazards at 10-, 100-, and 500-year return periods. For the CR watershed, the models were compared using a simplistic terrain setup (without bathymetry or any form of hydraulic controls) and one flooding condition (100-year return period). Input streamflow forcing data representing these hypothetical events were constructed by applying a new fusion approach to National Water Model outputs. Simulated inundation extent and depth from AutoRoute, HAND, and HEC-RAS 2D were compared with one another and with the corresponding FEMA reference estimates. Irrespective of the configurations, the low-complexity models were able to produce inundation extents similar to HEC-RAS 2D, with AutoRoute showing slightly higher accuracy than the HAND model. Among the four terrain setups, the one including both levee and channel bathymetry showed the lowest fitness score on the spatial agreement of inundation extent, due to the weaker physical representation of the low-complexity models compared to a hydrodynamic model. For inundation depth, the low-complexity models showed an overestimating tendency, especially in the deeper segments of the channel.
Based on such reasonably good prediction skills, low-complexity flood models can be considered as a suitable alternative for fast predictions in large-scale hyper-resolution operational frameworks, without completely overriding hydrodynamic models' efficacy.
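The HAND approach mentioned above reduces, at its core, to comparing a per-cell height-above-nearest-drainage value with a reach-average flood stage. A toy sketch follows; the grid and stage value are invented for illustration (real HAND grids are derived from a DEM and a stream network):

```python
# Hedged sketch of the HAND idea: a grid of height-above-nearest-drainage
# values (metres) plus a flood stage gives inundation depth and extent.
hand = [
    [0.0, 0.5, 1.2, 3.0],
    [0.2, 0.8, 1.5, 4.0],
    [0.0, 0.4, 2.5, 5.0],
]
stage = 1.0  # water surface height above the drainage, in metres

# Depth is the stage minus the cell's HAND value, floored at zero.
depth = [[max(0.0, stage - h) for h in row] for row in hand]
wet_cells = sum(d > 0 for row in depth for d in row)
print(depth)
print("inundated cells:", wet_cells)
```

The simplicity of this per-cell comparison is what makes HAND-type tools fast enough for large-scale, hyper-resolution operational use, at the cost of the hydrodynamics a 2D model resolves.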
NASA Technical Reports Server (NTRS)
Kavi, K. M.
1984-01-01
A number of simulation packages have been developed for designing, testing, and validating computer systems, digital systems, and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment in which highly parallel complex systems can be defined, evaluated at all levels, and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.
Lim, Hooi Been; Baumann, Dirk; Li, Er-Ping
2011-03-01
Wireless body area network (WBAN) is a new enabling system with promising applications in areas such as remote health monitoring and interpersonal communication. Reliable and optimum design of a WBAN system relies on a good understanding and in-depth studies of the wave propagation around a human body. However, the human body is a very complex structure and is computationally demanding to model. This paper aims to investigate the effects of the numerical model's structure complexity and feature details on the simulation results. Depending on the application, a simplified numerical model that meets desired simulation accuracy can be employed for efficient simulations. Measurements of ultra wideband (UWB) signal propagation along a human arm are performed and compared to the simulation results obtained with numerical arm models of different complexity levels. The influence of the arm shape and size, as well as tissue composition and complexity is investigated.
Modeling Complex Cross-Systems Software Interfaces Using SysML
NASA Technical Reports Server (NTRS)
Mandutianu, Sanda; Morillo, Ron; Simpson, Kim; Liepack, Otfrid; Bonanne, Kevin
2013-01-01
The complex flight and ground systems for NASA human space exploration are designed, built, operated and managed as separate programs and projects. However, each system relies on one or more of the other systems in order to accomplish specific mission objectives, creating a complex, tightly coupled architecture. Thus, there is a fundamental need to understand how each system interacts with the others. To determine if a model-based system engineering approach could be utilized to assist with understanding the complex system interactions, the NASA Engineering and Safety Center (NESC) sponsored a task to develop an approach for performing cross-system behavior modeling. This paper presents the results of applying Model Based Systems Engineering (MBSE) principles using the Systems Modeling Language (SysML) to define cross-system behaviors and how they map to cross-system software interfaces documented in system-level Interface Control Documents (ICDs).
Some Observations on the Current Status of Performing Finite Element Analyses
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.; Knight, Norman F., Jr; Shivakumar, Kunigal N.
2015-01-01
Aerospace structures are complex high-performance structures. Advances in reliable and efficient computing and modeling tools are enabling analysts to consider complex configurations, build complex finite element models, and perform analysis rapidly. Many of today's early-career engineers are very proficient in the use of modern computers, computing engines, complex software systems, and visualization tools. These young engineers are becoming increasingly efficient at building complex 3D models of complicated aerospace components. However, current trends demonstrate blind acceptance of finite element analysis results. This paper is aimed at raising awareness of this situation. Examples of commonly encountered issues are presented. To counter the current trends, some guidelines and suggestions for analysts, senior engineers, and educators are offered.
Lewis, Brian A
2010-01-15
The regulation of transcription and of many other cellular processes involves large multi-subunit protein complexes. In the context of transcription, it is known that these complexes serve as regulatory platforms that connect activator DNA-binding proteins to a target promoter. However, there is still a lack of understanding regarding the function of these complexes. Why do multi-subunit complexes exist? What is the molecular basis of the function of their constituent subunits, and how are these subunits organized within a complex? What is the reason for physical connections between certain subunits and not others? In this article, I address these issues through a model of network allostery and its application to the eukaryotic RNA polymerase II Mediator transcription complex. The multiple allosteric networks model (MANM) suggests that protein complexes such as Mediator exist not only as physical but also as functional networks of interconnected proteins through which information is transferred from subunit to subunit by the propagation of an allosteric state known as conformational spread. Additionally, there are multiple distinct sub-networks within the Mediator complex that can be defined by their connections to different subunits; these sub-networks have discrete functions that are activated when specific subunits interact with other activator proteins.
NASA Astrophysics Data System (ADS)
Chandra, Sulekh; Gautam, Seema; Rajor, Hament Kumar; Bhatia, Rohit
2015-02-01
A novel Schiff base ligand, benzil bis(5-amino-1,3,4-thiadiazole-2-thiol), was synthesized by the condensation of benzil and 5-amino-1,3,4-thiadiazole-2-thiol in a 1:2 ratio. The structure of the ligand was determined on the basis of elemental analyses, IR, 1H NMR, mass spectrometry, and molecular modeling studies. The synthesized ligand behaved as a tetradentate ligand, coordinating to the metal ion through the sulfur atoms of the thiol rings and the nitrogen atoms of the imine groups. Ni(II) and Cu(II) complexes were synthesized with this nitrogen-sulfur donor (N2S2) ligand. The metal complexes were characterized by elemental analyses, molar conductance, magnetic susceptibility measurements, IR, electronic spectra, EPR, thermal, and molecular modeling studies. All the complexes showed molar conductance corresponding to a non-electrolytic nature, except the [Ni(L)](NO3)2 complex, which was a 1:2 electrolyte. The [Cu(L)(SO4)] complex may possess square pyramidal geometry, the [Ni(L)](NO3)2 complex tetrahedral geometry, and the rest of the complexes six-coordinate octahedral/tetragonal geometry. The newly synthesized ligand and its metal complexes were examined against opportunistic pathogens. The results suggested that the metal complexes were more biologically active than the free ligand.
NASA Astrophysics Data System (ADS)
Kirillova, Ariadna; Prytkova, Oksana O.
2018-03-01
The article addresses the formation of an organizational and economic model for the construction of a socio-commercial multifunctional complex in high-rise construction. The authors give examples of high-rise multifunctional complexes in Moscow, analyze the advantages and disadvantages encountered in implementing multifunctional complexes, and stress the need for a holistic strategic approach that takes into account the prospects for the development of the city and the creation of a comfortable living environment. Based on an analysis of the features of multifunctional complexes, a SWOT-analysis matrix was compiled. To support urban development and improve the population's quality of life, the authors propose a new type of multifunctional complex with a combined social and commercial orientation that includes, alongside office space, schools, polyclinics, various sports facilities, and cultural and leisure centers (theater, dance, studios, etc.). The proposed approach to developing the model is based on a comparative evaluation of a social-commercial multifunctional complex implemented through a public-private partnership in the form of a concession agreement and a purely commercial multifunctional complex built at the investor's expense. Calculations show that the resulting indicators satisfy the conditions of expediency of the proposed organizational-economic model and that the social-commercial multifunctional complex project is effective.
Darabi Sahneh, Faryad; Scoglio, Caterina; Riviere, Jim
2013-01-01
Nanoparticle-protein corona complex formation involves adsorption of protein molecules onto nanoparticle surfaces in a physiological environment. Understanding the corona formation process is crucial in predicting nanoparticle behavior in biological systems, including applications in nanotoxicology and the development of nano drug delivery platforms. This paper extends previous modeling work to derive a mathematical model describing the dynamics of nanoparticle corona complex formation from population balance equations. We apply nonlinear dynamics techniques to derive analytical results for the composition of the nanoparticle-protein corona complex, and validate our results through numerical simulations. The model presented in this paper exhibits two phases of corona complex dynamics. In the first phase, proteins rapidly bind to the free surface of nanoparticles, leading to a metastable composition. During the second phase, continuous association and dissociation of protein molecules with nanoparticles slowly changes the composition of the corona complex. Given sufficient time, the composition of the corona complex reaches an equilibrium state of stable composition. We find approximate analytical formulae for the metastable and stable compositions of the corona complex. Our formulae are well structured and clearly identify the important parameters determining corona composition. The dynamics of biocorona formation constitute a vital aspect of interactions between nanoparticles and living organisms. Our results further the understanding of these dynamics through quantitation of experimental conditions, extending modeling results for in vitro systems to better predict behavior in in vivo systems. One potential application would involve relating a simple cell culture medium to a complex protein medium, such as blood or tissue fluid.
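The two-phase behaviour described above can be sketched with a minimal two-protein competition model. This is not the paper's population balance system; all rate constants are invented for illustration:

```python
# Hedged sketch: two proteins compete for nanoparticle surface sites.
# Protein 1 binds fast but releases fast; protein 2 binds slowly but
# holds on. The corona is first dominated by protein 1 (metastable
# phase) and later by protein 2 (stable phase).
def simulate(t_end, dt=0.01):
    ka = (10.0, 1.0)    # association rate constants (fast, slow)
    kd = (1.0, 0.001)   # dissociation rate constants (fast, very slow)
    c = (1.0, 1.0)      # bulk protein concentrations (held constant)
    b = [0.0, 0.0]      # bound surface fractions
    for _ in range(int(t_end / dt)):
        free = 1.0 - b[0] - b[1]
        db0 = ka[0] * c[0] * free - kd[0] * b[0]
        db1 = ka[1] * c[1] * free - kd[1] * b[1]
        b[0] += db0 * dt
        b[1] += db1 * dt
    return b

early = simulate(0.5)    # metastable composition: protein 1 dominates
late = simulate(200.0)   # near-equilibrium: protein 2 has displaced it
print(early, late)
```

Even this toy version reproduces the qualitative picture: a fast-forming metastable corona whose composition is slowly overturned by higher-affinity proteins.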
NASA Astrophysics Data System (ADS)
Ridley, Moira K.; Hiemstra, Tjisse; van Riemsdijk, Willem H.; Machesky, Michael L.
2009-04-01
Acid-base reactivity and ion-interaction between mineral surfaces and aqueous solutions is most frequently investigated at the macroscopic scale as a function of pH. Experimental data are then rationalized by a variety of surface complexation models. These models are thermodynamically based, which in principle does not require a molecular picture. The models are typically calibrated to relatively simple solid-electrolyte solution pairs and may provide poor descriptions of complex multi-component mineral-aqueous solutions, including those found in natural environments. Surface complexation models may be improved by incorporating molecular-scale surface structural information to constrain the modeling efforts. Here, we apply a concise, molecularly-constrained surface complexation model to a diverse suite of surface titration data for rutile and thereby begin to address the complexity of multi-component systems. Primary surface charging curves in NaCl, KCl, and RbCl electrolyte media were fit simultaneously using a charge distribution (CD) and multisite complexation (MUSIC) model [Hiemstra T. and Van Riemsdijk W. H. (1996) A surface structural approach to ion adsorption: the charge distribution (CD) model. J. Colloid Interf. Sci. 179, 488-508], coupled with a Basic Stern layer description of the electric double layer. In addition, data for the specific interaction of Ca2+ and Sr2+ with rutile, in NaCl and RbCl media, were modeled. In recent developments, spectroscopy, quantum calculations, and molecular simulations have shown that electrolyte and divalent cations are principally adsorbed in various inner-sphere configurations on the rutile (1 1 0) surface [Zhang Z., Fenter P., Cheng L., Sturchio N. C., Bedzyk M. J., Předota M., Bandura A., Kubicki J., Lvov S. N., Cummings P. T., Chialvo A. A., Ridley M. K., Bénézeth P., Anovitz L., Palmer D. A., Machesky M. L. and Wesolowski D. J. (2004) Ion adsorption at the rutile-water interface: linking molecular and macroscopic properties. Langmuir 20, 4954-4969]. Our CD modeling results are consistent with these adsorbed configurations provided adsorbed cation charge is allowed to be distributed between the surface (0-plane) and Stern plane (1-plane). Additionally, a complete description of our titration data required inclusion of outer-sphere binding, principally for Cl-, which was common to all solutions, but also for Rb+ and K+. These outer-sphere species were treated as point charges positioned at the Stern layer, and hence determined the Stern layer capacitance value. The modeling results demonstrate that a multi-component suite of experimental data can be successfully rationalized within a CD and MUSIC model using a Stern-based description of the EDL. Furthermore, the fitted CD values of the various inner-sphere complexes of the mono- and divalent ions can be linked to the microscopic structure of the surface complexes and other data found by spectroscopy as well as molecular dynamics (MD). For the Na+ ion, the fitted CD value points to the presence of bidentate inner-sphere complexation as suggested by a recent MD study. Moreover, its MD dominance quantitatively agrees with the CD model prediction. For Rb+, the presence of a tetradentate complex, as found by spectroscopy, agreed well with the fitted CD and its predicted presence was quantitatively in very good agreement with the amount found by spectroscopy.
A probabilistic process model for pelagic marine ecosystems informed by Bayesian inverse analysis
Marine ecosystems are complex systems with multiple pathways that produce feedback cycles, which may lead to unanticipated effects. Models abstract this complexity and allow us to predict, understand, and hypothesize. In ecological models, however, the paucity of empirical data...
NASA Astrophysics Data System (ADS)
Carlsohn, Elisabet; Ångström, Jonas; Emmett, Mark R.; Marshall, Alan G.; Nilsson, Carol L.
2004-05-01
Chemical cross-linking of proteins is a well-established method for structural mapping of small protein complexes. When combined with mass spectrometry, cross-linking can reveal protein topology and identify contact sites between peptide surfaces. When applied to surface-exposed proteins from pathogenic organisms, the method can reveal structural details that are useful in vaccine design. In order to investigate the possibility of applying cross-linking to larger protein complexes, we selected the urease enzyme from Helicobacter pylori as a model. This membrane-associated protein complex consists of two subunits: α (26.5 kDa) and β (61.7 kDa). Three (αβ) heterodimers form a trimeric (αβ)3 assembly which further associates into a unique dodecameric 1.1 MDa complex composed of four (αβ)3 units. Cross-linked peptides from the trypsin-digested urease complex were analyzed by Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) and molecular modeling. Two potential cross-linked peptides (present in the cross-linked sample but undetectable in the α subunit, β subunit, and native complex) were assigned. Molecular modeling of the urease αβ complex and the trimeric urease units (αβ)3 revealed a linkage site between the α-subunit and the β-subunit, and an internal cross-linkage in the β-subunit.
Ceruloplasmin: Macromolecular Assemblies with Iron-Containing Acute Phase Proteins
Samygina, Valeriya R.; Sokolov, Alexey V.; Bourenkov, Gleb; Petoukhov, Maxim V.; Pulina, Maria O.; Zakharova, Elena T.; Vasilyev, Vadim B.; Bartunik, Hans; Svergun, Dmitri I.
2013-01-01
Copper-containing ferroxidase ceruloplasmin (Cp) forms binary and ternary complexes with cationic proteins lactoferrin (Lf) and myeloperoxidase (Mpo) during inflammation. We present an X-ray crystal structure of a 2Cp-Mpo complex at 4.7 Å resolution. This structure allows one to identify major protein–protein interaction areas and provides an explanation for a competitive inhibition of Mpo by Cp and for the activation of p-phenylenediamine oxidation by Mpo. Small angle X-ray scattering was employed to construct low-resolution models of the Cp-Lf complex and, for the first time, of the ternary 2Cp-2Lf-Mpo complex in solution. The SAXS-based model of Cp-Lf supports the predicted 1∶1 stoichiometry of the complex and demonstrates that both lobes of Lf contact domains 1 and 6 of Cp. The 2Cp-2Lf-Mpo SAXS model reveals the absence of interaction between Mpo and Lf in the ternary complex, so Cp can serve as a mediator of protein interactions in complex architecture. Mpo protects antioxidant properties of Cp by isolating its sensitive loop from proteases. The latter is important for incorporation of Fe3+ into Lf, which activates ferroxidase activity of Cp and precludes oxidation of Cp substrates. Our models provide the structural basis for possible regulatory role of these complexes in preventing iron-induced oxidative damage. PMID:23843990
Uranium(VI) adsorption to ferrihydrite: Application of a surface complexation model
Waite, T.D.; Davis, J.A.; Payne, T.E.; Waychunas, G.A.; Xu, N.
1994-01-01
A study of U(VI) adsorption by ferrihydrite was conducted over a wide range of U(VI) concentrations and pH, and at two partial pressures of carbon dioxide. A two-site (strong- and weak-affinity sites, Fe_sOH and Fe_wOH, respectively) surface complexation model was able to describe the experimental data well over a wide range of conditions, with only one species formed with each site type: an inner-sphere, mononuclear, bidentate complex of the type (FeO2)UO2. The existence of such a surface species was supported by results of uranium EXAFS spectroscopy performed on two samples with U(VI) adsorption density in the upper range observed in this study (10 and 18% occupancy of total surface sites). Adsorption data in the alkaline pH range suggested the existence of a second surface species, modeled as a ternary surface complex with UO2CO3^0 binding to a bidentate surface site. Previous surface complexation models for U(VI) adsorption have proposed surface species that are identical to the predominant aqueous species, e.g., multinuclear hydrolysis complexes or several U(VI)-carbonate complexes. The results demonstrate that the speciation of adsorbed U(VI) may be constrained by the coordination environment at the surface, giving rise to surface speciation for U(VI) that is significantly less complex than aqueous speciation.
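The two-site idea can be sketched with Langmuir-type mass action. The site densities and binding constants below are invented for illustration and are not the fitted ferrihydrite values:

```python
# Hedged sketch: a strong but scarce site and a weak but abundant site
# bind a solute with mass-action isotherms; the free concentration is
# found by bisection on the mass balance free + sorbed(free) = total.
def sorbed(u_free, sites):
    """Sum of site occupancies at a given free concentration."""
    return sum(s_tot * K * u_free / (1.0 + K * u_free) for s_tot, K in sites)

def solve_free(u_total, sites):
    lo, hi = 0.0, u_total
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid + sorbed(mid, sites) > u_total:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# (site density in mol/L, conditional binding constant in L/mol)
sites = [(1e-6, 1e7),   # strong sites: few, high affinity
         (1e-4, 1e4)]   # weak sites: many, low affinity

fracs = []
for u_total in (1e-7, 1e-5):
    free = solve_free(u_total, sites)
    fracs.append(sorbed(free, sites) / u_total)
    print(f"total={u_total:.0e}  fraction sorbed={fracs[-1]:.2f}")
```

At low loading the strong sites dominate and most of the solute is sorbed; at high loading they saturate and the sorbed fraction drops, which is the qualitative signature of a two-site model.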
Kiley, Erin M; Yakovlev, Vadim V; Ishizaki, Kotaro; Vaucher, Sebastien
2012-01-01
Microwave thermal processing of metal powders has recently been a topic of a substantial interest; however, experimental data on the physical properties of mixtures involving metal particles are often unavailable. In this paper, we perform a systematic analysis of classical and contemporary models of complex permittivity of mixtures and discuss the use of these models for determining effective permittivity of dielectric matrices with metal inclusions. Results from various mixture and core-shell mixture models are compared to experimental data for a titanium/stearic acid mixture and a boron nitride/graphite mixture (both obtained through the original measurements), and for a tungsten/Teflon mixture (from literature). We find that for certain experiments, the average error in determining the effective complex permittivity using Lichtenecker's, Maxwell Garnett's, Bruggeman's, Buchelnikov's, and Ignatenko's models is about 10%. This suggests that, for multiphysics computer models describing the processing of metal powder in the full temperature range, input data on effective complex permittivity obtained from direct measurement has, up to now, no substitute.
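Of the mixing rules compared above, Maxwell Garnett's has a compact closed form for spherical inclusions. A sketch with invented permittivity values (not the measured titanium/stearic-acid data discussed in the paper):

```python
# Hedged sketch: classical Maxwell Garnett mixing rule for spherical
# inclusions of complex permittivity eps_i at volume fraction f in a
# host of permittivity eps_m.
def maxwell_garnett(eps_m, eps_i, f):
    """Effective complex permittivity of a sphere-in-host mixture."""
    num = eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)
    den = eps_i + 2 * eps_m - f * (eps_i - eps_m)
    return eps_m * num / den

eps_host = 2.5 + 0.01j     # low-loss dielectric matrix (illustrative)
eps_metal = -1e4 + 1e5j    # metal-like inclusion permittivity (illustrative)
eps_eff = maxwell_garnett(eps_host, eps_metal, 0.1)
print(eps_eff)
```

In the limit of very large inclusion permittivity the rule tends to eps_m * (1 + 2f) / (1 - f), so even 10% metal loading raises both the real part and the loss of the effective medium.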
Urbina, Angel; Mahadevan, Sankaran; Paez, Thomas L.
2012-03-01
Performance assessment of complex systems is ideally accomplished through system-level testing, but because such tests are expensive, they are seldom performed. On the other hand, for economic reasons, data from tests on individual components that are parts of complex systems are more readily available. The lack of system-level data leads to a need to build computational models of systems and use them for performance prediction in lieu of experiments. Because of their complexity, models are sometimes built in a hierarchical manner, starting with simple components, progressing to collections of components, and finally, to the full system. Quantification of uncertainty in the predicted response of a system model is required in order to establish confidence in the representation of actual system behavior. This paper proposes a framework for the complex but very practical problem of quantification of uncertainty in system-level model predictions. It is based on Bayes networks and uses the available data at multiple levels of complexity (i.e., components, subsystem, etc.). Because epistemic sources of uncertainty were shown to be secondary in this application, only aleatoric uncertainty is included in the present uncertainty quantification. An example showing application of the techniques to uncertainty quantification of measures of response of a real, complex aerospace system is included.
Toward Modeling the Intrinsic Complexity of Test Problems
ERIC Educational Resources Information Center
Shoufan, Abdulhadi
2017-01-01
The concept of intrinsic complexity explains why different problems of the same type, tackled by the same problem solver, can require different times to solve and yield solutions of different quality. This paper proposes a general four-step approach that can be used to establish a model for the intrinsic complexity of a problem class in terms of…
[Take] and the ASL Verb Complex: An Autolexical Account
ERIC Educational Resources Information Center
Metlay, Donald S.
2012-01-01
This dissertation will show how linguistic description and an Autolexical account of the bound verb root [take] shed a light on the nature of complex verb constructions in American Sign Language (ASL). This is accomplished by creating a new ASL Verb Complex Model unifying all verbs into one category of VERB. This model also accounts for a variety…
As part of its continuing development and evaluation, the QUIC model (Quick Urban & Industrial Complex) was used to study flow and dispersion in complex terrain for two cases. First, for a small area of lower Manhattan near the World Trade Center site, comparisons were made bet...
Plant metabolic modeling: achieving new insight into metabolism and metabolic engineering.
Baghalian, Kambiz; Hajirezaei, Mohammad-Reza; Schreiber, Falk
2014-10-01
Models are used to represent aspects of the real world for specific purposes, and mathematical models have opened up new approaches in studying the behavior and complexity of biological systems. However, modeling is often time-consuming and requires significant computational resources for data development, data analysis, and simulation. Computational modeling has been successfully applied as an aid for metabolic engineering in microorganisms. But such model-based approaches have only recently been extended to plant metabolic engineering, mainly due to greater pathway complexity in plants and their highly compartmentalized cellular structure. Recent progress in plant systems biology and bioinformatics has begun to disentangle this complexity and facilitate the creation of efficient plant metabolic models. This review highlights several aspects of plant metabolic modeling in the context of understanding, predicting and modifying complex plant metabolism. We discuss opportunities for engineering photosynthetic carbon metabolism, sucrose synthesis, and the tricarboxylic acid cycle in leaves and oil synthesis in seeds and the application of metabolic modeling to the study of plant acclimation to the environment. The aim of the review is to offer a current perspective for plant biologists without requiring specialized knowledge of bioinformatics or systems biology. © 2014 American Society of Plant Biologists. All rights reserved.
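The constraint-based (flux-balance) modeling that underpins much of the work reviewed above can be sketched on a toy network. The reactions, bounds, and brute-force search below are illustrative; real applications use genome-scale stoichiometric matrices and a linear-programming solver:

```python
# Hedged sketch of flux-balance analysis on a toy 3-metabolite network.
# Reactions: v1: -> A (uptake, <=10)   v2: A -> B   v3: A -> C
#            v4: B + C -> biomass
# Steady state requires every internal metabolite's net production to be 0.
S = {
    "A": {1: +1, 2: -1, 3: -1},
    "B": {2: +1, 4: -1},
    "C": {3: +1, 4: -1},
}

def steady(v):
    """True if S @ v == 0 for all internal metabolites."""
    return all(sum(coef * v[r] for r, coef in row.items()) == 0
               for row in S.values())

# Brute-force the integer flux space, maximising the biomass flux v4.
best = None
for v1 in range(11):            # uptake bound: 0 <= v1 <= 10
    for v2 in range(11):
        for v3 in range(11):
            for v4 in range(11):
                v = {1: v1, 2: v2, 3: v3, 4: v4}
                if steady(v) and (best is None or v4 > best[4]):
                    best = v
print(best)   # flux distribution with maximal biomass flux
```

The stoichiometry forces v2 = v3 = v4 and v1 = 2*v4, so the uptake bound caps biomass at v4 = 5; a linear-programming solver finds the same vertex without enumeration.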
Deterministic ripple-spreading model for complex networks.
Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel
2011-04-01
This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes, and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. Differently, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time, the stochastic feature of complex networks is captured by randomly initializing ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory efficient when compared with traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has a very good potential for both extensions and applications.
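The deterministic ripple-spreading idea can be illustrated with a small toy procedure (an illustrative sketch, not the authors' model: the single seed node, the initial ripple energy `e0`, and the linear energy decay are all assumptions). Given fixed node coordinates and parameters, the output topology is unique, which is the property the abstract emphasizes:

```python
import math

def ripple_spreading_network(coords, e0=3.0, decay=1.0):
    """Toy deterministic ripple-spreading network generator.

    Node 0 emits the first ripple. A ripple with energy E can expand to
    radius E/decay before dying; every not-yet-activated node it reaches
    is linked to the ripple's epicentre and emits its own ripple whose
    energy is reduced by decay * distance. Because nothing is random,
    the final edge set is a unique function of coords and parameters.
    """
    n = len(coords)
    dist = lambda i, j: math.dist(coords[i], coords[j])
    edges = set()
    activated = {0}
    frontier = [(0, e0)]                      # (epicentre, remaining energy)
    while frontier:
        new_frontier = []
        for src, energy in frontier:
            reach = energy / decay            # radius at which energy hits zero
            # nodes reached by this ripple, in order of arrival
            hit = sorted((dist(src, j), j) for j in range(n)
                         if j not in activated and dist(src, j) <= reach)
            for d, j in hit:
                activated.add(j)
                edges.add((min(src, j), max(src, j)))
                new_frontier.append((j, energy - decay * d))
        frontier = new_frontier
    return edges
```

Running the generator twice on the same input returns the same edge set, in contrast to stochastic network models.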
Coarse-grained molecular dynamics simulations for giant protein-DNA complexes
NASA Astrophysics Data System (ADS)
Takada, Shoji
Biomolecules are highly hierarchic and intrinsically flexible. Thus, computational modeling calls for multi-scale methodologies. We have been developing a coarse-grained biomolecular model where, on average, 10-20 atoms are grouped into one coarse-grained (CG) particle. Interactions among CG particles are tuned based on atomistic interactions and the fluctuation matching algorithm. CG molecular dynamics methods enable us to simulate much longer time scale motions of much larger molecular systems than fully atomistic models. After broad sampling of structures with CG models, we can easily reconstruct atomistic models, from which one can continue conventional molecular dynamics simulations if desired. Here, we describe our CG modeling methodology for protein-DNA complexes, together with various biological applications, such as the DNA duplication initiation complex, model chromatins, and transcription factor dynamics in a chromatin-like environment.
Sanjak, Jaleal S.; Long, Anthony D.; Thornton, Kevin R.
2017-01-01
The genetic component of complex disease risk in humans remains largely unexplained. A corollary is that the allelic spectrum of genetic variants contributing to complex disease risk is unknown. Theoretical models that relate population genetic processes to the maintenance of genetic variation for quantitative traits may suggest profitable avenues for future experimental design. Here we use forward simulation to model a genomic region evolving under a balance between recurrent deleterious mutation and Gaussian stabilizing selection. We consider multiple genetic and demographic models, and several different methods for identifying genomic regions harboring variants associated with complex disease risk. We demonstrate that the model of gene action, relating genotype to phenotype, has a qualitative effect on several relevant aspects of the population genetic architecture of a complex trait. In particular, the genetic model impacts genetic variance component partitioning across the allele frequency spectrum and the power of statistical tests. Models with partial recessivity closely match the minor allele frequency distribution of significant hits from empirical genome-wide association studies without requiring homozygous effect sizes to be small. We highlight a particular gene-based model of incomplete recessivity that is appealing from first principles. Under that model, deleterious mutations in a genomic region partially fail to complement one another. This model of gene-based recessivity predicts the empirically observed inconsistency between twin- and SNP-based estimates of dominance heritability. Furthermore, this model predicts considerable levels of unexplained variance associated with intralocus epistasis. Our results suggest a need for improved statistical tools for region-based genetic association and heritability estimation. PMID:28103232
DOE Office of Scientific and Technical Information (OSTI.GOV)
Celia, Michael A.
This report documents the accomplishments achieved during the project titled “Model complexity and choice of model approaches for practical simulations of CO2 injection, migration, leakage and long-term fate” funded by the US Department of Energy, Office of Fossil Energy. The objective of the project was to investigate modeling approaches of various levels of complexity relevant to geologic carbon storage (GCS) modeling, with the goal of establishing guidelines on the choice of modeling approach.
Andrew Fall; B. Sturtevant; M.-J. Fortin; M. Papaik; F. Doyon; D. Morgan; K. Berninger; C. Messier
2010-01-01
The complexity and multi-scaled nature of forests pose significant challenges to understanding and management. Models can provide useful insights into processes and their interactions, and the implications of alternative management options. Most models, particularly scientific models, focus on a relatively small set of processes and are designed to operate within a...
ERIC Educational Resources Information Center
Sins, Patrick H. M.; Savelsbergh, Elwin R.; van Joolingen, Wouter R.
2005-01-01
Although computer modelling is widely advocated as a way to offer students a deeper understanding of complex phenomena, the process of modelling is rather complex itself and needs scaffolding. In order to offer adequate support, a thorough understanding of the reasoning processes students employ and of difficulties they encounter during a…
Turing instability in reaction-diffusion models on complex networks
NASA Astrophysics Data System (ADS)
Ide, Yusuke; Izuhara, Hirofumi; Machida, Takuya
2016-09-01
In this paper, the Turing instability in reaction-diffusion models defined on complex networks is studied. Here, we focus on three types of models which generate complex networks, i.e. the Erdős-Rényi, the Watts-Strogatz, and the threshold network models. From analysis of the Laplacian matrices of graphs generated by these models, we numerically reveal that stable and unstable regions of a homogeneous steady state on the parameter space of two diffusion coefficients completely differ, depending on the network architecture. In addition, we theoretically discuss the stable and unstable regions in the cases of regular enhanced ring lattices which include regular circles, and networks generated by the threshold network model when the number of vertices is large enough.
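The instability analysis described here can be checked numerically: for a two-species reaction-diffusion system on a graph, the mode associated with Laplacian eigenvalue λ_k is linearly unstable when J − λ_k·diag(Du, Dv) has an eigenvalue with positive real part, where J is the reaction Jacobian at the homogeneous steady state. A minimal sketch on an Erdős-Rényi graph (the Jacobian and diffusivities are illustrative values, not taken from the paper):

```python
import numpy as np

def turing_unstable_modes(A, J, Du, Dv):
    """Laplacian eigenvalues of graph A for which the homogeneous state of
    a two-species reaction-diffusion system (reaction Jacobian J,
    diffusivities Du, Dv) is linearly unstable: mode k is unstable when
    J - lambda_k * diag(Du, Dv) has an eigenvalue with positive real part."""
    L = np.diag(A.sum(axis=1)) - A                  # combinatorial Laplacian
    lams = np.linalg.eigvalsh(L)                    # eigenvalues, ascending
    D = np.diag([Du, Dv])
    return [lam for lam in lams
            if np.max(np.linalg.eigvals(J - lam * D).real) > 1e-12]

# Erdős-Rényi graph G(n, p), symmetric adjacency, fixed seed
rng = np.random.default_rng(0)
n, p = 50, 0.1
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T

# Activator-inhibitor Jacobian: stable without diffusion (tr<0, det>0),
# destabilized by fast inhibitor diffusion Dv >> Du
J = np.array([[1.0, -1.0],
              [2.0, -1.5]])
unstable = turing_unstable_modes(A, J, Du=0.05, Dv=1.0)
```

The λ = 0 mode (the homogeneous state) stays stable by construction, so any entries in `unstable` signal a genuine diffusion-driven (Turing) instability whose location depends on the graph spectrum, i.e. on the network architecture.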
Blower, Sally; Go, Myong-Hyun
2011-07-19
Mathematical models are useful tools for understanding and predicting epidemics. A recent innovative modeling study by Stehle and colleagues addressed the issue of how complex models need to be to ensure accuracy. The authors collected data on face-to-face contacts during a two-day conference. They then constructed a series of dynamic social contact networks, each of which was used to model an epidemic generated by a fast-spreading airborne pathogen. Intriguingly, Stehle and colleagues found that increasing model complexity did not always increase accuracy. Specifically, the most detailed contact network and a simplified version of this network generated very similar results. These results are extremely interesting and require further exploration to determine their generalizability.
Virtual enterprise model for the electronic components business in the Nuclear Weapons Complex
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferguson, T.J.; Long, K.S.; Sayre, J.A.
1994-08-01
The electronic components business within the Nuclear Weapons Complex spans organizational and Department of Energy contractor boundaries. An assessment of the current processes indicates a need for fundamentally changing the way electronic components are developed, procured, and manufactured. A model is provided based on a virtual enterprise that recognizes distinctive competencies within the Nuclear Weapons Complex and at the vendors. The model incorporates changes that reduce component delivery cycle time and improve cost effectiveness while delivering components of the appropriate quality.
Trends in modeling Biomedical Complex Systems
Milanesi, Luciano; Romano, Paolo; Castellani, Gastone; Remondini, Daniel; Liò, Pietro
2009-01-01
In this paper we provide an introduction to the techniques for multi-scale complex biological systems, from the single bio-molecule to the cell, combining theoretical modeling, experiments, informatics tools and technologies suitable for biological and biomedical research, which are becoming increasingly multidisciplinary, multidimensional and information-driven. The most important concepts of mathematical modeling methodologies and statistical inference, bioinformatics, and standard tools to investigate complex biomedical systems are discussed, and the prominent literature useful to both the practitioner and the theoretician is presented. PMID:19828068
Gravitational orientation of the orbital complex, Salyut-6--Soyuz
NASA Technical Reports Server (NTRS)
Grecho, G. M.; Sarychev, V. A.; Legostayev, V. P.; Sazonov, V. V.; Gansvind, I. N.
1983-01-01
A simple mathematical model is proposed for the Salyut-6-Soyuz orbital complex motion with respect to the center of mass under the one-axis gravity-gradient orientation regime. This model was used for processing the measurements of the orbital complex motion parameters when the above orientation regime was implemented. Some actual satellite motions are simulated and the satellite's aerodynamic parameters are determined. Estimates are obtained for the accuracy of measurements as well as that of the mathematical model.
On the robustness of complex heterogeneous gene expression networks.
Gómez-Gardeñes, Jesús; Moreno, Yamir; Floría, Luis M
2005-04-01
We analyze a continuous gene expression model on the underlying topology of a complex heterogeneous network. Numerical simulations aimed at studying the chaotic and periodic dynamics of the model are performed. The results clearly indicate that there is a region in which the dynamical and structural complexity of the system avoid chaotic attractors. However, contrary to what has been reported for Random Boolean Networks, the chaotic phase cannot be completely suppressed, which has important bearings on network robustness and gene expression modeling.
New approaches in agent-based modeling of complex financial systems
NASA Astrophysics Data System (ADS)
Chen, Ting-Ting; Zheng, Bo; Li, Yan; Jiang, Xiong-Fei
2017-12-01
Agent-based modeling is a powerful simulation technique to understand the collective behavior and microscopic interaction in complex financial systems. Recently, the concept for determining the key parameters of agent-based models from empirical data instead of setting them artificially was suggested. We first review several agent-based models and the new approaches to determine the key model parameters from historical market data. Based on the agents' behaviors with heterogeneous personal preferences and interactions, these models are successful in explaining the microscopic origination of the temporal and spatial correlations of financial markets. We then present a novel paradigm combining big-data analysis with agent-based modeling. Specifically, from internet query and stock market data, we extract the information driving forces and develop an agent-based model to simulate the dynamic behaviors of complex financial systems.
NASA Astrophysics Data System (ADS)
Nakagawa, Satoshi; Kurniawan, Isman; Kodama, Koichi; Arwansyah, Muhammad Saleh; Kawaguchi, Kazutomo; Nagao, Hidemi
2018-03-01
We present a simple coarse-grained model with the molecular crowding effect in solvent to investigate the structure and dynamics of protein complexes, including association and/or dissociation processes, and to investigate physical properties such as the structure and the reaction rate from the viewpoint of the hydrophobic intermolecular interactions of the protein complex. In the present coarse-grained model, a function depending upon the density of hydrophobic amino acid residues in a binding area of the complex is introduced, and the function incorporates the molecular crowding effect into the intermolecular interactions of hydrophobic amino acid residues between proteins. We propose a hydrophobic intermolecular potential energy between proteins using this density-dependent function. The present coarse-grained model is applied to the complex of cytochrome f and plastocyanin using Langevin dynamics simulation to investigate physical properties such as the complex structure and the rate constant of the electron transfer reaction from plastocyanin to cytochrome f. We find that for the electron transfer reaction to proceed, the distance between the metals in their active sites must be within about 18 Å. We discuss some typical complex structures formed in the present simulation in relation to the molecular crowding effect on hydrophobic interactions.
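Coarse-grained dynamics of this kind are typically propagated with overdamped Langevin (Brownian) integration. A minimal sketch with two CG particles joined by a harmonic contact (the spring is a stand-in for the hydrophobic pair potential described in the abstract; all parameter values are illustrative):

```python
import numpy as np

def langevin_step(x, force, dt, gamma, kT, rng):
    """One overdamped Langevin step:
    x += F/gamma * dt + sqrt(2 kT dt / gamma) * N(0, 1)."""
    noise = rng.standard_normal(x.shape)
    return x + force(x) * dt / gamma + np.sqrt(2.0 * kT * dt / gamma) * noise

k, r0 = 5.0, 1.0                      # spring constant, contact distance (toy values)

def spring_force(x):
    """Harmonic contact U = 0.5 k (|r| - r0)^2 between particles 0 and 1."""
    r = x[1] - x[0]
    d = np.linalg.norm(r)
    f = k * (d - r0) * r / d          # force on particle 0, pulling toward contact
    return np.array([f, -f])

rng = np.random.default_rng(2)
x = np.array([[0.0, 0.0, 0.0],
              [3.0, 0.0, 0.0]])       # start well separated
for _ in range(2000):
    x = langevin_step(x, spring_force, dt=0.01, gamma=1.0, kT=0.001, rng=rng)
# the pair relaxes to roughly the contact distance r0
```

In a fuller sketch the spring constant would be replaced by a potential depending on the local density of hydrophobic residues, which is where a crowding effect of the kind described above would enter.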
Juckem, Paul F.; Clark, Brian R.; Feinstein, Daniel T.
2017-05-04
The U.S. Geological Survey, National Water-Quality Assessment seeks to map estimated intrinsic susceptibility of the glacial aquifer system of the conterminous United States. Improved understanding of the hydrogeologic characteristics that explain spatial patterns of intrinsic susceptibility, commonly inferred from estimates of groundwater age distributions, is sought so that methods used for the estimation process are properly equipped. An important step beyond identifying relevant hydrogeologic datasets, such as glacial geology maps, is to evaluate how incorporation of these resources into process-based models using differing levels of detail could affect resulting simulations of groundwater age distributions and, thus, estimates of intrinsic susceptibility. This report describes the construction and calibration of three groundwater-flow models of northeastern Wisconsin that were developed with differing levels of complexity to provide a framework for subsequent evaluations of the effects of process-based model complexity on estimations of groundwater age distributions for withdrawal wells and streams. Preliminary assessments, which focused on the effects of model complexity on simulated water levels and base flows in the glacial aquifer system, illustrate that simulation of vertical gradients using multiple model layers improves simulated heads more in low-permeability units than in high-permeability units. Moreover, simulation of heterogeneous hydraulic conductivity fields in coarse-grained and some fine-grained glacial materials produced a larger improvement in simulated water levels in the glacial aquifer system compared with simulation of uniform hydraulic conductivity within zones. The relation between base flows and model complexity was less clear; however, the relation generally seemed to follow a similar pattern as water levels.
Although increased model complexity resulted in improved calibrations, future application of the models using simulated particle tracking is anticipated to evaluate whether these model design considerations are similarly important for the primary modeling objective - to simulate reasonable groundwater age distributions.
NASA Astrophysics Data System (ADS)
Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.
2017-12-01
Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, where too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe the underlying connectivity patterns. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distribution), conventional model calibration techniques that are designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint to ensure that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation, involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating directions optimization algorithm, the regularized objective function is divided into a continuous model calibration problem, followed by mapping the solution onto the feasible set. The feasibility constraint to honor the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.
Why Bother to Calibrate? Model Consistency and the Value of Prior Information
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal
2015-04-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
Why Bother and Calibrate? Model Consistency and the Value of Prior Information.
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J. E.; Savenije, H.; Gascuel-Odoux, C.
2014-12-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
Development of Maps of Simple and Complex Cells in the Primary Visual Cortex
Antolík, Ján; Bednar, James A.
2011-01-01
Hubel and Wiesel (1962) classified primary visual cortex (V1) neurons as either simple, with responses modulated by the spatial phase of a sine grating, or complex, i.e., largely phase invariant. Much progress has been made in understanding how simple cells develop, and there are now detailed computational models establishing how they can form topographic maps ordered by orientation preference. There are also models of how complex cells can develop using outputs from simple cells with different phase preferences, but no model of how a topographic orientation map of complex cells could be formed based on the actual connectivity patterns found in V1. Addressing this question is important, because the majority of existing developmental models of simple-cell maps group neurons selective to similar spatial phases together, which is contrary to experimental evidence, and makes it difficult to construct complex cells. Overcoming this limitation is not trivial, because mechanisms responsible for map development drive receptive fields (RF) of nearby neurons to be highly correlated, while co-oriented RFs of opposite phases are anti-correlated. In this work, we model V1 as two topographically organized sheets representing cortical layer 4 and 2/3. Only layer 4 receives direct thalamic input. Both sheets are connected with narrow feed-forward and feedback connectivity. Only layer 2/3 contains strong long-range lateral connectivity, in line with current anatomical findings. Initially all weights in the model are random, and each is modified via a Hebbian learning rule. The model develops smooth, matching, orientation preference maps in both sheets. Layer 4 units become simple cells, with phase preference arranged randomly, while those in layer 2/3 are primarily complex cells. To our knowledge this model is the first explaining how simple cells can develop with random phase preference, and how maps of complex cells can develop, using only realistic patterns of connectivity. PMID:21559067
Food-web complexity, meta-community complexity and community stability.
Mougi, A; Kondoh, M
2016-04-13
What allows interacting, diverse species to coexist in nature has been a central question in ecology, ever since the theoretical prediction that a complex community should be inherently unstable. Although the role of spatiality in species coexistence has been recognized, its application to more complex systems has been less explored. Here, using a meta-community model of food web, we show that meta-community complexity, measured by the number of local food webs and their connectedness, elicits a self-regulating, negative-feedback mechanism and thus stabilizes food-web dynamics. Moreover, the presence of meta-community complexity can give rise to a positive food-web complexity-stability effect. Spatiality may play a more important role in stabilizing dynamics of complex, real food webs than expected from ecological theory based on the models of simpler food webs.
Genotypic Complexity of Fisher’s Geometric Model
Hwang, Sungmin; Park, Su-Chan; Krug, Joachim
2017-01-01
Fisher’s geometric model was originally introduced to argue that complex adaptations must occur in small steps because of pleiotropic constraints. When supplemented with the assumption of additivity of mutational effects on phenotypic traits, it provides a simple mechanism for the emergence of genotypic epistasis from the nonlinear mapping of phenotypes to fitness. Of particular interest is the occurrence of reciprocal sign epistasis, which is a necessary condition for multipeaked genotypic fitness landscapes. Here we compute the probability that a pair of randomly chosen mutations interacts sign epistatically, which is found to decrease with increasing phenotypic dimension n, and varies nonmonotonically with the distance from the phenotypic optimum. We then derive expressions for the mean number of fitness maxima in genotypic landscapes comprised of all combinations of L random mutations. This number increases exponentially with L, and the corresponding growth rate is used as a measure of the complexity of the landscape. The dependence of the complexity on the model parameters is found to be surprisingly rich, and three distinct phases characterized by different landscape structures are identified. Our analysis shows that the phenotypic dimension, which is often referred to as phenotypic complexity, does not generally correlate with the complexity of fitness landscapes and that even organisms with a single phenotypic trait can have complex landscapes. Our results further inform the interpretation of experiments where the parameters of Fisher’s model have been inferred from data, and help to elucidate which features of empirical fitness landscapes can be described by this model. PMID:28450460
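The sign-epistasis probability studied here can be estimated by direct simulation: under the additivity assumption the double mutant's phenotype is z0 + a + b, and a pair interacts sign-epistatically when a mutation's fitness effect changes sign between backgrounds. A Monte-Carlo sketch with Gaussian fitness (the wild-type distance `d` and mutation scale `sigma` are illustrative parameters, not values from the paper):

```python
import numpy as np

def fitness(z):
    """Gaussian fitness peak at the phenotypic origin."""
    return np.exp(-0.5 * np.sum(z * z, axis=-1))

def sign_epistasis_prob(n, d, sigma=0.5, trials=20000, seed=1):
    """Monte-Carlo estimate of the probability that two random mutations
    in Fisher's geometric model interact sign-epistatically.

    The wild type sits at distance d from the optimum in n phenotypic
    dimensions; mutation vectors are i.i.d. Gaussian with scale sigma.
    Returns (P[sign epistasis], P[reciprocal sign epistasis])."""
    rng = np.random.default_rng(seed)
    z0 = np.zeros(n)
    z0[0] = d
    a = sigma * rng.standard_normal((trials, n))
    b = sigma * rng.standard_normal((trials, n))
    w00 = fitness(z0)
    wa, wb, wab = fitness(z0 + a), fitness(z0 + b), fitness(z0 + a + b)
    # does the sign of a's effect differ between backgrounds (and b's)?
    flip_a = (wa > w00) != (wab > wb)
    flip_b = (wb > w00) != (wab > wa)
    return float(np.mean(flip_a | flip_b)), float(np.mean(flip_a & flip_b))
```

Scanning `n` and `d` with this estimator is one way to reproduce qualitatively the dependences the abstract reports (decrease with phenotypic dimension, non-monotonic dependence on distance from the optimum).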
GPU-accelerated depth map generation for X-ray simulations of complex CAD geometries
NASA Astrophysics Data System (ADS)
Grandin, Robert J.; Young, Gavin; Holland, Stephen D.; Krishnamurthy, Adarsh
2018-04-01
Interactive x-ray simulations of complex computer-aided design (CAD) models can provide valuable insights for better interpretation of defect signatures such as porosity from x-ray CT images. Generating the depth map along a particular direction for the given CAD geometry is the most compute-intensive step in x-ray simulations. We have developed a GPU-accelerated method for real-time generation of depth maps of complex CAD geometries. We preprocess complex components designed using commercial CAD systems using a custom CAD module and convert them into a fine user-defined surface tessellation. Our CAD module can be used by different simulators as well as handle complex geometries, including those that arise from complex castings and composite structures. We then make use of a parallel algorithm that runs on a graphics processing unit (GPU) to convert the finely-tessellated CAD model to a voxelized representation. The voxelized representation can enable heterogeneous modeling of the volume enclosed by the CAD model by assigning heterogeneous material properties in specific regions. The depth maps are generated from this voxelized representation with the help of a GPU-accelerated ray-casting algorithm. The GPU-accelerated ray-casting method enables interactive (>60 frames per second) generation of the depth maps of complex CAD geometries. This enables arbitrary rotation and slicing of the CAD model, leading to better interpretation of the x-ray images by the user. In addition, the depth maps can be used to aid directly in CT reconstruction algorithms.
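The final ray-casting step can be sketched on the CPU, assuming the CAD model has already been voxelized into a boolean occupancy grid: each ray records the index of the first occupied voxel it meets. The GPU version parallelizes the same per-ray first-hit search, one thread per ray; here a vectorized argmax plays that role (an illustrative sketch, not the paper's implementation):

```python
import numpy as np

def depth_map(voxels, axis=2):
    """First-hit depth map of a boolean voxel grid.

    For every axis-aligned ray cast along `axis`, returns the index of
    the first occupied voxel, or np.inf where the ray misses the part.
    np.argmax over a boolean axis yields the first True index."""
    hit = voxels.any(axis=axis)                     # which rays hit anything
    first = np.argmax(voxels, axis=axis).astype(float)
    first[~hit] = np.inf                            # rays that miss entirely
    return first
```

Rotating the model then amounts to re-voxelizing (or resampling the grid) and recomputing the map, which is the per-frame work the GPU implementation makes interactive.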
A Spatially Continuous Model of Carbohydrate Digestion and Transport Processes in the Colon
Moorthy, Arun S.; Brooks, Stephen P. J.; Kalmokoff, Martin; Eberl, Hermann J.
2015-01-01
A spatially continuous mathematical model of transport processes, anaerobic digestion and microbial complexity as would be expected in the human colon is presented. The model is a system of first-order partial differential equations with a context-determined number of dependent variables, and stiff, non-linear source terms. Numerical simulation of the model is used to elucidate information about the colon-microbiota complex. It is found that the composition of materials on outflow of the model does not accurately describe the composition of material at other model locations, and inferences using outflow data vary according to the model reactor representation. Additionally, increased microbial complexity allows the total microbial community to withstand major system perturbations in diet and community structure. However, the distribution of strains and functional groups within the microbial community can be modified depending on perturbation length and microbial kinetic parameters. Preliminary model extensions and potential investigative opportunities using the computational model are discussed. PMID:26680208
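A model of this type couples first-order transport to reaction terms. The transport part alone can be sketched with a first-order explicit upwind step for du/dt + v du/dx = f(u) (illustrative only: the paper's model has many coupled variables and stiff kinetics, which in practice call for implicit or operator-splitting solvers; the inflow boundary value is an assumption):

```python
import numpy as np

def advect_react(u, v, dt, dx, f, inflow=0.0):
    """One explicit upwind step for du/dt + v du/dx = f(u), with v > 0
    and a fixed inflow value at the left boundary. Stable for the pure
    advection part when the CFL number v*dt/dx <= 1."""
    upstream = np.concatenate(([inflow], u[:-1]))   # value advected in from the left
    return u + dt * (-v * (u - upstream) / dx + f(u))
```

For example, with v = 1, dx = 1, dt = 0.5 and no reaction, a unit pulse in the first cell moves half a cell per step, smearing as first-order upwind schemes do.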
Sensitivity of Precipitation in Coupled Land-Atmosphere Models
NASA Technical Reports Server (NTRS)
Neelin, David; Zeng, N.; Suarez, M.; Koster, R.
2004-01-01
The project objective was to understand mechanisms by which atmosphere-land-ocean processes impact precipitation in the mean climate and interannual variations, focusing on tropical and subtropical regions. A combination of modeling tools was used: an intermediate complexity land-atmosphere model developed at UCLA known as the QTCM and the NASA Seasonal-to-Interannual Prediction Program general circulation model (NSIPP GCM). The intermediate complexity model was used to develop hypotheses regarding the physical mechanisms and theory for the interplay of large-scale dynamics, convective heating, cloud radiative effects and land surface feedbacks. The theoretical developments were to be confronted with diagnostics from the more complex GCM to validate or modify the theory.
NASA Astrophysics Data System (ADS)
Gusev, Anatoly; Diansky, Nikolay; Zalesny, Vladimir
2010-05-01
An original program complex (software package) is proposed for the ocean circulation sigma-model developed at the Institute of Numerical Mathematics (INM), Russian Academy of Sciences (RAS). The complex can be used in various curvilinear orthogonal coordinate systems. In addition to the ocean circulation model, the complex contains a sea-ice dynamics and thermodynamics model, as well as an original system for implementing atmospheric forcing on the basis of both prescribed meteorological data and atmospheric model results. The complex can be used as the oceanic block of an Earth climate model, as well as for solving scientific and practical problems concerning the World Ocean and its separate oceans and seas. The developed program complex can be used effectively on parallel shared-memory computational systems and on contemporary personal computers. On the basis of the proposed complex, an ocean general circulation model (OGCM) was developed. The model is realized in a curvilinear orthogonal coordinate system obtained by conformal transformation of the standard geographical grid, which allowed us to locate the system singularities outside the integration domain. The horizontal resolution of the OGCM is 1 degree in longitude and 0.5 degree in latitude, and it has 40 non-uniform sigma-levels in depth. The model was integrated for 100 years starting from the Levitus January climatology, using a realistic atmospheric annual cycle calculated from the CORE datasets. The experimental results showed that the model adequately reproduces the basic characteristics of large-scale World Ocean dynamics, in good agreement with both observational data and the results of the best climatic OGCMs. This OGCM is used as the oceanic component of the new version of the climate system model (CSM) developed at INM RAS.
The latter is now ready for carrying out new numerical experiments on modelling climate and its change according to IPCC (Intergovernmental Panel on Climate Change) scenarios within the scope of CMIP-5 (Coupled Model Intercomparison Project). On the basis of the proposed complex, an eddy-resolving Pacific Ocean circulation model was also realized. The integration domain covers the Pacific from the Equator to the Bering Strait. The model's horizontal resolution is 0.125 degree, and it has 20 non-uniform sigma-levels in depth. The model adequately reproduces the large-scale structure of the circulation and its variability: Kuroshio meandering, ocean synoptic eddies, frontal zones, etc. The high variability of the Kuroshio is shown. The distribution of a contaminant reportedly discharged near Petropavlovsk-Kamchatsky was simulated. The results demonstrate the structure of the contaminant distribution and provide an understanding of the processes forming hydrological fields in the North-West Pacific.
Complexity growth in minimal massive 3D gravity
NASA Astrophysics Data System (ADS)
Qaemmaqami, Mohammad M.
2018-01-01
We study complexity growth using the "complexity = action" (CA) proposal in the minimal massive 3D gravity (MMG) model, which was proposed to resolve the bulk-boundary clash problem of topologically massive gravity (TMG). We observe that the rate of complexity growth for the Banados-Teitelboim-Zanelli (BTZ) black hole saturates the proposed bound set by the physical mass of the BTZ black hole in the MMG model when the angular momentum parameter and the inner horizon of the black hole go to zero.
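For reference, the CA proposal and the conjectured growth bound discussed in this abstract are usually written in the standard forms

```latex
\mathcal{C} \;=\; \frac{I_{\mathrm{WDW}}}{\pi\hbar},
\qquad
\frac{d\mathcal{C}}{dt} \;\le\; \frac{2M}{\pi\hbar},
```

where $I_{\mathrm{WDW}}$ is the on-shell gravitational action evaluated on the Wheeler-DeWitt patch and $M$ is the mass of the state. Saturation of the bound as the angular momentum and inner horizon vanish is the behavior the abstract reports for the BTZ black hole in MMG.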
[Analysis of a three-dimensional finite element model of atlas and axis complex fracture].
Tang, X M; Liu, C; Huang, K; Zhu, G T; Sun, H L; Dai, J; Tian, J W
2018-05-22
Objective: To explore the clinical application of a three-dimensional finite element model of atlantoaxial complex fracture. Methods: A three-dimensional finite element model of the cervical spine (FEM/intact) was established with the software Abaqus 6.12. On the basis of this model, three-dimensional finite element models of four types of atlantoaxial complex fracture were established: C(1) fracture (Jefferson) + C(2) type Ⅱ odontoid fracture, Jefferson + C(2) type Ⅲ odontoid fracture, Jefferson + C(2) Hangman fracture, and Jefferson + stable C(2) fracture (FEM/fracture). The range of motion under flexion, extension, lateral bending and axial rotation was measured and compared with the intact cervical spine model. Results: The three-dimensional finite element models of the four types of atlantoaxial complex fracture had similar geometry and profile. The range of motion (ROM) of the different segments changed to different degrees. Compared with the normal model, the ROM of C(0/1) and C(1/2) in the C(1) combined type Ⅱ odontoid fracture model in flexion/extension, lateral bending and rotation increased by 57.45%, 29.34%, 48.09% and 95.49%, 88.52%, 36.71%, respectively. The ROM of C(0/1) and C(1/2) in the C(1) combined type Ⅲ odontoid fracture model in flexion/extension, lateral bending and rotation increased by 47.01%, 27.30%, 45.31% and 90.38%, 27.30%, 30.0%, respectively. The ROM of C(0/1) and C(1/2) in the C(1) combined Hangman fracture model in flexion/extension, lateral bending and rotation increased by 32.68%, 79.34%, 77.62% and 60.53%, 81.20%, 21.48%, respectively. The ROM of C(0/1) and C(1/2) in the C(1) combined axis fracture model in flexion/extension, lateral bending and rotation increased by 15.00%, 29.30%, 8.47% and 37.87%, 75.57%, 8.30%, respectively. Conclusions: The three-dimensional finite element model can be used to simulate the biomechanics of atlantoaxial complex fracture. The ROM of the atlantoaxial complex fracture models is larger than that of the normal model, which indicates that surgical treatment should be performed.
[Design of Complex Cavity Structure in Air Route System of Automated Peritoneal Dialysis Machine].
Quan, Xiaoliang
2017-07-30
This paper addresses problems of the Automated Peritoneal Dialysis (APD) machine, such as the lack of technical guidance for the structural design of its complex cavities. To study the flow characteristics of this special structure, the ANSYS CFX software is applied with the k-ε turbulence model as the theoretical basis in fluid mechanics. After the complex structure model is imported into the ANSYS CFX module, a numerical simulation of the internal flow field can be obtained. The simulation presents the distribution of the flow field inside the complex cavities and the flow characteristic parameters, providing an important design reference for the APD.
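The k-ε closure referenced here relates the eddy viscosity to the turbulence quantities in the standard way (this is the textbook form of the model, not a detail taken from the paper):

```latex
\mu_t \;=\; \rho\, C_\mu \frac{k^2}{\varepsilon},
\qquad C_\mu \approx 0.09,
```

where $k$ is the turbulence kinetic energy and $\varepsilon$ its dissipation rate; transport equations for $k$ and $\varepsilon$ close the Reynolds-averaged equations that CFX solves.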
Primary Care Physician Insights Into a Typology of the Complex Patient in Primary Care
Loeb, Danielle F.; Binswanger, Ingrid A.; Candrian, Carey; Bayliss, Elizabeth A.
2015-01-01
PURPOSE Primary care physicians play unique roles caring for complex patients, often acting as the hub for their care and coordinating care among specialists. To inform the clinical application of new models of care for complex patients, we sought to understand how these physicians conceptualize patient complexity and to develop a corresponding typology. METHODS We conducted qualitative in-depth interviews with internal medicine primary care physicians from 5 clinics associated with a university hospital and a community health hospital. We used systematic nonprobabilistic sampling to achieve an even distribution of sex, years in practice, and type of practice. The interviews were analyzed using a team-based participatory general inductive approach. RESULTS The 15 physicians in this study endorsed a multidimensional concept of patient complexity. The physicians perceived patients to be complex if they had an exacerbating factor—a medical illness, mental illness, socioeconomic challenge, or behavior or trait (or some combination thereof)—that complicated care for chronic medical illnesses. CONCLUSION This perspective of primary care physicians caring for complex patients can help refine models of complexity to design interventions or models of care that improve outcomes for these patients. PMID:26371266
Scope Complexity Options Risks Excursions (SCORE) Version 3.0 Mathematical Description.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gearhart, Jared Lee; Samberson, Jonell Nicole; Shettigar, Subhasini
The purpose of the Scope, Complexity, Options, Risks, Excursions (SCORE) model is to estimate the relative complexity of design variants of future warhead options. The results of this model allow those considering these options to understand the complexity tradeoffs between proposed warhead options. The core idea of SCORE is to divide a warhead option into a well-defined set of scope elements and then estimate the complexity of each scope element against a well-understood reference system. The uncertainty associated with estimates can also be captured. A weighted summation of the relative complexity of each scope element is used to determine the total complexity of the proposed warhead option or portions of the warhead option (i.e., a National Work Breakdown Structure code). The SCORE analysis process is a growing multi-organizational Nuclear Security Enterprise (NSE) effort, under the management of the NA-12 led Enterprise Modeling and Analysis Consortium (EMAC), that has provided the data elicitation, integration, and computation needed to support the out-year Life Extension Program (LEP) cost estimates included in the Stockpile Stewardship Management Plan (SSMP).
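The weighted summation at SCORE's core can be sketched in a few lines. The element names, weights, and low/nominal/high complexity estimates below are entirely hypothetical, not values from the actual SCORE model; the point is only the shape of the computation (per-element estimates with uncertainty, rolled up by weights).

```python
# Hypothetical scope elements: name -> (weight, low, nominal, high),
# where low/nominal/high are relative complexities against a reference
# system. All numbers are illustrative placeholders.
scope_elements = {
    "element_A": (0.5, 0.8, 1.2, 1.5),
    "element_B": (0.3, 1.0, 1.0, 1.1),
    "element_C": (0.2, 0.4, 0.6, 0.9),
}

def total_complexity(elements, which="nominal"):
    """Weighted sum of per-element relative complexity estimates."""
    idx = {"low": 1, "nominal": 2, "high": 3}[which]
    return sum(v[0] * v[idx] for v in elements.values())

nominal = total_complexity(scope_elements)          # 0.5*1.2 + 0.3*1.0 + 0.2*0.6
low = total_complexity(scope_elements, "low")       # lower uncertainty bound
high = total_complexity(scope_elements, "high")     # upper uncertainty bound
```

Restricting the dictionary to a subset of elements gives the complexity of a portion of the option (e.g., one Work Breakdown Structure code), as described above.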
Schlosser, Florian; Moskaleva, Lyudmila V; Kremleva, Alena; Krüger, Sven; Rösch, Notker
2010-06-28
With a relativistic all-electron density functional method, we studied two anionic uranium(VI) carbonate complexes that are important for uranium speciation and transport in aqueous medium, the mononuclear tris(carbonato) complex [UO(2)(CO(3))(3)](4-) and the trinuclear hexa(carbonato) complex [(UO(2))(3)(CO(3))(6)](6-). Focusing on the structures in solution, we applied for the first time a full solvation treatment to these complexes. We approximated short-range effects by explicit aqua ligands and described long-range electrostatic interactions via a polarizable continuum model. Structures and vibrational frequencies of "gas-phase" models with explicit aqua ligands agree best with experiment. This is accidental because the continuum model of the solvent to some extent overestimates the electrostatic interactions of these highly anionic systems with the bulk solvent. The calculated free energy change when three mononuclear complexes associate into the trinuclear complex agrees well with experiment and supports the formation of the latter species upon acidification of a uranyl carbonate solution.
Unsilencing Critical Conversations in Social-Studies Teacher Education Using Agent-Based Modeling
ERIC Educational Resources Information Center
Hostetler, Andrew; Sengupta, Pratim; Hollett, Ty
2018-01-01
In this article, we argue that when complex sociopolitical issues such as ethnocentrism and racial segregation are represented as complex, emergent systems using agent-based computational models (in short agent-based models or ABMs), discourse about these representations can disrupt social studies teacher candidates' dispositions of teaching…
ERIC Educational Resources Information Center
Kartal, Ozgul; Dunya, Beyza Aksu; Diefes-Dux, Heidi A.; Zawojewski, Judith S.
2016-01-01
Critical to many science, technology, engineering, and mathematics (STEM) career paths is mathematical modeling--specifically, the creation and adaptation of mathematical models to solve problems in complex settings. Conventional standardized measures of mathematics achievement are not structured to directly assess this type of mathematical…
From Complex to Simple: Interdisciplinary Stochastic Models
ERIC Educational Resources Information Center
Mazilu, D. A.; Zamora, G.; Mazilu, I.
2012-01-01
We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…
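The one-dimensional random-walk theory invoked here has a standard, easily checked consequence: for a symmetric ±1 walk, the mean squared displacement after N steps equals N. A minimal simulation (illustrative, not the authors' code):

```python
import numpy as np

# Symmetric 1D random walk: each of many walkers takes n_steps of +/-1.
# Random-walk theory predicts mean squared displacement = n_steps.
rng = np.random.default_rng(0)
n_walkers, n_steps = 20000, 100
steps = rng.choice([-1, 1], size=(n_walkers, n_steps))
positions = steps.sum(axis=1)                    # final displacement per walker
msd = np.mean(positions.astype(float) ** 2)      # should be close to n_steps
```

This diffusive scaling (displacement ~ sqrt(N)) is the qualitative behavior the stochastic models above exploit.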
DOT National Transportation Integrated Search
2008-01-01
Computer simulations are often used in aviation studies. These simulation tools may require complex, high-fidelity aircraft models. Since many of the flight models used are third-party developed products, independent validation is desired prior to im...
Complex Instruction: A Model for Reaching Up--and Out
ERIC Educational Resources Information Center
Tomlinson, Carol Ann
2018-01-01
Complex Instruction is a multifaceted instructional model designed to provide highly challenging learning opportunities for students in heterogeneous classrooms. The model provides a rationale for and philosophy of creating equity of access to excellent curriculum and instruction for a broad range of learners, guidance for preparing students for…
A scalable plant-resolving radiative transfer model based on optimized GPU ray tracing
USDA-ARS?s Scientific Manuscript database
A new model for radiative transfer in participating media and its application to complex plant canopies is presented. The goal was to be able to efficiently solve complex canopy-scale radiative transfer problems while also representing sub-plant heterogeneity. In the model, individual leaf surfaces ...
MASS BALANCE MODELLING OF PCBS IN THE FOX RIVER/GREEN BAY COMPLEX
The USEPA Office of Research and Development developed and applies a multimedia, mass balance modeling approach to the Fox River/Green Bay complex to aid managers with remedial decision-making. The suite of models were applied to PCBs due to the long history of contamination and ...
NASA Astrophysics Data System (ADS)
Wagenbrenner, N. S.; Forthofer, J.; Butler, B.; Shannon, K.
2014-12-01
Near-surface wind predictions are important for a number of applications, including transport and dispersion, wind energy forecasting, and wildfire behavior. Researchers and forecasters would benefit from a wind model that could be readily applied to complex terrain for use in these various disciplines. Unfortunately, near-surface winds in complex terrain are not handled well by traditional modeling approaches. Numerical weather prediction models employ coarse horizontal resolutions which do not adequately resolve sub-grid terrain features important to the surface flow. Computational fluid dynamics (CFD) models are increasingly being applied to simulate atmospheric boundary layer (ABL) flows, especially in wind energy applications; however, the standard functionality provided in commercial CFD models is not suitable for ABL flows. Appropriate CFD modeling in the ABL requires modification of empirically-derived wall function parameters and boundary conditions to avoid erroneous streamwise gradients due to inconsistencies between inlet profiles and specified boundary conditions. This work presents a new version of a near-surface wind model for complex terrain called WindNinja. The new version of WindNinja offers two options for flow simulations: 1) the native, fast-running mass-consistent method available in previous model versions and 2) a CFD approach based on the OpenFOAM modeling framework and optimized for ABL flows. The model is described and evaluations of predictions with surface wind data collected from two recent field campaigns in complex terrain are presented. A comparison of predictions from the native mass-consistent method and the new CFD method is also provided.
Dunham, Kylee; Grand, James B.
2016-01-01
We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
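The sequential importance sampling/resampling (SISR) machinery described above can be illustrated with a minimal bootstrap particle filter. The toy model below is hypothetical, not the authors' simulation design: log-abundance follows a random walk and counts are Poisson-observed, with made-up noise parameters.

```python
import numpy as np

# Bootstrap particle filter (SISR) for a toy state-space population model:
# log-abundance random walk, Poisson count observations. Illustrative only.
rng = np.random.default_rng(1)
T, n_particles = 25, 2000
sigma_proc = 0.05

# simulate a "true" population trajectory and the observed count data
true_log_n = np.cumsum(rng.normal(0, sigma_proc, T)) + np.log(500)
counts = rng.poisson(np.exp(true_log_n))

particles = rng.normal(np.log(500), 0.5, n_particles)
estimates = []
for t in range(T):
    particles = particles + rng.normal(0, sigma_proc, n_particles)  # propagate
    lam = np.exp(particles)
    logw = counts[t] * particles - lam       # Poisson log-likelihood (up to const)
    w = np.exp(logw - logw.max())
    w /= w.sum()                             # importance weights
    estimates.append(np.sum(w * np.exp(particles)))   # filtered abundance estimate
    idx = rng.choice(n_particles, n_particles, p=w)   # multinomial resampling
    particles = particles[idx]

rel_err = np.abs(np.array(estimates) - np.exp(true_log_n)) / np.exp(true_log_n)
```

The particle-impoverishment problem the abstract addresses with kernel smoothing shows up here too: after resampling, particle diversity is renewed only by the process noise.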
Persistent model order reduction for complex dynamical systems using smooth orthogonal decomposition
NASA Astrophysics Data System (ADS)
Ilbeigi, Shahab; Chelidze, David
2017-11-01
Full-scale complex dynamic models are not effective for parametric studies due to the inherent constraints on available computational power and storage resources. A persistent reduced order model (ROM) that is robust, stable, and provides high-fidelity simulations for a relatively wide range of parameters and operating conditions can provide a solution to this problem. The fidelity of a new framework for persistent model order reduction of large and complex dynamical systems is investigated. The framework is validated using several numerical examples including a large linear system and two complex nonlinear systems with material and geometrical nonlinearities. While the framework is used for identifying the robust subspaces obtained from both proper and smooth orthogonal decompositions (POD and SOD, respectively), the results show that SOD outperforms POD in terms of stability, accuracy, and robustness.
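Proper orthogonal decomposition, one of the two subspace-identification methods compared above, reduces in the discrete case to an SVD of a snapshot matrix. The sketch below demonstrates POD only (not the paper's smooth orthogonal decomposition), on synthetic low-rank snapshot data:

```python
import numpy as np

# POD via SVD of a snapshot matrix. The snapshots are synthetic and
# exactly rank 3, so a 3-mode basis reconstructs them almost perfectly.
rng = np.random.default_rng(0)
n_dof, n_snap, rank = 200, 60, 3

x = np.linspace(0, 1, n_dof)
modes = np.stack([np.sin((k + 1) * np.pi * x) for k in range(rank)], axis=1)
snapshots = modes @ rng.normal(size=(rank, n_snap))   # columns = snapshots

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :rank]                      # truncated (reduced order) basis
reduced = basis.T @ snapshots            # coordinates in the reduced subspace
reconstructed = basis @ reduced
rel_err = np.linalg.norm(snapshots - reconstructed) / np.linalg.norm(snapshots)
```

In a full ROM the governing equations would be Galerkin-projected onto `basis`; the robustness question studied in the paper is how well one such basis persists across parameter changes.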
McGovern, Eimear; Kelleher, Eoin; Snow, Aisling; Walsh, Kevin; Gadallah, Bassem; Kutty, Shelby; Redmond, John M; McMahon, Colin J
2017-09-01
In recent years, three-dimensional printing has demonstrated reliable reproducibility of several organs including hearts with complex congenital cardiac anomalies. This represents the next step in advanced image processing and can be used to plan surgical repair. In this study, we describe three children with complex univentricular hearts and abnormal systemic or pulmonary venous drainage, in whom three-dimensional printed models based on CT data assisted with preoperative planning. For two children, after group discussion and examination of the models, a decision was made not to proceed with surgery. We extend the current clinical experience with three-dimensional printed modelling and discuss the benefits of such models in the setting of managing complex surgical problems in children with univentricular circulation and abnormal systemic or pulmonary venous drainage.
A technique for evaluating black-footed ferret habitat
Biggins, Dean E.; Miller, Brian J.; Hanebury, Louis R.; Oakleaf, Bob; Farmer, Adrian H.; Crete, Ron; Dood, Arnold
1993-01-01
In this paper, we provide a model and step-by-step procedures for rating a prairie dog (Cynomys sp.) complex for the reintroduction of black-footed ferrets (Mustela nigripes). An important factor in the model is an estimate of the number of black-footed ferret families a prairie dog complex can support for a year; thus, the procedures prescribe how to estimate the size of a prairie dog complex and the density of prairie dogs. Other attributes of the model are qualitative: arrangement of colonies, potential for plague and canine distemper, potential for prairie dog expansion, abundance of predators, future resource conflicts and ownership stability, and public and landowner attitudes about prairie dogs and black-footed ferrets. Because of the qualitative attributes in the model, a team approach is recommended for ranking complexes of prairie dogs for black-footed ferret reintroduction.
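The quantitative core of such a rating procedure can be sketched as follows. Every number and weight below is a placeholder, not a value from Biggins et al.: the energetic constant (prairie dogs consumed per ferret family per year), the qualitative attribute scores, and their weights are all hypothetical.

```python
# Hypothetical sketch of a complex-rating computation: a quantitative
# estimate of supportable ferret families, adjusted by weighted
# qualitative attribute scores. All constants are illustrative.
def ferret_families(complex_area_ha, dogs_per_ha, dogs_per_family_year=750.0):
    """Families supportable for a year from complex size and prey density."""
    return complex_area_ha * dogs_per_ha / dogs_per_family_year

def rated_score(families, qualitative_scores, weights):
    """Discount the quantitative estimate by qualitative attribute ratings
    (e.g., colony arrangement, plague potential, landowner attitudes)."""
    q = sum(w * s for w, s in zip(weights, qualitative_scores))
    return families * q

families = ferret_families(4000.0, 10.0)          # 4000 ha at 10 dogs/ha
score = rated_score(families, [0.9, 0.8, 1.0], [0.5, 0.3, 0.2])
```

As the abstract notes, the qualitative inputs are judgment calls, which is why a team approach to scoring is recommended.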
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsao, Jeffrey Y.; Trucano, Timothy G.; Kleban, Stephen D.
This report contains the written footprint of a Sandia-hosted workshop held in Albuquerque, New Mexico, June 22-23, 2016 on “Complex Systems Models and Their Applications: Towards a New Science of Verification, Validation and Uncertainty Quantification,” as well as of pre-work that fed into the workshop. The workshop’s intent was to explore and begin articulating research opportunities at the intersection between two important Sandia communities: the complex systems (CS) modeling community, and the verification, validation and uncertainty quantification (VVUQ) community. The overarching research opportunity (and challenge) that we ultimately hope to address is: how can we quantify the credibility of knowledge gained from complex systems models, knowledge that is often incomplete and interim, but will nonetheless be used, sometimes in real-time, by decision makers?
ERIC Educational Resources Information Center
Brown, Callum
2008-01-01
Understanding the dynamic behaviour of organisations is challenging and this study uses a model of complex adaptive systems as a generative metaphor to address this challenge. The research question addressed is: How might a conceptual model of complex adaptive systems be used to assist in understanding the dynamic nature of organisations? Using an…
Cognitive Task Complexity and Written Output in Italian and French as a Foreign Language
ERIC Educational Resources Information Center
Kuiken, Folkert; Vedder, Ineke
2008-01-01
This paper reports on a study on the relationship between cognitive task complexity and linguistic performance in L2 writing. In the study, two models proposed to explain the influence of cognitive task complexity on linguistic performance in L2 are tested and compared: Skehan and Foster's Limited Attentional Capacity Model (Skehan, 1998; Skehan…
NASA Astrophysics Data System (ADS)
Wray, Timothy J.
Computational fluid dynamics (CFD) is routinely used in performance prediction and design of aircraft, turbomachinery, automobiles, and in many other industrial applications. Despite its wide range of use, deficiencies in its prediction accuracy still exist. One critical weakness is the accurate simulation of complex turbulent flows using the Reynolds-Averaged Navier-Stokes equations in conjunction with a turbulence model. The goal of this research has been to develop an eddy-viscosity type turbulence model to increase the accuracy of flow simulations for mildly separated flows, flows with rotation and curvature effects, and flows with surface roughness. It is accomplished by developing a new zonal one-equation turbulence model which relies heavily on the flow physics; it is now known in the literature as the Wray-Agarwal one-equation turbulence model. The effectiveness of the new model is demonstrated by comparing its results with those obtained by the industry-standard one-equation Spalart-Allmaras model and the two-equation Shear Stress Transport k-ω model, as well as with experimental data. Results for subsonic, transonic, and supersonic flows in and about complex geometries are presented. It is demonstrated that the Wray-Agarwal model can provide industry and CFD researchers an accurate, efficient, and reliable turbulence model for the computation of a large class of complex turbulent flows.
Unexpected Results are Usually Wrong, but Often Interesting
NASA Astrophysics Data System (ADS)
Huber, M.
2014-12-01
In climate modeling, an unexpected result is usually wrong, arising from some sort of mistake. Despite the fact that we all bemoan uncertainty in climate, the field is underlain by a robust, successful body of theory and any properly conducted modeling experiment is posed and conducted within that context. Consequently, if results from a complex climate model disagree with theory or from expectations from simpler models, much skepticism is in order. But, this exposes the fundamental tension of using complex, sophisticated models. If simple models and theory were perfect there would be no reason for complex models--the entire point of sophisticated models is to see if unexpected phenomena arise as emergent properties of the system. In this talk, I will step through some paleoclimate examples, drawn from my own work, of unexpected results that emerge from complex climate models arising from mistakes of two kinds. The first kind of mistake, is what I call a 'smart mistake'; it is an intentional incorporation of assumptions, boundary conditions, or physics that is in violation of theoretical or observational constraints. The second mistake, a 'dumb mistake', is just that, an unintentional violation. Analysis of such mistaken simulations provides some potentially novel and certainly interesting insights into what is possible and right in paleoclimate modeling by forcing the reexamination of well-held assumptions and theories.
Dubský, Pavel; Müllerová, Ludmila; Dvořák, Martin; Gaš, Bohuslav
2015-03-06
The model of electromigration of a multivalent weak acidic/basic/amphoteric analyte that undergoes complexation with a mixture of selectors is introduced. The model provides an extension of the series of models starting with the single-selector model without dissociation by Wren and Rowe in 1992, continuing with the monovalent weak analyte/single-selector model by Rawjee, Williams and Vigh in 1993 and that by Lelièvre in 1994, and ending with the multi-selector overall model without dissociation developed by our group in 2008. The new multivalent analyte multi-selector model shows that the effective mobility of the analyte obeys the original Wren and Rowe formula. The overall complexation constant, the mobility of the free analyte and the mobility of the complex can be measured and used in a standard way. The mathematical expressions for the overall parameters are provided. We further demonstrate mathematically that the pH-dependent parameters for weak analytes can be simply used as an input into the multi-selector overall model and, in reverse, the multi-selector overall parameters can serve as an input into the pH-dependent models for the weak analytes. These findings can greatly simplify rational method development in analytical electrophoresis, specifically enantioseparations. Copyright © 2015 Elsevier B.V. All rights reserved.
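For a single selector $C$ at concentration $[C]$, the Wren and Rowe expression that the extended models reduce to takes the well-known form

```latex
\mu_{\mathrm{eff}} \;=\; \frac{\mu_{A} + \mu_{AC}\,K[C]}{1 + K[C]},
```

where $\mu_{A}$ is the mobility of the free analyte, $\mu_{AC}$ the mobility of the analyte-selector complex, and $K$ the complexation constant. In the multivalent, multi-selector model described above, these three quantities are replaced by the corresponding overall parameters.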
NASA Astrophysics Data System (ADS)
Huang, Xingguo; Sun, Hui
2018-05-01
The Gaussian beam method is an important complex geometrical-optics technique for modeling seismic wave propagation and diffraction in the subsurface with complex geological structure. Current methods for Gaussian beam modeling rely on dynamic ray tracing and evanescent wave tracking. However, the dynamic ray tracing method is based on the paraxial ray approximation, and the evanescent wave tracking method cannot describe strongly evanescent fields. This leads to inaccuracy of the computed wave fields in regions with strongly inhomogeneous media. To address this problem, we compute Gaussian beam wave fields using the complex phase obtained by directly solving the complex eikonal equation. In this method, the fast marching method, which is widely used for phase calculation, is combined with the Gauss-Newton optimization algorithm to obtain the complex phase at the regular grid points. The main theoretical challenge in combining this method with Gaussian beam modeling is to address the irregular boundary near the curved central ray. To cope with this challenge, we present a non-uniform finite difference operator and a modified fast marching method. The numerical results confirm the proposed approach.
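The complex eikonal equation referred to here is the standard eikonal equation continued to a complex-valued traveltime (this is the conventional formulation, stated for context):

```latex
\nabla\tau \cdot \nabla\tau \;=\; \frac{1}{v^{2}(\mathbf{x})},
\qquad \tau = \tau_{R} + i\,\tau_{I},
```

so that a high-frequency field $u \approx A\, e^{\,i\omega\tau} = A\, e^{\,i\omega\tau_{R}}\, e^{-\omega\tau_{I}}$ acquires a Gaussian amplitude decay away from the central ray through the imaginary part $\tau_{I}$, which is what makes solving for the complex phase directly an alternative to paraxial dynamic ray tracing.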
The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.
White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K
2016-12-01
We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model.
Human systems dynamics: Toward a computational model
NASA Astrophysics Data System (ADS)
Eoyang, Glenda H.
2012-09-01
A robust and reliable computational model of complex human systems dynamics could support advancements in theory and practice for social systems at all levels, from intrapersonal experience to global politics and economics. Models of human interactions have evolved from traditional, Newtonian systems assumptions, which served a variety of practical and theoretical needs of the past. Another class of models has been inspired and informed by models and methods from nonlinear dynamics, chaos, and complexity science. None of the existing models, however, is able to represent the open, high dimension, and nonlinear self-organizing dynamics of social systems. An effective model will represent interactions at multiple levels to generate emergent patterns of social and political life of individuals and groups. Existing models and modeling methods are considered and assessed against characteristic pattern-forming processes in observed and experienced phenomena of human systems. A conceptual model, CDE Model, based on the conditions for self-organizing in human systems, is explored as an alternative to existing models and methods. While the new model overcomes the limitations of previous models, it also provides an explanatory base and foundation for prospective analysis to inform real-time meaning making and action taking in response to complex conditions in the real world. An invitation is extended to readers to engage in developing a computational model that incorporates the assumptions, meta-variables, and relationships of this open, high dimension, and nonlinear conceptual model of the complex dynamics of human systems.
Using New Models to Analyze Complex Regularities of the World: Commentary on Musso et al. (2013)
ERIC Educational Resources Information Center
Nokelainen, Petri; Silander, Tomi
2014-01-01
This commentary on the recent article by Musso et al. (2013) discusses issues related to model fitting, comparison of the classification accuracy of generative and discriminative models, and two (or more) cultures of data modeling. We start by questioning the extremely high classification accuracy with empirical data from a complex domain. There is…
Conceptual and Developmental Analysis of Mental Models: An Example with Complex Change Problems.
ERIC Educational Resources Information Center
Poirier, Louise
Defining better implicit models of children's actions in a series of situations is of paramount importance to understanding how knowledge is constructed. The objective of this study was to analyze the implicit mental models used by children in complex change problems to understand the stability of the models and their evolution with the child's…
A toolbox for discrete modelling of cell signalling dynamics.
Paterson, Yasmin Z; Shorthouse, David; Pleijzier, Markus W; Piterman, Nir; Bendtsen, Claus; Hall, Benjamin A; Fisher, Jasmin
2018-06-18
In an age where the volume of data regarding biological systems exceeds our ability to analyse it, many researchers are looking towards systems biology and computational modelling to help unravel the complexities of gene and protein regulatory networks. In particular, the use of discrete modelling allows generation of signalling networks in the absence of full quantitative descriptions of systems, which are necessary for ordinary differential equation (ODE) models. In order to make such techniques more accessible to mainstream researchers, tools such as the BioModelAnalyzer (BMA) have been developed to provide a user-friendly graphical interface for discrete modelling of biological systems. Here we use the BMA to build a library of discrete target functions of known canonical molecular interactions, translated from ordinary differential equations (ODEs). We then show that these BMA target functions can be used to reconstruct complex networks, which can correctly predict many known genetic perturbations. This new library supports the accessibility ethos behind the creation of BMA, providing a toolbox for the construction of complex cell signalling models without the need for extensive experience in computer programming or mathematical modelling, and allows for construction and simulation of complex biological systems with only small amounts of quantitative data.
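The discrete-update style the BMA supports can be sketched in a few lines: variables take small integer levels and, at each synchronous tick, move one level toward a target computed from their regulators. The three-node cascade and its target functions below are hypothetical illustrations, not entries from the BMA library.

```python
# Minimal sketch of discrete (qualitative network) modelling in the spirit of
# BMA-style target functions. The network and target functions here are
# hypothetical illustrations, not taken from the BMA library.

def step(state, targets, max_level=2):
    """Synchronous update: each variable moves one level toward its target."""
    new_state = {}
    for var, target_fn in targets.items():
        target = max(0, min(max_level, target_fn(state)))
        current = state[var]
        if target > current:
            new_state[var] = current + 1
        elif target < current:
            new_state[var] = current - 1
        else:
            new_state[var] = current
    return new_state

def simulate(state, targets, n_steps=10):
    """Iterate until a fixed point (or n_steps), returning the trajectory."""
    trajectory = [state]
    for _ in range(n_steps):
        state = step(state, targets)
        trajectory.append(state)
        if trajectory[-1] == trajectory[-2]:  # fixed point reached
            break
    return trajectory

# Hypothetical 3-node cascade: ligand activates receptor, receptor a kinase.
targets = {
    "ligand":   lambda s: s["ligand"],    # held constant (input)
    "receptor": lambda s: s["ligand"],    # activation
    "kinase":   lambda s: s["receptor"],  # activation
}

traj = simulate({"ligand": 2, "receptor": 0, "kinase": 0}, targets)
print(traj[-1])  # the cascade settles with all nodes at level 2
```

No quantitative rate constants are needed: qualitative activation logic alone determines the stable state, which is the accessibility argument made in the abstract.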
Sun, Lifan; Ji, Baofeng; Lan, Jian; He, Zishu; Pu, Jiexin
2017-01-01
The key to successful maneuvering complex extended object tracking (MCEOT) using range extent measurements provided by high resolution sensors lies in accurate and effective modeling of both the extension dynamics and the centroid kinematics. During object maneuvers, the extension dynamics of an object with a complex shape is highly coupled with the centroid kinematics. However, this difficult but important problem is rarely considered and solved explicitly. In view of this, this paper proposes a general approach to modeling a maneuvering complex extended object based on Minkowski sum, so that the coupled turn maneuvers in both the centroid states and extensions can be described accurately. The new model has a concise and unified form, in which the complex extension dynamics can be simply and jointly characterized by multiple simple sub-objects’ extension dynamics based on Minkowski sum. The proposed maneuvering model fits range extent measurements very well due to its favorable properties. Based on this model, an MCEOT algorithm dealing with motion and extension maneuvers is also derived. Two different cases of the turn maneuvers with known/unknown turn rates are specifically considered. The proposed algorithm which jointly estimates the kinematic state and the object extension can also be easily implemented. Simulation results demonstrate the effectiveness of the proposed modeling and tracking approaches. PMID:28937629
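The Minkowski-sum composition the model builds on is simply the set of pairwise vector sums of two shapes, which is how a complex extension can be assembled from simple sub-objects. A minimal sketch with invented point sets, illustrating the set operation only and not the authors' tracking model:

```python
# Minkowski sum of two planar point sets: {a + b : a in A, b in B}.
# The square and segment below are invented examples.

def minkowski_sum(A, B):
    """Return the Minkowski sum of two point sets as a set of tuples."""
    return {(ax + bx, ay + by) for (ax, ay) in A for (bx, by) in B}

# A unit square swept along a horizontal unit segment gives a 2x1 rectangle.
square  = {(0, 0), (1, 0), (0, 1), (1, 1)}
segment = {(0, 0), (1, 0)}
print(sorted(minkowski_sum(square, segment)))
```

For convex polygons the sum of the vertex sets contains the vertices of the summed shape, which is why composing simple convex sub-objects this way yields a concise description of a complex extension.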
Darabi Sahneh, Faryad; Scoglio, Caterina; Riviere, Jim
2013-01-01
Background: Nanoparticle-protein corona complex formation involves adsorption of protein molecules onto nanoparticle surfaces in a physiological environment. Understanding the corona formation process is crucial in predicting nanoparticle behavior in biological systems, including applications of nanotoxicology and development of nano drug delivery platforms. Method: This paper extends previous modeling work to derive a mathematical model describing the dynamics of nanoparticle corona complex formation from population balance equations. We apply nonlinear dynamics techniques to derive analytical results for the composition of the nanoparticle-protein corona complex, and validate our results through numerical simulations. Results: The model presented in this paper exhibits two phases of corona complex dynamics. In the first phase, proteins rapidly bind to the free surface of nanoparticles, leading to a metastable composition. During the second phase, continuous association and dissociation of protein molecules with nanoparticles slowly changes the composition of the corona complex. Given sufficient time, the composition of the corona complex reaches an equilibrium state of stable composition. We find analytical approximate formulae for the metastable and stable compositions of the corona complex. Our formulae are well structured to clearly identify the important parameters determining corona composition. Conclusion: The dynamics of biocorona formation constitute a vital aspect of interactions between nanoparticles and living organisms. Our results further the understanding of these dynamics through quantitation of experimental conditions, relating modeling results for in vitro systems to better predict behavior in in vivo systems. One potential application would involve relating a single cell culture medium to a complex protein medium, such as blood or tissue fluid. PMID:23741371
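The two-phase behavior described in the Results can be reproduced with a toy population-balance competition between two proteins: a fast associator dominates the metastable early corona, while a slowly dissociating protein displaces it at equilibrium. All rate constants below are invented for illustration, not fitted values from the paper.

```python
# Sketch of two-phase corona kinetics via competitive surface binding.
# Rate constants are illustrative, not values from the paper.

def simulate_corona(t_end, dt=0.01):
    """Euler integration of competitive binding of two proteins to one surface."""
    k_on1, k_off1 = 10.0, 1.0   # protein 1: fast on, fast off
    k_on2, k_off2 = 1.0, 0.01   # protein 2: slow on, very slow off
    theta1 = theta2 = 0.0       # fractional surface coverages
    for _ in range(int(t_end / dt)):
        free = 1.0 - theta1 - theta2          # free surface fraction
        d1 = k_on1 * free - k_off1 * theta1   # population balance, protein 1
        d2 = k_on2 * free - k_off2 * theta2   # population balance, protein 2
        theta1 += d1 * dt
        theta2 += d2 * dt
    return theta1, theta2

early = simulate_corona(0.3)     # metastable phase: protein 1 dominates
late = simulate_corona(2000.0)   # near equilibrium: protein 2 has taken over
print(early, late)
```

The crossover from a kinetically selected to a thermodynamically selected composition is the qualitative signature of the two phases the model describes.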
An Exploratory Study of the Butterfly Effect Using Agent-Based Modeling
NASA Technical Reports Server (NTRS)
Khasawneh, Mahmoud T.; Zhang, Jun; Shearer, Nevan E. N.; Rodriquez-Velasquez, Elkin; Bowling, Shannon R.
2010-01-01
This paper provides insights about the behavior of chaotic complex systems, and the sensitive dependence of the system on the initial starting conditions. How much does a small change in the initial conditions of a complex system affect it in the long term? Do complex systems exhibit what is called the "Butterfly Effect"? This paper uses an agent-based modeling approach to address these questions. An existing model from the NetLogo library was extended in order to compare chaotic complex systems with near-identical initial conditions. Results show that small changes in initial starting conditions can have a huge impact on the behavior of chaotic complex systems. The term "butterfly effect" is attributed to the work of Edward Lorenz [1]. It is used to describe the sensitive dependence of the behavior of chaotic complex systems on the initial conditions of these systems. The metaphor refers to the notion that a butterfly flapping its wings somewhere may cause extreme changes in the ecological system's behavior in the future, such as a hurricane.
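The sensitivity the study probes can be illustrated directly with Lorenz's own equations: two trajectories whose initial conditions differ by one part in a billion end up macroscopically far apart. The integration scheme and parameters below are the standard textbook choices, not the paper's NetLogo model.

```python
# Sensitive dependence on initial conditions in the Lorenz system, the
# classical example behind the "butterfly effect" metaphor (Lorenz 1963).

def lorenz_trajectory(x, y, z, t_end, dt=0.001,
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler integration of the Lorenz equations; returns the final state."""
    for _ in range(int(t_end / dt)):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
    return x, y, z

a = lorenz_trajectory(1.0, 1.0, 1.0, 30.0)
b = lorenz_trajectory(1.0 + 1e-9, 1.0, 1.0, 30.0)  # perturbed by one billionth
separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(separation)  # the tiny perturbation has grown by many orders of magnitude
```

The exponential growth of the initial 1e-9 offset until it saturates at the size of the attractor is exactly the behavior the agent-based experiments in the paper explore.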
Yamashita, Yuichi; Okumura, Tetsu; Okanoya, Kazuo; Tani, Jun
2011-01-01
How the brain learns and generates temporal sequences is a fundamental issue in neuroscience. The production of birdsongs, a process which involves complex learned sequences, provides researchers with an excellent biological model for this topic. The Bengalese finch in particular learns a highly complex song with syntactical structure. The nucleus HVC (HVC), a premotor nucleus within the avian song system, plays a key role in generating the temporal structures of their songs. From lesion studies, the nucleus interfacialis (NIf) projecting to the HVC is considered one of the essential regions that contribute to the complexity of their songs. However, the types of interaction between the HVC and the NIf that can produce complex syntactical songs remain unclear. In order to investigate the function of interactions between the HVC and NIf, we have proposed a neural network model based on previous biological evidence. The HVC is modeled by a recurrent neural network (RNN) that learns to generate temporal patterns of songs. The NIf is modeled as a mechanism that provides auditory feedback to the HVC and generates random noise that feeds into the HVC. The model showed that complex syntactical songs can be replicated by simple interactions between deterministic dynamics of the RNN and random noise. In the current study, the plausibility of the model is tested by the comparison between the changes in the songs of actual birds induced by pharmacological inhibition of the NIf and the changes in the songs produced by the model resulting from modification of parameters representing NIf functions. The efficacy of the model demonstrates that the changes of songs induced by pharmacological inhibition of the NIf can be interpreted as a trade-off between the effects of noise and the effects of feedback on the dynamics of the RNN of the HVC. These facts suggest that the current model provides a convincing hypothesis for the functional role of NIf–HVC interaction. PMID:21559065
Kreps, Gary L
2009-03-01
Communication is a crucial process in the effective delivery of health care services and the promotion of public health. However, there are often tremendous complexities in using communication effectively to provide the best health care, direct the adoption of health promoting behaviors, and implement evidence-based public health policies and practices. This article describes Weick's model of organizing as a powerful theory of social organizing that can help increase understanding of the communication demands of health care and health promotion. The article identifies relevant applications from the model for health communication research and practice. Weick's model of organizing is a relevant and heuristic theoretical perspective for guiding health communication research and practice. There are many potential applications of this model illustrating the complexities of effective communication in health care and health promotion. Weick's model of organizing can be used as a template for guiding both research and practice in health care and health promotion. The model illustrates the important roles that communication performs in enabling health care consumers and providers to make sense of the complexities of modern health care and health promotion, select the best strategies for responding effectively to complex health care and health promotion situations, and retain relevant information (develop organizational intelligence) for guiding future responses to complex health care and health promotion challenges.
Chandra, Sulekh; Gautam, Seema; Rajor, Hament Kumar; Bhatia, Rohit
2015-02-25
A novel Schiff base ligand, benzil bis(5-amino-1,3,4-thiadiazole-2-thiol), was synthesized by the condensation of benzil and 5-amino-1,3,4-thiadiazole-2-thiol in a 1:2 ratio. The structure of the ligand was determined on the basis of elemental analyses, IR, (1)H NMR, mass, and molecular modeling studies. The synthesized ligand behaved as tetradentate and coordinated to the metal ion through the sulfur atoms of the thiol ring and the nitrogen atoms of the imine group. Ni(II) and Cu(II) complexes were synthesized with this nitrogen-sulfur donor (N2S2) ligand. The metal complexes were characterized by elemental analyses, molar conductance, magnetic susceptibility measurements, IR, electronic spectra, EPR, thermal, and molecular modeling studies. All the complexes showed molar conductance corresponding to a non-electrolytic nature, except the [Ni(L)](NO3)2 complex, which was a 1:2 electrolyte in nature. The [Cu(L)(SO4)] complex may possess square pyramidal geometry, the [Ni(L)](NO3)2 complex tetrahedral geometry, and the rest of the complexes six-coordinate octahedral/tetragonal geometry. The newly synthesized ligand and its metal complexes were examined against opportunistic pathogens. Results suggested that the metal complexes were more biologically sensitive than the free ligand. Copyright © 2014 Elsevier B.V. All rights reserved.
Pasquali, Sara; Capitoni, Enrica; Tiraboschi, Giuseppina; Alborghetti, Adriana; De Luca, Giuseppe; Di Mauro, Stefania
2017-01-01
Eleven medical care units of nine Lombardy Region hospitals, organized by a levels-of-care model or by the traditional departmental model, were analyzed in order to evaluate whether methods for evaluating the complexity of patient care represent an index factor of nursing organizational effectiveness. A survey of nine nurses in managerial positions was conducted between Nov. 2013 and Jan. 2014. The following factors were described: context and nursing care model, staffing, complexity evaluation, patient satisfaction, and staff well-being. Data were processed through Microsoft Excel. Among the units analysed, all units organized in levels of care and one organized by the departmental model systematically evaluate nursing complexity. Registered Nurses (RN) and Health Care Assistants (HCA) are on average numerically higher in units that measure complexity (0.55/0.49 RN, 0.38/0.23 HCA - ratio per bed). Measures adopted in relation to changes in complexity are rewarding systems and supporting interventions, such as moving personnel between different units or additional required working hours; a reduction in the number of beds is adopted when no other solution is available. Patient satisfaction is evaluated through customer satisfaction questionnaires. Turnover, stress and rate-of-absenteeism data are not available in all units. Complexity evaluation through appropriate methods is carried out in all hospitals organized in levels of care with personalized nursing care models, though complexity is detected with different methods. No significant differences in applied managerial strategies are present. Patient satisfaction is evaluated everywhere. Data on staff well-being is scarcely available. Coordinated regional actions are recommended in order to gather comparable data for research and to improve decision making and the effectiveness of nursing care.
A growth model for directed complex networks with power-law shape in the out-degree distribution
Esquivel-Gómez, J.; Stevens-Navarro, E.; Pineda-Rico, U.; Acosta-Elias, J.
2015-01-01
Many growth models have been published to model the behavior of real complex networks. These models are able to reproduce several of the topological properties of such networks. However, in most of these growth models, the number of outgoing links (i.e., out-degree) of nodes added to the network is constant; that is, all nodes in the network are born with the same number of outgoing links. In other models, the resultant out-degree distribution decays as a Poisson or an exponential distribution. However, it has been found that in real complex networks the out-degree distribution decays as a power law. In order to obtain out-degree distributions with power-law behavior, some models have been proposed. This work introduces a new model that yields out-degree distributions that decay as a power law with an exponent in the range from 0 to 1. PMID:25567141
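The modelling idea can be sketched generically: instead of every arriving node bringing a constant number of outgoing links, each draws its out-degree from a heavy-tailed distribution. The sketch below is an illustration of that idea only, not the specific model proposed in the paper, and the exponent and sizes are invented.

```python
# Generic sketch of a directed growth model with a heavy-tailed (rather than
# constant) out-degree for arriving nodes. Not the paper's specific model.
import random

def sample_out_degree(k_max, exponent=2.0):
    """Inverse-transform sample from P(k) ~ k^(-exponent), k = 1..k_max."""
    weights = [k ** (-exponent) for k in range(1, k_max + 1)]
    r = random.random() * sum(weights)
    for k, w in enumerate(weights, start=1):
        r -= w
        if r <= 0:
            return k
    return k_max

def grow_network(n_nodes, seed=0):
    """Each new node links to `out_degree` distinct existing nodes at random."""
    random.seed(seed)
    edges = []
    for new in range(1, n_nodes):
        out_degree = sample_out_degree(k_max=new)
        targets = random.sample(range(new), out_degree)
        edges.extend((new, t) for t in targets)
    return edges

edges = grow_network(500)
out_degrees = [0] * 500
for source, _ in edges:
    out_degrees[source] += 1
# Most nodes are born with out-degree 1, but a heavy tail of hubs appears.
print(max(out_degrees), sum(1 for d in out_degrees if d == 1))
```

Replacing the uniform target choice with preferential attachment, as many growth models do, changes the in-degree rather than the out-degree distribution; the out-degree tail here comes entirely from the arrival-time sampling rule.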
Enhanced LOD Concepts for Virtual 3D City Models
NASA Astrophysics Data System (ADS)
Benner, J.; Geiger, A.; Gröger, G.; Häfele, K.-H.; Löwner, M.-O.
2013-09-01
Virtual 3D city models contain digital three-dimensional representations of city objects like buildings, streets or technical infrastructure. Because the size and complexity of these models continuously grow, a Level of Detail (LoD) concept is indispensable: one that effectively supports the partitioning of a complete model into alternative models of different complexity and provides metadata addressing the informational content, complexity and quality of each alternative model. After a short overview of various LoD concepts, this paper discusses the existing LoD concept of the CityGML standard for 3D city models and identifies a number of deficits. Based on this analysis, an alternative concept is developed and illustrated with several examples. It differentiates, first, between a Geometric Level of Detail (GLoD) and a Semantic Level of Detail (SLoD), and second, between the interior building and its exterior shell. Finally, a possible implementation of the new concept is demonstrated by means of a UML model.
A comparative study of turbulence models for overset grids
NASA Technical Reports Server (NTRS)
Renze, Kevin J.; Buning, Pieter G.; Rajagopalan, R. G.
1992-01-01
The implementation of two different types of turbulence models for a flow solver using the Chimera overset grid method is examined. Various turbulence model characteristics, such as length scale determination and transition modeling, are found to have a significant impact on the computed pressure distribution for a multielement airfoil case. No inherent problem is found with using either algebraic or one-equation turbulence models with an overset grid scheme, but simulation of turbulence for multiple-body or complex geometry flows is very difficult regardless of the gridding method. For complex geometry flowfields, modification of the Baldwin-Lomax turbulence model is necessary to select the appropriate length scale in wall-bounded regions. The overset grid approach presents no obstacle to use of a one- or two-equation turbulence model. Both Baldwin-Lomax and Baldwin-Barth models have problems providing accurate eddy viscosity levels for complex multiple-body flowfields such as those involving the Space Shuttle.
Effective degrees of freedom: a flawed metaphor
Janson, Lucas; Fithian, William; Hastie, Trevor J.
2015-01-01
To most applied statisticians, a fitting procedure's degrees of freedom is synonymous with its model complexity, or its capacity for overfitting to data. In particular, it is often used to parameterize the bias-variance tradeoff in model selection. We argue that, on the contrary, model complexity and degrees of freedom may correspond very poorly. We exhibit and theoretically explore various fitting procedures for which degrees of freedom is not monotonic in the model complexity parameter, and can exceed the total dimension of the ambient space even in very simple settings. We show that the degrees of freedom for any non-convex projection method can be unbounded. PMID:26977114
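The quantity under discussion is the covariance definition of degrees of freedom, df = (1/sigma^2) * sum_i Cov(yhat_i, y_i). The Monte Carlo sketch below estimates it for the grand-mean fit, a linear smoother whose df is exactly 1; the setup is a standard illustration of the definition, not the paper's non-convex examples.

```python
# Monte Carlo estimate of degrees of freedom, df = sum_i Cov(yhat_i, y_i) / sigma^2,
# for the grand-mean fitting procedure (a linear smoother with df = 1).
import random

def monte_carlo_df(fit, mu, sigma=1.0, n_reps=20000, seed=0):
    """Estimate df by resampling noise around a fixed mean vector mu."""
    random.seed(seed)
    n = len(mu)
    ys, fits = [], []
    for _ in range(n_reps):
        y = [m + random.gauss(0.0, sigma) for m in mu]
        ys.append(y)
        fits.append(fit(y))
    df = 0.0
    for i in range(n):
        y_bar = sum(y[i] for y in ys) / n_reps
        f_bar = sum(f[i] for f in fits) / n_reps
        cov = sum((y[i] - y_bar) * (f[i] - f_bar)
                  for y, f in zip(ys, fits)) / n_reps
        df += cov / sigma ** 2
    return df

def mean_fit(y):
    """Fit every observation by the grand mean: a linear smoother with df = 1."""
    m = sum(y) / len(y)
    return [m] * len(y)

mu = [0.5 * i for i in range(20)]
print(monte_carlo_df(mean_fit, mu))  # close to 1.0
```

Swapping `mean_fit` for a selection procedure such as best-subset regression is how one observes the non-monotonic behavior the paper analyzes: the covariance picks up the data-dependent model choice, not just the dimension of the fitted model.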
Simulating complex intracellular processes using object-oriented computational modelling.
Johnson, Colin G; Goldman, Jacki P; Gullick, William J
2004-11-01
The aim of this paper is to give an overview of computer modelling and simulation in cellular biology, in particular as applied to complex biochemical processes within the cell. This is illustrated by the use of the techniques of object-oriented modelling, where the computer is used to construct abstractions of objects in the domain being modelled, and these objects then interact within the computer to simulate the system and allow emergent properties to be observed. The paper also discusses the role of computer simulation in understanding complexity in biological systems, and the kinds of information which can be obtained about biology via simulation.
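The object-oriented style described above can be sketched with toy classes whose interactions change state step by step; the receptor and ligand classes below are illustrative inventions, not code from the paper.

```python
# Toy sketch of object-oriented cell modelling: components are objects whose
# pairwise interactions are simulated explicitly. Classes are invented examples.

class Ligand:
    def __init__(self, name):
        self.name = name
        self.bound = False

class Receptor:
    def __init__(self):
        self.ligand = None
        self.active = False

    def bind(self, ligand):
        """Binding a free ligand activates the receptor (a local state change)."""
        if self.ligand is None and not ligand.bound:
            self.ligand = ligand
            ligand.bound = True
            self.active = True

ligands = [Ligand(f"EGF-{i}") for i in range(3)]
receptors = [Receptor() for _ in range(2)]
for receptor, ligand in zip(receptors, ligands):  # pair off until one side runs out
    receptor.bind(ligand)
print(sum(r.active for r in receptors))  # 2 receptors become active
```

System-level behavior (here, how many receptors activate given limited partners) emerges from local object interactions rather than from a global equation, which is the point the abstract makes about observing emergent properties.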
The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems
Tolman, Malachi; Thames, Howard D.; Mason, Kathy A.
2016-01-01
We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model. PMID:27923060
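Sloppiness can be made concrete with a two-exponential toy model: the eigenvalues of J^T J (the Fisher information at unit noise) differ by orders of magnitude, so some parameter combinations are far better constrained than others. The model and sample times below are a standard illustration, not the paper's EGFR or DNA-repair models.

```python
# Eigenvalue spectrum of the Fisher information for y(t) = sum_k exp(-theta_k t)
# with nearby rates: a textbook example of a sloppy parameter spectrum.
import math

def fisher_information(thetas, times):
    """J^T J where J[t][k] = d y(t) / d theta_k = -t * exp(-theta_k * t)."""
    J = [[-t * math.exp(-th * t) for th in thetas] for t in times]
    n = len(thetas)
    return [[sum(row[i] * row[j] for row in J) for j in range(n)]
            for i in range(n)]

def eig_2x2(M):
    """Closed-form eigenvalues of a symmetric 2x2 matrix, largest first."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    return (tr + disc) / 2.0, (tr - disc) / 2.0

F = fisher_information(thetas=[1.0, 1.2], times=[0.5, 1.0, 2.0])
lam_max, lam_min = eig_2x2(F)
print(lam_max / lam_min)  # stiff/sloppy eigenvalue ratio, of order a few hundred
```

The stiff direction (roughly the sum of the two rates) is well determined by the data while the sloppy direction (their difference) is nearly unconstrained, which is why accurate individual-parameter estimation is so hard in such models.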
Castellazzi, Giovanni; D’Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro
2015-01-01
In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects to three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a fine discretized geometry with a reduced amount of time and ready to be used with structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made by voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation. PMID:26225978
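The voxel discretization at the heart of the procedure amounts to binning points into integer voxel indices; occupied voxels then become the solid elements of the finite element mesh. A minimal sketch with invented coordinates:

```python
# Binning a point cloud into voxel indices, the discretization step behind the
# "solid made by voxel elements" described above. Coordinates are invented.

def voxelize(points, voxel_size):
    """Map each (x, y, z) point to the index of the voxel containing it."""
    return {
        (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        for x, y, z in points
    }

# Two points share a voxel; the third falls in the neighbouring one.
cloud = [(0.1, 0.2, 0.3), (0.9, 0.8, 0.7), (1.5, 0.2, 0.3)]
print(sorted(voxelize(cloud, voxel_size=1.0)))
```

Because each occupied voxel is an axis-aligned cube, the resulting mesh needs no surface triangulation, which is one reason the voxel route is faster and more automatic than CAD-based model construction.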
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Gascuel-Odoux, Chantal; Savenije, Hubert
2014-05-01
Hydrological models are frequently characterized by what is often considered to be adequate calibration performance. In many cases, however, these models experience substantial uncertainty and a performance decrease in validation periods, thus resulting in poor predictive power. Besides the likely presence of data errors, this observation can point towards wrong or insufficient representations of the underlying processes and their heterogeneity. In other words, the right results are generated for the wrong reasons. Ways are thus sought to increase model consistency and thereby satisfy the contrasting priorities of a) increasing model complexity and b) limiting model equifinality. In this study a stepwise model development approach is chosen to test the value of an exhaustive and systematic combined use of hydrological signatures, expert knowledge and readily available, yet anecdotal and rarely exploited, hydrological information for increasing model consistency towards generating the right answer for the right reasons. A simple 3-box, 7-parameter, conceptual HBV-type model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph with comparatively high values for the 4 objective functions in the 5-year calibration period. However, closer inspection of the results showed a dramatic decrease of model performance in the 5-year validation period. In addition, assessing the model's skill to reproduce a range of 20 hydrological signatures, including, amongst others, the flow duration curve, the autocorrelation function and the rising limb density, showed that it could not adequately reproduce the vast majority of these signatures, indicating a lack of model consistency. Subsequently, model complexity was increased in a stepwise way to allow for more process heterogeneity.
To limit model equifinality, the increase in complexity was counter-balanced by a stepwise application of "realism constraints", inferred from expert knowledge (e.g. the unsaturated storage capacity of hillslopes should exceed that of wetlands) and anecdotal hydrological information (e.g. long-term estimates of actual evaporation obtained from the Budyko framework and long-term estimates of baseflow contribution), to ensure that the model is well behaved with respect to the modeller's perception of the system. A total of 11 model set-ups with increased complexity and an increased number of realism constraints were tested. It could be shown that, in spite of largely unchanged calibration performance compared to the simplest set-up, the most complex model set-up (12 parameters, 8 constraints) exhibited significantly increased performance in the validation period while uncertainty did not increase. In addition, the most complex model was characterized by a substantially increased skill to reproduce all 20 signatures, indicating a more suitable representation of the system. The results suggest that a model "well" constrained by 4 calibration objective functions may still be an inadequate representation of the system, and that increasing model complexity, if counter-balanced by realism constraints, can indeed increase the predictive performance of a model and its skill to reproduce a range of hydrological signatures, without necessarily resulting in increased uncertainty. The results also strongly illustrate the need to move away from automated model calibration towards a more general expert-knowledge-driven strategy of constraining models if a certain level of model consistency is to be achieved.
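One "box" of an HBV-type conceptual model of the kind being extended can be sketched as a single linear reservoir with saturation excess; the parameters and forcing below are invented for illustration and are much simpler than the 3-box model in the study.

```python
# Much-simplified sketch of one storage "box" of an HBV-type conceptual model:
# a linear reservoir with overflow. Parameters and rainfall are invented.

def run_bucket(precipitation, k=0.1, s_max=100.0, s0=0.0):
    """Linear-reservoir bucket: outflow q = k * storage, overflow past s_max."""
    storage, flows = s0, []
    for p in precipitation:
        storage += p
        overflow = max(0.0, storage - s_max)  # saturation excess
        storage -= overflow
        q = k * storage                       # linear outflow (recession)
        storage -= q
        flows.append(q + overflow)
    return flows

rain = [10.0] * 5 + [0.0] * 10                # a 5-step storm, then a dry spell
flows = run_bucket(rain)
print(flows)  # flow rises during the storm, then recedes geometrically
```

Hydrological signatures such as the flow duration curve or the rising limb density are computed from exactly this kind of simulated flow series, and realism constraints restrict the admissible parameters (here `k` and `s_max`) of each added box.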
Fernandes, M Marques; Scheinost, A C; Baeyens, B
2016-08-01
The credibility of long-term safety assessments of radioactive waste repositories may be greatly enhanced by a molecular level understanding of the sorption processes onto individual minerals present in the near- and far-fields. In this study we couple macroscopic sorption experiments to surface complexation modelling and spectroscopic investigations, including extended X-ray absorption fine structure (EXAFS) and time-resolved laser fluorescence spectroscopies (TRLFS), to elucidate the uptake mechanism of trivalent lanthanides and actinides (Ln/An(III)) by montmorillonite in the absence and presence of dissolved carbonate. Based on the experimental sorption isotherms for the carbonate-free system, the previously developed 2 site protolysis non electrostatic surface complexation and cation exchange (2SPNE SC/CE) model needed to be complemented with an additional surface complexation reaction onto weak sites. The fitting of sorption isotherms in the presence of carbonate required refinement of the previously published model by reducing the strong site capacity and by adding the formation of Ln/An(III)-carbonato complexes both on strong and weak sites. EXAFS spectra of selected Am samples and TRLFS spectra of selected Cm samples corroborate the model assumptions by showing the existence of different surface complexation sites and evidencing the formation of Ln/An(III) carbonate surface complexes. In the absence of carbonate and at low loadings, Ln/An(III) form strong inner-sphere complexes through binding to three Al(O,OH)6 octahedra, most likely by occupying vacant sites in the octahedral layers of montmorillonite, which are exposed on {010} and {110} edge faces. At higher loadings, Ln/An(III) binds to only one Al octahedron, forming a weaker, edge-sharing surface complex. 
In the presence of carbonate, we identified a ternary mono- or dicarbonato Ln/An(III) complex binding directly to one Al(O,OH)6 octahedron, revealing that type-A ternary complexes form with the one or two carbonate groups pointing away from the surface into the solution phase. Within the spectroscopically observable concentration range these complexes could only be identified on the weak sites, in line with the small strong site capacity suggested by the refined sorption model. When the solubility of carbonates was exceeded, formation of an Am carbonate hydroxide could be identified. The excellent agreement between the thermodynamic model parameters obtained by fitting the macroscopic data, and the spectroscopically identified mechanisms, demonstrates the mature state of the 2SPNE SC/CE model for predicting and quantifying the retention of Ln/An(III) elements by montmorillonite-rich clay rocks. Copyright © 2016 Elsevier Ltd. All rights reserved.
McKee, Edwin H.; Hildenbrand, Thomas G.; Anderson, Megan L.; Rowley, Peter D.; Sawyer, David A.
1999-01-01
The structural framework of Pahute Mesa, Nevada, is dominated by the Silent Canyon caldera complex, a buried, multiple collapse caldera complex. Using the boundary surface between low density Tertiary volcanogenic rocks and denser granitic and weakly metamorphosed sedimentary rocks (basement) as the outer fault surfaces for the modeled collapse caldera complex, it is postulated that the caldera complex collapsed on steeply-dipping arcuate faults two, possibly three, times following eruption of at least two major ash-flow tuffs. The caldera and most of its eruptive products are now deeply buried below the surface of Pahute Mesa. Relatively low-density rocks in the caldera complex produce one of the largest gravity lows in the western conterminous United States. Gravity modeling defines a steep sided, cup-shaped depression as much as 6,000 meters (19,800 feet) deep that is surrounded and floored by denser rocks. The steeply dipping surface located between the low-density basin fill and the higher density external rocks is considered to be the surface of the ring faults of the multiple calderas. Extrapolation of this surface upward to the outer, or topographic rim, of the Silent Canyon caldera complex defines the upper part of the caldera collapse structure. Rock units within and outside the Silent Canyon caldera complex are combined into seven hydrostratigraphic units based on their predominant hydrologic characteristics. The caldera structures and other faults on Pahute Mesa are used with the seven hydrostratigraphic units to make a three-dimensional geologic model of Pahute Mesa using the "EarthVision" (Dynamic Graphics, Inc.) modeling computer program. This method allows graphic representation of the geometry of the rocks and produces computer generated cross sections, isopach maps, and three-dimensional oriented diagrams. These products have been created to aid in visualizing and modeling the ground-water flow system beneath Pahute Mesa.
Oettl, D
2015-11-01
Dispersion modelling in complex terrain has always been challenging for modellers. Although a large number of publications are dedicated to the field, candidate methods and models for use in regulatory applications are scarce. This is all the more true when the combined effect of topography and obstacles on pollutant dispersion has to be taken into account. In Austria, largely situated in Alpine regions, such complex situations are quite frequent. This work deals with an approach which is in principle capable of considering both buildings and topography in simulations, by combining state-of-the-art wind field models at the micro- (<1 km) and mesoscale γ (2-20 km) with a Lagrangian particle model. In order to make such complex numerical models applicable for regulatory purposes, meteorological input data for the models need to be readily derived from routine observations. Here, use was made of the traditional way to bin meteorological data based on wind direction, speed, and stability class, formerly mainly used in conjunction with Gaussian-type models. It is demonstrated that this approach leads to reasonable agreement (fractional bias < 0.1) between observed and modelled annual average concentrations in an Alpine basin with frequent low-wind-speed conditions, temperature inversions, and quite complex flow patterns, while keeping simulation times within practical limits with regard to applications in licensing procedures. However, due to the simplifications in the derivation of meteorological input data as well as several ad hoc assumptions regarding the boundary conditions of the mesoscale wind field model, the methodology is not suited for computing detailed time and space variations of pollutant concentrations.
NASA Astrophysics Data System (ADS)
Rathi, Parveen; Sharma, Kavita; Singh, Dharam Pal
2014-09-01
Macrocyclic complexes of the type [MLX]X2, where L is a macrocyclic ligand (C30H28N4), M = Cr(III) or Fe(III), and X = Cl-, CH3COO- or NO3-, have been synthesized by template condensation of 1,8-diaminonaphthalene and acetylacetone in the presence of trivalent metal salts in a methanolic medium. The complexes have been formulated as [MLX]X2 owing to the 1:2 electrolytic nature of these complexes. The complexes have been characterized with the help of elemental analyses, molar conductance measurements, magnetic susceptibility measurements, electronic, infrared, far-infrared and mass spectral studies, and molecular modelling. The molecular weights of these complexes indicate their monomeric nature. On the basis of all these studies, a five-coordinate square-pyramidal geometry has been proposed for all of these complexes. These metal complexes have also been screened for their in vitro antimicrobial activities.
Jia, Xiuqin; Liang, Peipeng; Shi, Lin; Wang, Defeng; Li, Kuncheng
2015-01-01
In neuroimaging studies, increased task complexity can lead to increased activation in task-specific regions or to activation of additional regions. How the brain adapts to increased rule complexity during inductive reasoning remains unclear. In the current study, three types of problems were created: simple rule induction (i.e., SI, with rule complexity of 1), complex rule induction (i.e., CI, with rule complexity of 2), and perceptual control. Our findings revealed that increased activations accompany increased rule complexity in the right dorsal lateral prefrontal cortex (DLPFC) and medial posterior parietal cortex (precuneus). A cognitive model predicted both the behavioral and brain imaging results. The current findings suggest that neural activity in frontal and parietal regions is modulated by rule complexity, which may shed light on the neural mechanisms of inductive reasoning. Copyright © 2014. Published by Elsevier Ltd.
Filho, Manoel A. M.; Dutra, José Diogo L.; Rocha, Gerd B.; Simas, Alfredo M.
2016-01-01
The RM1 quantum chemical model for the calculation of complexes of Tm(III), Yb(III) and Lu(III) is advanced. Subsequently, we tested the models by fully optimizing the geometries of 126 complexes. We then compared the optimized structures with known crystallographic ones from the Cambridge Structural Database. Results indicate that, for thulium complexes, the accuracy in terms of the distances between the lanthanide ion and its directly coordinated atoms is about 2%. Corresponding results for ytterbium and lutetium are both 3%, levels of accuracy useful for the design of lanthanide complexes, targeting their countless applications.
Smad Signaling Dynamics: Insights from a Parsimonious Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiley, H. S.; Shankaran, Harish
2008-09-09
The molecular mechanisms that transmit information from cell surface receptors to the nucleus are exceedingly complex; thus, much effort has been expended in developing computational models to understand these processes. A recent study on modeling the nuclear-cytoplasmic shuttling of Smad2-Smad4 complexes in response to transforming growth factor β (TGF-β) receptor activation has provided substantial insight into how this signaling network translates the degree of TGF-β receptor activation (input) into the amount of nuclear Smad2-Smad4 complexes (output). The study addressed this question by combining a simple, mechanistic model with targeted experiments, an approach that proved particularly powerful for exploring the fundamental properties of a complex signaling network. The mathematical model revealed that Smad nuclear-cytoplasmic dynamics enables a proportional, but time-delayed, coupling between the input and the output. As a result, the output can faithfully track gradual changes in the input, while the rapid input fluctuations that constitute signaling noise are dampened out.
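The proportional, time-delayed coupling described above is the behavior of a first-order low-pass filter. The sketch below is a minimal illustration of that idea, not the published Smad model: it assumes a hypothetical rate equation dC/dt = k_in·R(t) − k_out·C for the nuclear complex level C driven by receptor activity R, with made-up rate constants.

```python
# Minimal low-pass-filter sketch (illustrative, not the paper's model):
# the nuclear complex level C tracks slow changes in receptor input R
# proportionally (steady state C* = k_in/k_out * R), while fast input
# fluctuations are attenuated. Rate constants are arbitrary.
import math

def simulate(R, k_in=1.0, k_out=0.5, dt=0.01, t_end=100.0):
    """Euler-integrate dC/dt = k_in*R(t) - k_out*C; return final C."""
    c = 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        c += dt * (k_in * R(t) - k_out * c)
    return c

# Constant input: output settles at the proportional steady state 2.0.
c_const = simulate(lambda t: 1.0)
print(round(c_const, 3))  # 2.0 = (k_in/k_out) * R0

# Same mean input plus rapid fluctuations: the noise is dampened out,
# so the output stays near the same steady state.
c_noisy = simulate(lambda t: 1.0 + 0.5 * math.sin(20.0 * t))
print(round(c_noisy, 1))
```

The amplitude of the transmitted fluctuation scales as 1/√(k_out² + ω²), so high-frequency noise (large ω) is strongly suppressed while slow ramps pass through.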
Karnon, Jonathan; Haji Ali Afzali, Hossein
2014-06-01
Modelling in economic evaluation is an unavoidable fact of life. Cohort-based state transition models are most common, though discrete event simulation (DES) is increasingly being used to implement more complex model structures. The benefits of DES relate to the greater flexibility around the implementation and population of complex models, which may provide more accurate or valid estimates of the incremental costs and benefits of alternative health technologies. The costs of DES relate to the time and expertise required to implement and review complex models, when perhaps a simpler model would suffice. The costs are not borne solely by the analyst, but also by reviewers. In particular, modelled economic evaluations are often submitted to support reimbursement decisions for new technologies, for which detailed model reviews are generally undertaken on behalf of the funding body. This paper reports the results from a review of published DES-based economic evaluations. Factors underlying the use of DES were defined, and the characteristics of applied models were considered, to inform options for assessing the potential benefits of DES in relation to each factor. Four broad factors underlying the use of DES were identified: baseline heterogeneity, continuous disease markers, time-varying event rates, and the influence of prior events on subsequent event rates. If relevant individual-level data are available, representation of the four factors is likely to improve model validity, and it is possible to assess the importance of their representation in individual cases. A thorough model performance evaluation is required to overcome the costs of DES from the users' perspective, but few of the reviewed DES models reported such a process. More generally, further direct, empirical comparisons of complex models with simpler models would better inform the benefits of using DES to implement more complex models, and the circumstances in which such benefits are most likely.
Modeling the propagation of mobile malware on complex networks
NASA Astrophysics Data System (ADS)
Liu, Wanping; Liu, Chao; Yang, Zheng; Liu, Xiaoyang; Zhang, Yihao; Wei, Zuxue
2016-08-01
In this paper, the spreading behavior of malware across mobile devices is addressed. By introducing complex networks, which follow a power-law degree distribution, to model mobile networks, a novel epidemic model for mobile malware propagation is proposed. The spreading threshold that governs the dynamics of the model is calculated. Theoretically, the asymptotic stability of the malware-free equilibrium is confirmed when the threshold is below unity, and global stability is further proved under some sufficient conditions. The influences of different model parameters as well as the network topology on malware propagation are also analyzed. Our theoretical studies and numerical simulations show that networks with higher heterogeneity are more conducive to the diffusion of malware, and that complex networks with lower power-law exponents benefit malware spreading.
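The heterogeneity effect noted above can be illustrated with the standard mean-field result for uncorrelated networks, where the SIS epidemic threshold is λc = ⟨k⟩/⟨k²⟩ (a textbook relation, not necessarily the exact threshold derived in this paper). A heavier tail (smaller power-law exponent γ) inflates ⟨k²⟩ and lowers the threshold:

```python
# Illustrative calculation: SIS epidemic threshold <k>/<k^2> for a
# truncated power-law degree distribution P(k) ~ k^-gamma. A lower
# exponent gives a heavier tail, a larger <k^2>, and a lower threshold,
# i.e. malware spreads more easily on more heterogeneous networks.

def sis_threshold(gamma, k_min=1, k_max=1000):
    """Mean-field threshold <k>/<k^2> for P(k) ~ k^-gamma on k_min..k_max."""
    ks = range(k_min, k_max + 1)
    norm = sum(k ** -gamma for k in ks)
    k1 = sum(k * k ** -gamma for k in ks) / norm       # <k>
    k2 = sum(k * k * k ** -gamma for k in ks) / norm   # <k^2>
    return k1 / k2

t_heavy = sis_threshold(2.2)  # heavier tail (more heterogeneous)
t_light = sis_threshold(3.5)  # lighter tail
print(t_heavy < t_light)  # True: lower exponent -> lower threshold
```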
Reducing the Complexity of an Agent-Based Local Heroin Market Model
Heard, Daniel; Bobashev, Georgiy V.; Morris, Robert J.
2014-01-01
This project explores techniques for reducing the complexity of an agent-based model (ABM). The analysis involved a model developed from the ethnographic research of Dr. Lee Hoffer in the Larimer area heroin market, which involved drug users, drug sellers, homeless individuals and police. The authors used statistical techniques to create a reduced version of the original model which maintained simulation fidelity while reducing computational complexity. This involved identifying key summary quantities of individual customer behavior as well as overall market activity and replacing some agents with probability distributions and regressions. The model was then extended to allow external market interventions in the form of police busts. Extensions of this research perspective, as well as its strengths and limitations, are discussed.
Refiners Switch to RFG Complex Model
1998-01-01
On January 1, 1998, domestic and foreign refineries and importers must stop using the "simple" model and begin using the "complex" model to calculate emissions of volatile organic compounds (VOC), toxic air pollutants (TAP), and nitrogen oxides (NOx) from motor gasoline. The primary differences between the two models are that some refineries may have to meet stricter standards for the sulfur and olefin content of the reformulated gasoline (RFG) they produce and that all refineries will now be held accountable for NOx emissions. Requirements for calculating emissions from conventional gasoline under the anti-dumping rule change similarly for exhaust TAP and NOx. However, the change to the complex model is not expected to result in an increase in the price premium for RFG or to constrain supplies.
2012-05-21
CAPE CANAVERAL, Fla. – The high-fidelity space shuttle model waits to be loaded onto a barge at Kennedy Space Center's Launch Complex 39 turn basin in Florida. The model is being transported from Kennedy to Space Center Houston, NASA Johnson Space Center's visitor center. The model will be transported via barge to Texas. The model was built in Apopka, Fla., by Guard-Lee and installed at the Kennedy Space Center Visitor Complex in 1993. The model has been parked at the turn basin for the past five months to allow the Kennedy Space Center Visitor Complex to begin building a new facility to display space shuttle Atlantis in 2013. For more information about Space Center Houston, visit http://www.spacecenter.org. Photo credit: NASA/Frankie Martin
Jetha, Arif; Pransky, Glenn; Hettinger, Lawrence J
2016-01-01
Work disability (WD) is characterized by variable and occasionally undesirable outcomes. The underlying determinants of WD outcomes include patterns of dynamic relationships among health, personal, organizational and regulatory factors that have been challenging to characterize, and inadequately represented by contemporary WD models. System dynamics modeling (SDM) methodology applies a sociotechnical systems thinking lens to view WD systems as comprising a range of influential factors linked by feedback relationships. SDM can potentially overcome limitations in contemporary WD models by uncovering causal feedback relationships, and conceptualizing dynamic system behaviors. It employs a collaborative and stakeholder-based model building methodology to create a visual depiction of the system as a whole. SDM can also enable researchers to run dynamic simulations to provide evidence of anticipated or unanticipated outcomes that could result from policy and programmatic intervention. SDM may advance rehabilitation research by providing greater insights into the structure and dynamics of WD systems while helping to understand inherent complexity. Challenges related to data availability, determining validity, and the extensive time and technical skill requirements for model building may limit SDM's use in the field and should be considered. Contemporary work disability (WD) models provide limited insight into complexity associated with WD processes. System dynamics modeling (SDM) has the potential to capture complexity through a stakeholder-based approach that generates a simulation model consisting of multiple feedback loops. SDM may enable WD researchers and practitioners to understand the structure and behavior of the WD system as a whole, and inform development of improved strategies to manage straightforward and complex WD cases.
Surface structural ion adsorption modeling of competitive binding of oxyanions by metal (hydr)oxides
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiemstra, T.; Riemsdijk, W.H. van
1999-02-01
An important challenge in surface complexation models (SCM) is to connect the molecular microscopic reality to macroscopic adsorption phenomena. This study elucidates the primary factor controlling the adsorption process by analyzing the adsorption and competition of PO{sub 4}, AsO{sub 4}, and SeO{sub 3}. The authors show that the structure of the surface complex acting in the dominant electrostatic field can be ascertained as the primary controlling adsorption factor. The surface species of arsenate are identical with those of phosphate, and the adsorption behavior is very similar. On the basis of the selenite adsorption, the authors show that the commonly used 1pK models are incapable of incorporating into the adsorption modeling the correct bidentate binding mechanism found by spectroscopy. The use of the bidentate mechanism leads to a proton-oxyanion ratio and corresponding pH dependence that are too large. The inappropriate intrinsic charge attribution to the primary surface groups and the condensation of the inner-sphere surface complex to a point charge are responsible for this behavior of commonly used 2pK models. Both key factors are defined differently in the charge distribution multi-site complexation (CD-MUSIC) model, and are based in this model on a surface structural approach. The CD-MUSIC model can successfully describe the macroscopic adsorption phenomena using the surface speciation and binding mechanisms as found by spectroscopy. The model is also able to predict the anion competition well. The charge distribution in the interface is in agreement with the observed structure of surface complexes.
Hybrid modeling and empirical analysis of automobile supply chain network
NASA Astrophysics Data System (ADS)
Sun, Jun-yan; Tang, Jian-ming; Fu, Wei-ping; Wu, Bing-ying
2017-05-01
Based on the connection mechanism of nodes, which automatically select upstream and downstream agents, a simulation model for the dynamic evolutionary process of a consumer-driven automobile supply chain is established by integrating ABM and discrete modeling in a GIS-based map. First, the rationality of the model is established by analyzing the consistency of sales and changes in various agent parameters between the simulation model and a real automobile supply chain. Second, through complex network theory, hierarchical structures of the model and relationships of networks at different levels are analyzed to calculate characteristic parameters such as mean distance, mean clustering coefficient, and degree distribution. By doing so, it is verified that the model is a typical scale-free and small-world network. Finally, the motion law of the model is analyzed from the perspective of complex adaptive systems. The chaotic state of the simulation system is verified, which suggests that the system has typical nonlinear characteristics. This model not only macroscopically illustrates the dynamic evolution of the complex network of an automobile supply chain but also microscopically reflects the business process of each agent. Moreover, the construction and simulation of the model by combining CAS theory and complex networks supplies a novel method for supply chain analysis, as well as theoretical bases and experience for the supply chain analysis of auto companies.
Frequency analysis of stress relaxation dynamics in model asphalts
NASA Astrophysics Data System (ADS)
Masoori, Mohammad; Greenfield, Michael L.
2014-09-01
Asphalt is an amorphous or semi-crystalline material whose mechanical performance relies on viscoelastic responses to applied strain or stress. Chemical composition and its effect on the viscoelastic properties of model asphalts have been investigated here by computing complex modulus from molecular dynamics simulation results for two different model asphalts whose compositions each resemble the Strategic Highway Research Program AAA-1 asphalt in different ways. For a model system that contains smaller molecules, simulation results for storage and loss modulus at 443 K reach both the low and high frequency scaling limits of the Maxwell model. Results for a model system composed of larger molecules (molecular weights 300-900 g/mol) with longer branches show a quantitatively higher complex modulus that decreases significantly as temperature increases over 400-533 K. Simulation results for its loss modulus approach the low frequency scaling limit of the Maxwell model at only the highest temperature simulated. A Black plot or van Gurp-Palman plot of complex modulus vs. phase angle for the system of larger molecules suggests some overlap among results at different temperatures for less high frequencies, with an interdependence consistent with the empirical Christensen-Anderson-Marasteanu model. Both model asphalts are thermorheologically complex at very high frequencies, where they show a loss peak that appears to be independent of temperature and density.
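The low- and high-frequency scaling limits referenced in the abstract are those of the single-mode Maxwell model, whose storage and loss moduli have a standard closed form. The sketch below uses arbitrary illustrative values of G and τ, not parameters fitted to any asphalt system:

```python
# Single-mode Maxwell model (the scaling reference used above):
#   G'(w)  = G (w tau)^2 / (1 + (w tau)^2)  ~ w^2 at low frequency
#   G''(w) = G (w tau)   / (1 + (w tau)^2)  ~ w at low, ~ 1/w at high.
# G and tau are arbitrary illustrative values.

def maxwell_moduli(w, G=1.0, tau=1.0):
    """Return (storage, loss) modulus of a single Maxwell mode at frequency w."""
    x = w * tau
    storage = G * x * x / (1.0 + x * x)
    loss = G * x / (1.0 + x * x)
    return storage, loss

# Low-frequency scaling check: doubling w multiplies G' by ~4 and G'' by ~2.
g1s, g1l = maxwell_moduli(1e-3)
g2s, g2l = maxwell_moduli(2e-3)
print(round(g2s / g1s, 2), round(g2l / g1l, 2))  # 4.0 2.0
```

Reaching these limiting slopes in a simulation, as the smaller-molecule model does at 443 K, is the usual sign that the terminal (fully relaxed) regime has been resolved.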
A 3D puzzle approach to building protein-DNA structures.
Hinton, Deborah M
2017-03-15
Despite recent advances in structural analysis, it is still challenging to obtain a high-resolution structure for a complex of RNA polymerase, transcriptional factors, and DNA. However, using biochemical constraints, 3D printed models of available structures, and computer modeling, one can build biologically relevant models of such supramolecular complexes.
Evaluation of 2D shallow-water model for spillway flow with a complex geometry
USDA-ARS?s Scientific Manuscript database
Although the two-dimensional (2D) shallow water model is formulated based on several assumptions, such as hydrostatic pressure distribution and negligible vertical velocity, as a simple alternative to the complex 3D model it has been used to compute water flows in which these assumptions may be ...
40 CFR 80.65 - General requirements for refiners and importers.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., 1995 through December 31, 1997, either as being subject to the simple model standards, or to the complex model standards; (v) For each of the following parameters, either gasoline or RBOB which meets the...; (B) NOX emissions performance in the case of gasoline certified using the complex model. (C) Benzene...
40 CFR 80.65 - General requirements for refiners and importers.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., 1995 through December 31, 1997, either as being subject to the simple model standards, or to the complex model standards; (v) For each of the following parameters, either gasoline or RBOB which meets the...; (B) NOX emissions performance in the case of gasoline certified using the complex model. (C) Benzene...
USDA-ARS?s Scientific Manuscript database
Complex watershed simulation models are powerful tools that can help scientists and policy-makers address challenging topics, such as land use management and water security. In the Western Lake Erie Basin (WLEB), complex hydrological models have been applied at various scales to help describe relat...
Tips on Creating Complex Geometry Using Solid Modeling Software
ERIC Educational Resources Information Center
Gow, George
2008-01-01
Three-dimensional computer-aided drafting (CAD) software, sometimes referred to as "solid modeling" software, is easy to learn, fun to use, and becoming the standard in industry. However, many users have difficulty creating complex geometry with the solid modeling software. And the problem is not entirely a student problem. Even some teachers and…
Modeling Structure and Dynamics of Protein Complexes with SAXS Profiles
Schneidman-Duhovny, Dina; Hammel, Michal
2018-01-01
Small-angle X-ray scattering (SAXS) is an increasingly common and useful technique for structural characterization of molecules in solution. A SAXS experiment determines the scattering intensity of a molecule as a function of spatial frequency, termed SAXS profile. SAXS profiles can be utilized in a variety of molecular modeling applications, such as comparing solution and crystal structures, structural characterization of flexible proteins, assembly of multi-protein complexes, and modeling of missing regions in the high-resolution structure. Here, we describe protocols for modeling atomic structures based on SAXS profiles. The first protocol is for comparing solution and crystal structures including modeling of missing regions and determination of the oligomeric state. The second protocol performs multi-state modeling by finding a set of conformations and their weights that fit the SAXS profile starting from a single-input structure. The third protocol is for protein-protein docking based on the SAXS profile of the complex. We describe the underlying software, followed by demonstrating their application on interleukin 33 (IL33) with its primary receptor ST2 and DNA ligase IV-XRCC4 complex.
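Fitting a model to a SAXS profile presupposes computing the profile from atomic coordinates, classically via the Debye formula I(q) = Σᵢⱼ fᵢfⱼ sin(q·rᵢⱼ)/(q·rᵢⱼ). The toy example below uses made-up coordinates and unit form factors; production tools additionally model the hydration layer and excluded volume:

```python
# Toy Debye-formula SAXS profile from an atomic model (illustrative only;
# coordinates and form factors are made up, and solvent effects are ignored).
import math

def debye_intensity(coords, f, q):
    """I(q) = sum_ij f_i f_j sin(q r_ij)/(q r_ij) over all atom pairs."""
    if q == 0.0:
        return sum(f) ** 2  # forward scattering: (total form factor)^2
    total = 0.0
    for i, pi in enumerate(coords):
        for j, pj in enumerate(coords):
            r = math.dist(pi, pj)
            total += f[i] * f[j] * (1.0 if r == 0.0 else math.sin(q * r) / (q * r))
    return total

coords = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 4.0, 0.0)]  # toy "molecule"
f = [1.0, 1.0, 1.0]
print(debye_intensity(coords, f, 0.0))       # 9.0 = (sum of form factors)^2
print(debye_intensity(coords, f, 0.1) < 9.0) # True: intensity decays with q
```

Because the profile depends only on the pair-distance distribution, distinct conformations with different distance spectra produce distinguishable profiles, which is what multi-state fitting and SAXS-based docking exploit.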
Dynamical Behaviors in Complex-Valued Love Model With or Without Time Delays
NASA Astrophysics Data System (ADS)
Deng, Wei; Liao, Xiaofeng; Dong, Tao
2017-12-01
In this paper, a novel version of a nonlinear model, i.e. a complex-valued love model with two time delays between two individuals in a love affair, is proposed. A notable feature of this model is that we separate the emotion of one individual into real and imaginary parts to represent the variation and complexity of psychophysiological emotion in a romantic relationship, rather than working in the real domain alone, making our model much closer to reality. This is because love is a complicated cognitive and social phenomenon, full of complexity, diversity and unpredictability, which refers to the coexistence of different aspects of feelings, states and attitudes ranging from joy and trust to sadness and disgust. By analyzing the associated characteristic equation of the linearized equations for our model, it is found that a Hopf bifurcation occurs when the sum of the time delays passes through a sequence of critical values. The stability of the bifurcating cyclic love dynamics is also derived by applying the normal form theory and the center manifold theorem. In addition, it is shown that, for some appropriately chosen parameters, chaotic behaviors can appear even without time delay.
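A stripped-down, delay-free sketch of the modeling idea (not the authors' exact equations): each partner's emotion is a single complex number, and the pair evolves under linear coupling. With damping the romance spirals into a stable equilibrium; it is the time delays, omitted here, that produce the Hopf bifurcation discussed above. All parameters are illustrative:

```python
# Hedged sketch of a complex-valued love model without delays:
# z1, z2 are complex emotions (real + i*imaginary part). The coupling
# matrix [[a, b], [-b, a]] has eigenvalues a +/- i*b, so with a < 0 the
# feelings oscillate while decaying toward equilibrium (stable spiral).

def simulate_love(a=-0.1, b=1.0, dt=0.001, t_end=20.0):
    z1, z2 = complex(1.0, 0.5), complex(-0.5, 0.2)  # initial feelings
    for _ in range(int(t_end / dt)):
        dz1 = a * z1 + b * z2
        dz2 = -b * z1 + a * z2
        z1, z2 = z1 + dt * dz1, z2 + dt * dz2
    return z1, z2

z1, z2 = simulate_love()
print(abs(z1) < 1.0 and abs(z2) < 1.0)  # True: damped spiral, emotions settle
```

Introducing delays replaces the eigenvalue condition with a transcendental characteristic equation, and sustained cycles appear once the delay sum crosses a critical value, which is the Hopf mechanism the paper analyzes.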
Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling
NASA Technical Reports Server (NTRS)
Hojnicki, Jeffrey S.; Rusick, Jeffrey J.
2005-01-01
Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all of the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes), complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
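The contrast between deterministic and probabilistic analysis can be illustrated with plain Monte Carlo uncertainty propagation. This is a toy model with invented numbers, not NASA's SPACE model or its actual probabilistic techniques: uncertain inputs are sampled, pushed through the model, and the output is reported as a distribution rather than a single value.

```python
# Illustrative Monte Carlo propagation through a toy power-capability model
# (made-up parameters; not the SPACE model). Instead of one deterministic
# number, the analysis yields a mean and an uncertainty band.
import random
import statistics

random.seed(1)

def power_capability(efficiency, degradation, area_m2=100.0, flux_w_m2=1360.0):
    """Toy model: delivered power after solar array degradation."""
    return flux_w_m2 * area_m2 * efficiency * (1.0 - degradation)

samples = sorted(
    power_capability(
        efficiency=random.gauss(0.14, 0.01),    # uncertain cell efficiency
        degradation=random.uniform(0.00, 0.10)  # uncertain degradation
    )
    for _ in range(10_000)
)
mean_kw = statistics.mean(samples) / 1000.0
lo, hi = samples[500] / 1000.0, samples[9500] / 1000.0  # 5th/95th percentile
print(f"mean {mean_kw:.1f} kW, 90% band [{lo:.1f}, {hi:.1f}] kW")
```

The band directly answers the sensitivity question that single-valued deterministic runs cannot: how much the predicted capability moves when the inputs wander within their uncertainties.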
An Efficient Model-based Diagnosis Engine for Hybrid Systems Using Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Narasimhan, Sriram; Roychoudhury, Indranil; Daigle, Matthew; Pulido, Belarmino
2013-01-01
Complex hybrid systems are present in a large range of engineering applications, like mechanical systems, electrical circuits, or embedded computation systems. The behavior of these systems is made up of continuous and discrete event dynamics that increase the difficulty of accurate and timely online fault diagnosis. The Hybrid Diagnosis Engine (HyDE) offers flexibility to the diagnosis application designer to choose the modeling paradigm and the reasoning algorithms. The HyDE architecture supports the use of multiple modeling paradigms at the component and system level. However, HyDE faces some problems regarding performance in terms of complexity and time. Our focus in this paper is on developing efficient model-based methodologies for online fault diagnosis in complex hybrid systems. To do this, we propose a diagnosis framework where structural model decomposition is integrated within the HyDE diagnosis framework to reduce the computational complexity associated with the fault diagnosis of hybrid systems. As a case study, we apply our approach to a diagnostic testbed, the Advanced Diagnostics and Prognostics Testbed (ADAPT), using real data.
Formative feedback and scaffolding for developing complex problem solving and modelling outcomes
NASA Astrophysics Data System (ADS)
Frank, Brian; Simper, Natalie; Kaupp, James
2018-07-01
This paper discusses the use and impact of formative feedback and scaffolding to develop outcomes for complex problem solving in a required first-year course in engineering design and practice at a medium-sized research-intensive Canadian university. In 2010, the course began to use team-based, complex, open-ended contextualised problems to develop problem solving, communications, teamwork, modelling, and professional skills. Since then, formative feedback has been incorporated into: task and process-level feedback on scaffolded tasks in-class, formative assignments, and post-assignment review. Development in complex problem solving and modelling has been assessed through analysis of responses from student surveys, direct criterion-referenced assessment of course outcomes from 2013 to 2015, and an external longitudinal study. The findings suggest that students are improving in outcomes related to complex problem solving over the duration of the course. Most notably, the addition of new feedback and scaffolding coincided with improved student performance.
Rise and fall of political complexity in island South-East Asia and the Pacific.
Currie, Thomas E; Greenhill, Simon J; Gray, Russell D; Hasegawa, Toshikazu; Mace, Ruth
2010-10-14
There is disagreement about whether human political evolution has proceeded through a sequence of incremental increases in complexity, or whether larger, non-sequential increases have occurred. The extent to which societies have decreased in complexity is also unclear. These debates have continued largely in the absence of rigorous, quantitative tests. We evaluated six competing models of political evolution in Austronesian-speaking societies using phylogenetic methods. Here we show that in the best-fitting model political complexity rises and falls in a sequence of small steps. This is closely followed by another model in which increases are sequential but decreases can be either sequential or in bigger drops. The results indicate that large, non-sequential jumps in political complexity have not occurred during the evolutionary history of these societies. This suggests that, despite the numerous contingent pathways of human history, there are regularities in cultural evolution that can be detected using computational phylogenetic methods.
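The competing models above differ in which transitions between complexity levels are allowed. A toy version of that idea (not the authors' Bayesian phylogenetic machinery) treats political complexity as a continuous-time Markov chain; in the sequential model only ±1 transitions have nonzero rates, so a two-step change can only occur via the intermediate state and is second-order in elapsed time. The rate value is illustrative:

```python
# Toy sequential-evolution model: complexity states 0..3 evolve as a
# continuous-time Markov chain where only +/-1 transitions are allowed.
# Over a short time t, P(0 -> 1) ~ r*t but P(0 -> 2) ~ (r*t)^2 / 2,
# so non-sequential jumps effectively never happen in one step.

def matrix_exp(Q, t, terms=40):
    """exp(Q t) via truncated power series (adequate for small Q*t)."""
    n = len(Q)
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = [[sum(term[i][m] * Q[m][j] * t / k for m in range(n))
                 for j in range(n)] for i in range(n)]
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

r = 1.0  # illustrative step rate
Q = [[-r,    r,    0,  0],
     [ r, -2*r,    r,  0],
     [ 0,    r, -2*r,  r],
     [ 0,    0,    r, -r]]

P_short = matrix_exp(Q, 0.01)
print(P_short[0][1] > P_short[0][2] > P_short[0][3] > 0.0)  # True
```

Model comparison then asks whether adding direct multi-step rates (the non-sequential models) improves the fit to the observed societies on the language phylogeny; the paper's answer is that it does not.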
Experimental and modeling study of the uranium (VI) sorption on goethite.
Missana, Tiziana; García-Gutiérrez, Miguel; Maffiotte, Cesar
2003-04-15
Acicular goethite was synthesized in the laboratory and its main physicochemical properties (composition, microstructure, surface area, and surface charge) were analyzed as a previous step to sorption experiments. The stability of the oxide, under the conditions used in sorption studies, was also investigated. The sorption of U(VI) onto goethite was studied under O(2)- and CO(2)-free atmosphere and in a wide range of experimental conditions (pH, ionic strength, radionuclide, and solid concentration), in order to assess the validity of different surface complexation models available for the interpretation of sorption data. Three different models were used to fit the experimental data. The first two models were based on the diffuse double layer concept. The first one (Model 1) considered two different monodentate complexes with the goethite surface and the second (Model 2) a single binuclear bidentate complex. A nonelectrostatic (NE) approach was used as a third model and, in that case, the same species considered in Model 1 were used. The results showed that all the models are able to describe the sorption behavior fairly well as a function of pH, electrolyte concentration, and U(VI) concentration. However, Model 2 fails in the description of the uranium sorption behavior as a function of the sorbent concentration. This demonstrates the importance of checking the validity of any surface complexation model under the widest possible range of experimental conditions.
Complex Dynamics in Nonequilibrium Economics and Chemistry
NASA Astrophysics Data System (ADS)
Wen, Kehong
Complex dynamics provides a new approach to dealing with economic complexity. We study interactively the empirical and theoretical aspects of business cycles. The way of exploring complexity is similar to that in the study of an oscillatory chemical system (the BZ system), a model system for studying complex behavior. We contribute by qualitatively simulating the complex periodic patterns observed in controlled BZ experiments, narrowing the gap between modeling and experiment. The gap between theory and reality is much wider in economics, which involves the study of human expectations and decisions, the essential difference from the natural sciences. Our empirical and theoretical studies make substantial progress in closing this gap. With help from new developments in nonequilibrium physics, i.e., complex spectral theory, we advance our technique for detecting characteristic time scales from empirical economic data. We obtain correlation resonances, which give oscillating modes with decays for the correlation decomposition, from different time series including the S&P 500, M2, crude oil spot prices, and GNP. The time scales found are strikingly compatible with business experience and other studies of business cycles. They reveal the non-Markovian nature of coherent markets. The resonances enhance the evidence of economic chaos obtained by using other tests. The evolving multi-humped distributions produced by the moving-time-window technique reveal the nonequilibrium nature of economic behavior. They reproduce the American economic history of booms and busts. The studies seem to provide a way out of the debate on chaos versus noise, and to unify the cyclical and stochastic approaches to explaining business fluctuations. Based on these findings and a new expectation formulation, we construct a business cycle model which gives patterns qualitatively compatible with those found empirically.
The soft-bouncing oscillator model provides a better alternative than the harmonic oscillator or the random walk model as the building block of business cycle theory. The mathematical structure of the model (a delay differential equation) is studied analytically and numerically. The research paves the way toward sensible economic forecasting.
Modeling OPC complexity for design for manufacturability
NASA Astrophysics Data System (ADS)
Gupta, Puneet; Kahng, Andrew B.; Muddu, Swamy; Nakagawa, Sam; Park, Chul-Hong
2005-11-01
Increasing design complexity in sub-90 nm designs results in increased mask complexity and cost. Resolution enhancement techniques (RET) such as assist feature addition, phase shifting (attenuated PSM) and aggressive optical proximity correction (OPC) help preserve feature fidelity in silicon but increase mask complexity and cost. The increase in data volume with rising mask complexity is becoming prohibitive for manufacturing. Mask cost is determined by mask write time and mask inspection time, which are directly related to the complexity of features printed on the mask. Aggressive RETs increase complexity by adding assist features and by modifying existing features. Passing design intent to OPC has been identified in several recent works as a way to reduce mask complexity and cost. The goal of design-aware OPC is to relax the OPC tolerances of layout features to minimize mask cost without sacrificing parametric yield. To convey optimal OPC tolerances for manufacturing, design optimization should drive OPC tolerance optimization using models of mask cost for devices and wires, and should be aware of the impact of OPC correction levels on the mask cost and performance of the design. This work introduces mask cost characterization (MCC), which quantifies OPC complexity, measured in terms of the fracture count of the mask, for different OPC tolerances. MCC with different OPC tolerances is a critical step in linking design and manufacturing. In this paper, we present an MCC methodology that provides models of the fracture count of standard cells and wire patterns for use in design optimization. MCC cannot be performed by designers, as they do not have access to foundry OPC recipes and RET tools. To build a fracture count model, we perform OPC and fracturing on a limited set of standard cells and wire configurations with all tolerance combinations. Separately, we identify the characteristics of the layout that impact fracture count.
Based on the fracture count (FC) data from OPC and mask data preparation runs, we build models of FC as a function of OPC tolerances and layout parameters.
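The abstract does not state the functional form of the FC model, so as an illustration only, the sketch below fits a simple power law FC = a·tol^b to made-up fracture-count measurements by linearized least squares; both the numbers and the power-law assumption are hypothetical.

```python
import numpy as np

# Hypothetical MCC data: fracture counts per cell measured at several OPC
# tolerance settings (nm). Values are illustrative, not from the paper.
tol = np.array([2.0, 4.0, 6.0, 8.0, 10.0])        # OPC tolerance (nm)
fc  = np.array([940., 410., 250., 180., 140.])    # fractures per cell

# Linearize the assumed power law: log FC = log a + b * log tol,
# then solve the normal equations by least squares.
A = np.column_stack([np.ones_like(tol), np.log(tol)])
(log_a, b), *_ = np.linalg.lstsq(A, np.log(fc), rcond=None)
a = np.exp(log_a)

def predict(t):
    """FC estimate for a new tolerance setting, under the power-law assumption."""
    return a * t ** b
```

With a fitted model of this kind, a design optimizer can trade a looser tolerance on a non-critical feature against the predicted drop in fracture count, which is the design-manufacturing link the paper argues for.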
The Effect of Task Complexity on the Quality of EFL Learners' Argumentative Writing
ERIC Educational Resources Information Center
Sadeghi, Karim; Mosalli, Zahra
2013-01-01
Based on Robinson's (2005) Cognition Hypothesis and Skehan and Foster's (2001) Limited Attentional Capacity Model, the current study attempted to investigate the effect of manipulating task complexity on argumentative writing quality in terms of lexical complexity, fluency, grammatical accuracy, and syntactic complexity. Task complexity was…
Revisiting the structures of several antibiotics bound to the bacterial ribosome
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bulkley, David; Innis, C. Axel; Blaha, Gregor
2010-10-08
The increasing prevalence of antibiotic-resistant pathogens reinforces the need for structures of antibiotic-ribosome complexes that are accurate enough to enable the rational design of novel ribosome-targeting therapeutics. Structures of many antibiotics in complex with both archaeal and eubacterial ribosomes have been determined, yet discrepancies between several of these models have raised the question of whether these differences arise from species-specific variations or from experimental problems. Our structure of chloramphenicol in complex with the 70S ribosome from Thermus thermophilus suggests a model for chloramphenicol bound to the large subunit of the bacterial ribosome that is radically different from the prevailing model. Further, our structures of the macrolide antibiotics erythromycin and azithromycin in complex with a bacterial ribosome are indistinguishable from those determined for complexes with the 50S subunit of Haloarcula marismortui, but differ significantly from the models that have been published for 50S subunit complexes of the eubacterium Deinococcus radiodurans. Our structure of the antibiotic telithromycin bound to the T. thermophilus ribosome reveals a lactone ring with a conformation similar to that observed in the H. marismortui and D. radiodurans complexes. However, the alkyl-aryl moiety is oriented differently in all three organisms, and the contacts observed with the T. thermophilus ribosome are consistent with biochemical studies performed on the Escherichia coli ribosome. Thus, our results support a mode of macrolide binding that is largely conserved across species, suggesting that the quality and interpretation of electron density, rather than species specificity, may be responsible for many of the discrepancies between the models.
Hu, Jin; Wang, Jun
2015-06-01
In recent years, complex-valued recurrent neural networks have been developed and analysed in depth, given their good modelling performance in applications involving complex-valued elements. When implementing continuous-time dynamical systems for simulation or computational purposes, it is often necessary to use a discrete-time model that is an analogue of the continuous-time system. In this paper, we analyse a discrete-time complex-valued recurrent neural network model and obtain sufficient conditions for its global exponential periodicity and exponential stability. Simulation results for several numerical examples are presented to illustrate the theoretical results, and an application to associative memory is also given. Copyright © 2015 Elsevier Ltd. All rights reserved.
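A minimal sketch of the kind of discrete-time complex-valued recurrent iteration analysed, z[k+1] = f(W z[k] + b), using a split tanh activation and weights scaled so the map is a contraction. The paper's stability conditions are more general; the scaling trick and all parameters below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random complex weights and bias; dividing by the spectral (operator 2-) norm
# makes the linear part a 0.5-contraction, so the iteration converges.
W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
W *= 0.5 / np.linalg.norm(W, 2)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def act(z):
    # A common "split" complex activation: tanh on real and imaginary parts.
    # It is 1-Lipschitz, so the composed map inherits the 0.5 contraction.
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

z = np.zeros(n, dtype=complex)
for _ in range(200):                  # iterate z[k+1] = act(W @ z[k] + b)
    z = act(W @ z + b)

residual = np.linalg.norm(act(W @ z + b) - z)   # near zero at a fixed point
```

Exponential stability in this toy setting shows up as the residual shrinking geometrically with the iteration count, mirroring the exponential convergence the paper proves under its sufficient conditions.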
Liotta, Flavia; d'Antonio, Giuseppe; Esposito, Giovanni; Fabbricino, Massimiliano; Frunzo, Luigi; van Hullebusch, Eric D; Lens, Piet N L; Pirozzi, Francesco
2014-01-01
The role of the moisture content and particle size (PS) in the disintegration of complex organic matter during the wet anaerobic digestion (AD) process was investigated. A range of total solids (TS) from 5% to 11.3% and PS from 0.25 to 15 mm was evaluated using carrot waste as a model complex organic matter. The experimental results showed that the methane production rate decreased with higher TS and PS. A modified version of Anaerobic Digestion Model No. 1 (ADM1) for complex organic substrates was used to model the experimental data. The simulations showed a decrease of the disintegration rate constants with increasing TS and PS. The results of the biomethanation tests were used to calibrate and validate the applied model; in particular, the values of the disintegration constant for various TS and PS were determined. The simulations showed good agreement between the numerical and observed data.
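ADM1 treats disintegration of composite substrate as a first-order step, dX/dt = −k_dis·X, so cumulative methane from a batch test is well approximated by B(t) = B0·(1 − exp(−k_dis·t)). The sketch below estimates k_dis from BMP-style data by a brute-force least-squares grid search; the data points are invented for illustration, not the paper's carrot-waste measurements.

```python
import numpy as np

# Illustrative cumulative methane data (days, NmL CH4 / g VS); values made up.
t = np.array([0., 2., 5., 10., 15., 20., 30.])
B = np.array([0., 95., 200., 300., 345., 365., 380.])

def fit_kdis(t, B,
             B0_grid=np.linspace(300.0, 500.0, 201),
             k_grid=np.linspace(0.01, 0.5, 491)):
    """Grid-search least squares for (B0, k_dis) in B0 * (1 - exp(-k t))."""
    best_sse, best_B0, best_k = np.inf, None, None
    for B0 in B0_grid:
        for k in k_grid:
            sse = np.sum((B - B0 * (1.0 - np.exp(-k * t))) ** 2)
            if sse < best_sse:
                best_sse, best_B0, best_k = sse, B0, k
    return best_B0, best_k

B0, k_dis = fit_kdis(t, B)   # ultimate methane potential and disintegration rate
```

Repeating this fit for batches at different TS and PS levels yields the trend the paper reports: the fitted disintegration constant falls as solids content and particle size grow.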
Modelling and Order of Acoustic Transfer Functions Due to Reflections from Augmented Objects
NASA Astrophysics Data System (ADS)
Kuster, Martin; de Vries, Diemer
2006-12-01
It is commonly accepted that the sound reflections from real physical objects are much more complicated than what usually is, or even can be, modelled by room acoustics modelling software. The main reason for this limitation is the level of detail inherent in the physical object, in terms of its geometrical and acoustic properties. In the present paper, the complexity of the sound reflections from a corridor wall is investigated by modelling the corresponding acoustic transfer functions at several receiver positions in front of the wall. The complexity of different wall configurations has been examined, with the changes achieved by altering the wall's acoustic image. The results show that for a homogeneous flat wall the complexity is significant, and for a wall including various smaller objects the complexity is highly dependent on the position of the receiver with respect to the objects.
Sterner, Eric; Masuko, Sayaka; Li, Guoyun; Li, Lingyun; Green, Dixy E.; Otto, Nigel J.; Xu, Yongmei; DeAngelis, Paul L.; Liu, Jian; Dordick, Jonathan S.; Linhardt, Robert J.
2014-01-01
Four well-defined heparan sulfate (HS) block copolymers containing S-domains (high sulfo group content) placed adjacent to N-domains (low sulfo group content) were chemoenzymatically synthesized and characterized. The domain lengths in these HS block copolymers were ∼40 saccharide units. Microtiter 96-well and three-dimensional cell-based microarray assays utilizing murine immortalized bone marrow (BaF3) cells were developed to evaluate the activity of these HS block copolymers. Each recombinant BaF3 cell line expresses only a single type of fibroblast growth factor receptor (FGFR) but produces neither HS nor fibroblast growth factors (FGFs). In the presence of different FGFs, BaF3 cell proliferation showed clear differences for the four HS block copolymers examined. These data were used to examine the two proposed signaling models, the symmetric FGF2-HS2-FGFR2 ternary complex model and the asymmetric FGF2-HS1-FGFR2 ternary complex model. In the symmetric FGF2-HS2-FGFR2 model, two acidic HS chains bind in a basic canyon located on the top face of the FGF2-FGFR2 protein complex. In this model the S-domains at the non-reducing ends of the two HS proteoglycan chains are proposed to interact with the FGF2-FGFR2 protein complex. In contrast, in the asymmetric FGF2-HS1-FGFR2 model, a single HS chain interacts with the FGF2-FGFR2 protein complex through a single S-domain that can be located at any position within an HS chain. Our data comparing a series of synthetically prepared HS block copolymers support a preference for the symmetric FGF2-HS2-FGFR2 ternary complex model. PMID:24563485
Mickiewicz, Agnieszka; Sarzyńska, Joanna; Miłostan, Maciej; Kurzyńska-Kokorniak, Anna; Rybarczyk, Agnieszka; Łukasiak, Piotr; Kuliński, Tadeusz; Figlerowicz, Marek; Błażewicz, Jacek
2017-02-01
Plant Dicer-like proteins (DCLs) belong to the Ribonuclease III (RNase III) enzyme family. They are involved in the regulation of gene expression and antiviral defense through RNA interference pathways. A model plant, Arabidopsis thaliana encodes four DCL proteins (AtDCL1-4) that produce different classes of small regulatory RNAs. Our studies focus on AtDCL4 that processes double-stranded RNAs (dsRNAs) into 21 nucleotide trans-acting small interfering RNAs. So far, little is known about the structures of plant DCLs and the complexes they form with dsRNA. In this work, we present models of the catalytic core of AtDCL4 and AtDCL4-dsRNA complex constructed by computational methods. We built a homology model of the catalytic core of AtDCL4 comprising Platform, PAZ, Connector helix and two RNase III domains. To assemble the AtDCL4-dsRNA complex two modeling approaches were used. In the first method, to establish conformations that allow building a consistent model of the complex, we used Normal Mode Analysis for both dsRNA and AtDCL4. The second strategy involved template-based approach for positioning of the PAZ domain and manual arrangement of the Connector helix. Our results suggest that the spatial orientation of the Connector helix, Platform and PAZ relative to the RNase III domains is crucial for measuring dsRNA of defined length. The modeled complexes provide information about interactions that may contribute to the relative orientations of these domains and to dsRNA binding. All these information can be helpful for understanding the mechanism of AtDCL4-mediated dsRNA recognition and binding, to produce small RNA of specific size. Copyright © 2016 Elsevier Ltd. All rights reserved.
In Silico Analysis for the Study of Botulinum Toxin Structure
NASA Astrophysics Data System (ADS)
Suzuki, Tomonori; Miyazaki, Satoru
2010-01-01
Protein-protein interactions play many important roles in biological function, and knowledge of protein-protein complex structure is required for understanding that function. Determining protein-protein complex structures experimentally remains difficult, so computational prediction by structure modeling and docking is a valuable method. In addition, MD simulation is one of the most popular methods for modeling protein structures and characterizing their properties. Here, we attempt to predict protein-protein complex structure and properties using bioinformatic methods, focusing on the botulinum toxin complex as the target structure.
A density-based clustering model for community detection in complex networks
NASA Astrophysics Data System (ADS)
Zhao, Xiang; Li, Yantao; Qu, Zehui
2018-04-01
Network clustering (or graph partitioning) is an important technique for uncovering the underlying community structures in complex networks, which has been widely applied in various fields including astronomy, bioinformatics, sociology, and bibliometrics. In this paper, we propose a density-based clustering model for community detection in complex networks (DCCN). The key idea is to find group centers with a higher density than their neighbors and a relatively large integrated-distance from nodes with higher density. The experimental results indicate that our approach is efficient and effective for community detection of complex networks.
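The "higher density than neighbors, large distance from denser nodes" criterion is the density-peaks idea. A minimal Euclidean sketch is below; the DCCN paper works on network (graph) distances and its integrated-distance differs in detail, so treat this as an illustration of the selection rule only, with invented data.

```python
import numpy as np

def density_peak_centers(X, dc=0.5, n_centers=2):
    """Rank points by rho * delta: rho is a kernel density estimate, delta the
    distance to the nearest point of higher density (the globally densest
    point gets the maximum distance instead). Returns the top center indices."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rho = np.exp(-(d / dc) ** 2).sum(axis=1)          # Gaussian-kernel density
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = d[i, higher].min() if higher.size else d[i].max()
    return np.argsort(rho * delta)[::-1][:n_centers]

# Two synthetic, well-separated blobs; the two top-scored points should come
# one from each blob, since only true centers combine high rho with large delta.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)),
               rng.normal(5.0, 0.3, (30, 2))])
centers = density_peak_centers(X)
```

Once centers are chosen, each remaining node is typically assigned to the community of its nearest higher-density neighbor, which propagates labels outward from the centers.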
NASA Astrophysics Data System (ADS)
Guo, L.; Yin, Y.; Deng, M.; Guo, L.; Yan, J.
2017-12-01
At present, most magnetotelluric (MT) forward modelling and inversion codes are based on the finite difference method, but its structured meshes cannot be well adapted to conditions with arbitrary topography or complex tectonic structures. By contrast, the finite element method is more accurate for calculating complex and irregular 3-D regions and has a lower requirement of function smoothness; however, the complexity of mesh gridding and limitations of computer capacity have restricted its application. COMSOL Multiphysics is a cross-platform finite element analysis, solver and multiphysics full-coupling simulation software package. It achieves highly accurate numerical simulations with high computational performance and outstanding multi-field bi-directional coupling analysis capability. In addition, its AC/DC and RF modules can be used to easily calculate the electromagnetic responses of complex geological structures, and with an adaptive unstructured grid the calculation is much faster. In order to improve the discretization of the computing area, we use the combination of Matlab and COMSOL Multiphysics to establish a general procedure for calculating the MT responses of arbitrary resistivity models. The calculated responses include the surface electric and magnetic field components, impedance components, magnetic transfer functions and phase tensors. The reliability of this procedure is then verified by 1-D, 2-D, 3-D and anisotropic forward modelling tests. Finally, we establish a 3-D lithospheric resistivity model for the Proterozoic Wutai-Hengshan Mts. within the North China Craton by fitting the real MT data collected there. The reliability of the model is also verified by induction vectors and phase tensors. Our model shows more details and better resolution than the previously published 3-D model based on the finite difference method.
In conclusion, COMSOL Multiphysics package is suitable for modeling the 3-D lithospheric resistivity structures under complex tectonic deformation backgrounds, which could be a good complement to the existing finite-difference inversion algorithms.
NASA Astrophysics Data System (ADS)
Watson, Brett; Yeo, Leslie; Friend, James
2010-06-01
Making use of mechanical resonance has many benefits for the design of microscale devices. A key to successfully incorporating this phenomenon in the design of a device is to understand how the resonant frequencies of interest are affected by changes to the geometric parameters of the design. For simple geometric shapes, this is quite easy, but for complex nonlinear designs, it becomes significantly more complex. In this paper, two novel modeling techniques are demonstrated to extract the axial and torsional resonant frequencies of a complex nonlinear geometry. The first decomposes the complex geometry into easy to model components, while the second uses scaling techniques combined with the finite element method. Both models overcome problems associated with using current analytical methods as design tools, and enable a full investigation of how changes in the geometric parameters affect the resonant frequencies of interest. The benefit of such models is then demonstrated through their use in the design of a prototype piezoelectric ultrasonic resonant micromotor which has improved performance characteristics over previous prototypes.
NASA Astrophysics Data System (ADS)
Liu, Y.; Gupta, H.; Wagener, T.; Stewart, S.; Mahmoud, M.; Hartmann, H.; Springer, E.
2007-12-01
Some of the most challenging issues facing contemporary water resources management are those typified by complex coupled human-environmental systems with poorly characterized uncertainties. In other words, major decisions regarding water resources have to be made in the face of substantial uncertainty and complexity. It has been suggested that integrated models can be used to coherently assemble information from a broad set of domains, and can therefore serve as an effective means for tackling the complexity of environmental systems. Further, well-conceived scenarios can effectively inform decision making, particularly when high complexity and poorly characterized uncertainties make the problem intractable via traditional uncertainty analysis methods. This presentation discusses the integrated modeling framework adopted by SAHRA, an NSF Science & Technology Center, to investigate stakeholder-driven water sustainability issues within the semi-arid southwestern US. The multi-disciplinary, multi-resolution modeling framework incorporates a formal scenario approach to analyze the impacts of plausible (albeit uncertain) alternative futures to support adaptive management of water resources systems. Some of the major challenges involved in, and lessons learned from, this effort will be discussed.
Olejník, Peter; Nosal, Matej; Havran, Tomas; Furdova, Adriana; Cizmar, Maros; Slabej, Michal; Thurzo, Andrej; Vitovic, Pavol; Klvac, Martin; Acel, Tibor; Masura, Jozef
2017-01-01
To evaluate the accuracy of the three-dimensional (3D) printing of cardiovascular structures, and to explore whether utilisation of 3D printed heart replicas can improve surgical and catheter interventional planning in patients with complex congenital heart defects. Between December 2014 and November 2015 we fabricated eight cardiovascular models based on computed tomography data in patients with complex spatial anatomical relationships of cardiovascular structures. A Bland-Altman analysis was used to assess the accuracy of 3D printing by comparing dimension measurements at analogous anatomical locations between the printed models and digital imagery data, as well as between printed models and in vivo surgical findings. The contribution of 3D printed heart models to improved perioperative planning was evaluated in the four most representative patients. Bland-Altman analysis confirmed the high accuracy of 3D cardiovascular printing. Each printed model offered an improved spatial anatomical orientation of cardiovascular structures. Current 3D printers can produce authentic copies of patients' cardiovascular systems from computed tomography data. The use of 3D printed models can facilitate surgical or catheter interventional procedures in patients with complex congenital heart defects due to better preoperative planning and intraoperative orientation.
Partitioning degrees of freedom in hierarchical and other richly-parameterized models.
Cui, Yue; Hodges, James S; Kong, Xiaoxiao; Carlin, Bradley P
2010-02-01
Hodges & Sargent (2001) developed a measure of a hierarchical model's complexity, degrees of freedom (DF), that is consistent with definitions for scatterplot smoothers, interpretable in terms of simple models, and that enables control of a fit's complexity by means of a prior distribution on complexity. DF describes complexity of the whole fitted model but in general it is unclear how to allocate DF to individual effects. We give a new definition of DF for arbitrary normal-error linear hierarchical models, consistent with Hodges & Sargent's, that naturally partitions the n observations into DF for individual effects and for error. The new conception of an effect's DF is the ratio of the effect's modeled variance matrix to the total variance matrix. This gives a way to describe the sizes of different parts of a model (e.g., spatial clustering vs. heterogeneity), to place DF-based priors on smoothing parameters, and to describe how a smoothed effect competes with other effects. It also avoids difficulties with the most common definition of DF for residuals. We conclude by comparing DF to the effective number of parameters p(D) of Spiegelhalter et al (2002). Technical appendices and a dataset are available online as supplemental materials.
Zhou, Jingyu; Tian, Shulin; Yang, Chenglin
2014-01-01
Little research has addressed prognostics for analog circuits, and the few existing methods lack a connection to circuit analysis when extracting and calculating features, so that fault indicator (FI) calculation often lacks a sound rationale, degrading prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Since faults of single components are the most numerous in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and implements complex field modeling. Then, using an established parameter scanning model related to the complex field, it analyzes the relationship between parameter variation and degeneration of single components in the model in order to obtain a more reasonable FI feature set via calculation. From the obtained FI feature set, it establishes a novel model of the degeneration trend of single components in analog circuits. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of single components in analog circuits. Since the FI feature set is calculated on a more reasonable basis, prediction accuracy is improved to some extent. The foregoing conclusions are verified by experiments.
Wu, Zhi-fang; Lei, Yong-hua; Li, Wen-jie; Liao, Sheng-hui; Zhao, Zi-jin
2013-02-01
To explore an effective method to construct and validate a finite element model of the unilateral cleft lip and palate (UCLP) craniomaxillary complex with sutures, which could be applied in further three-dimensional finite element analysis (FEA). One male patient aged 9 with a complete left cleft lip and palate was selected, and a CT scan of the skull was taken at 0.75 mm intervals. The CT data were saved in Dicom format and imported into Mimics 10.0 to generate a three-dimensional anatomic model. Geomagic Studio 12.0 was then used to match, smoothen and transfer the anatomic model into a CAD model with NURBS patches. Next, 12 circum-maxillary sutures were integrated into the CAD model using Solidworks (2011 version). Finally, meshing was performed with E-feature Biomedical Modeler and a three-dimensional finite element model with sutures was obtained. A maxillary protraction force (500 g per side, 20° downward and forward from the occlusal plane) was applied. Displacement and stress distribution of some important craniofacial structures were measured and compared with the results of related research in the literature. A three-dimensional finite element model of the UCLP craniomaxillary complex with 12 sutures was established from the CT scan data. This simulation model consisted of 206 753 individual elements with 260 662 nodes, a more precise simulation and a better representation of the human craniomaxillary complex than the formerly available FEA models. By comparison, this model was proved to be valid. It is an effective way to establish the three-dimensional finite element model of the UCLP craniomaxillary complex with sutures from CT images with the help of the following software packages: Mimics 10.0, Geomagic Studio 12.0, Solidworks and E-feature Biomedical Modeler.
NASA Astrophysics Data System (ADS)
Nikolaeva, L. S.; Semenov, A. N.
2018-02-01
The anticoagulant activity of high-molecular-weight heparin is increased by developing a new highly active heparin complex with glutamate using the thermodynamic model of chemical equilibria based on pH-metric data. The anticoagulant activity of the developed complexes is estimated in the pH range of blood plasma according to the drop in the calculated equilibrium Ca2+ concentration associated with the formation of mixed ligand complexes of Ca2+ ions, heparin (Na4hep), and glutamate (H2Glu). A thermodynamic model is calculated by mathematically modelling chemical equilibria in the CaCl2-Na4hep-H2Glu-H2O-NaCl system in the pH range of 2.30 ≤ pH ≤ 10.50 in diluted saline that acts as a background electrolyte (0.154 M NaCl) at 37°C and initial concentrations of the main components of n × 10^-3 M, where n ≤ 4. The thermodynamic model is used to determine the main complex of the monomeric unit of heparin with glutamate (HhepGlu5-) and the most stable mixed ligand complex of Ca2+ with heparin and glutamate (Ca2hepGlu2-) in the pH range of blood plasma (6.80 ≤ pH ≤ 7.40). It is concluded that the Ca2hepGlu2- complex reduces the Ca2+ concentration 107 times more than the Ca2+ complex with pure heparin. The anticoagulant effect of the developed HhepGlu5- complex is confirmed in vitro and in vivo via coagulation tests on the blood plasma of laboratory rats. Additional antithrombotic properties of the developed complex are identified. The new highly active anticoagulant, HhepGlu5- complex with additional antithrombotic properties, is patented.
Street, Nichola; Forsythe, Alexandra M; Reilly, Ronan; Taylor, Richard; Helmy, Mai S
2016-01-01
Fractal patterns offer one way to represent the rough complexity of the natural world. Whilst they dominate many of our visual experiences in nature, little large-scale perceptual research has been done to explore how we respond aesthetically to these patterns. Previous research (Taylor et al., 2011) suggests that fractal patterns with mid-range fractal dimensions (FDs) have universal aesthetic appeal. Perceptual and aesthetic responses to visual complexity have been more varied, with findings suggesting both linear (Forsythe et al., 2011) and curvilinear (Berlyne, 1970) relationships. Individual differences have been found to account for many of the differences we see in aesthetic responses, but some, such as culture, have received little attention within the fractal and complexity research fields. This two-study article aims to test preference responses to FD and visual complexity, using a large cohort (N = 443) of participants from around the world to allow universality claims to be tested. It explores the extent to which age, culture and gender can predict our preferences for fractally complex patterns. Following exploratory analysis that found strong correlations between FD and visual complexity, a series of linear mixed-effect models were implemented to explore whether each of the individual variables could predict preference. The first tested a linear complexity model (likelihood of selecting the more complex image from the pair of images) and the second a mid-range FD model (likelihood of selecting an image within the mid-range). Results show that individual differences can reliably predict preferences for complexity across culture, gender and age. However, in keeping with current findings, the mid-range models show greater consistency in preference, not mediated by gender, age or culture.
This article supports the established theory that the mid-range fractal patterns appear to be a universal construct underlying preference but also highlights the fragility of universal claims by demonstrating individual differences in preference for the interrelated concept of visual complexity. This highlights a current stalemate in the field of empirical aesthetics.
NASA Technical Reports Server (NTRS)
Jones, Lisa E. (Technical Monitor); Stockwell, Alan E.
2005-01-01
LS-DYNA simulations were conducted to study the influence of model complexity on the response of a typical Reinforced Carbon-Carbon (RCC) panel to a foam impact at a location approximately midway between the ribs. A structural model comprised of Panels 10 and 11 and T-Seal 11 was chosen as the baseline model for the study. A simulation was conducted with foam striking Panel 10 at Location 4 at an alpha angle of 10 degrees, with an impact velocity of 1000 ft/sec. A second simulation was conducted after removing Panel 11 from the model, and a third simulation was conducted after removing both Panel 11 and T-Seal 11. All three simulations showed approximately the same response for Panel 10, and the simplified simulation model containing only Panel 10 was shown to be significantly less expensive to execute than the other two more complex models.
Metrics for Business Process Models
NASA Astrophysics Data System (ADS)
Mendling, Jan
Up until now, there has been little research on why people introduce errors in real-world business process models. In a more general context, Simon [404] points to the limitations of cognitive capabilities and concludes that humans act rationally only to a certain extent. Concerning modeling errors, this argument implies that human modelers lose track of the interrelations of large and complex models due to their limited cognitive capabilities and introduce errors that they would not insert in a small model. A recent study by Mendling et al. [275] explores to what extent certain complexity metrics of business process models have the potential to serve as error determinants. The authors conclude that complexity indeed appears to have an impact on error probability. Before we can test such a hypothesis in a more general setting, we have to establish an understanding of how we can define determinants that drive error probability and how we can measure them.
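One concrete example of the complexity metrics in question is Cardoso's control-flow complexity (CFC): each XOR-split contributes its fan-out, each OR-split 2^k − 1 (the number of possible activation subsets), and each AND-split 1. The sketch below computes it for a made-up connector list; the connector inventory is hypothetical, though the scoring rule follows the published metric.

```python
def cfc(splits):
    """Control-flow complexity of a process model, given its split connectors
    as (kind, fan_out) pairs: 'xor' adds fan_out, 'or' adds 2**fan_out - 1,
    'and' adds 1 (only one way to activate all branches)."""
    total = 0
    for kind, k in splits:
        if kind == "xor":
            total += k
        elif kind == "or":
            total += 2 ** k - 1
        elif kind == "and":
            total += 1
        else:
            raise ValueError(f"unknown connector kind: {kind}")
    return total

# Hypothetical model: one XOR-split with 3 branches, one OR-split with 2,
# one AND-split: 3 + (2**2 - 1) + 1 = 7.
example = [("xor", 3), ("or", 2), ("and", 4)]
score = cfc(example)
```

Metrics of this kind give the measurable "determinants" the chapter asks for: once every model in a collection has a CFC score, one can test statistically whether higher scores go with higher error rates.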
3D Bioprinting of Tissue/Organ Models.
Pati, Falguni; Gantelius, Jesper; Svahn, Helene Andersson
2016-04-04
In vitro tissue/organ models are useful platforms that can facilitate systematic, repetitive, and quantitative investigations of drugs/chemicals. The primary objective when developing tissue/organ models is to reproduce physiologically relevant functions that typically require complex culture systems. Bioprinting offers exciting prospects for constructing 3D tissue/organ models, as it enables the reproducible, automated production of complex living tissues. Bioprinted tissues/organs may prove useful for screening novel compounds or predicting toxicity, as the spatial and chemical complexity inherent to native tissues/organs can be recreated. In this Review, we highlight the importance of developing 3D in vitro tissue/organ models by 3D bioprinting techniques, characterization of these models for evaluating their resemblance to native tissue, and their application in the prioritization of lead candidates, toxicity testing, and as disease/tumor models. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
PeTTSy: a computational tool for perturbation analysis of complex systems biology models.
Domijan, Mirela; Brown, Paul E; Shulgin, Boris V; Rand, David A
2016-03-10
Over the last decade sensitivity analysis techniques have been shown to be very useful for analysing complex and high-dimensional systems biology models. However, many of the currently available toolboxes have either used parameter sampling, been focused on a restricted set of model observables of interest, studied optimisation of an objective function, or have not dealt with multiple simultaneous model parameter changes where the changes can be permanent or temporary. Here we introduce our new, freely downloadable toolbox, PeTTSy (Perturbation Theory Toolbox for Systems). PeTTSy is a package for MATLAB which implements a wide array of techniques for the perturbation theory and sensitivity analysis of large and complex ordinary differential equation (ODE) based models. PeTTSy is a comprehensive modelling framework that introduces a number of new approaches and that fully addresses analysis of oscillatory systems. It performs sensitivity analysis of models under perturbations of parameters, where the perturbation timing, strength, length and overall shape can be controlled by the user. This can be done in a system-global setting, namely, the user can determine how many parameters to perturb, by how much and for how long. PeTTSy also offers the user the ability to explore the effect of the parameter perturbations on many different types of outputs: period, phase (timing of peak) and model solutions. PeTTSy can be employed on a wide range of mathematical models including free-running and forced oscillators and signalling systems. To enable experimental optimisation using the Fisher information matrix it efficiently allows one to combine multiple variants of a model (i.e. a model with multiple experimental conditions) in order to determine the value of new experiments. It is especially useful in the analysis of large and complex models involving many variables and parameters.
PeTTSy is a comprehensive tool for analysing large and complex models of regulatory and signalling systems. It allows for simulation and analysis of models under a variety of environmental conditions and for experimental optimisation of complex combined experiments. With its unique set of tools it makes a valuable addition to the current library of sensitivity analysis toolboxes. We believe that this software will be of great use to the wider biological, systems biology and modelling communities.
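The kind of perturbation experiment PeTTSy automates can be sketched in a few lines. The following is a minimal illustration, not PeTTSy itself (which is a MATLAB toolbox): a toy ODE is integrated with a temporary bump applied to one parameter over a user-chosen window, and a finite-difference sensitivity is read off.

```python
# Toy temporary-perturbation experiment: dx/dt = k(t) * (1 - x), where k is
# bumped by `bump` inside the time window `window`. Model, parameter values
# and window are invented for illustration.

def simulate(k_base, t_end=10.0, dt=0.001, bump=0.0, window=(2.0, 4.0)):
    """Euler-integrate the toy ODE and return the final state x(t_end)."""
    x, t = 0.0, 0.0
    while t < t_end:
        k = k_base + (bump if window[0] <= t < window[1] else 0.0)
        x += dt * k * (1.0 - x)
        t += dt
    return x

base = simulate(0.3)                      # unperturbed trajectory endpoint
perturbed = simulate(0.3, bump=0.1)       # temporary +0.1 bump in k
sensitivity = (perturbed - base) / 0.1    # finite-difference sensitivity
print(base, perturbed, sensitivity)
```

PeTTSy generalizes this idea to many parameters perturbed simultaneously, with user-controlled perturbation shapes, and to outputs such as period and phase of oscillatory models.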
Baars, Erik W; van der Hart, Onno; Nijenhuis, Ellert R S; Chu, James A; Glas, Gerrit; Draijer, Nel
2011-01-01
The purpose of this study was to develop an expertise-based prognostic model for the treatment of complex posttraumatic stress disorder (PTSD) and dissociative identity disorder (DID). We developed a survey in 2 rounds: In the first round we surveyed 42 experienced therapists (22 DID and 20 complex PTSD therapists), and in the second round we surveyed a subset of 22 of the 42 therapists (13 DID and 9 complex PTSD therapists). First, we drew on therapists' knowledge of prognostic factors for stabilization-oriented treatment of complex PTSD and DID. Second, therapists prioritized a list of prognostic factors by estimating the size of each variable's prognostic effect; we clustered these factors according to content and named the clusters. Next, concept mapping methodology and statistical analyses (including principal components analyses) were used to transform individual judgments into weighted group judgments for clusters of items. A prognostic model, based on consensually determined estimates of effect sizes, of 8 clusters containing 51 factors for both complex PTSD and DID was formed. It includes the clusters lack of motivation, lack of healthy relationships, lack of healthy therapeutic relationships, lack of other internal and external resources, serious Axis I comorbidity, serious Axis II comorbidity, poor attachment, and self-destruction. In addition, a set of 5 DID-specific items was constructed. The model is supportive of the current phase-oriented treatment model, emphasizing the strengthening of the therapeutic relationship and the patient's resources in the initial stabilization phase. Further research is needed to test the model's statistical and clinical validity.
DOT National Transportation Integrated Search
2013-01-01
The ability to model and understand the complex dynamics of intelligent agents as they interact within a transportation system could lead to revolutionary advances in transportation engineering and intermodal surface transportation in the United Stat...
In this study, the calibration of subsurface batch and reactive-transport models involving complex biogeochemical processes was systematically evaluated. Two hypothetical nitrate biodegradation scenarios were developed and simulated in numerical experiments to evaluate the perfor...
Kim, Yong Sun; Choi, Hyeong Ho; Cho, Young Nam; Park, Yong Jae; Lee, Jong B; Yang, King H; King, Albert I
2005-11-01
Although biomechanical studies on the knee-thigh-hip (KTH) complex have been extensive, interactions between the KTH and various vehicular interior design parameters in frontal automotive crashes for newer models have not been reported in the open literature to the best of our knowledge. A 3D finite element (FE) model of a 50th percentile male KTH complex, which includes explicit representations of the iliac wing, acetabulum, pubic rami, sacrum, articular cartilage, femoral head, femoral neck, femoral condyles, patella, and patella tendon, has been developed to simulate injuries such as fracture of the patella, femoral neck, acetabulum, and pubic rami of the KTH complex. Model results compared favorably against regional component test data including a three-point bending test of the femur, axial loading of the isolated knee-patella, axial loading of the KTH complex, axial loading of the femoral head, and lateral loading of the isolated pelvis. The model was further integrated into a Wayne State University upper torso model and validated against data obtained from whole body sled tests. The model was validated against these experimental data over a range of impact speeds, impactor masses and boundary conditions. Using Design Of Experiment (DOE) methods based on Taguchi's approach and the developed FE model of the whole body, including the KTH complex, eight vehicular interior design parameters, namely the load limiter force, seat belt elongation, pretensioner inlet amount, knee-knee bolster distance, knee bolster angle, knee bolster stiffness, toe board angle and impact speed, each with either two or three design levels, were simulated to predict their respective effects on the potential of KTH injury in frontal impacts. Simulation results suggested the best design levels for these vehicular interior design parameters to reduce the injury potential of the KTH complex in frontal automotive crashes.
This study is limited by the fact that prediction of bony fracture was based on an element elimination method available in the LS-DYNA code. No validation study was conducted to determine if this method is suitable when simulating fractures of biological tissues. More work is still needed to further validate the FE model of the KTH complex to increase its reliability in the assessment of various impact loading conditions associated with vehicular crash scenarios.
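The Taguchi-style DOE described above can be illustrated with a toy example. The factor names, levels, and response function below are invented stand-ins for the paper's crash-simulation outputs; the sketch shows only the mechanics of computing main effects over a factorial design and picking the level that minimizes the response.

```python
# Hypothetical DOE sketch: evaluate an invented injury metric over a
# 2-level full factorial design and rank each factor's main effect.
from itertools import product

factors = {"belt_force": [4.0, 6.0], "bolster_gap": [0.05, 0.10], "speed": [35, 56]}

def injury_metric(belt_force, bolster_gap, speed):
    # Invented stand-in for a crash-simulation output (lower is better).
    return 0.8 * speed + 5.0 * belt_force - 100.0 * bolster_gap

names = list(factors)
runs = []
for levels in product(*(factors[n] for n in names)):
    setting = dict(zip(names, levels))
    runs.append((setting, injury_metric(**setting)))

def main_effect(name):
    """Mean response at the high level minus mean response at the low level."""
    lo, hi = factors[name]
    lo_mean = sum(y for s, y in runs if s[name] == lo) / (len(runs) / 2)
    hi_mean = sum(y for s, y in runs if s[name] == hi) / (len(runs) / 2)
    return hi_mean - lo_mean

effects = {n: main_effect(n) for n in names}
# Pick, per factor, the level with the lower mean injury metric.
best = {n: (factors[n][0] if effects[n] > 0 else factors[n][1]) for n in names}
print(effects, best)
```

Taguchi's orthogonal arrays reduce the number of runs below a full factorial while still allowing main effects to be estimated, which matters when each "run" is an expensive FE crash simulation.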
STEPS: Modeling and Simulating Complex Reaction-Diffusion Systems with Python
Wils, Stefan; Schutter, Erik De
2008-01-01
We describe how the use of the Python language improved the user interface of the program STEPS. STEPS is a simulation platform for modeling and stochastic simulation of coupled reaction-diffusion systems with complex 3-dimensional boundary conditions. Setting up such models is a complicated process that consists of many phases. Initial versions of STEPS relied on a static input format that did not cleanly separate these phases, limiting modelers in how they could control the simulation and becoming increasingly complex as new features and new simulation algorithms were added. We solved all of these problems by tightly integrating STEPS with Python, using SWIG to expose our existing simulation code. PMID:19623245
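The design payoff described above, driving a stochastic simulator from Python scripts rather than a static input file, can be illustrated with a generic Gillespie-style direct-method sketch. This is not the STEPS API; it only shows the scripting style that tight Python integration enables: the model, the run control, and the analysis all live in one script.

```python
# Generic Gillespie direct-method SSA sketch (not the STEPS API).
# state: dict species -> count; reactions: list of (propensity_fn, delta).
import random

def gillespie(state, reactions, t_end, seed=1):
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        props = [rate(state) for rate, _ in reactions]
        total = sum(props)
        if total == 0.0:          # no reaction can fire; simulation is done
            break
        t += rng.expovariate(total)            # time to next reaction
        r = rng.random() * total               # choose which reaction fires
        for p, (_, delta) in zip(props, reactions):
            if r < p:
                for sp, d in delta.items():
                    state[sp] += d
                break
            r -= p
    return state

# One irreversible conversion, A -> B, at rate 0.5 * [A].
state = gillespie({"A": 100, "B": 0},
                  [(lambda s: 0.5 * s["A"], {"A": -1, "B": +1})],
                  t_end=50.0)
print(state)
```

Because the model is ordinary Python data, a user can loop over parameter values, swap in new reactions, or interrogate the state mid-run without any changes to the simulator core, which is exactly the flexibility a static input format lacks.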
2012-05-21
CAPE CANAVERAL, Fla. – A barge arrives at NASA Kennedy Space Center’s Launch Complex 39 turn basin in Florida. The high-fidelity space shuttle model is being transported from Kennedy to Space Center Houston, NASA Johnson Space Center's visitor center. The model will be transported via barge to Texas. The model was built in Apopka, Fla., by Guard-Lee and installed at the Kennedy Space Center Visitor Complex in 1993. The model has been parked at the turn basin for the past five months to allow the Kennedy Space Center Visitor Complex to begin building a new facility to display space shuttle Atlantis in 2013. For more information about Space Center Houston, visit http://www.spacecenter.org. Photo credit: NASA/Frankie Martin
Complex Sequencing Rules of Birdsong Can be Explained by Simple Hidden Markov Processes
Katahira, Kentaro; Suzuki, Kenta; Okanoya, Kazuo; Okada, Masato
2011-01-01
Complex sequencing rules observed in birdsongs provide an opportunity to investigate the neural mechanism for generating complex sequential behaviors. To relate the findings from studying birdsongs to other sequential behaviors such as human speech and musical performance, it is crucial to characterize the statistical properties of the sequencing rules in birdsongs. However, the properties of the sequencing rules in birdsongs have not yet been fully addressed. In this study, we investigate the statistical properties of the complex birdsong of the Bengalese finch (Lonchura striata var. domestica). Based on manually annotated syllable labels, we first show that there are significant higher-order context dependencies in Bengalese finch songs, that is, which syllable appears next depends on more than one previous syllable. We then analyze acoustic features of the song and show that higher-order context dependencies can be explained using first-order hidden state transition dynamics with redundant hidden states. This model corresponds to hidden Markov models (HMMs), well-known statistical models with a wide range of applications in time-series modeling. Song annotation with these first-order hidden-state models agreed well with manual annotation; the score was comparable to that of a second-order HMM and surpassed that of the zeroth-order model (the Gaussian mixture model; GMM), which does not use context information. Our results imply that the hierarchical representation with hidden state dynamics may underlie the neural implementation for generating complex behavioral sequences with higher-order dependencies. PMID:21915345
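The paper's central observation can be reproduced in miniature: a first-order transition process over hidden states generates observable sequences with higher-order dependencies once several hidden states share one output label. In the hypothetical song grammar below (invented for illustration, not the finch data), states b1 and b2 both emit the syllable 'b' but lead to different successors, so the syllable following 'b' depends on the syllable before it.

```python
# First-order dynamics over hidden states {a, b1, c, b2}; b1 and b2 are
# "redundant" states that both emit the observable syllable 'b'.
import random

transitions = {"a": ["b1"], "b1": ["c"], "c": ["b2"], "b2": ["a"]}
emit = {"a": "a", "b1": "b", "b2": "b", "c": "c"}

def sample_song(n, seed=0):
    rng = random.Random(seed)
    state, out = "a", []
    for _ in range(n):
        out.append(emit[state])
        state = rng.choice(transitions[state])
    return "".join(out)

song = sample_song(40)
print(song)  # "abcbabcb..." - after "ab" comes 'c', after "cb" comes 'a'
```

At the observable level this sequence is second-order ('b' alone does not determine the successor), yet the generating process is strictly first-order in the hidden states, which is exactly why a first-order HMM with redundant states can match a second-order model.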
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ridley, Mora K.; Hiemstra, T; Machesky, Michael L.
2012-01-01
The adsorption of Y3+ and Nd3+ onto rutile has been evaluated over a wide range of pH (3-11) and surface loading conditions, as well as at two ionic strengths (0.03 and 0.3 m) and temperatures (25 and 50 °C). The experimental results reveal the same adsorption behavior for the two trivalent ions onto the rutile surface, with Nd3+ first adsorbing at slightly lower pH values. The adsorption of both Y3+ and Nd3+ commences at pH values below the pHznpc of rutile. The experimental results were evaluated using a charge distribution (CD) and multisite complexation (MUSIC) model, and a Basic Stern layer description of the electric double layer (EDL). The coordination geometry of possible surface complexes was constrained by molecular-level information obtained from X-ray standing wave measurements and molecular dynamics (MD) simulation studies. X-ray standing wave measurements showed an inner-sphere tetradentate complex for Y3+ adsorption onto the (110) rutile surface (Zhang et al., 2004b). The MD simulation studies suggest additional bidentate complexes may form. The CD values for all surface species were calculated based on a bond valence interpretation of the surface complexes identified by X-ray and MD. The calculated CD values were corrected for the effect of dipole orientation of interfacial water. At low pH, the tetradentate complex provided excellent fits to the Y3+ and Nd3+ experimental data. The experimental and surface complexation modeling results show a strong pH dependence, and suggest that the tetradentate surface species hydrolyze with increasing pH. Furthermore, with increased surface loading of Y3+ on rutile the tetradentate binding mode was augmented by a hydrolyzed-bidentate Y3+ surface complex. Collectively, the experimental and surface complexation modeling results demonstrate that solution chemistry and surface loading impact Y3+ surface speciation.
The approach taken of incorporating molecular-scale information into surface complexation models (SCMs) should aid in elucidating a fundamental understanding of ion-adsorption reactions.
NASA Astrophysics Data System (ADS)
Ridley, Moira K.; Hiemstra, Tjisse; Machesky, Michael L.; Wesolowski, David J.; van Riemsdijk, Willem H.
2012-10-01
The adsorption of Y3+ and Nd3+ onto rutile has been evaluated over a wide range of pH (3-11) and surface loading conditions, as well as at two ionic strengths (0.03 and 0.3 m), and temperatures (25 and 50 °C). The experimental results reveal the same adsorption behavior for the two trivalent ions onto the rutile surface, with Nd3+ first adsorbing at slightly lower pH values. The adsorption of both Y3+ and Nd3+ commences at pH values below the pHznpc of rutile. The experimental results were evaluated using a charge distribution (CD) and multisite complexation (MUSIC) model, and Basic Stern layer description of the electric double layer (EDL). The coordination geometry of possible surface complexes was constrained by molecular-level information obtained from X-ray standing wave measurements and molecular dynamics (MD) simulation studies. X-ray standing wave measurements showed an inner-sphere tetradentate complex for Y3+ adsorption onto the (1 1 0) rutile surface (Zhang et al., 2004b). The MD simulation studies suggest additional bidentate complexes may form. The CD values for all surface species were calculated based on a bond valence interpretation of the surface complexes identified by X-ray and MD. The calculated CD values were corrected for the effect of dipole orientation of interfacial water. At low pH, the tetradentate complex provided excellent fits to the Y3+ and Nd3+ experimental data. The experimental and surface complexation modeling results show a strong pH dependence, and suggest that the tetradentate surface species hydrolyze with increasing pH. Furthermore, with increased surface loading of Y3+ on rutile the tetradentate binding mode was augmented by a hydrolyzed-bidentate Y3+ surface complex. Collectively, the experimental and surface complexation modeling results demonstrate that solution chemistry and surface loading impact Y3+ surface speciation.
The approach taken of incorporating molecular-scale information into surface complexation models (SCMs) should aid in elucidating a fundamental understanding of ion-adsorption reactions.
Evaluating and Mitigating the Impact of Complexity in Software Models
2015-12-01
...introduction) provides our motivation to study complexity and the essential research questions that we address in this effort. Some background information... provides the reader with a basis for the work and related areas explored. Section 2 (The Impact of Complexity) discusses the impact of model-based
Kinetics and mechanism of olefin catalytic hydroalumination by organoaluminum compounds
NASA Astrophysics Data System (ADS)
Koledina, K. F.; Gubaidullin, I. M.
2016-05-01
The complex reaction mechanism of α-olefin catalytic hydroalumination by alkylalanes is investigated via mathematical modeling, by constructing kinetic models for the individual reactions that make up the complex system and studying their behavior separately. Kinetic parameters of olefin catalytic hydroalumination are estimated. Activation energies of the possible steps in the proposed complex reaction mechanisms are compared and possible reaction pathways are determined.
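The comparison of activation energies across candidate steps rests on the Arrhenius equation k = A exp(-Ea/RT). As a hedged sketch (the rate constants and step names below are invented, not the paper's data), activation energies can be estimated from rate constants at two temperatures and used to rank pathways:

```python
# Estimate Ea for candidate mechanism steps from two (k, T) points using
# ln(k2/k1) = -(Ea/R) * (1/T2 - 1/T1). All rate constants are invented.
import math

R = 8.314  # gas constant, J/(mol K)

def activation_energy(k1, T1, k2, T2):
    """Ea in J/mol from rate constants k1, k2 at temperatures T1, T2 (K)."""
    return -R * math.log(k2 / k1) / (1.0 / T2 - 1.0 / T1)

# Invented rate constants for two candidate steps: (k at 300 K, 300, k at 340 K, 340)
steps = {
    "step_1": (0.02, 300.0, 0.50, 340.0),
    "step_2": (0.10, 300.0, 0.30, 340.0),
}
Ea = {name: activation_energy(*kt) for name, kt in steps.items()}
favored = min(Ea, key=Ea.get)   # lower barrier -> kinetically favored pathway
print(Ea, favored)
```

In practice the rate constants themselves come from fitting the kinetic models of the individual reactions to concentration-time data, after which this kind of barrier comparison discriminates between candidate pathways.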
Inversion of 2-D DC resistivity data using rapid optimization and minimal complexity neural network
NASA Astrophysics Data System (ADS)
Singh, U. K.; Tiwari, R. K.; Singh, S. B.
2010-02-01
The backpropagation (BP) artificial neural network (ANN) technique of optimization, based on the steepest descent algorithm, is known for its poor performance and does not ensure global convergence. Nonlinear and complex DC resistivity data require an efficient ANN model and more intensive optimization procedures for better results and interpretations. Improvements in the computational ANN modeling process are described, with the goals of enhancing the optimization process and reducing ANN model complexity. Well-established optimization methods, such as the radial basis algorithm (RBA) and the Levenberg-Marquardt algorithm (LMA), have frequently been used to deal with complexity and nonlinearity in such complex geophysical records. We examined the efficiency of trained LMA and RBA networks using 2-D synthetic resistivity data and then applied them to actual field vertical electrical sounding (VES) data collected from the Puga Valley, Jammu and Kashmir, India. The resulting ANN resistivity reconstructions are in good agreement with the results of existing inversion approaches. The depths and resistivity structures obtained by the ANN methods also correlate well with the known drilling results and geologic boundaries. The application of the above ANN algorithms proves to be robust and could be used for fast estimation of resistive structures for other complex earth models as well.
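Of the optimizers named above, the Levenberg-Marquardt algorithm is the easiest to sketch: it damps the Gauss-Newton normal equations and adapts the damping depending on whether a step reduces the cost. The toy below fits y = a*exp(b*x) to synthetic noise-free data; the actual resistivity inversion trains a network on 2-D data and is far more involved, so this only illustrates the damped-update idea.

```python
# Minimal Levenberg-Marquardt sketch for a 2-parameter model y = a*exp(b*x).
import math

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(-0.7 * x) for x in xs]   # synthetic data, true (a, b) = (2, -0.7)

def residuals(a, b):
    return [y - a * math.exp(b * x) for x, y in zip(xs, ys)]

def lm_fit(a, b, iters=50, lam=1e-2):
    cost = sum(r * r for r in residuals(a, b))
    for _ in range(iters):
        # Jacobian of the model wrt (a, b) at the current parameters.
        J = [(math.exp(b * x), a * x * math.exp(b * x)) for x in xs]
        r = residuals(a, b)
        # Damped normal equations (J^T J + lam*I) d = J^T r, a 2x2 system.
        g11 = sum(j1 * j1 for j1, _ in J) + lam
        g22 = sum(j2 * j2 for _, j2 in J) + lam
        g12 = sum(j1 * j2 for j1, j2 in J)
        b1 = sum(j1 * ri for (j1, _), ri in zip(J, r))
        b2 = sum(j2 * ri for (_, j2), ri in zip(J, r))
        det = g11 * g22 - g12 * g12
        da = (g22 * b1 - g12 * b2) / det
        db = (g11 * b2 - g12 * b1) / det
        new_cost = sum(rr * rr for rr in residuals(a + da, b + db))
        if new_cost < cost:                 # accept step, relax damping
            a, b, cost, lam = a + da, b + db, new_cost, lam * 0.5
        else:                               # reject step, increase damping
            lam *= 10.0
    return a, b

a, b = lm_fit(1.0, -0.1)
print(a, b)   # converges close to the true (2.0, -0.7)
```

Large damping makes the step behave like gradient descent (robust far from the optimum); small damping recovers Gauss-Newton (fast near it), which is why LMA outperforms plain steepest-descent BP on problems like this.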
Amador, Carolina; Urban, Matthew W; Chen, Shigao; Greenleaf, James F
2012-01-01
Elasticity imaging methods have been used to study tissue mechanical properties and have demonstrated that tissue elasticity changes with disease state. In current shear wave elasticity imaging methods, typically only the shear wave speed is measured, and rheological models, e.g., Kelvin-Voigt, Maxwell and Standard Linear Solid, are used to solve for tissue mechanical properties such as the shear viscoelastic complex modulus. This paper presents a method to quantify viscoelastic material properties in a model-independent way by estimating the complex shear elastic modulus over a wide frequency range using the time-dependent creep response induced by acoustic radiation force. This radiation force induced creep (RFIC) method uses a conversion formula that is the analytic solution of a constitutive equation. The proposed method, in combination with Shearwave Dispersion Ultrasound Vibrometry (SDUV), is used to measure the complex modulus so that knowledge of the applied radiation force magnitude is not necessary. The conversion formula is shown to be sensitive to the sampling frequency and the first reliable measure in time, according to numerical simulations using the Kelvin-Voigt model creep strain and compliance. Representative model-free shear complex moduli from homogeneous tissue-mimicking phantoms and one excised swine kidney were obtained. This work proposes a novel model-free ultrasound-based elasticity method that does not require a rheological model with associated fitting requirements. PMID:22345425
Amador, Carolina; Urban, Matthew W; Chen, Shigao; Greenleaf, James F
2012-03-07
Elasticity imaging methods have been used to study tissue mechanical properties and have demonstrated that tissue elasticity changes with disease state. In current shear wave elasticity imaging methods, typically only the shear wave speed is measured, and rheological models, e.g. Kelvin-Voigt, Maxwell and Standard Linear Solid, are used to solve for tissue mechanical properties such as the shear viscoelastic complex modulus. This paper presents a method to quantify viscoelastic material properties in a model-independent way by estimating the complex shear elastic modulus over a wide frequency range using the time-dependent creep response induced by acoustic radiation force. This radiation force induced creep method uses a conversion formula that is the analytic solution of a constitutive equation. The proposed method, in combination with shearwave dispersion ultrasound vibrometry, is used to measure the complex modulus so that knowledge of the applied radiation force magnitude is not necessary. The conversion formula is shown to be sensitive to the sampling frequency and the first reliable measure in time, according to numerical simulations using the Kelvin-Voigt model creep strain and compliance. Representative model-free shear complex moduli from homogeneous tissue-mimicking phantoms and one excised swine kidney were obtained. This work proposes a novel model-free ultrasound-based elasticity method that does not require a rheological model with associated fitting requirements.
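For the Kelvin-Voigt model used in the paper's simulations, the quantities discussed above have closed forms: the complex shear modulus is G*(w) = mu + i*w*eta and the creep compliance is J(t) = (1/mu)*(1 - exp(-mu*t/eta)). A small sketch of these two expressions follows (illustrative parameter values, not tissue data; the RFIC conversion formula itself is not reproduced here).

```python
# Kelvin-Voigt viscoelasticity: complex modulus and creep compliance.
# mu (shear elasticity, Pa) and eta (shear viscosity, Pa*s) are illustrative.
import cmath
import math

mu, eta = 2.0e3, 1.0

def complex_modulus(omega):
    """G*(omega) = mu + i*omega*eta for the Kelvin-Voigt model."""
    return complex(mu, omega * eta)

def creep_compliance(t):
    """J(t) = (1/mu) * (1 - exp(-mu*t/eta)); saturates at 1/mu."""
    return (1.0 / mu) * (1.0 - math.exp(-mu * t / eta))

w = 2.0 * math.pi * 100.0               # angular frequency at 100 Hz
G = complex_modulus(w)
print(abs(G), math.degrees(cmath.phase(G)))   # magnitude and loss angle
print(creep_compliance(1e-2))                 # approaches 1/mu at long times
```

The model-free point of the paper is precisely that G*(w) is estimated from the measured creep response without committing to this (or any) rheological form; the Kelvin-Voigt expressions serve only to validate the conversion in simulation.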
NASA Astrophysics Data System (ADS)
Ma, Yulong; Liu, Heping
2017-12-01
Atmospheric flow over complex terrain, particularly recirculation flows, greatly influences wind-turbine siting, forest-fire behaviour, and trace-gas and pollutant dispersion. However, there is a large uncertainty in the simulation of flow over complex topography, which is attributable to the type of turbulence model, the subgrid-scale (SGS) turbulence parametrization, terrain-following coordinates, and numerical errors in finite-difference methods. Here, we upgrade the large-eddy simulation module within the Weather Research and Forecasting model by incorporating the immersed-boundary method into the module to improve simulations of the flow and recirculation over complex terrain. Simulations over the Bolund Hill indicate improved mean absolute speed-up errors with respect to previous studies, as well as an improved simulation of the recirculation zone behind the escarpment of the hill. With regard to the SGS parametrization, the Lagrangian-averaged scale-dependent Smagorinsky model performs better than the classic Smagorinsky model in reproducing both velocity and turbulent kinetic energy. A finer grid resolution also improves the strength of the recirculation in flow simulations, with a higher horizontal grid resolution improving simulations just behind the escarpment, and a higher vertical grid resolution improving results on the lee side of the hill. Our modelling approach has broad applications for the simulation of atmospheric flows over complex topography.
Teacher Stress: Complex Model Building with LISREL. Pedagogical Reports, No. 16.
ERIC Educational Resources Information Center
Tellenback, Sten
This paper presents a complex causal model of teacher stress based on the questionnaire responses of 1,466 teachers from Malmo, Sweden. Also presented is a method for treating the model variables as higher-order factors or higher-order theoretical constructs. The paper's introduction presents a brief review of teacher…
Assessing Mediation in Dyadic Data Using the Actor-Partner Interdependence Model
ERIC Educational Resources Information Center
Ledermann, Thomas; Macho, Siegfried; Kenny, David A.
2011-01-01
The assessment of mediation in dyadic data is an important issue if researchers are to test process models. Using an extended version of the actor-partner interdependence model, the estimation and testing of mediation are complex, especially when dyad members are distinguishable (e.g., heterosexual couples). We show how the complexity of the model…
ERIC Educational Resources Information Center
Haugwitz, Marion; Sandmann, Angela
2010-01-01
Understanding biological structures and functions is often difficult because of their complexity and micro-structure. For example, the vascular system is a complex and only partly visible system. Constructing models to better understand biological functions is seen as a suitable learning method. Models function as simplified versions of real…
Opinion: The use of natural hazard modeling for decision making under uncertainty
David E. Calkin; Mike Mentis
2015-01-01
Decision making to mitigate the effects of natural hazards is a complex undertaking fraught with uncertainty. Models to describe risks associated with natural hazards have proliferated in recent years. Concurrently, there is a growing body of work focused on developing best practices for natural hazard modeling and to create structured evaluation criteria for complex...
Mokarzel-Falcón, Leonardo; Padrón-García, Juan Alexander; Carrasco-Velar, Ramón; Berry, Colin; Montero-Cabrera, Luis A
2008-03-01
We propose two models of the human S-arrestin/rhodopsin complex, in the inactive dark-adapted rhodopsin and metarhodopsin II forms, obtained by homology modeling and knowledge-based docking. First, a homology model for human S-arrestin was built and validated by molecular dynamics, showing an average root mean square deviation difference from the pattern behavior of 0.76 Å. Then, combining the human S-arrestin model and the modeled structures of the two human rhodopsin forms, we propose two models of interaction for the human S-arrestin/rhodopsin complex. The models involve two S-arrestin regions related to the N domain (residues 68-78; 170-182) and a third constituent of the C domain (248-253), with the rhodopsin C terminus (330-348). Of the 22 single point mutations related to retinitis pigmentosa and congenital night blindness located in the cytoplasmatic portion of rhodopsin or in S-arrestin, our models locate 16 in the interaction region and relate two others to possible dimer formation. Our calculations also predict that the light-activated complex is more stable than the dark-adapted rhodopsin and, therefore, of higher affinity to S-arrestin. 2008 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Haghnevis, Moeed
The main objective of this research is to develop an integrated method to study emergent behavior and consequences of evolution and adaptation in engineered complex adaptive systems (ECASs). A multi-layer conceptual framework and modeling approach including behavioral and structural aspects is provided to describe the structure of a class of engineered complex systems and predict their future adaptive patterns. The approach allows the examination of complexity in the structure and the behavior of components as a result of their connections and in relation to their environment. This research describes and uses the major differences between natural complex adaptive systems (CASs) and artificial/engineered CASs to build a framework and platform for ECASs. While this framework focuses on the critical factors of an engineered system, it also enables one to synthetically employ engineering and mathematical models to analyze and measure complexity in such systems. In this way concepts of complex systems science are adapted to management science and system of systems engineering. In particular an integrated consumer-based optimization and agent-based modeling (ABM) platform is presented that enables managers to predict and partially control patterns of behaviors in ECASs. Demonstrated on the U.S. electricity markets, ABM is integrated with normative and subjective decision behavior recommended by the U.S. Department of Energy (DOE) and Federal Energy Regulatory Commission (FERC). The approach integrates social networks, social science, complexity theory, and diffusion theory. Furthermore, it has unique and significant contribution in exploring and representing concrete managerial insights for ECASs and offering new optimized actions and modeling paradigms in agent-based simulation.
Wang, Danny J J; Jann, Kay; Fan, Chang; Qiao, Yang; Zang, Yu-Feng; Lu, Hanbing; Yang, Yihong
2018-01-01
Recently, non-linear statistical measures such as multi-scale entropy (MSE) have been introduced as indices of the complexity of electrophysiology and fMRI time-series across multiple time scales. In this work, we investigated the neurophysiological underpinnings of the complexity (MSE) of electrophysiology and fMRI signals and their relations to functional connectivity (FC). MSE and FC analyses were performed on simulated data using a neural-mass-model-based brain network model with the Brain Dynamics Toolbox, on animal models with concurrent recording of fMRI and electrophysiology in conjunction with pharmacological manipulations, and on resting-state fMRI data from the Human Connectome Project. Our results show that the complexity of regional electrophysiology and fMRI signals is positively correlated with network FC. The associations between MSE and FC are dependent on the temporal scales or frequencies, with higher associations between MSE and FC at lower temporal frequencies. Our results from theoretical modeling, animal experiments and human fMRI indicate that (1) regional neural complexity and network FC may be two related aspects of the brain's information processing: the more complex a region's neural activity, the higher FC that region has with other brain regions; (2) MSE at high and low frequencies may represent local and distributed information processing across brain regions. Based on the literature and our data, we propose that the complexity of regional neural signals may serve as an index of the brain's capacity for information processing: increased complexity may indicate greater transition or exploration between different states of brain networks, and thereby a greater propensity for information processing.
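Multi-scale entropy as used above coarse-grains the signal at each scale and computes sample entropy on the result. The sketch below uses common defaults (template length m = 2, tolerance 0.2 times the standard deviation) and is illustrative rather than the paper's pipeline; as expected, a seeded noise series scores much higher than a strictly periodic one.

```python
# Multi-scale (sample) entropy sketch: coarse-grain, then SampEn = -ln(A/B),
# where B and A count template matches of length m and m+1 within tolerance.
import math
import random

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r_frac=0.2):
    mean = sum(x) / len(x)
    sd = (sum((v - mean) ** 2 for v in x) / len(x)) ** 0.5
    tol = r_frac * sd
    def matches(mm):
        t = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        return sum(1 for i in range(len(t)) for j in range(i + 1, len(t))
                   if max(abs(a - b) for a, b in zip(t[i], t[j])) <= tol)
    B, A = matches(m), matches(m + 1)
    return float("inf") if A == 0 else -math.log(A / B)

rng = random.Random(7)
noise = [rng.random() for _ in range(200)]
periodic = [0.0, 1.0] * 100
noise_e, periodic_e = sample_entropy(noise), sample_entropy(periodic)
mse_noise = [sample_entropy(coarse_grain(noise, s)) for s in (1, 2, 4)]
print(noise_e, periodic_e, mse_noise)
```

Sweeping the scale parameter yields the MSE curve; it is this frequency/scale dependence that the paper relates to functional connectivity.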
Epidemic modeling in complex realities.
Colizza, Vittoria; Barthélemy, Marc; Barrat, Alain; Vespignani, Alessandro
2007-04-01
In our global world, the increasing complexity of social relations and transport infrastructures is a key factor in the spread of epidemics. In recent years, the increasing availability of computer power has made it possible both to obtain reliable data quantifying the complexity of the networks on which epidemics may propagate and to envision computational tools able to tackle the analysis of such propagation phenomena. These advances have exposed the limits of homogeneous assumptions and simple spatial diffusion approaches, and stimulated the inclusion of complex features and heterogeneities relevant to the description of epidemic diffusion. In this paper, we review recent progress in integrating complex systems and network analysis with epidemic modelling and focus on the impact of the various complex features of real systems on the dynamics of epidemic spreading.
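The limits of homogeneous assumptions can be seen in a minimal simulation. The sketch below (a discrete-time SIR process; the network sizes, transmission probability, and hub topology are invented for illustration) shows that the same per-contact infection probability yields a far larger outbreak on a network with a hub than on a homogeneous ring:

```python
# Discrete-time SIR on a graph: each infected node infects each susceptible
# neighbour with probability beta, then recovers after one step.
import random

def sir_on_graph(adj, beta, seed_node=0, rng_seed=3):
    rng = random.Random(rng_seed)
    state = {v: "S" for v in adj}
    state[seed_node] = "I"
    infected = [seed_node]
    while infected:
        nxt = []
        for v in infected:
            for u in adj[v]:
                if state[u] == "S" and rng.random() < beta:
                    state[u] = "I"
                    nxt.append(u)
            state[v] = "R"
        infected = nxt
    return sum(1 for s in state.values() if s == "R")   # final outbreak size

n = 50
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # homogeneous ring
hub = {0: list(range(1, n))}                               # ring plus a hub at node 0
for i in range(1, n):
    hub[i] = sorted({(i - 1) % n, (i + 1) % n, 0})

ring_mean = sum(sir_on_graph(ring, 0.3, rng_seed=s) for s in range(20)) / 20
hub_mean = sum(sir_on_graph(hub, 0.3, rng_seed=s) for s in range(20)) / 20
print(ring_mean, hub_mean)
```

With identical beta, outbreaks on the ring die out quickly while the hub seeds a large epidemic, which is the network heterogeneity effect that homogeneous mixing models miss.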
Goel, Nidhi; Singh, Udai P
2013-10-10
Four new acid-base complexes of picric acid [(OH)(NO2)3C6H2] (PA) with N-heterocyclic bases (1,10-phenanthroline (phen), 2,2':6',2''-terpyridine (terpy), hexamethylenetetramine (hmta) and 2,4,6-tri(2-pyridyl)-1,3,5-triazine (tptz)) were prepared and characterized by elemental analysis, IR, NMR and X-ray crystallography. The crystal structures provide detailed information on the noncovalent interactions present in the different complexes. The optimized structures of the complexes were calculated using density functional theory. The thermolysis of these complexes was investigated by TG-DSC and ignition delay measurements, and model-free isoconversional and model-fitting kinetic approaches were applied to the isothermal TG data to investigate the kinetics of their thermal decomposition.
NASA Astrophysics Data System (ADS)
de Vries, R.
2004-02-01
Electrostatic complexation of flexible polyanions with the whey proteins α-lactalbumin and β-lactoglobulin is studied using Monte Carlo simulations. The proteins are considered at their respective isoelectric points. Discrete charges on the model polyelectrolytes and proteins interact through Debye-Hückel potentials. Protein excluded volume is taken into account through a coarse-grained model of the protein shape. Consistent with experimental results, it is found that α-lactalbumin complexes much more strongly than β-lactoglobulin. For α-lactalbumin, strong complexation is due to localized binding to a single large positive "charge patch," whereas for β-lactoglobulin, weak complexation is due to diffuse binding to multiple smaller charge patches.
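The pair interaction described, discrete charges interacting through Debye-Hückel potentials, reduces to a screened Coulomb energy. A minimal sketch in reduced units, assuming the Bjerrum length of water at room temperature (about 0.71 nm) as a constant; nothing here reproduces the paper's coarse-grained protein model.

```python
import math

def debye_huckel_energy(z1, z2, r_nm, kappa_per_nm, bjerrum_nm=0.71):
    """Screened Coulomb pair energy in units of kT for point charges z1, z2
    (in elementary charges) a distance r_nm apart; kappa_per_nm is the
    inverse Debye screening length set by the ionic strength."""
    return bjerrum_nm * z1 * z2 * math.exp(-kappa_per_nm * r_nm) / r_nm
```

Summing this pair energy over all polyelectrolyte-protein charge pairs in a Monte Carlo move is the basic ingredient of the simulations described; the "charge patch" effect arises because clustered like charges make this sum locally large.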
Synthesis and CV Studies of Dithiol-terminated Metal Terpyridine Complexes
NASA Technical Reports Server (NTRS)
Asano, Sylvia; Fan, Wendy; Ng, Hou-Tee; Han, Jie; Meyyappan, M.
2003-01-01
Transition metal coordination complexes possess unique electronic structures that make them a good model for studying electronic transport behavior at the molecular level. Their discrete, multiple redox states, low redox potentials and superb ability to establish contact with other molecular and electronic components by coordination chemistry have made them a subject of investigation for possible application as active electronic components in molecular devices. We present the synthesis and electrochemical characterization of a 4'-thioacetylphenyl-2,2':6',2''-terpyridine iron(II) complex and compare it with a model bis-terpyridine iron(II) complex by cyclic voltammetry. With the use of different working electrodes, these complexes show different electron-transfer rates.
Development and evaluation of a musculoskeletal model of the elbow joint complex
NASA Technical Reports Server (NTRS)
Gonzalez, Roger V.; Hutchins, E. L.; Barr, Ronald E.; Abraham, Lawrence D.
1993-01-01
This paper describes the development and evaluation of a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. The length, velocity, and moment arm for each of the eight musculotendon actuators were based on skeletal anatomy and position. Musculotendon parameters were determined for each actuator and verified by comparing analytical torque-angle curves with experimental joint torque data. The parameters and skeletal geometry were also utilized in the musculoskeletal model for the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by parameterized optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing ballistic elbow joint complex movements.
Ontology patterns for complex topographic feature types
Varanka, Dalia E.
2011-01-01
Complex feature types are defined as integrated relations between basic features for a shared meaning or concept. The shared semantic concept is difficult to define in commonly used geographic information systems (GIS) and remote sensing technologies. The role of spatial relations between complex feature parts was recognized in early GIS literature, but had limited representation in the feature or coverage data models of GIS. Spatial relations are more explicitly specified in semantic technology. In this paper, semantics for topographic feature ontology design patterns (ODP) are developed as data models for the representation of complex features. In the context of topographic processes, component assemblages are supported by resource systems and are found on local landscapes. The topographic ontology is organized across six thematic modules that can account for basic feature types, resource systems, and landscape types. Types of complex feature attributes include location, generative processes and physical description. Node/edge networks model standard spatial relations and relations specific to topographic science to represent complex features. To demonstrate these concepts, data from The National Map of the U. S. Geological Survey was converted and assembled into ODP.
Complex networks as an emerging property of hierarchical preferential attachment.
Hébert-Dufresne, Laurent; Laurence, Edward; Allard, Antoine; Young, Jean-Gabriel; Dubé, Louis J
2015-12-01
Real complex systems are not rigidly structured; no clear rules or blueprints exist for their construction. Yet, amidst their apparent randomness, complex structural properties universally emerge. We propose that an important class of complex systems can be modeled as an organization of many embedded levels (potentially infinite in number), all of them following the same universal growth principle known as preferential attachment. We give examples of such hierarchy in real systems, for instance, in the pyramid of production entities of the film industry. More importantly, we show how real complex networks can be interpreted as a projection of our model, from which their scale independence, their clustering, their hierarchy, their fractality, and their navigability naturally emerge. Our results suggest that complex networks, viewed as growing systems, can be quite simple, and that the apparent complexity of their structure is largely a reflection of their unobserved hierarchical nature.
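Plain (non-hierarchical) preferential attachment, the growth principle the model applies at every level, can be sketched as follows; the implementation uses the standard repeated-nodes trick for degree-proportional sampling and is an illustration, not the authors' hierarchical construction.

```python
import random

def preferential_attachment(n, m=2, seed=0):
    """Grow a graph one node at a time; each newcomer links to m existing
    nodes chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    repeated = list(range(m))        # sampling pool: one entry per unit of degree
    edges = []
    for new in range(m, n):
        targets = set()
        while len(targets) < m:      # m distinct degree-proportional targets
            targets.add(rng.choice(repeated))
        for t in targets:
            edges.append((new, t))
            repeated += [new, t]     # both endpoints gain one unit of degree
    return edges
```

Counting degrees over the returned edge list exhibits the heavy-tailed (scale-free-like) distribution the abstract refers to.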
Strategies and Rubrics for Teaching Complex Systems Theory to Novices (Invited)
NASA Astrophysics Data System (ADS)
Fichter, L. S.
2010-12-01
Bifurcation. Self-similarity. Fractal. Sensitive dependence. Agents. Self-organized criticality. Avalanche behavior. Power laws. Strange attractors. Emergence. The language of complexity is fundamentally different from the language of equilibrium. If students do not know these phenomena, and what they tell us about the pulse of dynamic systems, complex systems will be opaque. A complex system is a group of agents (individual interacting units, like birds in a flock, sand grains in a ripple, or individual friction units along a fault zone) existing far from equilibrium, interacting through positive and negative feedbacks, following simple rules, and forming interdependent, dynamic, evolutionary networks. Complex systems produce behaviors that cannot be predicted deductively from knowledge of the behaviors of the individual components themselves; they must be experienced. What complexity theory demonstrates is that, by following simple rules, all the agents end up coordinating their behavior (self-organizing) so that what emerges is not chaos, but meaningful patterns. How can we introduce freshman, non-science, general-education students to complex systems theories, in 3 to 5 classes, in a way that they really get it and can use the principles to understand real systems? Complex systems theories are not a series of unconnected or disconnected equations or models; they are developed as narratives that make sense of how all the pieces and properties are interrelated. The principles of complex systems must be taught as deliberately and systematically as the equilibrium principles normally taught; as, say, the systematic training from pre-algebra and geometry to algebra. We have developed a sequence of logically connected narratives (strategies and rubrics) that introduce complex systems principles using models that can be simulated in a computer, in class, in real time. The learning progression has a series of 12 models (e.g.
logistic system, bifurcation diagrams, genetic algorithms, etc.) leading to 19 learning outcomes that encompass most of the universality properties that characterize complex systems. They are developed in a specific order to achieve specific ends of understanding. We use these models in various depths and formats in courses ranging from gened courses, to evolutionary systems and environmental systems, to upper level geology courses. Depending on the goals of a course, the learning outcomes can be applied to understanding many other complex systems; e.g. oscillating chemical reactions (reaction-diffusion and activator-inhibitor systems), autocatalytic networks, hysteresis (bistable) systems, networks, and the rise/collapse of complex societies. We use these and other complex systems concepts in various classes to talk about the origin of life, ecosystem organization, game theory, extinction events, and environmental system behaviors. The applications are almost endless. The complete learning progression with models, computer programs, experiments, and learning outcomes is available at: www.jmu.edu/geology/ComplexEvolutionarySystems/
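The first model in such a progression, the logistic system, is simple enough to simulate in class in real time. A minimal sketch (parameter values chosen for illustration, not taken from the course materials):

```python
def logistic_orbit(r, x0=0.5, transient=1000, keep=64):
    """Iterate the logistic map x -> r*x*(1-x), discard transients, and
    return samples of the attractor -- the raw material of a bifurcation
    diagram when collected over a sweep of r values."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(x)
    return orbit
```

Plotting the returned orbits against a sweep of r from about 2.8 to 4.0 produces the bifurcation diagram, showing the period-doubling route to chaos students can watch unfold live.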
Using machine learning tools to model complex toxic interactions with limited sampling regimes.
Bertin, Matthew J; Moeller, Peter; Guillette, Louis J; Chapman, Robert W
2013-03-19
A major impediment to understanding the impact of environmental stress, including toxins and other pollutants, on organisms is that organisms are rarely challenged by one or a few stressors in natural systems. Thus, laboratory experiments, which practical considerations limit to a few stressors at a few levels each, are hard to link to real-world conditions. In addition, while the existence of complex interactions among stressors can be identified by current statistical methods, these methods do not provide a means to construct mathematical models of these interactions. In this paper, we offer a two-step process by which complex interactions of stressors on biological systems can be modeled in an experimental design that is within the limits of practicality. We begin with the notion that environmental conditions circumscribe an n-dimensional hyperspace within which biological processes or end points are embedded. We then randomly sample this hyperspace to establish experimental conditions that span the range of the relevant parameters and conduct the experiment(s) based upon these selected conditions. Models of the complex interactions of the parameters are then extracted using machine learning tools, specifically artificial neural networks. This approach can rapidly generate highly accurate models of biological responses to complex interactions among environmentally relevant toxins, identify critical subspaces where nonlinear responses exist, and provide an expedient means of designing traditional experiments to test the impact of complex mixtures on biological responses. Further, this can be accomplished with an astonishingly small sample size.
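The first step, randomly sampling the stressor hyperspace, can be sketched in a few lines; the stressor names and ranges below are hypothetical, and the subsequent neural-network fitting step is left to any standard regression tool.

```python
import random

def sample_hyperspace(bounds, k, seed=0):
    """Draw k uniform random design points from the n-dimensional box of
    stressor ranges, e.g. {"toxin_a": (0.0, 10.0), "temp_C": (15.0, 35.0)}.
    Each point is one candidate experimental condition."""
    rng = random.Random(seed)
    return [{name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
            for _ in range(k)]
```

Running the experiment at each sampled point and then fitting, say, a small feed-forward network to (condition, response) pairs yields the interaction model the paper describes.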
Turner-Stokes, Lynne; Sutch, Stephen; Dredge, Robert
2012-03-01
To describe the rationale and development of a casemix model and costing methodology for tariff development for specialist neurorehabilitation services in the UK. Patients with complex needs incur higher treatment costs; fair payment should be weighted in proportion to the costs of providing treatment, and should allow for variation over time. Casemix model and band-weighting: case complexity is measured by the Rehabilitation Complexity Scale (RCS), and cases are divided into five bands of complexity based on the total RCS score. The principal determinant of costs in rehabilitation is staff time. Total staff hours/week (estimated from the Northwick Park Nursing and Therapy Dependency Scales) are analysed within each complexity band, through cross-sectional analysis of parallel ratings. A 'band-weighting' factor is derived from the relative proportions of staff time within each of the five bands. Total unit treatment costs are obtained from retrospective analysis of provider hospitals' budget and accounting statements. Mean bed-day costs (total unit cost/occupied bed days) are divided broadly into 'variable' and 'non-variable' components. In the weighted costing model, the band-weighting factor is applied to the variable portion of the bed-day cost to derive a banded cost, and thence a set of cost-multipliers. Preliminary data from one unit are presented to illustrate how this weighted costing model will be applied to derive a multilevel banded payment model, based on serial complexity ratings, to allow for change over time.
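The weighted costing arithmetic described can be sketched as follows; all figures in the test are hypothetical, and the function and parameter names are illustrative rather than taken from the tariff model.

```python
def banded_bed_day_cost(unit_cost, occupied_bed_days, variable_share, band_weight):
    """Split the mean bed-day cost into fixed and variable parts and apply
    the complexity band weight to the variable part only, as the weighted
    costing model prescribes."""
    mean_cost = unit_cost / occupied_bed_days
    fixed = mean_cost * (1.0 - variable_share)
    return fixed + mean_cost * variable_share * band_weight
```

Dividing the banded cost by the unweighted mean bed-day cost gives the cost-multiplier for that band; serial complexity ratings then let the multiplier change as the patient's band changes over an admission.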
Modeling and Simulation of Lab-on-a-Chip Systems
2005-08-12
[Only fragments of the abstract and table of contents survive extraction.] The work models sample transport in complex chip geometries (including multiple turns) and derives variations of sample concentration profiles in laminar diffusion-based micromixers. Chapter 6 covers modeling of laminar diffusion-based complex electrokinetic passive micromixers, including multi-stream (inter-digital) micromixers.
Computational methods to predict railcar response to track cross-level variations
DOT National Transportation Integrated Search
1976-09-01
The rocking response of railroad freight cars to track cross-level variations is studied using: (1) a reduced complexity digital simulation model, and (2) a quasi-linear describing function analysis. The reduced complexity digital simulation model em...
Modeling electrokinetic flows by consistent implicit incompressible smoothed particle hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro
2017-04-01
We present an efficient implicit incompressible smoothed particle hydrodynamics (I2SPH) discretization of Navier-Stokes, Poisson-Boltzmann, and advection-diffusion equations subject to Dirichlet or Robin boundary conditions. It is applied to model various two and three dimensional electrokinetic flows in simple or complex geometries. The I2SPH's accuracy and convergence are examined via comparison with analytical solutions, grid-based numerical solutions, or empirical models. The new method provides a framework to explore broader applications of SPH in microfluidics and complex fluids with charged objects, such as colloids and biomolecules, in arbitrary complex geometries.
Hill, Renee J.; Chopra, Pradeep; Richardi, Toni
2012-01-01
Explaining the etiology of Complex Regional Pain Syndrome (CRPS) from the psychogenic model is exceedingly unsophisticated, because neurocognitive deficits, neuroanatomical abnormalities, and distortions in cognitive mapping are features of CRPS pathology. More importantly, many people who have developed CRPS have no history of mental illness. The psychogenic model offers comfort to physicians and mental health practitioners (MHPs) who have difficulty understanding pain maintained by newly uncovered neuroinflammatory processes. With increased education about CRPS through a biopsychosocial perspective, both physicians and MHPs can better diagnose, treat, and manage CRPS symptomatology. PMID:24223338
Mathematical concepts for modeling human behavior in complex man-machine systems
NASA Technical Reports Server (NTRS)
Johannsen, G.; Rouse, W. B.
1979-01-01
Many human behavior (e.g., manual control) models have been found to be inadequate for describing processes in certain real complex man-machine systems. An attempt is made to find a way to overcome this problem by examining the range of applicability of existing mathematical models with respect to the hierarchy of human activities in real complex tasks. Automobile driving is chosen as a baseline scenario, and a hierarchy of human activities is derived by analyzing this task in general terms. A structural description leads to a block diagram and a time-sharing computer analogy.
Investigation of model-based physical design restrictions (Invited Paper)
NASA Astrophysics Data System (ADS)
Lucas, Kevin; Baron, Stanislas; Belledent, Jerome; Boone, Robert; Borjon, Amandine; Couderc, Christophe; Patterson, Kyle; Riviere-Cazaux, Lionel; Rody, Yves; Sundermann, Frank; Toublan, Olivier; Trouiller, Yorick; Urbani, Jean-Christophe; Wimmer, Karl
2005-05-01
As lithography and other patterning processes become more complex and more non-linear with each generation, the task of physical design rules necessarily increases in complexity also. The goal of the physical design rules is to define the boundary between the physical layout structures which will yield well from those which will not. This is essentially a rule-based pre-silicon guarantee of layout correctness. However the rapid increase in design rule requirement complexity has created logistical problems for both the design and process functions. Therefore, similar to the semiconductor industry's transition from rule-based to model-based optical proximity correction (OPC) due to increased patterning complexity, opportunities for improving physical design restrictions by implementing model-based physical design methods are evident. In this paper we analyze the possible need and applications for model-based physical design restrictions (MBPDR). We first analyze the traditional design rule evolution, development and usage methodologies for semiconductor manufacturers. Next we discuss examples of specific design rule challenges requiring new solution methods in the patterning regime of low K1 lithography and highly complex RET. We then evaluate possible working strategies for MBPDR in the process development and product design flows, including examples of recent model-based pre-silicon verification techniques. Finally we summarize with a proposed flow and key considerations for MBPDR implementation.
Formalizing the role of agent-based modeling in causal inference and epidemiology.
Marshall, Brandon D L; Galea, Sandro
2015-01-15
Calls for the adoption of complex systems approaches, including agent-based modeling, in the field of epidemiology have largely centered on the potential for such methods to examine complex disease etiologies, which are characterized by feedback behavior, interference, threshold dynamics, and multiple interacting causal effects. However, considerable theoretical and practical issues impede the capacity of agent-based methods to examine and evaluate causal effects and thus illuminate new areas for intervention. We build on this work by describing how agent-based models can be used to simulate counterfactual outcomes in the presence of complexity. We show that these models are of particular utility when the hypothesized causal mechanisms exhibit a high degree of interdependence between multiple causal effects and when interference (i.e., one person's exposure affects the outcome of others) is present and of intrinsic scientific interest. Although not without challenges, agent-based modeling (and complex systems methods broadly) represent a promising novel approach to identify and evaluate complex causal effects, and they are thus well suited to complement other modern epidemiologic methods of etiologic inquiry. © The Author 2014. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved.
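The core idea, simulating counterfactual outcomes under interference by rerunning the same agent-based world with different treatment assignments, can be sketched with a toy contagion model; all names and parameters below are illustrative, not from the paper.

```python
import random

def contagion_run(adj, treated, beta=1.0, seed_node=0, seed=0):
    """One agent-based run: a contagion spreads from seed_node over a contact
    network {node: [neighbors]}; `treated` agents are immune.  Because spread
    passes through agents, treating one agent changes the outcomes of others
    (interference).  Returns the number of infected agents."""
    rng = random.Random(seed)
    infected = set()
    frontier = []
    if seed_node not in treated:
        infected.add(seed_node)
        frontier.append(seed_node)
    while frontier:
        nxt = []
        for v in frontier:
            for u in adj[v]:
                if u not in infected and u not in treated and rng.random() < beta:
                    infected.add(u)
                    nxt.append(u)
        frontier = nxt
    return len(infected)
```

Running the model twice with the same random seed but different treated sets yields a paired counterfactual contrast, the quantity a regression on observed data cannot recover when interference is present.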
Gradient-based model calibration with proxy-model assistance
NASA Astrophysics Data System (ADS)
Burrows, Wesley; Doherty, John
2016-02-01
Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, this allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
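A minimal sketch of the proxy-assisted scheme: finite-difference derivatives are taken on the cheap proxy to populate the Jacobian, while the expensive model is run only to test each parameter upgrade. This illustrates the general idea under simplifying assumptions (unweighted least squares, no regularization); it is not the PEST implementation.

```python
import numpy as np

def calibrate_with_proxy(expensive, proxy, p0, obs, iters=5, eps=1e-6):
    """Gauss-Newton calibration in which the Jacobian is filled by cheap
    proxy runs, and the expensive model is run only to accept or reject
    each proposed parameter upgrade."""
    p = np.asarray(p0, dtype=float)
    best = np.sum((expensive(p) - obs) ** 2)
    for _ in range(iters):
        r = proxy(p) - obs
        J = np.empty((r.size, p.size))
        for j in range(p.size):            # finite differences on the proxy only
            dp = p.copy()
            dp[j] += eps
            J[:, j] = (proxy(dp) - proxy(p)) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        trial = p + step
        misfit = np.sum((expensive(trial) - obs) ** 2)
        if misfit < best:                  # upgrade accepted only if it truly helps
            p, best = trial, misfit
    return p
```

With n parameters, each iteration costs n + 1 proxy runs but only one expensive run (the upgrade test), which is the source of the computational savings the abstract describes; the upgrade tests are also the part that parallelizes readily.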
Refined views of multi-protein complexes in the erythrocyte membrane
Mankelow, TJ; Satchwell, TJ; Burton, NM
2015-01-01
The erythrocyte membrane has been extensively studied, both as a model membrane system and to investigate its role in gas exchange and transport. Much is now known about the protein components of the membrane, how they are organised into large multi-protein complexes and how they interact with each other within these complexes. Many links between the membrane and the cytoskeleton have also been delineated and have been demonstrated to be crucial for maintaining the deformability and integrity of the erythrocyte. In this study we have refined previous, highly speculative molecular models of these complexes by including the available data pertaining to known protein-protein interactions. While the refined models remain highly speculative, they provide an evolving framework for visualisation of these important cellular structures at the atomic level. PMID:22465511
A systems-based approach for integrated design of materials, products and design process chains
NASA Astrophysics Data System (ADS)
Panchal, Jitesh H.; Choi, Hae-Jin; Allen, Janet K.; McDowell, David L.; Mistree, Farrokh
2007-12-01
The concurrent design of materials and products provides designers with flexibility to achieve design objectives that were not previously accessible. However, the improved flexibility comes at a cost of increased complexity of the design process chains and the materials simulation models used for executing the design chains. Efforts to reduce the complexity generally result in increased uncertainty. We contend that a systems based approach is essential for managing both the complexity and the uncertainty in design process chains and simulation models in concurrent material and product design. Our approach is based on simplifying the design process chains systematically such that the resulting uncertainty does not significantly affect the overall system performance. Similarly, instead of striving for accurate models for multiscale systems (that are inherently complex), we rely on making design decisions that are robust to uncertainties in the models. Accordingly, we pursue hierarchical modeling in the context of design of multiscale systems. In this paper our focus is on design process chains. We present a systems based approach, premised on the assumption that complex systems can be designed efficiently by managing the complexity of design process chains. The approach relies on (a) the use of reusable interaction patterns to model design process chains, and (b) consideration of design process decisions using value-of-information based metrics. The approach is illustrated using a Multifunctional Energetic Structural Material (MESM) design example. Energetic materials store considerable energy which can be released through shock-induced detonation; conventionally, they are not engineered for strength properties. The design objectives for the MESM in this paper include both sufficient strength and energy release characteristics. The design is carried out by using models at different length and time scales that simulate different aspects of the system. 
Finally, by applying the method to the MESM design problem, we show that the integrated design of materials and products can be carried out more efficiently by explicitly accounting for design process decisions with the hierarchy of models.
Palma, P N; Moura, I; LeGall, J; Van Beeumen, J; Wampler, J E; Moura, J J
1994-05-31
Small electron-transfer proteins such as flavodoxin (16 kDa) and the tetraheme cytochrome c3 (13 kDa) have been used to mimic, in vitro, part of the complex electron-transfer chain operating between substrate electron donors and respiratory electron acceptors, in sulfate-reducing bacteria (Desulfovibrio species). The nature and properties of the complex formed between these proteins are revealed by 1H-NMR and molecular modeling approaches. Our previous study with the Desulfovibrio vulgaris proteins [Moura, I., Moura, J.J. G., Santos, M.H., & Xavier, A. V. (1980) Cienc. Biol. (Portugal) 5, 195-197; Stewart, D.E. LeGall, J., Moura, I., Moura, J. J. G., Peck, H.D. Jr., Xavier, A. V., Weiner, P. K., & Wampler, J.E. (1988) Biochemistry 27, 2444-2450] indicated that the complex between cytochrome c3 and flavodoxin could be monitored by changes in the NMR signals of the heme methyl groups of the cytochrome and that the electrostatic surface charge (Coulomb's law) on the two proteins favored interaction between one unique heme of the cytochrome with flavodoxin. If the interaction is indeed driven by the electrostatic complementarity between the acidic flavodoxin and a unique positive region of the cytochrome c3, other homologous proteins from these two families of proteins might be expected to interact similarly. In this study, three homologous Desulfovibrio cytochromes c3 were used, which show a remarkable variation in their individual isoelectric points (ranging from 5.5 to 9.5). On the basis of data obtained from protein-protein titrations followed at specific proton NMR signals (i.e., heme methyl resonances), a binding model for this complex has been developed with evaluation of stoichiometry and binding constants. This binding model involves one site on the cytochromes c3 and two sites on the flavodoxin, with formation of a ternary complex at saturation. 
In order to understand the potential chemical form of the binding model, a structural model for the hypothetical ternary complex, formed between one molecule of Desulfovibrio salexigens flavodoxin and two molecules of cytochrome c3, is proposed. These molecular models of the complexes were constructed on the basis of complementarity of Coulombic electrostatic surface potentials, using the available X-ray structures of the isolated proteins and, when required, model structures (D. salexigens flavodoxin and Desulfovibrio desulfuricans ATCC 27774 cytochrome c3) predicted by homology modeling.