Multi-scale Modeling in Clinical Oncology: Opportunities and Barriers to Success
Yankeelov, Thomas E.; An, Gary; Saut, Oliver; Luebeck, E. Georg; Popel, Aleksander S.; Ribba, Benjamin; Vicini, Paolo; Zhou, Xiaobo; Weis, Jared A.; Ye, Kaiming; Genin, Guy M.
2016-01-01
Hierarchical processes spanning several orders of magnitude of both space and time underlie nearly all cancers. Multi-scale statistical, mathematical, and computational modeling methods are central to designing, implementing and assessing treatment strategies that account for these hierarchies. The basic science underlying these modeling efforts is maturing into a new discipline that is close to influencing and facilitating clinical successes. The purpose of this review is to capture the state-of-the-art as well as the key barriers to success for multi-scale modeling in clinical oncology. We begin with a summary of the long-envisioned promise of multi-scale modeling in clinical oncology, including the synthesis of disparate data types into models that reveal underlying mechanisms and allow for experimental testing of hypotheses. We then evaluate the mathematical techniques employed most widely and present several examples illustrating their application as well as the current gap between pre-clinical and clinical applications. We conclude with a discussion of what we view to be the key challenges and opportunities for multi-scale modeling in clinical oncology. PMID:27384942
Multi-scale modelling of rubber-like materials and soft tissues: an appraisal
Puglisi, G.
2016-01-01
We survey, in a partial way, multi-scale approaches for the modelling of rubber-like and soft tissues and compare them with classical macroscopic phenomenological models. Our aim is to show how it is possible to obtain practical mathematical models for the mechanical behaviour of these materials incorporating mesoscopic (network scale) information. Multi-scale approaches are crucial for the theoretical comprehension and prediction of the complex mechanical response of these materials. Moreover, such models are fundamental in the perspective of the design, through manipulation at the micro- and nano-scales, of new polymeric and bioinspired materials with exceptional macroscopic properties. PMID:27118927
ERIC Educational Resources Information Center
Kappler, Ulrike; Rowland, Susan L.; Pedwell, Rhianna K.
2017-01-01
Systems biology is frequently taught with an emphasis on mathematical modeling approaches. This focus effectively excludes most biology, biochemistry, and molecular biology students, who are not mathematics majors. The mathematical focus can also present a misleading picture of systems biology, which is a multi-disciplinary pursuit requiring…
Scale Interactions in the Tropics from a Simple Multi-Cloud Model
NASA Astrophysics Data System (ADS)
Niu, X.; Biello, J. A.
2017-12-01
Our lack of a complete understanding of the interaction between moist convection and equatorial waves remains an impediment in the numerical simulation of large-scale organization, such as the Madden-Julian Oscillation (MJO). The aim of this project is to understand interactions across spatial scales in the tropics within a simplified framework for scale interactions, while using a simplified framework to describe the basic features of moist convection. Using multiple asymptotic scales, Biello and Majda [1] derived a multi-scale model of moist tropical dynamics (IMMD), which separates three regimes: the planetary-scale climatology, the synoptic-scale waves, and the planetary-scale anomalies. The scales and strength of the observed MJO would place it in the regime of planetary-scale anomalies, which are themselves forced by nonlinear upscale fluxes from the synoptic-scale waves. In order to close this model and determine whether it provides a self-consistent theory of the MJO, a model for diabatic heating due to moist convection must be implemented along with the IMMD. The multi-cloud parameterization is a model proposed by Khouider and Majda [2] to describe the three basic cloud types (congestus, deep and stratiform) that are most responsible for tropical diabatic heating. We implement a simplified version of the multi-cloud model that is based on results derived from large eddy simulations of convection [3]. We present this simplified multi-cloud model and show results of numerical experiments beginning with a variety of convective forcing states. Preliminary results on upscale fluxes, from synoptic scales to planetary-scale anomalies, will be presented. [1] Biello J A, Majda A J. Intraseasonal multi-scale moist dynamics of the tropical atmosphere. Communications in Mathematical Sciences, 2010, 8(2): 519-540. [2] Khouider B, Majda A J. A simple multicloud parameterization for convectively coupled tropical waves. Part I: Linear analysis. Journal of the Atmospheric Sciences, 2006, 63(4): 1308-1323. [3] Dorrestijn J, Crommelin D T, Biello J A, et al. A data-driven multi-cloud model for stochastic parametrization of deep convection. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 2013, 371(1991): 20120374.
A tool for multi-scale modelling of the renal nephron
Nickerson, David P.; Terkildsen, Jonna R.; Hamilton, Kirk L.; Hunter, Peter J.
2011-01-01
We present the development of a tool, which provides users with the ability to visualize and interact with a comprehensive description of a multi-scale model of the renal nephron. A one-dimensional anatomical model of the nephron has been created and is used for visualization and modelling of tubule transport in various nephron anatomical segments. Mathematical models of nephron segments are embedded in the one-dimensional model. At the cellular level, these segment models use models encoded in CellML to describe cellular and subcellular transport kinetics. A web-based presentation environment has been developed that allows the user to visualize and navigate through the multi-scale nephron model, including simulation results, at the different spatial scales encompassed by the model description. The Zinc extension to Firefox is used to provide an interactive three-dimensional view of the tubule model and the native Firefox rendering of scalable vector graphics is used to present schematic diagrams for cellular and subcellular scale models. The model viewer is embedded in a web page that dynamically presents content based on user input. For example, when viewing the whole nephron model, the user might be presented with information on the various embedded segment models as they select them in the three-dimensional model view. Alternatively, the user chooses to focus the model viewer on a cellular model located in a particular nephron segment in order to view the various membrane transport proteins. Selecting a specific protein may then present the user with a description of the mathematical model governing the behaviour of that protein—including the mathematical model itself and various simulation experiments used to validate the model against the literature. PMID:22670210
Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun
2014-01-01
Differences exist among the analysis results of agricultural monitoring and crop production based on remote sensing observations that are obtained at different spatial scales from multiple remote sensors in the same time period and processed by the same algorithms, models or methods. These differences can mainly be described quantitatively from three aspects, i.e. multiple remote sensing observations, crop parameter estimation models, and spatial scale effects of surface parameters. Our research proposed a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide a reference for further studies in agricultural applications with multiple remotely sensed observations from different sources. The new method was constructed on the basis of the physical and mathematical properties of multi-source and multi-scale reflectance datasets. Statistical theory was used to extract the statistical characteristics of the multiple surface reflectance datasets and to quantitatively analyse the spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at the small spatial scale as the baseline data, Gaussian distribution theory was selected to correct the multiple surface reflectance datasets based on the physical characteristics and mathematical distribution properties obtained above, and their spatial variations. The proposed method was verified with two sets of multiple satellite images obtained over two experimental fields located in Inner Mongolia and Beijing, China, with different degrees of homogeneity of the underlying surfaces. Experimental results indicate that differences between surface reflectance datasets at multiple spatial scales could be effectively corrected over non-homogeneous underlying surfaces, which provides a database for further multi-source and multi-scale crop growth monitoring and yield prediction, and their corresponding consistency analysis and evaluation. PMID:25405760
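To make the statistical correction idea concrete, here is a minimal Python sketch (not the authors' algorithm; the array names, synthetic values and the simple Gaussian moment-matching rule are assumptions for illustration only) that rescales a coarse-scale reflectance product to the statistics of a fine-scale baseline:

```python
import numpy as np

def gaussian_match(coarse, baseline):
    """Rescale a coarse-scale reflectance array so that its mean and standard
    deviation match those of a fine-scale baseline (assumed roughly Gaussian)."""
    mu_c, sd_c = coarse.mean(), coarse.std()
    mu_b, sd_b = baseline.mean(), baseline.std()
    # z-score the coarse data, then re-express it in the baseline distribution
    return (coarse - mu_c) / sd_c * sd_b + mu_b

# toy example with synthetic reflectance values
rng = np.random.default_rng(0)
fine = rng.normal(0.25, 0.05, size=10_000).clip(0, 1)    # small-scale baseline
coarse = rng.normal(0.30, 0.08, size=2_500).clip(0, 1)   # biased coarse product
corrected = gaussian_match(coarse, fine)
print(round(corrected.mean(), 3), round(corrected.std(), 3))  # approx. 0.25, 0.05
```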
Multiscale Modeling in the Clinic: Drug Design and Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancy, Colleen E.; An, Gary; Cannon, William R.
A wide range of length and time scales are relevant to pharmacology, especially in drug development, drug design and drug delivery. Therefore, multi-scale computational modeling and simulation methods and paradigms that advance the linkage of phenomena occurring at these multiple scales have become increasingly important. Multi-scale approaches present in silico opportunities to advance laboratory research to bedside clinical applications in pharmaceuticals research. This is achievable through the capability of modeling to reveal phenomena occurring across multiple spatial and temporal scales, which are not otherwise readily accessible to experimentation. The resultant models, when validated, are capable of making testable predictions to guide drug design and delivery. In this review we describe the goals, methods, and opportunities of multi-scale modeling in drug design and development. We demonstrate the impact of multiple scales of modeling in this field. We indicate the common mathematical techniques employed for multi-scale modeling approaches used in pharmacology and present several examples illustrating the current state-of-the-art regarding drug development for: Excitable Systems (Heart); Cancer (Metastasis and Differentiation); Cancer (Angiogenesis and Drug Targeting); Metabolic Disorders; and Inflammation and Sepsis. We conclude with a focus on barriers to successful clinical translation of drug development, drug design and drug delivery multi-scale models.
Mathematics, Information, and Life Sciences
2012-03-05
INS • Chip-scale atomic clocks • Ad hoc networks • Polymorphic networks • Agile networks • Laser communications • Frequency-agile RF systems … FY12 BAA: Bionavigation (Bio) • Neuromorphic Computing (Human) • Multi-scale Modeling (Math) • Foundations of Information Systems (Info) • BRI
Advances in multi-scale modeling of solidification and casting processes
NASA Astrophysics Data System (ADS)
Liu, Baicheng; Xu, Qingyan; Jing, Tao; Shen, Houfa; Han, Zhiqiang
2011-04-01
The development of the aviation, energy and automobile industries requires advanced integrated product/process R&D systems that can optimize both the product and the process design. Integrated computational materials engineering (ICME) is a promising approach to fulfill this requirement and make product and process development efficient, economic, and environmentally friendly. Advances in multi-scale modeling of solidification and casting processes, including mathematical models as well as engineering applications, are presented in this paper. Dendrite morphology of magnesium and aluminum alloys during solidification is studied using phase-field and cellular automaton methods, together with mathematical models of segregation in large steel ingots and microstructure models of unidirectionally solidified turbine blade castings. In addition, some engineering case studies are discussed, including microstructure simulation of aluminum castings for the automobile industry, segregation of large steel ingots for the energy industry, and microstructure simulation of unidirectionally solidified turbine blade castings for the aviation industry.
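For orientation, a widely used phase-field formulation for dendritic solidification (a generic textbook form, not necessarily the exact model of the paper) couples an order parameter \(\phi\) to a dimensionless temperature field \(u\):

\[
\tau \frac{\partial \phi}{\partial t} = W^{2}\nabla^{2}\phi + \phi\,(1-\phi^{2}) + \lambda\,u\,(1-\phi^{2})^{2},
\qquad
\frac{\partial u}{\partial t} = D\,\nabla^{2}u + \frac{1}{2}\frac{\partial \phi}{\partial t},
\]

where \(\phi = +1\) in the solid and \(\phi = -1\) in the liquid, \(W\) sets the interface width, \(\tau\) the interface relaxation time, \(\lambda\) the coupling constant and \(D\) the thermal diffusivity.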
Validation of a multi-phase plant-wide model for the description of the aeration process in a WWTP.
Lizarralde, I; Fernández-Arévalo, T; Beltrán, S; Ayesa, E; Grau, P
2018-02-01
This paper introduces a new mathematical model built under the PC-PWM methodology to describe the aeration process in a full-scale WWTP. This methodology enables a systematic and rigorous incorporation of chemical and physico-chemical transformations into biochemical process models, particularly for the description of liquid-gas transfer to describe the aeration process. The mathematical model constructed is able to reproduce biological COD and nitrogen removal, liquid-gas transfer and chemical reactions. The capability of the model to describe the liquid-gas mass transfer has been tested by comparing simulated and experimental results in a full-scale WWTP. Finally, an exploration by simulation has been undertaken to show the potential of the mathematical model. Copyright © 2017 Elsevier Ltd. All rights reserved.
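As a hedged reminder of the standard liquid-gas transfer term that aeration models of this kind typically contain (the paper's PC-PWM formulation is more general), the dissolved oxygen balance can be written as

\[
\frac{dS_{O_2}}{dt} = k_{L}a\,\bigl(S_{O_2}^{*} - S_{O_2}\bigr) - \mathrm{OUR},
\]

where \(S_{O_2}\) is the dissolved oxygen concentration, \(S_{O_2}^{*}\) its saturation value, \(k_{L}a\) the volumetric mass-transfer coefficient and OUR the biological oxygen uptake rate.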
Simulation of Left Atrial Function Using a Multi-Scale Model of the Cardiovascular System
Pironet, Antoine; Dauby, Pierre C.; Paeme, Sabine; Kosta, Sarah; Chase, J. Geoffrey; Desaive, Thomas
2013-01-01
During a full cardiac cycle, the left atrium successively behaves as a reservoir, a conduit and a pump. This complex behavior makes it unrealistic to apply the time-varying elastance theory to characterize the left atrium, first, because this theory has known limitations, and second, because it is still uncertain whether the load-independence hypothesis holds. In this study, we aim to bypass this uncertainty by relying on another kind of mathematical model of the cardiac chambers. In the present work, we describe both the left atrium and the left ventricle with a multi-scale model. The multi-scale property of this model comes from the fact that pressure inside a cardiac chamber is derived from a model of sarcomere behavior. Macroscopic model parameters are identified from reference dog hemodynamic data. The multi-scale model of the cardiovascular system including the left atrium is then simulated to show that the physiological roles of the left atrium are correctly reproduced. This includes a biphasic pressure wave and an eight-shaped pressure-volume loop. We also test the validity of our model in non-basal conditions by reproducing a preload reduction experiment by inferior vena cava occlusion with the model. We compute the variation of eight indices before and after this experiment and obtain the same variation as experimentally observed for seven out of the eight indices. In summary, the multi-scale mathematical model presented in this work is able to correctly account for the three roles of the left atrium and also exhibits a realistic left atrial pressure-volume loop. Furthermore, the model has been previously presented and validated for the left ventricle. This makes it a proper alternative to the time-varying elastance theory when the focus is on precisely representing the left atrial and left ventricular behaviors. PMID:23755183
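For context, the time-varying elastance description that the authors argue is ill-suited to the atrium relates chamber pressure and volume as

\[
P(t) = E(t)\,\bigl(V(t) - V_{0}\bigr),
\]

with \(E(t)\) a prescribed, supposedly load-independent elastance and \(V_{0}\) the unstressed volume; the multi-scale model instead derives chamber pressure from sarcomere-level mechanics.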
Multi-scale and Multi-physics Numerical Methods for Modeling Transport in Mesoscopic Systems
2014-10-13
(1) … function and wide-band fast multipole methods for Hankel waves; (2) a new linear-scaling discontinuous Galerkin density functional theory, which provides a … inflow boundary condition for Wigner quantum transport equations. Also, a book titled "Computational Methods for Electromagnetic Phenomena…" and a paper "…equations in layered media with FMM for Bessel functions", Science China Mathematics, (12 2013): 2561, were published. doi: … TOTAL: 6 papers published in peer-reviewed journals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Qiang
The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next generation atomistic-to-continuum multiscale simulations. In addition, a rigorous study of finite element discretizations of peridynamics will be considered. Using the fact that peridynamics is spatially derivative free, we will also characterize the space of admissible peridynamic solutions and carry out systematic analyses of the models, in particular rigorously showing how peridynamics encompasses fracture and other failure phenomena. Additional aspects of the project include the mathematical and numerical analysis of peridynamics applied to stochastic peridynamics models. In summary, the project will make feasible mathematically consistent multiscale models for the analysis and design of advanced materials.
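For reference, the bond-based peridynamic equation of motion, which replaces the local stress divergence of classical elasticity with a nonlocal integral over a horizon \(H_{\mathbf{x}}\), reads

\[
\rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t)
= \int_{H_{\mathbf{x}}} \mathbf{f}\bigl(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\,\mathbf{x}'-\mathbf{x}\bigr)\,dV_{\mathbf{x}'}
+ \mathbf{b}(\mathbf{x},t),
\]

where \(\mathbf{f}\) is the pairwise bond force density and \(\mathbf{b}\) a body force; in the limit of a vanishing horizon and smooth deformation the classical elastic operator is recovered.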
Will big data yield new mathematics? An evolving synergy with neuroscience
Feng, S.; Holmes, P.
2016-01-01
New mathematics has often been inspired by new insights into the natural world. Here we describe some ongoing and possible future interactions among the massive data sets being collected in neuroscience, methods for their analysis and mathematical models of the underlying, still largely uncharted neural substrates that generate these data. We start by recalling events that occurred in turbulence modelling when substantial space-time velocity field measurements and numerical simulations allowed a new perspective on the governing equations of fluid mechanics. While no analogous global mathematical model of neural processes exists, we argue that big data may enable validation or at least rejection of models at cellular to brain area scales and may illuminate connections among models. We give examples of such models and survey some relatively new experimental technologies, including optogenetics and functional imaging, that can report neural activity in live animals performing complex tasks. The search for analytical techniques for these data is already yielding new mathematics, and we believe their multi-scale nature may help relate well-established models, such as the Hodgkin–Huxley equations for single neurons, to more abstract models of neural circuits, brain areas and larger networks within the brain. In brief, we envisage a closer liaison, if not a marriage, between neuroscience and mathematics. PMID:27516705
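The Hodgkin-Huxley equations mentioned above take the form of a membrane current balance, for example

\[
C_{m}\frac{dV}{dt} = -\bar{g}_{Na}\,m^{3}h\,(V-E_{Na}) - \bar{g}_{K}\,n^{4}(V-E_{K}) - g_{L}(V-E_{L}) + I_{\mathrm{ext}},
\]

with the gating variables \(m\), \(h\) and \(n\) obeying first-order kinetics of the form \(dx/dt = \alpha_x(V)(1-x) - \beta_x(V)\,x\).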
Integrated network analysis and effective tools in plant systems biology
Fukushima, Atsushi; Kanaya, Shigehiko; Nishida, Kozo
2014-01-01
One of the ultimate goals in plant systems biology is to elucidate the genotype-phenotype relationship in plant cellular systems. Integrated network analysis that combines omics data with mathematical models has received particular attention. Here we focus on the latest cutting-edge computational advances that facilitate their combination. We highlight (1) network visualization tools, (2) pathway analyses, (3) genome-scale metabolic reconstruction, and (4) the integration of high-throughput experimental data and mathematical models. Multi-omics data covering the genome, transcriptome, proteome, and metabolome, together with mathematical models, are expected to integrate and expand our knowledge of complex plant metabolism. PMID:25408696
Mathematical model comparing of the multi-level economics systems
NASA Astrophysics Data System (ADS)
Brykalov, S. M.; Kryanev, A. V.
2017-12-01
A mathematical model (scheme) for multi-level comparison of economic systems, each characterized by a system of indices, is worked out. In this model, indicators from peer review and forecasting of the economic system under consideration can be used. The model can take into account uncertainty in the estimated parameter values or expert estimates. The model uses a multi-criteria approach based on Pareto solutions.
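To illustrate the Pareto-based multi-criteria comparison in concrete terms (a minimal sketch under the assumption that each system is scored by an index vector in which larger values are better; not the authors' scheme), the following Python snippet flags the non-dominated systems:

```python
import numpy as np

def pareto_mask(scores):
    """Boolean mask of Pareto-optimal rows of an (n_systems, n_indices) array,
    assuming larger index values are better."""
    n = scores.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        others = np.delete(scores, i, axis=0)
        # system i is dominated if some other system is >= on every index
        # and strictly > on at least one index
        dominated = np.any(np.all(others >= scores[i], axis=1) &
                           np.any(others > scores[i], axis=1))
        mask[i] = not dominated
    return mask

systems = np.array([[0.7, 0.5, 0.9],
                    [0.6, 0.4, 0.8],    # dominated by the first system
                    [0.9, 0.3, 0.7]])
print(pareto_mask(systems))             # [ True False  True]
```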
Sanga, Sandeep; Frieboes, Hermann B.; Zheng, Xiaoming; Gatenby, Robert; Bearer, Elaine L.; Cristini, Vittorio
2007-01-01
Empirical evidence and theoretical studies suggest that the phenotype, i.e., cellular- and molecular-scale dynamics, including proliferation rate and adhesiveness due to microenvironmental factors and gene expression that govern tumor growth and invasiveness, also determine gross tumor-scale morphology. It has been difficult to quantify the relative effect of these links on disease progression and prognosis using conventional clinical and experimental methods and observables. As a result, successful individualized treatment of highly malignant and invasive cancers, such as glioblastoma, via surgical resection and chemotherapy cannot be offered and outcomes are generally poor. What is needed is a deterministic, quantifiable method to enable understanding of the connections between phenotype and tumor morphology. Here, we critically review advantages and disadvantages of recent computational modeling efforts (e.g., continuum, discrete, and cellular automata models) that have pursued this understanding. Based on this assessment, we propose and discuss a multi-scale, i.e., from the molecular to the gross tumor scale, mathematical and computational “first-principle” approach based on mass conservation and other physical laws, such as employed in reaction-diffusion systems. Model variables describe known characteristics of tumor behavior, and parameters and functional relationships across scales are informed from in vitro, in vivo and ex vivo biology. We demonstrate that this methodology, once coupled to tumor imaging and tumor biopsy or cell culture data, should enable prediction of tumor growth and therapy outcome through quantification of the relation between the underlying dynamics and morphological characteristics. In particular, morphologic stability analysis of this mathematical model reveals that tumor cell patterning at the tumor-host interface is regulated by cell proliferation, adhesion and other phenotypic characteristics: histopathology information of tumor boundary can be inputted to the mathematical model and used as phenotype-diagnostic tool and thus to predict collective and individual tumor cell invasion of surrounding host. This approach further provides a means to deterministically test effects of novel and hypothetical therapy strategies on tumor behavior. PMID:17629503
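A representative mass-conservation equation of the reaction-diffusion type referred to above (written in a generic form, not the paper's full multi-scale system) tracks the tumor cell density \(n(\mathbf{x},t)\):

\[
\frac{\partial n}{\partial t} = \nabla\!\cdot\!\bigl(D\,\nabla n\bigr) + \rho\,n\left(1-\frac{n}{K}\right),
\]

where \(D\) is a possibly spatially varying motility coefficient, \(\rho\) the proliferation rate and \(K\) the carrying capacity; phenotypic properties such as adhesion enter through these coefficients and additional terms.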
NASA Astrophysics Data System (ADS)
Liu, C.; Yang, X.; Bailey, V. L.; Bond-Lamberty, B. P.; Hinkle, C.
2013-12-01
Mathematical representations of hydrological and biogeochemical processes in soil, plant, aquatic, and atmospheric systems vary with scale. Process-rich models are typically used to describe hydrological and biogeochemical processes at the pore and small scales, while empirical, correlation approaches are often used at the watershed and regional scales. A major challenge for multi-scale modeling is that water flow, biogeochemical processes, and reactive transport are described using different physical laws and/or expressions at the different scales. For example, the flow is governed by the Navier-Stokes equations at the pore scale in soils, by Darcy's law in soil columns and aquifers, and by the Navier-Stokes equations again in open water bodies (ponds, lakes, rivers) and the atmospheric surface layer. This research explores whether the physical laws at the different scales and in different physical domains can be unified to form a unified multi-scale model (UMSM) to systematically investigate the cross-scale, cross-domain behavior of fundamental processes at different scales. This presentation will discuss our research on the concept, mathematical equations, and numerical execution of the UMSM. Three-dimensional, multi-scale hydrological processes at the Disney Wilderness Preservation (DWP) site, Florida will be used as an example for demonstrating the application of the UMSM. In this research, the UMSM was used to simulate hydrological processes in rooting zones at the pore and small scales, including water migration in soils under saturated and unsaturated conditions, root-induced hydrological redistribution, and the role of rooting-zone biogeochemical properties (e.g., root exudates and microbial mucilage) in water storage and wetting/draining. The small-scale simulation results were used to estimate effective water retention properties in soil columns that were superimposed on the bulk soil water retention properties at the DWP site. The UMSM parameterized from the smaller-scale simulations was then used to simulate coupled flow and moisture migration in soils in saturated and unsaturated zones, surface and groundwater exchange, and surface water flow in streams and lakes at the DWP site under dynamic precipitation conditions. Laboratory measurements of soil hydrological and biogeochemical properties are used to parameterize the UMSM at the small scales, and field measurements are used to evaluate the UMSM.
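The scale-dependent flow laws that the UMSM seeks to unify are, at the pore and open-water scales, the incompressible Navier-Stokes equations and, at the soil-column and aquifer scales, Darcy's law:

\[
\rho\!\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\!\cdot\!\nabla\mathbf{u}\right) = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \rho\,\mathbf{g},
\qquad
\mathbf{q} = -\frac{k}{\mu}\bigl(\nabla p - \rho\,\mathbf{g}\bigr),
\]

where \(\mathbf{u}\) is the fluid velocity, \(\mathbf{q}\) the Darcy flux, \(k\) the permeability and \(\mu\) the dynamic viscosity.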
Biology meets physics: Reductionism and multi-scale modeling of morphogenesis.
Green, Sara; Batterman, Robert
2017-02-01
A common reductionist assumption is that macro-scale behaviors can be described "bottom-up" if only sufficient details about lower-scale processes are available. The view that an "ideal" or "fundamental" physics would be sufficient to explain all macro-scale phenomena has been met with criticism from philosophers of biology. Specifically, scholars have pointed to the impossibility of deducing biological explanations from physical ones, and to the irreducible nature of distinctively biological processes such as gene regulation and evolution. This paper takes a step back in asking whether bottom-up modeling is feasible even when modeling simple physical systems across scales. By comparing examples of multi-scale modeling in physics and biology, we argue that the "tyranny of scales" problem presents a challenge to reductive explanations in both physics and biology. The problem refers to the scale-dependency of physical and biological behaviors that forces researchers to combine different models relying on different scale-specific mathematical strategies and boundary conditions. Analyzing the ways in which different models are combined in multi-scale modeling also has implications for the relation between physics and biology. Contrary to the assumption that physical science approaches provide reductive explanations in biology, we exemplify how inputs from physics often reveal the importance of macro-scale models and explanations. We illustrate this through an examination of the role of biomechanical modeling in developmental biology. In such contexts, the relation between models at different scales and from different disciplines is neither reductive nor completely autonomous, but interdependent. Copyright © 2016 Elsevier Ltd. All rights reserved.
Preliminary Findings from a Multi-Year Scale-Up Effectiveness Trial of Everyday Mathematics
ERIC Educational Resources Information Center
Vaden-Kiernan, Michael; Borman, Geoffrey; Caverly, Sarah; Bell, Nance; Ruiz de Castilla, Veronica; Sullivan, Kate
2015-01-01
Given the importance of early mathematics instruction and curricula for preventing mathematics difficulties in later grades, it is necessary to identify effective mathematics curricula and instruction to ensure that children become proficient in early mathematics content and procedures. Everyday Mathematics (EM), was reviewed by the What Works…
Multi-Party, Whole-Body Interactions in Mathematical Activity
ERIC Educational Resources Information Center
Ma, Jasmine Y.
2017-01-01
This study interrogates the contributions of multi-party, whole-body interactions to students' collaboration and negotiation of mathematics ideas in a task setting called walking scale geometry, where bodies in interaction became complex resources for students' emerging goals in problem solving. Whole bodies took up overlapping roles representing…
NASA Astrophysics Data System (ADS)
Huang, J. H.; Wang, X. J.; Wang, J.
2016-02-01
The primary purpose of this paper is to propose a mathematical model of PLZT ceramic with coupled multi-physics fields, e.g. thermal, electric, mechanical and light field. To this end, the coupling relationships of multi-physics fields and the mechanism of some effects resulting in the photostrictive effect are analyzed theoretically, based on which a mathematical model considering coupled multi-physics fields is established. According to the analysis and experimental results, the mathematical model can explain the hysteresis phenomenon and the variation trend of the photo-induced voltage very well and is in agreement with the experimental curves. In addition, the PLZT bimorph is applied as an energy transducer for a photovoltaic-electrostatic hybrid actuated micromirror, and the relation of the rotation angle and the photo-induced voltage is discussed based on the novel photostrictive mathematical model.
Moss, Robert; Grosse, Thibault; Marchant, Ivanny; Lassau, Nathalie; Gueyffier, François; Thomas, S. Randall
2012-01-01
Mathematical models that integrate multi-scale physiological data can offer insight into physiological and pathophysiological function, and may eventually assist in individualized predictive medicine. We present a methodology for performing systematic analyses of multi-parameter interactions in such complex, multi-scale models. Human physiology models are often based on or inspired by Arthur Guyton's whole-body circulatory regulation model. Despite the significance of this model, it has not been the subject of a systematic and comprehensive sensitivity study. Therefore, we use this model as a case study for our methodology. Our analysis of the Guyton model reveals how the multitude of model parameters combine to affect the model dynamics, and how interesting combinations of parameters may be identified. It also includes a “virtual population” from which “virtual individuals” can be chosen, on the basis of exhibiting conditions similar to those of a real-world patient. This lays the groundwork for using the Guyton model for in silico exploration of pathophysiological states and treatment strategies. The results presented here illustrate several potential uses for the entire dataset of sensitivity results and the “virtual individuals” that we have generated, which are included in the supplementary material. More generally, the presented methodology is applicable to modern, more complex multi-scale physiological models. PMID:22761561
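A minimal sketch of the kind of multi-parameter perturbation used to generate a virtual population (hypothetical model function and parameter names; the actual Guyton model involves thousands of variables) might look like this in Python:

```python
import numpy as np

def simulate(params):
    """Hypothetical stand-in for one run of a whole-body circulatory model,
    returning a single output of interest (e.g., mean arterial pressure)."""
    return 100.0 * params["resistance"] * params["blood_volume"] ** 0.5

rng = np.random.default_rng(1)
baseline = {"resistance": 1.0, "blood_volume": 1.0}

# virtual population: each virtual individual gets randomly perturbed parameters
population = []
for _ in range(1000):
    individual = {k: v * rng.lognormal(sigma=0.1) for k, v in baseline.items()}
    population.append((individual, simulate(individual)))

outputs = np.array([out for _, out in population])
print("output range across virtual individuals:",
      outputs.min().round(1), "to", outputs.max().round(1))
```

Virtual individuals whose outputs resemble a given patient's measurements can then be selected from such a population, as described above.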
NASA Astrophysics Data System (ADS)
Aksenov, A. G.; Chechetkin, V. M.
2018-04-01
Most of the energy released in the gravitational collapse of the cores of massive stars is carried away by neutrinos. Neutrinos play a pivotal role in explaining core-collapse supernovae. Currently, mathematical models of gravitational collapse are based on multi-dimensional gas dynamics and thermonuclear reactions, while neutrino transport is treated in a simplified way. Multi-dimensional gas dynamics is used with neutrino transport in the flux-limited diffusion approximation to study the role of multi-dimensional effects. The possibility of large-scale convection is discussed, which is interesting both for explaining SN II and for setting up observations to register possible high-energy (≳10 MeV) neutrinos from the supernova. A new multi-dimensional, multi-temperature gas dynamics method with neutrino transport is presented.
NASA Astrophysics Data System (ADS)
Schmidt, M.; Hugentobler, U.; Jakowski, N.; Dettmering, D.; Liang, W.; Limberger, M.; Wilken, V.; Gerzen, T.; Hoque, M.; Berdermann, J.
2012-04-01
Near real-time high resolution and high precision ionosphere models are needed for a large number of applications, e.g. in navigation, positioning, telecommunications or astronautics. Today these ionosphere models are mostly empirical, i.e., based purely on mathematical approaches. In the DFG project 'Multi-scale model of the ionosphere from the combination of modern space-geodetic satellite techniques (MuSIK)' the complex phenomena within the ionosphere are described vertically by combining the Chapman electron density profile with a plasmasphere layer. In order to consider the horizontal and temporal behaviour, the fundamental target parameters of this physics-motivated approach are modelled by series expansions in terms of tensor products of localizing B-spline functions depending on longitude, latitude and time. For testing the procedure the model will be applied to an appropriate region in South America, which covers relevant ionospheric processes and phenomena such as the Equatorial Anomaly. The project connects the expertise of the three project partners, namely Deutsches Geodätisches Forschungsinstitut (DGFI) Munich, the Institute of Astronomical and Physical Geodesy (IAPG) of the Technical University Munich (TUM) and the German Aerospace Center (DLR), Neustrelitz. In this presentation we focus on the current status of the project. In the first year of the project we studied the behaviour of the ionosphere in the test region, set up appropriate test periods covering high and low solar activity as well as winter and summer, and started the data collection, analysis, pre-processing and archiving. We partly developed the mathematical-physical modelling approach and performed first computations based on simulated input data. Here we present information on the data coverage for the area and the time periods of our investigations and we outline challenges of the multi-dimensional mathematical-physical modelling approach. We show first results, discuss problems in modelling and possible solution strategies, and finally we address open questions.
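The Chapman electron density profile used as the vertical building block can be written as

\[
N_{e}(h) = N_{m}\,\exp\!\left(\tfrac{1}{2}\bigl[1 - z - e^{-z}\bigr]\right),
\qquad z = \frac{h - h_{m}}{H},
\]

where \(N_{m}\) is the peak electron density at height \(h_{m}\) and \(H\) the scale height; in the project these target parameters are expanded in tensor products of B-spline functions of longitude, latitude and time.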
Santos, Guido; Lai, Xin; Eberhardt, Martin; Vera, Julio
2018-01-01
Pneumococcal infection is the most frequent cause of pneumonia, and one of the most prevalent diseases worldwide. The population groups at high risk of death from bacterial pneumonia are infants, elderly and immunosuppressed people. These groups are more vulnerable because they have immature or impaired immune systems, the efficacy of their response to vaccines is lower, and antibiotic treatment often does not take place until the inflammatory response triggered is already overwhelming. The immune response to bacterial lung infections involves dynamic interactions between several types of cells whose activation is driven by intracellular molecular networks. A feasible approach to the integration of knowledge and data linking tissue, cellular and intracellular events and the construction of hypotheses in this area is the use of mathematical modeling. For this paper, we used a multi-level computational model to analyse the role of cellular and molecular interactions during the first 10 h after alveolar invasion of Streptococcus pneumoniae bacteria. By “multi-level” we mean that we simulated the interplay between different temporal and spatial scales in a single computational model. In this instance, we included the intracellular scale of processes driving lung epithelial cell activation together with the scale of cell-to-cell interactions at the alveolar tissue. In our analysis, we combined systematic model simulations with logistic regression analysis and decision trees to find genotypic-phenotypic signatures that explain differences in bacteria strain infectivity. According to our simulations, pneumococci benefit from a high dwelling probability and a high proliferation rate during the first stages of infection. In addition to this, the model predicts that during the very early phases of infection the bacterial capsule could be an impediment to the establishment of the alveolar infection because it impairs bacterial colonization. PMID:29868515
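As an illustrative sketch of the genotype-phenotype classification step (synthetic data and invented feature names, not the authors' pipeline), logistic regression can relate assumed strain parameters to a binary infection outcome:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
dwelling_prob = rng.uniform(0, 1, n)       # assumed strain parameter
proliferation = rng.uniform(0, 1, n)       # assumed strain parameter
capsule = rng.uniform(0, 1, n)             # assumed strain parameter

# synthetic "infection established" outcome loosely following the reported trend:
# high dwelling probability and proliferation help, the capsule hinders early colonization
logit = 4 * dwelling_prob + 3 * proliferation - 2 * capsule - 2.5
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([dwelling_prob, proliferation, capsule])
model = LogisticRegression().fit(X, outcome)
print(dict(zip(["dwelling", "proliferation", "capsule"], model.coef_[0].round(2))))
```

The fitted coefficients then indicate how strongly each assumed strain property pushes toward establishment of infection.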
Powathil, Gibin G; Swat, Maciej; Chaplain, Mark A J
2015-02-01
The multiscale complexity of cancer as a disease necessitates a corresponding multiscale modelling approach to produce truly predictive mathematical models capable of improving existing treatment protocols. To capture all the dynamics of solid tumour growth and its progression, mathematical modellers need to couple biological processes occurring at various spatial and temporal scales (from genes to tissues). Because the effectiveness of cancer therapy is considerably affected by intracellular and extracellular heterogeneities as well as by dynamical changes in the tissue microenvironment, any model attempt to optimise existing protocols must consider these factors, ultimately leading to improved multimodal treatment regimes. By improving existing and building new mathematical models of cancer, modellers can play an important role in preventing the use of potentially sub-optimal treatment combinations. In this paper, we analyse a multiscale computational mathematical model for cancer growth and spread, incorporating the multiple effects of radiation therapy and chemotherapy on the patient survival probability, and implement the model using two different cell-based modelling techniques. We show that the insights provided by such multiscale modelling approaches can ultimately help in designing optimal patient-specific multi-modality treatment protocols that may increase patients' quality of life. Copyright © 2014 Elsevier Ltd. All rights reserved.
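Radiotherapy response in multiscale models of this kind is commonly represented by the linear-quadratic survival model (quoted here as a generic example; the paper's implementation may differ in detail):

\[
S(D) = \exp\!\bigl(-\alpha D - \beta D^{2}\bigr),
\]

where \(S(D)\) is the surviving fraction after a dose \(D\) and \(\alpha\), \(\beta\) are radiosensitivity parameters that can be made cell-cycle and oxygenation dependent.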
A Novel Approach to Develop the Lower Order Model of Multi-Input Multi-Output System
NASA Astrophysics Data System (ADS)
Rajalakshmy, P.; Dharmalingam, S.; Jayakumar, J.
2017-10-01
A mathematical model is a virtual entity that uses mathematical language to describe the behavior of a system. Mathematical models are used particularly in the natural sciences and engineering disciplines like physics, biology, and electrical engineering, as well as in the social sciences like economics, sociology and political science. Physicists, engineers, computer scientists, and economists use mathematical models most extensively. With the advent of high-performance processors and advanced mathematical computations, it is possible to develop high-performing simulators for complicated Multi Input Multi Output (MIMO) systems like quadruple-tank systems, aircraft, and boilers. This paper presents the development of the mathematical model of a 500 MW utility boiler, which is a highly complex system. A synergistic combination of operational experience, system identification and lower-order modeling philosophy has been effectively used to develop a simplified but accurate model of the circulation system of a utility boiler, which is a MIMO system. The results obtained are found to be in good agreement with the physics of the process and with the results obtained through the design procedure. The model obtained can be directly used for control system studies and to realize hardware simulators for boiler testing and operator training.
Three essays on multi-level optimization models and applications
NASA Astrophysics Data System (ADS)
Rahdar, Mohammad
The general form of a multi-level mathematical programming problem is a set of nested optimization problems, in which each level controls a series of decision variables independently. However, the value of the decision variables may also impact the objective functions of other levels. A two-level model is called a bilevel model and can be considered as a Stackelberg game with a leader and a follower. The leader anticipates the response of the follower and optimizes its objective function, and then the follower reacts to the leader's action. Multi-level decision-making models have many real-world applications such as government decisions, energy policies, market economies, and network design. However, there is a lack of capable algorithms for solving medium- and large-scale problems of these types. The dissertation is devoted to both theoretical research and applications of multi-level mathematical programming models, and consists of three parts, each in a paper format. The first part studies the renewable energy portfolio under two major renewable energy policies. The potential competition for biomass for the growth of the renewable energy portfolio in the United States and other interactions between the two policies over the next twenty years are investigated. This problem mainly has two levels of decision makers: the government/policy makers and biofuel producers/electricity generators/farmers. We focus on the lower-level problem to predict the amount of capacity expansion, fuel production, and power generation. In the second part, we address uncertainty over demand and lead time in a multi-stage mathematical programming problem. We propose a two-stage tri-level optimization model in the framework of a rolling horizon approach to reduce the dimensionality of the multi-stage problem. In the third part of the dissertation, we introduce a new branch and bound algorithm to solve bilevel linear programming problems. The total time is reduced by solving a smaller relaxation problem in each node and decreasing the number of iterations. Computational experiments show that the proposed algorithm is faster than existing ones.
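The bilevel linear programs treated in the third essay have the generic leader-follower form

\[
\min_{x \ge 0}\; c^{\top}x + d^{\top}y
\quad \text{s.t.} \quad A x + B y \le b,
\qquad
y \in \arg\min_{y' \ge 0}\;\bigl\{\, f^{\top}y' \;:\; P x + Q y' \le q \,\bigr\},
\]

in which the leader chooses \(x\) while anticipating that the follower responds with a \(y\) that is optimal for the lower-level problem.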
Final Technical Report: Mathematical Foundations for Uncertainty Quantification in Materials Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plechac, Petr; Vlachos, Dionisios G.
We developed path-wise, information-theory-based and goal-oriented sensitivity analysis and parameter identification methods for complex high-dimensional dynamics, in particular of non-equilibrium extended molecular systems. The combination of these novel methodologies provided the first methods in the literature capable of handling UQ questions for stochastic complex systems with some or all of the following features: (a) multi-scale stochastic models such as (bio)chemical reaction networks with a very large number of parameters, (b) spatially distributed systems such as kinetic Monte Carlo or Langevin dynamics, (c) non-equilibrium processes typically associated with coupled physico-chemical mechanisms, driven boundary conditions, hybrid micro-macro systems, etc. A particular computational challenge arises in simulations of multi-scale reaction networks and molecular systems. Mathematical techniques were applied to in silico prediction of novel materials with emphasis on the effect of microstructure on model uncertainty quantification (UQ). We outline acceleration methods to make calculations of real chemistry feasible, followed by two complementary tasks on structure optimization and microstructure-induced UQ.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Majda, Andrew J.; Xing, Yulong; Mohammadian, Majid
Determining the finite-amplitude preconditioned states in the hurricane embryo, which lead to tropical cyclogenesis, is a central issue in contemporary meteorology. In the embryo there is competition between different preconditioning mechanisms involving hydrodynamics and moist thermodynamics, which can lead to cyclogenesis. Here systematic asymptotic methods from applied mathematics are utilized to develop new simplified moist multi-scale models starting from the moist anelastic equations. Three interesting multi-scale models emerge in the analysis. The balanced mesoscale vortex (BMV) dynamics and the microscale balanced hot tower (BHT) dynamics involve simplified balanced equations without gravity waves for vertical vorticity amplification due to moist heat sources and incorporate nonlinear advective fluxes across scales. The BMV model is the central one for tropical cyclogenesis in the embryo. The moist mesoscale wave (MMW) dynamics involves simplified equations for mesoscale moisture fluctuations, as well as linear hydrostatic waves driven by heat sources from moisture and eddy flux divergences. A simplified cloud physics model for deep convection is introduced here and used to study moist axisymmetric plumes in the BHT model. A simple application in periodic geometry involving the effects of mesoscale vertical shear and moist microscale hot towers on vortex amplification is developed here to illustrate features of the coupled multi-scale models. These results illustrate the use of these models in isolating key mechanisms in the embryo in a simplified context.
May, Christian P; Kolokotroni, Eleni; Stamatakos, Georgios S; Büchler, Philippe
2011-10-01
Modeling of tumor growth has been performed according to various approaches addressing different biocomplexity levels and spatiotemporal scales. Mathematical treatments range from partial differential equation based diffusion models to rule-based cellular level simulators, aiming at both improving our quantitative understanding of the underlying biological processes and, in the mid- and long term, constructing reliable multi-scale predictive platforms to support patient-individualized treatment planning and optimization. The aim of this paper is to establish a multi-scale and multi-physics approach to tumor modeling taking into account both the cellular and the macroscopic mechanical level. Therefore, an already developed biomodel of clinical tumor growth and response to treatment is self-consistently coupled with a biomechanical model. Results are presented for the free growth case of the imageable component of an initially point-like glioblastoma multiforme tumor. The composite model leads to significant tumor shape corrections that are achieved through the utilization of environmental pressure information and the application of biomechanical principles. Using the ratio of smallest to largest moment of inertia of the tumor material to quantify the effect of our coupled approach, we have found a tumor shape correction of 20% by coupling biomechanics to the cellular simulator as compared to a cellular simulation without preferred growth directions. We conclude that the integration of the two models provides additional morphological insight into realistic tumor growth behavior. Therefore, it might be used for the development of an advanced oncosimulator focusing on tumor types for which morphology plays an important role in surgical and/or radio-therapeutic treatment planning. Copyright © 2011 Elsevier Ltd. All rights reserved.
Findings from a Multi-Year Scale-up Effectiveness Trial of Everyday Mathematics
ERIC Educational Resources Information Center
Vaden-Kiernan, Michael; Borman, Geoffrey; Caverly, Sarah; Bell, Nance; Ruiz de Castilla, Veronica; Sullivan, Kate; Rodriguez, Debra
2015-01-01
This study addresses the effectiveness of "Everyday Mathematics" (EM), a widely used core mathematics curriculum that reflects over two decades of National Science Foundation (NSF)-sponsored research and development studies (Klein, 2007; National Research Council, 2004) and aligns well with recommended policies and practices by the…
A multi-objective programming model for assessment the GHG emissions in MSW management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mavrotas, George, E-mail: mavrotas@chemeng.ntua.gr; Skoulaxinou, Sotiria; Gakis, Nikos
2013-09-15
Highlights: • A multi-objective, multi-period optimization model. • A solution approach for generating the Pareto front with mathematical programming. • A very detailed description of the model (decision variables, parameters, equations). • The use of IPCC 2006 guidelines for landfill emissions (first-order decay model) in the mathematical programming formulation. - Abstract: In this study a multi-objective mathematical programming model is developed to take GHG emissions into account in Municipal Solid Waste (MSW) management. Mathematical programming models are often used for structural, design and operational optimization of various systems (energy, supply chain, processes, etc.). Over the last twenty years they have been used increasingly often in MSW management to provide optimal solutions, with cost being the usual driver of the optimization. In our work we consider GHG emissions as an additional criterion, aiming at a multi-objective approach. The Pareto front (cost vs. GHG emissions) of the system is generated using an appropriate multi-objective method. This information is essential to the decision maker, who can explore the trade-offs along the Pareto curve and select the most preferred among the Pareto optimal solutions. In the present work a detailed multi-objective, multi-period mathematical programming model is developed to describe the waste management problem. Apart from the bi-objective approach, the major innovations of the model are (1) the detailed modeling considering 34 materials and 42 technologies, (2) the detailed calculation of the energy content of the various streams based on detailed material balances, and (3) the incorporation of the IPCC guidelines for the CH4 generated in landfills (first-order decay model). The equations of the model are described in full detail. Finally, the whole approach is illustrated with a case study referring to the application of the model in a Greek region.
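The IPCC 2006 first-order decay idea incorporated in the formulation can be summarized, in a simplified form that omits the year-by-year inventory bookkeeping of the full guidelines, as

\[
\mathrm{DDOC}_{m}(t) = \mathrm{DDOC}_{m}(0)\,e^{-kt},
\qquad
\mathrm{CH}_{4}\ \text{generated in year } t \;\propto\; \mathrm{DDOC}_{m}(t-1)\,\bigl(1 - e^{-k}\bigr),
\]

where \(\mathrm{DDOC}_{m}\) is the mass of decomposable degradable organic carbon deposited in the landfill and \(k\) the decay rate constant; the methane actually generated is obtained from the decomposed fraction via stoichiometric and correction factors.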
NASA Astrophysics Data System (ADS)
Vaiana, Michael; Muldoon, Sarah Feldt
2018-01-01
The field of neuroscience is facing an unprecedented expanse in the volume and diversity of available data. Traditionally, network models have provided key insights into the structure and function of the brain. With the advent of big data in neuroscience, both more sophisticated models capable of characterizing the increasing complexity of the data and novel methods of quantitative analysis are needed. Recently, multilayer networks, a mathematical extension of traditional networks, have gained increasing popularity in neuroscience due to their ability to capture the full information of multi-model, multi-scale, spatiotemporal data sets. Here, we review multilayer networks and their applications in neuroscience, showing how incorporating the multilayer framework into network neuroscience analysis has uncovered previously hidden features of brain networks. We specifically highlight the use of multilayer networks to model disease, structure-function relationships, network evolution, and link multi-scale data. Finally, we close with a discussion of promising new directions of multilayer network neuroscience research and propose a modified definition of multilayer networks designed to unite and clarify the use of the multilayer formalism in describing real-world systems.
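One common way to make the multilayer formalism concrete is the supra-adjacency matrix, in which single-layer adjacency matrices sit on the block diagonal and interlayer links couple each node to its replicas in other layers. The sketch below assumes that encoding with a uniform coupling weight and toy connectivity matrices; it does not reproduce the modified definition proposed by the authors.

```python
import numpy as np

def supra_adjacency(layers, omega=1.0):
    """Stack L single-layer adjacency matrices (all N x N) into an
    (L*N) x (L*N) supra-adjacency matrix, coupling each node to its
    replicas in every other layer with weight omega."""
    L, N = len(layers), layers[0].shape[0]
    A = np.zeros((L * N, L * N))
    for k, layer in enumerate(layers):
        A[k * N:(k + 1) * N, k * N:(k + 1) * N] = layer
    for i in range(L):
        for j in range(L):
            if i != j:
                A[i * N:(i + 1) * N, j * N:(j + 1) * N] = omega * np.eye(N)
    return A

# Two toy 3-node layers (e.g., structural and functional connectivity)
struct = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
func   = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0]], float)
A = supra_adjacency([struct, func], omega=0.5)
print(A.shape)                      # (6, 6)
print(np.linalg.eigvalsh(A)[-1])    # leading eigenvalue of the supra-matrix
```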
Using CellML with OpenCMISS to Simulate Multi-Scale Physiology
Nickerson, David P.; Ladd, David; Hussan, Jagir R.; Safaei, Soroush; Suresh, Vinod; Hunter, Peter J.; Bradley, Christopher P.
2014-01-01
OpenCMISS is an open-source modeling environment aimed, in particular, at the solution of bioengineering problems. OpenCMISS consists of two main parts: a computational library (OpenCMISS-Iron) and a field manipulation and visualization library (OpenCMISS-Zinc). OpenCMISS is designed for the solution of coupled multi-scale, multi-physics problems in a general-purpose parallel environment. CellML is an XML format designed to encode biophysically based systems of ordinary differential equations and both linear and non-linear algebraic equations. A primary design goal of CellML is to allow mathematical models to be encoded in a modular and reusable format to aid reproducibility and interoperability of modeling studies. In OpenCMISS, we make use of CellML models to enable users to configure various aspects of their multi-scale physiological models. This avoids the need for users to be familiar with the OpenCMISS internal code in order to perform customized computational experiments. Examples of this are: cellular electrophysiology models embedded in tissue electrical propagation models; material constitutive relationships for mechanical growth and deformation simulations; time-varying boundary conditions for various problem domains; and fluid constitutive relationships and lumped-parameter models. In this paper, we provide implementation details describing how CellML models are integrated into multi-scale physiological models in OpenCMISS. The external interface OpenCMISS presents to users is also described, including specific examples exemplifying the extensibility and usability these tools provide to the physiological modeling and simulation community. We conclude with some thoughts on future extension of OpenCMISS to make use of other community-developed information standards, such as FieldML, SED-ML, and BioSignalML. Plans for the integration of accelerator code (graphics processing unit and field-programmable gate array) generated from CellML models are also discussed. PMID:25601911
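The pattern described above, a cell-level ODE model evaluated at every node of a tissue-level propagation solve, can be illustrated without OpenCMISS or CellML. The sketch below uses a FitzHugh-Nagumo system as a stand-in for a CellML electrophysiology model and couples it to explicit 1-D diffusion by operator splitting; the grid, time step and all parameters are illustrative assumptions.

```python
import numpy as np

# Toy "cell model": FitzHugh-Nagumo standing in for a CellML electrophysiology model.
def cell_rhs(v, w, i_stim):
    dv = v - v ** 3 / 3.0 - w + i_stim
    dw = 0.08 * (v + 0.7 - 0.8 * w)
    return dv, dw

N, dx, dt, D = 200, 0.5, 0.02, 1.0     # illustrative 1-D grid, time step, diffusivity
v = -1.2 * np.ones(N)                  # membrane-potential-like variable at each node
w = np.zeros(N)                        # recovery variable at each node
activated = np.zeros(N, bool)

for step in range(5000):               # operator splitting: cell step, then tissue step
    i_stim = np.zeros(N)
    if step < 100:
        i_stim[:5] = 0.8               # brief stimulus at the left edge
    dv, dw = cell_rhs(v, w, i_stim)    # 1) advance the local ODE model at every node
    v += dt * dv
    w += dt * dw
    lap = np.zeros(N)                  # 2) explicit diffusion of v (zero-flux ends)
    lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx ** 2
    lap[0], lap[-1] = (v[1] - v[0]) / dx ** 2, (v[-2] - v[-1]) / dx ** 2
    v += dt * D * lap
    activated |= v > 0.0

print("fraction of tissue nodes excited:", activated.mean())
```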
Trends in modeling Biomedical Complex Systems
Milanesi, Luciano; Romano, Paolo; Castellani, Gastone; Remondini, Daniel; Liò, Petro
2009-01-01
In this paper we provide an introduction to the techniques for multi-scale complex biological systems, from the single bio-molecule to the cell, combining theoretical modeling, experiments, informatics tools and technologies suitable for biological and biomedical research, which are becoming increasingly multidisciplinary, multidimensional and information-driven. The most important concepts on mathematical modeling methodologies and statistical inference, bioinformatics and standards tools to investigate complex biomedical systems are discussed and the prominent literature useful to both the practitioner and the theoretician are presented. PMID:19828068
Progress in Mathematical Modeling of Gastrointestinal Slow Wave Abnormalities
Du, Peng; Calder, Stefan; Angeli, Timothy R.; Sathar, Shameer; Paskaranandavadivel, Niranchan; O'Grady, Gregory; Cheng, Leo K.
2018-01-01
Gastrointestinal (GI) motility is regulated in part by electrophysiological events called slow waves, which are generated by the interstitial cells of Cajal (ICC). Slow waves propagate by a process of “entrainment,” which occurs over a decreasing gradient of intrinsic frequencies in the antegrade direction across much of the GI tract. Abnormal initiation and conduction of slow waves have been demonstrated in, and linked to, a number of GI motility disorders. A range of mathematical models have been developed to study abnormal slow waves and applied to propose novel methods for non-invasive detection and therapy. This review provides a general outline of GI slow wave abnormalities and their recent classification using multi-electrode (high-resolution) mapping methods, with a particular emphasis on the spatial patterns of these abnormal activities. The recently-developed mathematical models are introduced in order of their biophysical scale from cellular to whole-organ levels. The modeling techniques, main findings from the simulations, and potential future directions arising from notable studies are discussed. PMID:29379448
Formulating a stand-growth model for mathematical programming problems in Appalachian forests
Gary W. Miller; Jay Sullivan
1993-01-01
Some growth and yield simulators applicable to central hardwood forests can be formulated for use in mathematical programming models that are designed to optimize multi-stand, multi-resource management problems. Once in the required format, growth equations serve as model constraints, defining the dynamics of stand development brought about by harvesting decisions. In...
Homogenization of Large-Scale Movement Models in Ecology
Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.
2011-01-01
A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
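In homogenization arguments of this kind the large-scale coefficient is generally not the arithmetic mean of the local motilities; for ecological diffusion a harmonic-type average is usually quoted, and that is assumed in the short sketch below, which only illustrates how a small fraction of low-motility habitat can dominate the effective large-scale coefficient. The raster and motility values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative habitat raster of motility values mu(x) (km^2/day):
# low motility in dense cover patches, high motility in open habitat.
mu = np.where(rng.random((50, 50)) < 0.3, 0.05, 2.0)

arithmetic = mu.mean()
harmonic = 1.0 / np.mean(1.0 / mu)

print(f"arithmetic mean of mu : {arithmetic:.3f} km^2/day")
print(f"harmonic mean of mu   : {harmonic:.3f} km^2/day")
# The harmonic-type average is dominated by the slow habitat, so a modest
# fraction of low-motility cells sharply reduces the effective large-scale
# coefficient -- the kind of small-scale control of large-scale movement
# that the homogenization procedure quantifies.
```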
ERIC Educational Resources Information Center
Campbell, William James
2017-01-01
This dissertation describes a mathematics curriculum and instruction design experiment involving a series of embodied mathematical activities conducted in two Colorado elementary schools. Activities designed for this experiment include multi-scalar number line models focused on supporting students' understanding of elementary mathematics. Realistic…
Zhu, Lin; Dai, Zhenxue; Gong, Huili; ...
2015-06-12
Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with optimized different integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
Kaiyala, Karl J
2014-01-01
Mathematical models for the dependence of energy expenditure (EE) on body mass and composition are essential tools in metabolic phenotyping. EE scales over broad ranges of body mass as a non-linear allometric function. When considered within restricted ranges of body mass, however, allometric EE curves exhibit 'local linearity.' Indeed, modern EE analysis makes extensive use of linear models. Such models typically involve one or two body mass compartments (e.g., fat free mass and fat mass). Importantly, linear EE models typically involve a non-zero (usually positive) y-intercept term of uncertain origin, a recurring theme in discussions of EE analysis and a source of confounding in traditional ratio-based EE normalization. Emerging linear model approaches quantify whole-body resting EE (REE) in terms of individual organ masses (e.g., liver, kidneys, heart, brain). Proponents of individual organ REE modeling hypothesize that multi-organ linear models may eliminate non-zero y-intercepts. This could have advantages in adjusting REE for body mass and composition. Studies reveal that individual organ REE is an allometric function of total body mass. I exploit first-order Taylor linearization of individual organ REEs to model the manner in which individual organs contribute to whole-body REE and to the non-zero y-intercept in linear REE models. The model predicts that REE analysis at the individual organ-tissue level will not eliminate intercept terms. I demonstrate that the parameters of a linear EE equation can be transformed into the parameters of the underlying 'latent' allometric equation. This permits estimates of the allometric scaling of EE in a diverse variety of physiological states that are not represented in the allometric EE literature but are well represented by published linear EE analyses.
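The transformation mentioned in the abstract can be made concrete by treating the fitted linear REE model as the first-order Taylor expansion of REE = a·M^b about a reference mass M0, which gives slope a·b·M0^(b-1) and intercept a·M0^b·(1-b); inverting those two relations recovers the latent allometric parameters. The sketch below performs a round trip with illustrative numbers, not values from the article.

```python
def allometric_from_linear(slope, intercept, m0):
    """Recover (a, b) of REE = a * M**b from the slope and intercept of a
    linear REE model fitted around reference mass m0, assuming the linear
    model is the first-order Taylor expansion of the allometric curve."""
    b = slope * m0 / (intercept + slope * m0)
    a = (intercept + slope * m0) / m0 ** b
    return a, b

# Round-trip check with illustrative values a = 70, b = 0.75, m0 = 30 (arbitrary units)
a_true, b_true, m0 = 70.0, 0.75, 30.0
slope = a_true * b_true * m0 ** (b_true - 1)         # Taylor slope at m0
intercept = a_true * m0 ** b_true * (1 - b_true)     # Taylor (non-zero) intercept
print(allometric_from_linear(slope, intercept, m0))  # ~ (70.0, 0.75)
```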
Kirschner, Denise E; Linderman, Jennifer J
2009-04-01
In addition to traditional and novel experimental approaches to study host-pathogen interactions, mathematical and computer modelling have recently been applied to address open questions in this area. These modelling tools not only offer an additional avenue for exploring disease dynamics at multiple biological scales, but also complement and extend knowledge gained via experimental tools. In this review, we outline four examples where modelling has complemented current experimental techniques in a way that can or has already pushed our knowledge of host-pathogen dynamics forward. Two of the modelling approaches presented go hand in hand with articles in this issue exploring fluorescence resonance energy transfer and two-photon intravital microscopy. Two others explore virtual or 'in silico' deletion and depletion as well as a new method to understand and guide studies in genetic epidemiology. In each of these examples, the complementary nature of modelling and experiment is discussed. We further note that multi-scale modelling may allow us to integrate information across length (molecular, cellular, tissue, organism, population) and time (e.g. seconds to lifetimes). In sum, when combined, these compatible approaches offer new opportunities for understanding host-pathogen interactions.
Method and system to perform energy-extraction based active noise control
NASA Technical Reports Server (NTRS)
Kelkar, Atul (Inventor); Joshi, Suresh M. (Inventor)
2009-01-01
A method to provide active noise control to reduce noise and vibration in reverberant acoustic enclosures such as aircraft, vehicles, appliances, instruments, industrial equipment and the like is presented. A continuous-time multi-input multi-output (MIMO) state space mathematical model of the plant is obtained via analytical modeling and system identification. Compensation is designed to render the mathematical model passive in the sense of mathematical system theory. The compensated system is checked to ensure robustness of the passive property of the plant. The check ensures that the passivity is preserved if the mathematical model parameters are perturbed from nominal values. A passivity-based controller is designed and verified using numerical simulations and then tested. The controller is designed so that the resulting closed-loop response shows the desired noise reduction.
Kappler, Ulrike; Rowland, Susan L; Pedwell, Rhianna K
2017-05-01
Systems biology is frequently taught with an emphasis on mathematical modeling approaches. This focus effectively excludes most biology, biochemistry, and molecular biology students, who are not mathematics majors. The mathematical focus can also present a misleading picture of systems biology, which is a multi-disciplinary pursuit requiring collaboration between biochemists, bioinformaticians, and mathematicians. This article describes an authentic large-scale undergraduate research experience (ALURE) in systems biology that incorporates proteomics, bacterial genomics, and bioinformatics in the one exercise. This project is designed to engage students who have a basic grounding in protein chemistry and metabolism and no mathematical modeling skills. The pedagogy around the research experience is designed to help students attack complex datasets and use their emergent metabolic knowledge to make meaning from large amounts of raw data. On completing the ALURE, participants reported a significant increase in their confidence around analyzing large datasets, while the majority of the cohort reported good or great gains in a variety of skills including "analysing data for patterns" and "conducting database or internet searches." An environmental scan shows that this ALURE is the only undergraduate-level system-biology research project offered on a large-scale in Australia; this speaks to the perceived difficulty of implementing such an opportunity for students. We argue however, that based on the student feedback, allowing undergraduate students to complete a systems-biology project is both feasible and desirable, even if the students are not maths and computing majors. © 2016 by The International Union of Biochemistry and Molecular Biology, 45(3):235-248, 2017. © 2016 The International Union of Biochemistry and Molecular Biology.
Measuring Developmental Students' Mathematics Anxiety
ERIC Educational Resources Information Center
Ding, Yanqing
2016-01-01
This study conducted an item-level analysis of mathematics anxiety and examined the dimensionality of mathematics anxiety in a sample of developmental mathematics students (N = 162) by Multi-dimensional Random Coefficients Multinominal Logit Model (MRCMLM). The results indicate a moderately correlated factor structure of mathematics anxiety (r =…
NASA Astrophysics Data System (ADS)
Liu, Changjiang; Cheng, Irene; Zhang, Yi; Basu, Anup
2017-06-01
This paper presents an improved multi-scale Retinex (MSR) based enhancement for aerial images under low visibility. For traditional multi-scale Retinex, three scales are commonly employed, which limits its application scenarios. We extend our research to a general-purpose enhancement method, and design an MSR with more than three scales. Based on the mathematical analysis and deductions, an explicit multi-scale representation is proposed that balances image contrast and color consistency. In addition, a histogram truncation technique is introduced as a post-processing strategy to remap the multi-scale Retinex output to the dynamic range of the display. Analysis of experimental results and comparisons with existing algorithms demonstrate the effectiveness and generality of the proposed method. Results on image quality assessment prove the accuracy of the proposed method with respect to both objective and subjective criteria.
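A generic multi-scale Retinex of the kind discussed above can be written compactly: the log of the image minus the log of Gaussian-blurred surrounds, averaged over several scales, followed by percentile-based histogram truncation to the display range. The scales, equal weights and clip percentiles below are illustrative defaults, not the authors' tuned values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(img, sigmas=(15, 80, 250), low=1.0, high=99.0):
    """Multi-scale Retinex of a single-channel float image in [0, 1],
    followed by percentile-based histogram truncation to the display range."""
    img = np.clip(img.astype(float), 1e-6, None)      # avoid log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:                              # equal weights 1/len(sigmas)
        surround = gaussian_filter(img, sigma)
        msr += (np.log(img) - np.log(np.clip(surround, 1e-6, None))) / len(sigmas)
    lo, hi = np.percentile(msr, [low, high])          # truncate the histogram tails
    return np.clip((msr - lo) / (hi - lo + 1e-12), 0.0, 1.0)

# Toy usage on a synthetic low-contrast image
rng = np.random.default_rng(0)
scene = gaussian_filter(rng.random((256, 256)), 5) * 0.2 + 0.4
out = multi_scale_retinex(scene)
print(out.shape, round(out.min(), 2), round(out.max(), 2))
```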
Progression to multi-scale models and the application to food system intervention strategies.
Gröhn, Yrjö T
2015-02-01
The aim of this article is to discuss how the systems science approach can be used to optimize intervention strategies in food animal systems. It advocates the idea that the challenges of maintaining a safe food supply are best addressed by integrating modeling and mathematics with biological studies critical to formulation of public policy to address these challenges. Much information on the biology and epidemiology of food animal systems has been characterized through single-discipline methods, but until now this information has not been thoroughly utilized in a fully integrated manner. The examples are drawn from our current research. The first, explained in depth, uses clinical mastitis to introduce the concept of dynamic programming to optimize management decisions in dairy cows (also introducing the curse of dimensionality problem). In the second example, a compartmental epidemic model for Johne's disease with different intervention strategies is optimized. The goal of the optimization strategy depends on whether there is a relationship between Johne's and Crohn's disease. If so, optimization is based on eradication of infection; if not, it is based on the cow's performance only (i.e., economic optimization, similar to the mastitis example). The third example focuses on food safety to introduce risk assessment using Listeria monocytogenes and Salmonella Typhimurium. The last example, practical interventions to effectively manage antibiotic resistance in beef and dairy cattle systems, introduces meta-population modeling that accounts for bacterial growth not only in the host (cow), but also in the cow's feed, drinking water and the housing environment. Each example stresses the need to progress toward multi-scale modeling. The article ends with examples of multi-scale systems, from food supply systems to Johne's disease. Reducing the consequences of foodborne illnesses (i.e., minimizing disease occurrence and associated costs) can only occur through an understanding of the system as a whole, including all its complexities. Thus the goal of future research should be to merge disciplines such as molecular biology, applied mathematics and social sciences to gain a better understanding of complex systems such as the food supply chain. Copyright © 2014 Elsevier B.V. All rights reserved.
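To make the dynamic-programming idea concrete (and to hint at the curse of dimensionality as states multiply), the sketch below solves a deliberately tiny finite-horizon keep-versus-replace problem by backward induction; the health states, transition probabilities and economic figures are invented for illustration and are unrelated to the article's models.

```python
import numpy as np

# States: 0 = healthy, 1 = mastitic. Actions: keep or replace (illustrative only).
P_keep = np.array([[0.9, 0.1],      # healthy cow: stays healthy / becomes mastitic
                   [0.2, 0.8]])     # mastitic cow: recovers / stays mastitic
P_replace = np.array([[0.9, 0.1],   # a fresh replacement is assumed mostly healthy
                      [0.9, 0.1]])
r_keep = np.array([300.0, 0.0])         # net margin per period by state (invented)
r_replace = np.array([-200.0, -200.0])  # heifer margin minus replacement cost (invented)

T = 12                              # planning horizon in periods
V = np.zeros(2)                     # terminal value
policy = np.zeros((T, 2), dtype=int)
for t in reversed(range(T)):        # backward induction (Bellman recursion)
    q_keep = r_keep + P_keep @ V
    q_repl = r_replace + P_replace @ V
    policy[t] = (q_repl > q_keep).astype(int)
    V = np.maximum(q_keep, q_repl)

print("expected value of a healthy vs. mastitic cow at t=0:", np.round(V, 1))
print("optimal action (0 = keep, 1 = replace) by period and state:")
print(policy)
```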
NASA Astrophysics Data System (ADS)
Kuzle, A.
2018-06-01
The important role that metacognition plays as a predictor of student mathematical learning and mathematical problem-solving has been extensively documented. But only recently has attention turned to the primary grades, and more research is needed at this level. The goals of this paper are threefold: (1) to present a metacognitive framework for mathematics problem-solving, (2) to describe a multi-method interview approach developed to study student mathematical metacognition, and (3) to empirically evaluate the utility of the framework and the adaptation of the approach in the context of grade 2 and grade 4 mathematics problem-solving. The results are discussed not only with regard to further development of the adapted multi-method interview approach, but also with regard to its theoretical and practical implications.
Discrete Mathematical Approaches to Graph-Based Traffic Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joslyn, Cliff A.; Cowley, Wendy E.; Hogan, Emilie A.
2014-04-01
Modern cyber defense and analytics require general, formal models of cyber systems. Multi-scale network models are prime candidates for such formalisms, using discrete mathematical methods based in hierarchically-structured directed multigraphs which also include rich sets of labels. An exemplar of an application of such an approach is traffic analysis, that is, observing and analyzing connections between clients, servers, hosts, and actors within IP networks, over time, to identify characteristic or suspicious patterns. Towards that end, NetFlow (or more generically, IPFLOW) data are available from routers and servers which summarize coherent groups of IP packets flowing through the network. In this paper, we consider traffic analysis of NetFlow using both basic graph statistics and two new mathematical measures involving labeled degree distributions and time interval overlap measures. We do all of this over the VAST test data set of 96M synthetic NetFlow graph edges, against which we can identify characteristic patterns of simulated ground-truth network attacks.
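A minimal version of the data structure described above, a directed multigraph of flow records with edge labels summarized by labelled degree counts, can be assembled with networkx as sketched below; the flow records are fabricated and only basic degree statistics are shown, not the interval-overlap measures introduced in the paper.

```python
import networkx as nx
from collections import Counter

# Fabricated NetFlow-like records: (src, dst, protocol, bytes)
flows = [
    ("10.0.0.1", "10.0.0.9", "tcp", 4200),
    ("10.0.0.1", "10.0.0.9", "tcp", 180),
    ("10.0.0.2", "10.0.0.9", "udp", 90),
    ("10.0.0.9", "10.0.0.3", "tcp", 15000),
    ("10.0.0.4", "10.0.0.9", "tcp", 60),
]

G = nx.MultiDiGraph()
for src, dst, proto, nbytes in flows:
    G.add_edge(src, dst, protocol=proto, bytes=nbytes)   # parallel edges are kept

# Labelled degree distribution: per-node counts of outgoing edges by protocol
labelled_out = {n: Counter(d["protocol"] for _, _, d in G.out_edges(n, data=True))
                for n in G.nodes}
print(labelled_out)
# Plain in-degree distribution -- a basic graph statistic for spotting
# fan-in patterns such as many clients hitting one server
print(Counter(dict(G.in_degree()).values()))
```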
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kornreich, Drew E; Vaidya, Rajendra U; Ammerman, Curtt N
Integrated Computational Materials Engineering (ICME) is a novel overarching approach to bridge length and time scales in computational materials science and engineering. This approach integrates all elements of multi-scale modeling (including various empirical and science-based models) with materials informatics to provide users the opportunity to tailor material selections based on stringent application needs. Typically, materials engineering has focused on structural requirements (stress, strain, modulus, fracture toughness, etc.) while multi-scale modeling has been science focused (mechanical threshold strength models, grain-size models, solid-solution strengthening models, etc.). Materials informatics (mechanical property inventories), on the other hand, is extensively data focused. All of these elements are combined within the framework of ICME to create an architecture for the development, selection and design of new composite materials for challenging environments. We propose development of the foundations for applying ICME to composite materials development for nuclear and high-radiation environments (including nuclear-fusion energy reactors, nuclear-fission reactors, and accelerators). We expect to combine all elements of current material models (including thermo-mechanical and finite-element models) into the ICME framework. This will be accomplished through the use of various mathematical modeling constructs. These constructs will allow the integration of constituent models, which in turn would allow us to use the strengths of a combinatorial scheme (fabrication and computational) for creating new composite materials. A sample problem where these concepts are used is provided in this summary.
Mathematical modeling and full-scale shaking table tests for multi-curve buckling restrained braces
NASA Astrophysics Data System (ADS)
Tsai, C. S.; Lin, Yungchang; Chen, Wenshin; Su, H. C.
2009-09-01
Buckling restrained braces (BRBs) have been widely applied in seismic mitigation since they were introduced in the 1970s. However, traditional BRBs have several disadvantages caused by using a steel tube to envelope the mortar to prevent the core plate from buckling, such as: complex interfaces between the materials used, uncertain precision, and time consumption during the manufacturing processes. In this study, a new device called the multi-curve buckling restrained brace (MC-BRB) is proposed to overcome these disadvantages. The new device consists of a core plate with multiple neck portions assembled to form multiple energy dissipation segments, and the enlarged segment, lateral support elements and constraining elements to prevent the BRB from buckling. The enlarged segment located in the middle of the core plate can be welded to the lateral support and constraining elements to increase buckling resistance and to prevent them from sliding during earthquakes. Component tests and a series of shaking table tests on a full-scale steel structure equipped with MC-BRBs were carried out to investigate the behavior and capability of this new BRB design for seismic mitigation. The experimental results illustrate that the MC-BRB possesses a stable mechanical behavior under cyclic loadings and provides good protection to structures during earthquakes. Also, a mathematical model has been developed to simulate the mechanical characteristics of BRBs.
Multi-scale modeling of the CD8 immune response
NASA Astrophysics Data System (ADS)
Barbarroux, Loic; Michel, Philippe; Adimy, Mostafa; Crauste, Fabien
2016-06-01
During the primary CD8 T-Cell immune response to an intracellular pathogen, CD8 T-Cells undergo exponential proliferation and continuous differentiation, acquiring cytotoxic capabilities to address the infection and memorize the corresponding antigen. After cleaning the organism, the only CD8 T-Cells left are antigen-specific memory cells whose role is to respond stronger and faster in case they are presented this very same antigen again. That is how vaccines work: a small quantity of a weakened pathogen is introduced in the organism to trigger the primary response, generating corresponding memory cells in the process, giving the organism a way to defend itself in case it encounters the same pathogen again. To investigate this process, we propose a non-linear, multi-scale mathematical model of the CD8 T-Cell immune response due to vaccination using a maturity structured partial differential equation. At the intracellular scale, the level of expression of key proteins is modeled by a delay differential equation system, which gives the speeds of maturation for each cell. The population of cells is modeled by a maturity structured equation whose speeds are given by the intracellular model. We focus here on building the model, as well as its asymptotic study. Finally, we display numerical simulations showing the model can reproduce the biological dynamics of the cell population for both the primary response and the secondary responses.
Multi-scale modeling of the CD8 immune response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbarroux, Loic, E-mail: loic.barbarroux@doctorant.ec-lyon.fr; Ecole Centrale de Lyon, 36 avenue Guy de Collongue, 69134 Ecully; Michel, Philippe, E-mail: philippe.michel@ec-lyon.fr
During the primary CD8 T-Cell immune response to an intracellular pathogen, CD8 T-Cells undergo exponential proliferation and continuous differentiation, acquiring cytotoxic capabilities to address the infection and memorize the corresponding antigen. After cleaning the organism, the only CD8 T-Cells left are antigen-specific memory cells whose role is to respond stronger and faster in case they are presented this very same antigen again. That is how vaccines work: a small quantity of a weakened pathogen is introduced in the organism to trigger the primary response, generating corresponding memory cells in the process, giving the organism a way to defend itself in case it encounters the same pathogen again. To investigate this process, we propose a non-linear, multi-scale mathematical model of the CD8 T-Cell immune response due to vaccination using a maturity structured partial differential equation. At the intracellular scale, the level of expression of key proteins is modeled by a delay differential equation system, which gives the speeds of maturation for each cell. The population of cells is modeled by a maturity structured equation whose speeds are given by the intracellular model. We focus here on building the model, as well as its asymptotic study. Finally, we display numerical simulations showing the model can reproduce the biological dynamics of the cell population for both the primary response and the secondary responses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Lin; Dai, Zhenxue; Gong, Huili
Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with optimized different integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
Modeling and optimization of Quality of Service routing in Mobile Ad hoc Networks
NASA Astrophysics Data System (ADS)
Rafsanjani, Marjan Kuchaki; Fatemidokht, Hamideh; Balas, Valentina Emilia
2016-01-01
Mobile ad hoc networks (MANETs) are a group of mobile nodes that are connected without using a fixed infrastructure. In these networks, nodes communicate with each other by forming a single-hop or multi-hop network. To design effective mobile ad hoc networks, it is important to evaluate the performance of multi-hop paths. In this paper, we present a mathematical model for a routing protocol in terms of the energy consumption and packet delivery ratio of multi-hop paths. In this model, we use geometric random graphs rather than random graphs. Our proposed model finds effective paths that minimize the energy consumption and maximize the packet delivery ratio of the network. Validation of the mathematical model is performed through simulation.
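The evaluation idea can be sketched with a geometric random graph: place nodes uniformly in the unit square, connect nodes within a communication radius, and score a multi-hop route by a per-hop energy model and a per-link delivery probability. The link models below are illustrative assumptions, not the paper's formulation.

```python
import math
import networkx as nx

# Geometric random graph: 60 nodes in the unit square, link if distance <= 0.2
G = nx.random_geometric_graph(60, 0.2, seed=4)
pos = nx.get_node_attributes(G, "pos")

# Assumed link models (illustrative): energy ~ d^2 per hop,
# per-link delivery probability decaying with distance.
for u, v in G.edges:
    d = math.dist(pos[u], pos[v])
    G.edges[u, v]["energy"] = d ** 2
    G.edges[u, v]["p_deliver"] = max(0.0, 1.0 - d / 0.25)

src, dst = 0, 59
if nx.has_path(G, src, dst):
    path = nx.shortest_path(G, src, dst, weight="energy")   # energy-aware route
    hops = list(zip(path[:-1], path[1:]))
    energy = sum(G.edges[u, v]["energy"] for u, v in hops)
    pdr = math.prod(G.edges[u, v]["p_deliver"] for u, v in hops)
    print(f"{len(hops)} hops, energy {energy:.3f}, end-to-end delivery {pdr:.2f}")
else:
    print("no multi-hop path between the chosen nodes")
```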
Simulating and mapping spatial complexity using multi-scale techniques
De Cola, L.
1994-01-01
A central problem in spatial analysis is the mapping of data for complex spatial fields using relatively simple data structures, such as those of a conventional GIS. This complexity can be measured using such indices as multi-scale variance, which reflects spatial autocorrelation, and multi-fractal dimension, which characterizes the values of fields. These indices are computed for three spatial processes: Gaussian noise, a simple mathematical function, and data for a random walk. Fractal analysis is then used to produce a vegetation map of the central region of California based on a satellite image. This analysis suggests that real world data lie on a continuum between the simple and the random, and that a major GIS challenge is the scientific representation and understanding of rapidly changing multi-scale fields.
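Of the indices mentioned above, the (mono-)fractal dimension is the easiest to illustrate: a box-counting estimate counts occupied boxes at several scales and fits a log-log slope. The sketch below applies it to a straight line and to the trace of a random walk, echoing the simple-versus-random continuum discussed in the abstract; it does not implement the multi-scale variance or a multi-fractal spectrum.

```python
import numpy as np

def box_counting_dimension(field, box_sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a binary 2-D array by counting
    occupied boxes at several scales and fitting the log-log slope."""
    n = field.shape[0]
    counts = []
    for s in box_sizes:
        m = n - n % s                              # trim so boxes tile exactly
        boxes = field[:m, :m].reshape(m // s, s, m // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope

n = 256
line = np.zeros((n, n), bool)
line[n // 2, :] = True                             # a simple 1-D object

walk = np.zeros((n, n), bool)                      # trace of a lattice random walk
rng = np.random.default_rng(2)
x = y = n // 2
steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
for _ in range(20000):
    walk[y % n, x % n] = True
    dx, dy = steps[rng.integers(4)]
    x, y = x + dx, y + dy

print("straight line :", round(box_counting_dimension(line), 2))   # close to 1
print("random walk   :", round(box_counting_dimension(walk), 2))   # between 1 and 2
```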
A systems theoretic approach to analysis and control of mammalian circadian dynamics
Abel, John H.; Doyle, Francis J.
2016-01-01
The mammalian circadian clock is a complex multi-scale, multivariable biological control system. In the past two decades, methods from systems engineering have led to numerous insights into the architecture and functionality of this system. In this review, we examine the mammalian circadian system through a process systems lens. We present a mathematical framework for examining the cellular circadian oscillator, and show recent extensions for understanding population-scale dynamics. We provide an overview of the routes by which the circadian system can be systemically manipulated, and present in silico proof of concept results for phase resetting of the clock via model predictive control. PMID:28496287
A theoretical study of the initiation, maintenance and termination of gastric slow wave re-entry.
Du, Peng; Paskaranandavadivel, Niranchan; O'Grady, Greg; Tang, Shou-Jiang; Cheng, Leo K
2015-12-01
Gastric slow wave dysrhythmias are associated with motility disorders. Periods of tachygastria associated with slow wave re-entry were recently recognized as one important dysrhythmia mechanism, but factors promoting and sustaining gastric re-entry are currently unknown. This study reports two experimental forms of gastric re-entry and presents a series of multi-scale models that define criteria for slow wave re-entry initiation, maintenance and termination. High-resolution electrical mapping was conducted in porcine and canine models and two spatiotemporal patterns of re-entrant activities were captured: single-loop rotor and double-loop figure-of-eight. Two separate multi-scale mathematical models were developed to reproduce the velocity and entrainment frequency of these experimental recordings. A single-pulse stimulus was used to invoke a rotor re-entry in the porcine model and a figure-of-eight re-entry in the canine model. In both cases, the simulated re-entrant activities were found to be perpetuated by tachygastria that was accompanied by a reduction in the propagation velocity in the re-entrant pathways. The simulated re-entrant activities were terminated by a single-pulse stimulus targeted at the tip of the re-entrant wave, after which normal antegrade propagation was restored by the underlying intrinsic frequency gradient. In summary: (i) the stability of re-entry is regulated by stimulus timing, intrinsic frequency gradient and conductivity; (ii) tachygastria due to re-entry increases the frequency gradient while showing decreased propagation velocity; (iii) re-entry may be effectively terminated by a targeted stimulus at the core, allowing the intrinsic slow wave conduction system to re-establish itself. © The authors 2014. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
A Computational Model of the Rainbow Trout Hypothalamus-Pituitary-Ovary-Liver Axis
Gillies, Kendall; Krone, Stephen M.; Nagler, James J.; Schultz, Irvin R.
2016-01-01
Reproduction in fishes and other vertebrates represents the timely coordination of many endocrine factors that culminate in the production of mature, viable gametes. In recent years there has been rapid growth in understanding fish reproductive biology, which has been motivated in part by recognition of the potential effects that climate change, habitat destruction and contaminant exposure can have on natural and cultured fish populations. New approaches to understanding the impacts of these stressors are being developed that require a systems biology approach with more biologically accurate and detailed mathematical models. We have developed a multi-scale mathematical model of the female rainbow trout hypothalamus-pituitary-ovary-liver axis to use as a tool to help understand the functioning of the system and for extrapolation of laboratory findings of stressor impacts on specific components of the axis. The model describes the essential endocrine components of the female rainbow trout reproductive axis. The model also describes the stage specific growth of maturing oocytes within the ovary and permits the presence of sub-populations of oocytes at different stages of development. Model formulation and parametrization was largely based on previously published in vivo and in vitro data in rainbow trout and new data on the synthesis of gonadotropins in the pituitary. Model predictions were validated against several previously published data sets for annual changes in gonadotropins and estradiol in rainbow trout. Estimates of select model parameters can be obtained from in vitro assays using either quantitative (direct estimation of rate constants) or qualitative (relative change from control values) approaches. This is an important aspect of mathematical models as in vitro, cell-based assays are expected to provide the bulk of experimental data for future risk assessments and will require quantitative physiological models to extrapolate across biological scales. PMID:27096735
A Computational Model of the Rainbow Trout Hypothalamus-Pituitary-Ovary-Liver Axis.
Gillies, Kendall; Krone, Stephen M; Nagler, James J; Schultz, Irvin R
2016-04-01
Reproduction in fishes and other vertebrates represents the timely coordination of many endocrine factors that culminate in the production of mature, viable gametes. In recent years there has been rapid growth in understanding fish reproductive biology, which has been motivated in part by recognition of the potential effects that climate change, habitat destruction and contaminant exposure can have on natural and cultured fish populations. New approaches to understanding the impacts of these stressors are being developed that require a systems biology approach with more biologically accurate and detailed mathematical models. We have developed a multi-scale mathematical model of the female rainbow trout hypothalamus-pituitary-ovary-liver axis to use as a tool to help understand the functioning of the system and for extrapolation of laboratory findings of stressor impacts on specific components of the axis. The model describes the essential endocrine components of the female rainbow trout reproductive axis. The model also describes the stage specific growth of maturing oocytes within the ovary and permits the presence of sub-populations of oocytes at different stages of development. Model formulation and parametrization was largely based on previously published in vivo and in vitro data in rainbow trout and new data on the synthesis of gonadotropins in the pituitary. Model predictions were validated against several previously published data sets for annual changes in gonadotropins and estradiol in rainbow trout. Estimates of select model parameters can be obtained from in vitro assays using either quantitative (direct estimation of rate constants) or qualitative (relative change from control values) approaches. This is an important aspect of mathematical models as in vitro, cell-based assays are expected to provide the bulk of experimental data for future risk assessments and will require quantitative physiological models to extrapolate across biological scales.
CellML metadata standards, associated tools and repositories
Beard, Daniel A.; Britten, Randall; Cooling, Mike T.; Garny, Alan; Halstead, Matt D.B.; Hunter, Peter J.; Lawson, James; Lloyd, Catherine M.; Marsh, Justin; Miller, Andrew; Nickerson, David P.; Nielsen, Poul M.F.; Nomura, Taishin; Subramanium, Shankar; Wimalaratne, Sarala M.; Yu, Tommy
2009-01-01
The development of standards for encoding mathematical models is an important component of model building and model sharing among scientists interested in understanding multi-scale physiological processes. CellML provides such a standard, particularly for models based on biophysical mechanisms, and a substantial number of models are now available in the CellML Model Repository. However, there is an urgent need to extend the current CellML metadata standard to provide biological and biophysical annotation of the models in order to facilitate model sharing, automated model reduction and connection to biological databases. This paper gives a broad overview of a number of new developments on CellML metadata and provides links to further methodological details available from the CellML website. PMID:19380315
Iontophoretic transdermal drug delivery: a multi-layered approach.
Pontrelli, Giuseppe; Lauricella, Marco; Ferreira, José A; Pena, Gonçalo
2017-12-11
We present a multi-layer mathematical model to describe the transdermal drug release from an iontophoretic system. The Nernst-Planck equation describes the basic convection-diffusion process, with the electric potential obtained by solving Laplace's equation. These equations are complemented with suitable interface and boundary conditions in a multi-domain. The stability of the mathematical problem is discussed in different scenarios and a finite-difference method is used to solve the coupled system. Numerical experiments are included to illustrate the drug dynamics under different conditions. © The authors 2016. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
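In the simplest single-layer case with a constant applied field, the Nernst-Planck equation reduces to an advection-diffusion equation for the drug concentration, which the explicit finite-difference sketch below integrates with upwind advection and fixed donor/sink boundaries; the multi-layer interface conditions of the paper are omitted and all parameter values are illustrative.

```python
import numpy as np

# 1-D Nernst-Planck with a constant field: dc/dt = D d2c/dx2 - v dc/dx,
# where the electromigration velocity v = z*F*D*E/(R*T) is folded into one number.
L, N = 1.0e-3, 200                 # domain thickness (m), grid points
dx = L / (N - 1)
D, v = 1.0e-10, 2.0e-7             # diffusivity (m^2/s), drift velocity (m/s) -- illustrative
dt = 0.4 * min(dx ** 2 / (2 * D), dx / v)   # explicit stability bound

c = np.zeros(N)
c[0] = 1.0                          # donor-side concentration held fixed (dimensionless)

t, t_end = 0.0, 600.0               # simulate 10 minutes of delivery
while t < t_end:
    diff = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2
    adv = -v * (c[1:-1] - c[:-2]) / dx          # upwind, drift in the +x direction
    c[1:-1] += dt * (diff + adv)
    c[0], c[-1] = 1.0, 0.0                      # donor / perfect-sink boundaries
    t += dt

flux_out = v * c[-2] + D * (c[-2] - c[-1]) / dx  # crude flux estimate at the sink
print(f"delivered flux ~ {flux_out:.3e} (arb. units) after {t:.0f} s")
```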
Kaiyala, Karl J.
2014-01-01
Mathematical models for the dependence of energy expenditure (EE) on body mass and composition are essential tools in metabolic phenotyping. EE scales over broad ranges of body mass as a non-linear allometric function. When considered within restricted ranges of body mass, however, allometric EE curves exhibit ‘local linearity.’ Indeed, modern EE analysis makes extensive use of linear models. Such models typically involve one or two body mass compartments (e.g., fat free mass and fat mass). Importantly, linear EE models typically involve a non-zero (usually positive) y-intercept term of uncertain origin, a recurring theme in discussions of EE analysis and a source of confounding in traditional ratio-based EE normalization. Emerging linear model approaches quantify whole-body resting EE (REE) in terms of individual organ masses (e.g., liver, kidneys, heart, brain). Proponents of individual organ REE modeling hypothesize that multi-organ linear models may eliminate non-zero y-intercepts. This could have advantages in adjusting REE for body mass and composition. Studies reveal that individual organ REE is an allometric function of total body mass. I exploit first-order Taylor linearization of individual organ REEs to model the manner in which individual organs contribute to whole-body REE and to the non-zero y-intercept in linear REE models. The model predicts that REE analysis at the individual organ-tissue level will not eliminate intercept terms. I demonstrate that the parameters of a linear EE equation can be transformed into the parameters of the underlying ‘latent’ allometric equation. This permits estimates of the allometric scaling of EE in a diverse variety of physiological states that are not represented in the allometric EE literature but are well represented by published linear EE analyses. PMID:25068692
NASA Astrophysics Data System (ADS)
Dündar, Furkan Semih
2018-01-01
We provide a theory of n-scales, previously called n-dimensional time scales. In previous approaches to the theory of time scales, multi-dimensional scales were taken as the product space of two time scales [1, 2]. n-scales make the mathematical structure more flexible and better suited to real-world applications in physics and related fields. Here we define an n-scale as an arbitrary closed subset of ℝn. Modified forward and backward jump operators, Δ-derivatives and Δ-integrals on n-scales are defined.
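For a finite one-dimensional scale stored as a sorted array, the forward jump operator and the Δ-derivative of a sampled function can be written down directly, and n-scale versions act componentwise, which is the flexibility the paper formalises. The sketch below is restricted to that finite, one-dimensional case and uses an illustrative mixed scale.

```python
import numpy as np

def sigma(T, t):
    """Forward jump operator on a finite time scale T (sorted 1-D array):
    sigma(t) = inf{ s in T : s > t }, with sigma(max T) = max T."""
    later = T[T > t]
    return later.min() if later.size else T.max()

def delta_derivative(T, f, t):
    """Delta-derivative of f: T -> R at a right-scattered point t:
    (f(sigma(t)) - f(t)) / (sigma(t) - t)."""
    s = sigma(T, t)
    if s == t:
        raise ValueError("sigma(t) == t: t is the maximum of this finite scale")
    return (f(s) - f(t)) / (s - t)

# A non-uniform scale mixing an interval-like fine part and isolated points
T = np.concatenate([np.linspace(0.0, 1.0, 101), np.array([1.5, 2.0, 3.0])])
f = lambda t: t ** 2
print(sigma(T, 1.0))                    # 1.5 (isolated point ahead)
print(delta_derivative(T, f, 1.0))      # (1.5**2 - 1**2) / 0.5 = 2.5
print(delta_derivative(T, f, 2.0))      # (9 - 4) / 1 = 5.0
```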
Nonlinear and Stochastic Dynamics in the Heart
Qu, Zhilin; Hu, Gang; Garfinkel, Alan; Weiss, James N.
2014-01-01
In a normal human life span, the heart beats about 2 to 3 billion times. Under diseased conditions, a heart may lose its normal rhythm and degenerate suddenly into much faster and irregular rhythms, called arrhythmias, which may lead to sudden death. The transition from a normal rhythm to an arrhythmia is a transition from regular electrical wave conduction to irregular or turbulent wave conduction in the heart, and thus this medical problem is also a problem of physics and mathematics. In the last century, clinical, experimental, and theoretical studies have shown that dynamical theories play fundamental roles in understanding the mechanisms of the genesis of the normal heart rhythm as well as lethal arrhythmias. In this article, we summarize in detail the nonlinear and stochastic dynamics occurring in the heart and their links to normal cardiac functions and arrhythmias, providing a holistic view through integrating dynamics from the molecular (microscopic) scale, to the organelle (mesoscopic) scale, to the cellular, tissue, and organ (macroscopic) scales. We discuss what existing problems and challenges are waiting to be solved and how multi-scale mathematical modeling and nonlinear dynamics may be helpful for solving these problems. PMID:25267872
Databases for multilevel biophysiology research available at Physiome.jp.
Asai, Yoshiyuki; Abe, Takeshi; Li, Li; Oka, Hideki; Nomura, Taishin; Kitano, Hiroaki
2015-01-01
Physiome.jp (http://physiome.jp) is a portal site inaugurated in 2007 to support model-based research in physiome and systems biology. At Physiome.jp, several tools and databases are available to support construction of physiological, multi-hierarchical, large-scale models. There are three databases in Physiome.jp, housing mathematical models, morphological data, and time-series data. In late 2013, the site was fully renovated, and in May 2015, new functions were implemented to provide information infrastructure to support collaborative activities for developing models and performing simulations within the database framework. This article describes updates to the databases implemented since 2013, including cooperation among the three databases, interactive model browsing, user management, version management of models, management of parameter sets, and interoperability with applications.
Pharmacokinetic analysis of multi PEG-theophylline conjugates.
Grassi, Mario; Bonora, Gian Maria; Drioli, Sara; Cateni, Francesca; Zacchigna, Marina
2012-10-01
In an attempt to prolong the effect of drugs, a new branched, high-molecular-weight multimeric poly(ethylene glycol) (MultiPEG), synthesized with a simple assembling procedure that involved the introduction of functional groups with divergent and selective reactivity, was employed as a drug carrier. In particular, attention was focused on the pharmacokinetics of theophylline (THEO) and THEO-MultiPEG conjugates after oral administration in rabbit. The pharmacokinetic behavior was studied according to an ad hoc developed mathematical model accounting for THEO-MultiPEG in vivo absorption and decomposition into drug (THEO) and carrier (MultiPEG). The branched high-molecular-weight MultiPEG proved to be a reliable drug delivery system able to prolong the persistence of theophylline in the blood after oral administration of a THEO-MultiPEG solution. The analysis of experimental data by means of the developed mathematical model revealed that the prolongation of the THEO effect was essentially due to the low THEO-MultiPEG permeability in comparison to that of pure THEO. Copyright © 2012 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jablonowski, Christiane
The research investigates and advances strategies for bridging the scale discrepancies between local, regional and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically-adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway for modeling these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The investigations have focused on the characteristics of both static mesh adaptations and dynamically-adaptive grids that can capture flow fields of interest such as tropical cyclones. Six research themes have been chosen. These are (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling. The results of this research project demonstrate significant advances in all six research areas. The major conclusions are that statically-adaptive variable-resolution modeling is currently becoming mature in the climate sciences, and that AMR holds outstanding promise for future-generation weather and climate models on high-performance computing architectures.
NASA Astrophysics Data System (ADS)
Evtushenko, V. F.; Myshlyaev, L. P.; Makarov, G. V.; Ivushkin, K. A.; Burkova, E. V.
2016-10-01
The structure of multi-variant physical and mathematical models of a control system is presented, along with its application to the tuning of automatic control systems (ACS) of production facilities, using a coal processing plant as an example.
Microphysics in Multi-scale Modeling System with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2012-01-01
Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (a NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied throughout this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator so that NASA high-resolution satellite data can be used to identify the strengths and weaknesses of the cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance within the multi-scale modeling system will be presented.
Schlüter, Daniela K; Ramis-Conde, Ignacio; Chaplain, Mark A J
2015-02-06
Studying the biophysical interactions between cells is crucial to understanding how normal tissue develops, how it is structured and also when malfunctions occur. Traditional experiments try to infer events at the tissue level after observing the behaviour of and interactions between individual cells. This approach assumes that cells behave in the same biophysical manner in isolated experiments as they do within colonies and tissues. In this paper, we develop a multi-scale multi-compartment mathematical model that accounts for the principal biophysical interactions and adhesion pathways not only at a cell-cell level but also at the level of cell colonies (in contrast to the traditional approach). Our results suggest that adhesion/separation forces between cells may be lower in cell colonies than traditional isolated single-cell experiments infer. As a consequence, isolated single-cell experiments may be insufficient to deduce important biological processes such as single-cell invasion after detachment from a solid tumour. The simulations further show that kinetic rates and cell biophysical characteristics such as pressure-related cell-cycle arrest have a major influence on cell colony patterns and can allow for the development of protrusive cellular structures as seen in invasive cancer cell lines independent of expression levels of pro-invasion molecules.
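The biophysical ingredients discussed above can be caricatured by a generic off-lattice, force-based sketch in which overlapping cell centres repel and near-contact cells adhere, relaxed by overdamped dynamics. This is an illustrative stand-in, not the authors' multi-scale multi-compartment model, and every parameter is invented.

```python
import numpy as np

def pairwise_forces(x, r_cell=1.0, r_adh=1.5, k_rep=5.0, k_adh=0.5):
    """Net force on each cell centre: linear repulsion when two cells overlap
    (distance < 2*r_cell) and weak linear adhesion out to r_adh beyond contact."""
    n = len(x)
    F = np.zeros_like(x)
    for i in range(n):
        for j in range(i + 1, n):
            d_vec = x[j] - x[i]
            d = np.linalg.norm(d_vec)
            if d == 0 or d > 2 * r_cell + r_adh:
                continue
            unit = d_vec / d
            overlap = 2 * r_cell - d
            # f > 0 pushes the pair apart (repulsion), f < 0 pulls it together (adhesion)
            f = k_rep * overlap if overlap > 0 else k_adh * overlap
            F[i] -= f * unit
            F[j] += f * unit
    return F

# Overdamped relaxation of a small, initially cramped colony
rng = np.random.default_rng(3)
cells = rng.random((30, 2)) * 4.0
for _ in range(2000):
    cells += 0.01 * pairwise_forces(cells)      # dx/dt = F / friction (friction = 1)
print("mean nearest-neighbour spacing:",
      round(np.mean([np.sort(np.linalg.norm(cells - c, axis=1))[1] for c in cells]), 2))
```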
Schlüter, Daniela K.; Ramis-Conde, Ignacio; Chaplain, Mark A. J.
2015-01-01
Studying the biophysical interactions between cells is crucial to understanding how normal tissue develops, how it is structured and also when malfunctions occur. Traditional experiments try to infer events at the tissue level after observing the behaviour of and interactions between individual cells. This approach assumes that cells behave in the same biophysical manner in isolated experiments as they do within colonies and tissues. In this paper, we develop a multi-scale multi-compartment mathematical model that accounts for the principal biophysical interactions and adhesion pathways not only at a cell–cell level but also at the level of cell colonies (in contrast to the traditional approach). Our results suggest that adhesion/separation forces between cells may be lower in cell colonies than traditional isolated single-cell experiments infer. As a consequence, isolated single-cell experiments may be insufficient to deduce important biological processes such as single-cell invasion after detachment from a solid tumour. The simulations further show that kinetic rates and cell biophysical characteristics such as pressure-related cell-cycle arrest have a major influence on cell colony patterns and can allow for the development of protrusive cellular structures as seen in invasive cancer cell lines independent of expression levels of pro-invasion molecules. PMID:25519994
NASA Astrophysics Data System (ADS)
Li, Y.; Ma, X.; Su, N.
2013-12-01
The movement of water and solute into and through the vadose zone is, in essence, an issue of immiscible displacement in the pore-space network of a soil. Multiphase flow and transport in porous media, involving three media (air, water, and the solute), therefore pose one of the largest unresolved challenges in porous-medium fluid seepage, yet the phenomenon has been largely neglected. A reliable model of multi-phase flow in soil is expected to capture the natural infiltration process, which cannot be observed directly. In this context, geophysical applications of nuclear magnetic resonance (NMR) provide the opportunity to measure water movement into soils directly across scales, from individual pores to the regional scale, making such measurements available both in the laboratory and in the field. In addition, NMR provides useful information about pore-space properties. In this study, we propose both laboratory and field experiments to measure the multi-phase flow parameters, together with a computational model based on fractional partial differential equations (fPDEs). In addition, we establish, for the first time, an infiltration model that includes solute transport with water, which has a strong influence on agriculture and soil pollution. With data collected from the experiments, we then simulate the model and analyze the spatial variability of its parameters. Simulations are also conducted to evaluate the effects of airflow on water infiltration, as well as solute transport and absorption. This has significant implications for oxygen irrigation aimed at higher crop yields and sheds light on dam slope stability. In summary, our framework is, to our knowledge, the first to incorporate solute transport into an fPDE-based mathematical analysis of infiltration, and it offers practical guidance for agricultural activities.
NASA Astrophysics Data System (ADS)
Lu, M.; Lall, U.
2013-12-01
In order to mitigate the impacts of climate change, proactive management strategies to operate reservoirs and dams are needed. A multi-time-scale, climate-informed stochastic model is developed to optimize the operations of a multi-purpose single reservoir by simulating decadal, interannual, seasonal and sub-seasonal variability. We apply the model to a setting motivated by the largest multi-purpose dam in N. India, the Bhakhra reservoir on the Sutlej River, a tributary of the Indus. This leads to a focus on the timing and amplitude of the flows for the monsoon and snowmelt periods. The flow simulations are constrained by multiple sources of historical data and GCM future projections that are being developed through an NSF-funded project titled 'Decadal Prediction and Stochastic Simulation of Hydroclimate Over Monsoon Asia'. The model presented is a multilevel, nonlinear programming model that aims to optimize the reservoir operating policy on a decadal horizon and the operation strategy on an updated annual basis. The model is hierarchical: two optimization models designated for different time scales are nested like matryoshka dolls. The two optimization models have similar mathematical formulations, with some modifications to meet the constraints within each time frame. The first level of the model provides an optimization solution for policy makers to determine contracted annual releases to different uses with a prescribed reliability; the second level is a within-the-period (e.g., year) operation optimization scheme that allocates the contracted annual releases on a subperiod (e.g., monthly) basis, with additional benefit for extra release and a penalty for failure. The model maximizes the net benefit of irrigation, hydropower generation and flood control in each of the periods. The model design thus facilitates the consistent application of weather and climate forecasts to improve the operation of reservoir systems. The decadal flow simulations are re-initialized every year with updated climate projections to improve the reliability of the operation rules for the next year, within which the seasonal operation strategies are nested. The multi-level structure can be repeated for monthly operation with weekly subperiods to take advantage of evolving weather forecasts and seasonal climate forecasts. As a result of the hierarchical structure, updates and adjustments at sub-seasonal and even weather time scales can be achieved. Given an ensemble of these scenarios, the McISH reservoir simulation-optimization model is able to derive the desired reservoir storage levels, including minimum and maximum, as a function of calendar date, together with the associated release patterns. The multi-time-scale approach allows adaptive management of water supplies that acknowledges the changing risks, meeting the objectives over the decade in expected value and controlling the near-term and planning-period risk through probabilistic reliability constraints. For the applications presented, the target season is the monsoon season from June to September. The model also includes a monthly flood volume forecast model, based on a Copula density fitted to the monthly flow and the flood volume. This is used to guide dynamic allocation of the flood control volume given the forecasts.
ERIC Educational Resources Information Center
Rutherford, Teomara; Kibrick, Melissa; Burchinal, Margaret; Richland, Lindsey; Conley, AnneMarie; Osborne, Keara; Schneider, Stephanie; Duran, Lauren; Coulson, Andrew; Antenore, Fran; Daniels, Abby; Martinez, Michael E.
2010-01-01
This paper describes the background, methodology, preliminary findings, and anticipated future directions of a large-scale multi-year randomized field experiment addressing the efficacy of ST Math [Spatial-Temporal Math], a fully-developed math curriculum that uses interactive animated software. ST Math's unique approach minimizes the use of…
Alimohammadi, Mona; Pichardo-Almarza, Cesar; Agu, Obiekezie; Díaz-Zuccarini, Vanessa
2016-01-01
Vascular calcification results in stiffening of the aorta and is associated with hypertension and atherosclerosis. Atherogenesis is a complex, multifactorial, and systemic process; the result of a number of factors, each operating simultaneously at several spatial and temporal scales. The ability to predict sites of atherogenesis would be of great use to clinicians in order to improve diagnostic and treatment planning. In this paper, we present a mathematical model as a tool to understand why atherosclerotic plaque and calcifications occur in specific locations. This model is then used to analyze vascular calcification and atherosclerotic areas in an aortic dissection patient using a mechanistic, multi-scale modeling approach, coupling patient-specific, fluid-structure interaction simulations with a model of endothelial mechanotransduction. A number of hemodynamic factors based on state-of-the-art literature are used as inputs to the endothelial permeability model, in order to investigate plaque and calcification distributions, which are compared with clinical imaging data. A significantly improved correlation between elevated hydraulic conductivity or volume flux and the presence of calcification and plaques was achieved by using a shear index comprising both mean and oscillatory shear components (HOLMES) and a non-Newtonian viscosity model as inputs, as compared to widely used hemodynamic indicators. The proposed approach shows promise as a predictive tool. The improvements obtained using the combined biomechanical/biochemical modeling approach highlight the benefits of mechanistic modeling as a powerful tool to understand complex phenomena and provides insight into the relative importance of key hemodynamic parameters. PMID:27445834
Categorical prototyping: incorporating molecular mechanisms into 3D printing.
Brommer, Dieter B; Giesa, Tristan; Spivak, David I; Buehler, Markus J
2016-01-15
We apply the mathematical framework of category theory to articulate the precise relation between the structure and mechanics of a nanoscale system in a macroscopic domain. We maintain the chosen molecular mechanical properties from the nanoscale to the continuum scale. Therein we demonstrate a procedure to 'prototype a model', as category theory enables us to maintain certain information across disparate fields of study, distinct scales, or physical realizations. This process fits naturally with prototyping, as a prototype is not a complete product but rather a reduction to test a subset of properties. To illustrate this point, we use large-scale multi-material printing to examine the scaling of the elastic modulus of 2D carbon allotropes at the macroscale and validate our printed model using experimental testing. The resulting hand-held materials can be examined more readily, and yield insights beyond those available in the original digital representations. We demonstrate this concept by twisting the material, a test beyond the scope of the original model. The method developed can be extended to other methods of additive manufacturing.
Chiu, Yuan-Shyi Peter; Sung, Peng-Cheng; Chiu, Singa Wang; Chou, Chung-Li
2015-01-01
This study uses mathematical modeling to examine a multi-product economic manufacturing quantity (EMQ) model with an enhanced end-item issuing policy and rework failures. We assume that the multi-product EMQ process randomly generates nonconforming items. All defective items are reworked, but a certain portion of them fails and becomes scrap. When the rework process ends and the entire lot of each product is quality assured, a cost-reducing n + 1 end-item issuing policy is used to transport the finished items of each product. As a result, a closed-form optimal production cycle time is obtained. A numerical example demonstrates the practical use of the result and confirms significant savings in stock holding and overall production costs compared with a prior work (Chiu et al., J Sci Ind Res India, 72:435-440, 2013).
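The closed-form cycle time obtained in the abstract is specific to the multi-product rework model and is not reproduced here. For orientation only, the classic single-product economic production quantity (EPQ) model without rework, which such models extend, gives

```latex
Q^{*}=\sqrt{\frac{2KD}{h\,\bigl(1-D/P\bigr)}},
\qquad
T^{*}=\frac{Q^{*}}{D},
```

where D is the demand rate, P > D the production rate, K the setup cost per cycle, and h the unit holding cost per unit time.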
Gutierrez, Juan B.; Galinski, Mary R.; Cantrell, Stephen; Voit, Eberhard O.
2015-01-01
Since their earliest days, humans have been struggling with infectious diseases. Caused by viruses, bacteria, protozoa, or even higher organisms like worms, these diseases depend critically on numerous intricate interactions between parasites and hosts, and while we have learned much about these interactions, many details are still obscure. It is evident that the combined host-parasite dynamics constitutes a complex system that involves components and processes at multiple scales of time, space, and biological organization. At one end of this hierarchy we know of individual molecules that play crucial roles for the survival of a parasite or for the response and survival of its host. At the other end, one realizes that the spread of infectious diseases by far exceeds specific locales and, due to today's easy travel of hosts carrying a multitude of organisms, can quickly reach global proportions. The community of mathematical modelers has been addressing specific aspects of infectious diseases for a long time. Most of these efforts have focused on one or two select scales of a multi-level disease and used quite different computational approaches. This restriction to a molecular, physiological, or epidemiological level was prudent, as it has produced solid pillars of a foundation from which it might eventually be possible to launch comprehensive, multi-scale modeling efforts that make full use of the recent advances in biology and, in particular, the various high-throughput methodologies accompanying the emerging –omics revolution. This special issue contains contributions from biologists and modelers, most of whom presented and discussed their work at the workshop From within Host Dynamics to the Epidemiology of Infectious Disease, which was held at the Mathematical Biosciences Institute at Ohio State University in April 2014. These contributions highlight some of the forays into a deeper understanding of the dynamics between parasites and their hosts, and the consequences of this dynamics for the spread and treatment of infectious diseases. PMID:26474512
This paper proposes a general procedure to link meteorological data with air quality models, such as U.S. EPA's Models-3 Community Multi-scale Air Quality (CMAQ) modeling system. CMAQ is intended to be used for studying multi-scale (urban and regional) and multi-pollutant (ozon...
Multi-Scale Models for the Scale Interaction of Organized Tropical Convection
NASA Astrophysics Data System (ADS)
Yang, Qiu
Assessing the upscale impact of organized tropical convection from small spatial and temporal scales is a research imperative, not only for having a better understanding of the multi-scale structures of dynamical and convective fields in the tropics, but also for eventually helping in the design of new parameterization strategies to improve the next-generation global climate models. Here self-consistent multi-scale models are derived systematically by following the multi-scale asymptotic methods and used to describe the hierarchical structures of tropical atmospheric flows. The advantages of using these multi-scale models lie in isolating the essential components of multi-scale interaction and providing assessment of the upscale impact of the small-scale fluctuations onto the large-scale mean flow through eddy flux divergences of momentum and temperature in a transparent fashion. Specifically, this thesis includes three research projects about multi-scale interaction of organized tropical convection, involving tropical flows at different scaling regimes and utilizing different multi-scale models correspondingly. Inspired by the observed variability of tropical convection on multiple temporal scales, including daily and intraseasonal time scales, the goal of the first project is to assess the intraseasonal impact of the diurnal cycle on the planetary-scale circulation such as the Hadley cell. As an extension of the first project, the goal of the second project is to assess the intraseasonal impact of the diurnal cycle over the Maritime Continent on the Madden-Julian Oscillation. In the third project, the goals are to simulate the baroclinic aspects of the ITCZ breakdown and assess its upscale impact on the planetary-scale circulation over the eastern Pacific. These simple multi-scale models should be useful to understand the scale interaction of organized tropical convection and help improve the parameterization of unresolved processes in global climate models.
Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.
2015-01-01
Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228
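As a minimal sketch of the linking idea discussed in the abstract above, and not the Mycobacterium tuberculosis model itself, the Python fragment below couples a toy agent layer (cells that die or divide depending on a nutrient) to a continuum ODE for that nutrient, exchanging information once per agent time step; all rates and parameters are hypothetical.

```python
import random

# --- continuum layer: nutrient ODE dN/dt = supply - uptake * n_cells * N ---
def step_nutrient(nutrient, n_cells, supply=5.0, uptake=0.05, dt=0.01, n_substeps=10):
    """Advance the nutrient with several small explicit-Euler substeps per agent step."""
    for _ in range(n_substeps):
        nutrient += dt * (supply - uptake * n_cells * nutrient)
        nutrient = max(nutrient, 0.0)
    return nutrient

# --- agent layer: each cell may die, and divides with a nutrient-dependent probability ---
def step_agents(cells, nutrient, p_div_max=0.2, half_sat=2.0, p_death=0.02):
    surviving = []
    for cell in cells:
        if random.random() < p_death:
            continue                          # cell dies and is dropped
        surviving.append(cell)
        p_div = p_div_max * nutrient / (half_sat + nutrient)   # Monod-like response
        if random.random() < p_div:
            surviving.append(dict(cell))      # daughter cell inherits the parent's state
    return surviving

random.seed(0)
cells, nutrient = [{"age": 0} for _ in range(10)], 10.0
for step in range(100):                       # hybrid loop: ODE substeps, then agent update
    nutrient = step_nutrient(nutrient, len(cells))
    cells = step_agents(cells, nutrient)
print(f"final population: {len(cells)}, nutrient level: {nutrient:.2f}")
```

The design choice illustrated is the sub-stepping of the fast continuum layer between slower agent updates, one of the tractability trade-offs the abstract refers to.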
Hybrid stochastic simplifications for multiscale gene networks.
Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu
2009-09-07
Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene networks dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3] which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach.
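A toy illustration of the hybrid idea (not the authors' algorithm): keep a low-copy-number component discrete and jump-driven, and treat an abundant component with a continuous chemical-Langevin approximation. The rates below are invented for the sketch.

```python
import math, random

random.seed(1)
k_on, k_off = 0.1, 0.05          # gene switching rates (hypothetical)
k_prod, k_deg = 50.0, 1.0        # protein production and degradation rates (hypothetical)
dt, t_end = 0.001, 50.0

g, p, t = 0, 0.0, 0.0            # g: discrete gene state (0/1), p: continuous protein level
t_next_jump = random.expovariate(k_on if g == 0 else k_off)

while t < t_end:
    # continuous part: one chemical-Langevin (Euler-Maruyama) step for the abundant species
    drift = k_prod * g - k_deg * p
    noise = math.sqrt(max(k_prod * g + k_deg * p, 0.0)) * math.sqrt(dt) * random.gauss(0.0, 1.0)
    p = max(p + drift * dt + noise, 0.0)
    t += dt
    # discrete part: flip the gene when its exponentially distributed waiting time elapses
    if t >= t_next_jump:
        g = 1 - g
        t_next_jump = t + random.expovariate(k_on if g == 0 else k_off)

print(f"final state: gene = {g}, protein = {p:.1f}")
```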
Birkigt, Jan; Stumpp, Christine; Małoszewski, Piotr; Nijenhuis, Ivonne
2018-04-15
In recent years, constructed wetland systems have come into focus as a means of cost-efficient management of organic contaminants. Wetland systems provide a highly reactive environment in which several removal pathways for organic chemicals may be active at the same time; however, specific elimination processes and hydraulic conditions are usually investigated separately and are thus not fully understood. The flow system in a three-dimensional pilot-scale horizontal subsurface constructed wetland was investigated by applying a multi-tracer test combined with a mathematical model to evaluate the flow and transport processes. The results indicate a multiple flow system, with two distinct flow paths through the gravel bed and a preferential flow at the bottom transporting 68% of the tracer mass, resulting from the inflow design of the model wetland system. There, the removal of the main contaminant, chlorobenzene, was up to 52%, based on different calculation approaches. With determined retention times in the range of 22 d to 32.5 d, the wetland has a heterogeneous flow pattern. Differences between simulated and measured tracer concentrations in the upper sediment indicate diffusion-dominated processes due to stagnant water zones. The tracer study, combining experimental evaluation with mathematical modeling, demonstrated the complexity of the flow and transport processes in constructed wetlands, which needs to be taken into account when interpreting the governing attenuation processes. Copyright © 2017 Elsevier B.V. All rights reserved.
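A minimal sketch of a lumped-parameter reading of such a tracer test, with two parallel flow paths each described by the standard dispersion-model transit-time distribution, is given below; the mass fractions, mean transit times and dispersion parameters are illustrative values, not the fitted ones.

```python
import math

def dispersion_rtd(t, t0, pd):
    """Transit-time distribution of the lumped dispersion model
    (mean transit time t0, dispersion parameter pd = D/(v*x))."""
    if t <= 0.0:
        return 0.0
    a = 4.0 * pd * t / t0
    return math.exp(-(1.0 - t / t0) ** 2 / a) / (t * math.sqrt(math.pi * a))

# two parallel flow paths; fractions and parameters are hypothetical, not fitted values
paths = [
    {"fraction": 0.68, "t0": 23.0, "pd": 0.05},   # fast preferential path near the bottom
    {"fraction": 0.32, "t0": 32.0, "pd": 0.10},   # slower path through the gravel bed
]

def outflow_response(t):
    """Breakthrough of an instantaneous tracer pulse, as a mass-fraction-weighted sum."""
    return sum(p["fraction"] * dispersion_rtd(t, p["t0"], p["pd"]) for p in paths)

for day in range(0, 61, 10):
    print(f"day {day:2d}: relative tracer outflow = {outflow_response(day):.4f}")
```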
Multi-Scale Modeling in Morphogenesis: A Critical Analysis of the Cellular Potts Model
Voss-Böhme, Anja
2012-01-01
Cellular Potts models (CPMs) are used as a modeling framework to elucidate mechanisms of biological development. They allow a spatial resolution below the cellular scale and are applied particularly when problems are studied where multiple spatial and temporal scales are involved. Despite the increasing use of CPMs in theoretical biology, this model class has received little attention from mathematical theory. To narrow this gap, the CPMs are subjected to a theoretical study here. It is asked to which extent the updating rules establish an appropriate dynamical model of intercellular interactions and what characterizes the principal behavior at different time scales. It is shown that the long-time behavior of a CPM is degenerate in the sense that the cells consecutively die out, independent of the specific interdependence structure that characterizes the model. While CPMs are naturally defined on finite, spatially bounded lattices, possible extensions to spatially unbounded systems are explored to assess to which extent spatio-temporal limit procedures can be applied to describe the emergent behavior at the tissue scale. To elucidate the mechanistic structure of CPMs, the model class is integrated into a general multiscale framework. It is shown that the central role of the surface fluctuations, which subsume several cellular and intercellular factors, entails substantial limitations for a CPM's exploitation both as a mechanistic and as a phenomenological model. PMID:22984409
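For readers unfamiliar with the model class being analyzed, a bare-bones CPM update rule (uniform adhesion energy plus a volume constraint, Metropolis acceptance) on a small periodic lattice can be sketched as follows; the parameters are illustrative only and are not tied to any particular study.

```python
import math, random

random.seed(2)
L, TEMP = 20, 4.0                      # lattice size and fluctuation amplitude (illustrative)
J, LAM, V_TARGET = 2.0, 1.0, 40        # adhesion energy, volume stiffness, target cell volume

# two cells (ids 1 and 2) placed on a background of medium (id 0)
lattice = [[0] * L for _ in range(L)]
for x in range(5, 10):
    for y in range(5, 10):
        lattice[x][y] = 1
for x in range(10, 15):
    for y in range(10, 15):
        lattice[x][y] = 2

def volume(cid):
    return sum(row.count(cid) for row in lattice)

def local_energy(x, y, cid):
    """Adhesion energy of site (x, y) if it carried cell id cid (uniform contact energy J)."""
    e = 0.0
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if lattice[(x + dx) % L][(y + dy) % L] != cid:
            e += J
    return e

def metropolis_step():
    """Attempt to copy a random neighbour's cell id into a random target site."""
    x, y = random.randrange(L), random.randrange(L)
    nx = (x + random.choice((-1, 0, 1))) % L
    ny = (y + random.choice((-1, 0, 1))) % L
    src, tgt = lattice[nx][ny], lattice[x][y]
    if src == tgt:
        return
    d_e = local_energy(x, y, src) - local_energy(x, y, tgt)
    for cid, sign in ((src, +1), (tgt, -1)):      # volume-constraint energy change
        if cid != 0:
            v = volume(cid)
            d_e += LAM * ((v + sign - V_TARGET) ** 2 - (v - V_TARGET) ** 2)
    if d_e <= 0.0 or random.random() < math.exp(-d_e / TEMP):
        lattice[x][y] = src

for _ in range(20000):
    metropolis_step()
print("cell volumes after relaxation:", volume(1), volume(2))
```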
ERIC Educational Resources Information Center
Hudesman, John; Crosby, Sara; Ziehmke, Niesha; Everson, Howard; Issac, Sharlene; Flugman, Bert; Zimmerman, Barry; Moylan, Adam
2014-01-01
The authors describe an Enhanced Formative Assessment and Self-Regulated Learning (EFA-SRL) program designed to improve the achievement of community college students enrolled in developmental mathematics courses. Their model includes the use of specially formatted quizzes designed to assess both the students' mathematics and metacognitive skill…
Nyman, Elin; Rozendaal, Yvonne J W; Helmlinger, Gabriel; Hamrén, Bengt; Kjellsson, Maria C; Strålfors, Peter; van Riel, Natal A W; Gennemark, Peter; Cedersund, Gunnar
2016-04-06
We are currently in the middle of a major shift in biomedical research: unprecedented and rapidly growing amounts of data may be obtained today, from in vitro, in vivo and clinical studies, at molecular, physiological and clinical levels. To make use of these large-scale, multi-level datasets, corresponding multi-level mathematical models are needed, i.e. models that simultaneously capture multiple layers of the biological, physiological and disease-level organization (also referred to as quantitative systems pharmacology-QSP-models). However, today's multi-level models are not yet embedded in end-usage applications, neither in drug research and development nor in the clinic. Given the expectations and claims made historically, this seemingly slow adoption may seem surprising. Therefore, we herein consider a specific example-type 2 diabetes-and critically review the current status and identify key remaining steps for these models to become mainstream in the future. This overview reveals how, today, we may use models to ask scientific questions concerning, e.g., the cellular origin of insulin resistance, and how this translates to the whole-body level and short-term meal responses. However, before these multi-level models can become truly useful, they need to be linked with the capabilities of other important existing models, in order to make them 'personalized' (e.g. specific to certain patient phenotypes) and capable of describing long-term disease progression. To be useful in drug development, it is also critical that the developed models and their underlying data and assumptions are easily accessible. For clinical end-usage, in addition, model links to decision-support systems combined with the engagement of other disciplines are needed to create user-friendly and cost-efficient software packages.
Active and Passive Hydrologic Tomographic Surveys:A Revolution in Hydrology (Invited)
NASA Astrophysics Data System (ADS)
Yeh, T. J.
2013-12-01
Mathematical forward or inverse problems of flow through geological media always have unique solutions if the necessary conditions are given. Unique mathematical solutions to forward or inverse modeling of field problems are, however, always uncertain (an infinite number of possibilities exists) for many reasons. These include non-representativeness of the governing equations, inaccurate necessary conditions, multi-scale heterogeneity, scale discrepancies between observation and model, noise and others. Conditional stochastic approaches, which derive the unbiased solution and quantify the solution uncertainty, are therefore most appropriate for forward and inverse modeling of hydrological processes. Conditioning using non-redundant data sets reduces uncertainty. In this presentation, we explain non-redundant data sets in cross-hole aquifer tests, and demonstrate that an active hydraulic tomographic survey (using man-made excitations) is a cost-effective approach to collecting the same type of, but non-redundant, data sets for reducing uncertainty in the inverse modeling. We subsequently show that including flux measurements (a non-redundant data set) collected in the same well setup as in hydraulic tomography improves the estimated hydraulic conductivity field. We finally conclude with examples and propositions regarding how to collect and analyze data intelligently by exploiting natural recurrent events (river stage fluctuations, earthquakes, lightning, etc.) as energy sources for basin-scale passive tomographic surveys. The development of information fusion technologies that integrate traditional point measurements and active/passive hydrogeophysical tomographic surveys, as well as advances in sensor, computing, and information technologies, may ultimately advance our capability of characterizing groundwater basins to achieve resolution far beyond the reach of current science and technology.
Cunniffe, Nik J; Laranjeira, Francisco F; Neri, Franco M; DeSimone, R Erik; Gilligan, Christopher A
2014-08-01
A spatially-explicit, stochastic model is developed for Bahia bark scaling, a threat to citrus production in north-eastern Brazil, and is used to assess epidemiological principles underlying the cost-effectiveness of disease control strategies. The model is fitted via Markov chain Monte Carlo with data augmentation to snapshots of disease spread derived from a previously-reported multi-year experiment. Goodness-of-fit tests strongly supported the fit of the model, even though the detailed etiology of the disease is unknown and was not explicitly included in the model. Key epidemiological parameters including the infection rate, incubation period and scale of dispersal are estimated from the spread data. This allows us to scale-up the experimental results to predict the effect of the level of initial inoculum on disease progression in a typically-sized citrus grove. The efficacies of two cultural control measures are assessed: altering the spacing of host plants, and roguing symptomatic trees. Reducing planting density can slow disease spread significantly if the distance between hosts is sufficiently large. However, low density groves have fewer plants per hectare. The optimum density of productive plants is therefore recovered at an intermediate host spacing. Roguing, even when detection of symptomatic plants is imperfect, can lead to very effective control. However, scouting for disease symptoms incurs a cost. We use the model to balance the cost of scouting against the number of plants lost to disease, and show how to determine a roguing schedule that optimises profit. The trade-offs underlying the two optima we identify (the optimal host spacing and the optimal roguing schedule) are applicable to many pathosystems. Our work demonstrates how a carefully parameterised mathematical model can be used to find these optima. It also illustrates how mathematical models can be used in even this most challenging of situations in which the underlying epidemiology is ill-understood.
Eberhardt, Martin; Lai, Xin; Tomar, Namrata; Gupta, Shailendra; Schmeck, Bernd; Steinkasserer, Alexander; Schuler, Gerold; Vera, Julio
2016-01-01
The understanding of the immune response is right now at the center of biomedical research. There are growing expectations that immune-based interventions will in the midterm provide new, personalized, and targeted therapeutic options for many severe and highly prevalent diseases, from aggressive cancers to infectious and autoimmune diseases. To this end, immunology should surpass its current descriptive and phenomenological nature, and become quantitative and thereby predictive. Immunology is an ideal field for deploying the tools, methodologies, and philosophy of systems biology, an approach that combines quantitative experimental data, computational biology, and mathematical modeling. This is because, from an organism-wide perspective, immunity is a biological system of systems, a paradigmatic instance of a multi-scale system. At the molecular scale, the critical phenotypic responses of immune cells are governed by large biochemical networks, enriched in nested regulatory motifs such as feedback and feedforward loops. This network complexity gives them the capacity for highly nonlinear behavior, including remarkable examples of homeostasis, ultra-sensitivity, hysteresis, and bistability. Moving from the cellular level, different immune cell populations communicate with each other by direct physical contact or by secreting and receiving signaling molecules such as cytokines. Moreover, the interaction of the immune system with its potential targets (e.g., pathogens or tumor cells) is far from simple, as it involves a number of attack and counterattack mechanisms that ultimately constitute a tightly regulated multi-feedback loop system. From a more practical perspective, this means that today's immunologists face an ever-increasing challenge: integrating massive quantities of data from multiple platforms. In this chapter, we support the idea that the analysis of the immune system demands the use of systems-level approaches to ensure success in the search for more effective and personalized immune-based therapies.
2008-07-01
The homogenization procedure through successive multi-resolution projections is presented, followed by a numerical example; the treatment is intended to be essentially self-contained. The ideas behind multi-resolution analysis unfold from the theory of linear operators in Hilbert spaces (Davis 1975), with supporting mathematical (Greenberg 1978; Gilbert 2006) and signal processing (Strang and Nguyen 1995) literature listed in the references.
ERIC Educational Resources Information Center
Dunn, Margaret Breslin
2009-01-01
A main question this dissertation addresses is: what variation in teaching and teacher training matter? This question is examined within a specific but important context: the scale-up of a technology-rich intervention focused on the algebra strand of 8th grade mathematics. I conducted a multi-level case study by gathering and analyzing data at…
A study of the parallel algorithm for large-scale DC simulation of nonlinear systems
NASA Astrophysics Data System (ADS)
Cortés Udave, Diego Ernesto; Ogrodzki, Jan; Gutiérrez de Anda, Miguel Angel
Newton-Raphson DC analysis of large-scale nonlinear circuits may be an extremely time consuming process even if sparse matrix techniques and bypassing of nonlinear models calculation are used. A slight decrease in the time required for this task may be enabled on multi-core, multithread computers if the calculation of the mathematical models for the nonlinear elements as well as the stamp management of the sparse matrix entries are managed through concurrent processes. This numerical complexity can be further reduced via the circuit decomposition and parallel solution of blocks taking as a departure point the BBD matrix structure. This block-parallel approach may give a considerable profit though it is strongly dependent on the system topology and, of course, on the processor type. This contribution presents the easy-parallelizable decomposition-based algorithm for DC simulation and provides a detailed study of its effectiveness.
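Leaving aside the sparse-matrix storage, model bypassing and BBD decomposition that the abstract refers to, the core Newton-Raphson DC iteration can be illustrated on a toy one-node circuit (voltage source, series resistor, diode to ground); the component values below are arbitrary.

```python
import math

VS, R = 5.0, 1_000.0          # source voltage [V] and series resistance [ohm] (arbitrary)
IS, VT = 1e-12, 0.02585       # diode saturation current [A] and thermal voltage [V]

def f(v):
    """KCL residual at the diode node: resistor current plus diode current."""
    return (v - VS) / R + IS * (math.exp(v / VT) - 1.0)

def df_dv(v):
    """Jacobian (scalar here): derivative of the nodal residual."""
    return 1.0 / R + (IS / VT) * math.exp(v / VT)

v = 0.6                       # initial guess near the diode knee
for it in range(50):
    dv = -f(v) / df_dv(v)
    dv = max(min(dv, 0.1), -0.1)   # crude step damping to avoid exp overflow
    v += dv
    if abs(dv) < 1e-12:
        break
print(f"converged in {it + 1} iterations: node voltage = {v:.6f} V, "
      f"diode current = {(VS - v) / R * 1e3:.3f} mA")
```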
Modeling and simulation of multi-physics multi-scale transport phenomenain bio-medical applications
NASA Astrophysics Data System (ADS)
Kenjereš, Saša
2014-08-01
We present a short overview of some of our most recent work that combines the mathematical modeling, advanced computer simulations and state-of-the-art experimental techniques of physical transport phenomena in various bio-medical applications. In the first example, we tackle predictions of complex blood flow patterns in the patient-specific vascular system (carotid artery bifurcation) and transfer of the so-called "bad" cholesterol (low-density lipoprotein, LDL) within the multi-layered artery wall. This two-way coupling between the blood flow and corresponding mass transfer of LDL within the artery wall is essential for predictions of regions where atherosclerosis can develop. It is demonstrated that a recently developed mathematical model, which takes into account the complex multi-layer arterial-wall structure, produced LDL profiles within the artery wall in good agreement with in-vivo experiments in rabbits, and it can be used for predictions of locations where the initial stage of development of atherosclerosis may take place. The second example includes a combination of pulsating blood flow and medical drug delivery and deposition controlled by external magnetic field gradients in the patient specific carotid artery bifurcation. The results of numerical simulations are compared with own PIV (Particle Image Velocimetry) and MRI (Magnetic Resonance Imaging) in the PDMS (silicon-based organic polymer) phantom. A very good agreement between simulations and experiments is obtained for different stages of the pulsating cycle. Application of the magnetic drug targeting resulted in an increase of up to ten fold in the efficiency of local deposition of the medical drug at desired locations. Finally, the LES (Large Eddy Simulation) of the aerosol distribution within the human respiratory system that includes up to eight bronchial generations is performed. A very good agreement between simulations and MRV (Magnetic Resonance Velocimetry) measurements is obtained. Magnetic steering of aerosols towards the left or right part of lungs proved to be possible, which can open new strategies for medical treatment of respiratory diseases.
Dubský, Pavel; Müllerová, Ludmila; Dvořák, Martin; Gaš, Bohuslav
2015-03-06
The model of electromigration of a multivalent weak acidic/basic/amphoteric analyte that undergoes complexation with a mixture of selectors is introduced. The model provides an extension of the series of models starting with the single-selector model without dissociation by Wren and Rowe in 1992, continuing with the monovalent weak analyte/single-selector model by Rawjee, Williams and Vigh in 1993 and that by Lelièvre in 1994, and ending with the multi-selector overall model without dissociation developed by our group in 2008. The new multivalent analyte multi-selector model shows that the effective mobility of the analyte obeys the original Wren and Rowe formula. The overall complexation constant, the mobility of the free analyte and the mobility of the complex can be measured and used in a standard way. The mathematical expressions for the overall parameters are provided. We further demonstrate mathematically that the pH-dependent parameters for weak analytes can be simply used as an input into the multi-selector overall model and, in reverse, the multi-selector overall parameters can serve as an input into the pH-dependent models for the weak analytes. These findings can greatly simplify rational method development in analytical electrophoresis, specifically enantioseparations. Copyright © 2015 Elsevier B.V. All rights reserved.
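For context, the 'original Wren and Rowe formula' mentioned above expresses the effective mobility of an analyte A interacting with a selector S at concentration [S] through a single (here overall) complexation constant K; in commonly used notation (symbols chosen here, not taken from the paper),

```latex
\mu_{\mathrm{eff}}
  = \frac{\mu_{\mathrm{A}} + \mu_{\mathrm{AS}}\,K\,[\mathrm{S}]}{1 + K\,[\mathrm{S}]},
```

where μ_A is the mobility of the free analyte and μ_AS that of the analyte-selector complex.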
Schwartz, Benjamin L.; Yin, Ziying; Yaşar, Temel K.; Liu, Yifei; Khan, Altaf A.; Ye, Allen Q.; Royston, Thomas J.; Magin, Richard L.
2016-01-01
Aim: The focus of this paper is to report on the design and construction of a multiply connected phantom for use in magnetic resonance elastography (MRE), an imaging technique that allows for the non-invasive visualization of the displacement field throughout an object under externally driven harmonic motion, as well as its inverse modeling with a closed-form analytic solution which is derived herein from first principles. Methods: Mathematically, the phantom is described as two infinite concentric circular cylinders with unequal complex shear moduli, harmonically vibrated at the exterior surface in a direction along their common axis. Each concentric cylinder is made of a hydrocolloid with its own specific solute concentration. They are assembled in a multi-step process for which custom scaffolding was designed and built. A customized spin-echo based MR elastography sequence with a sinusoidal motion-sensitizing gradient was used for data acquisition on a 9.4 T Agilent small-animal MR scanner. Complex moduli obtained from the inverse model are used to solve the forward problem with a finite element method. Results: Both complex shear moduli show a significant frequency dependence (p < 0.001) in keeping with previous work. Conclusion: The novel multiply connected phantom and mathematical model are validated as a viable tool for MRE studies. Significance: On a small enough scale, much of physiology can be mathematically modeled with basic geometric shapes, e.g. a cylinder representing a blood vessel. This work demonstrates the possibility of elegant mathematical analysis of phantoms specifically designed and carefully constructed for biomedical MRE studies. PMID:26886963
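The closed-form solution itself is not given in the abstract; for orientation, local MRE inversion in a homogeneous, isotropic viscoelastic region rests on the time-harmonic relation (a standard MRE identity, stated generically rather than as this paper's boundary-value solution)

```latex
G^{*}\,\nabla^{2}u_{z} + \rho\,\omega^{2}u_{z} = 0
\quad\Longrightarrow\quad
G^{*} = -\,\frac{\rho\,\omega^{2}u_{z}}{\nabla^{2}u_{z}},
```

with ρ the density, ω the angular driving frequency, and u_z the measured complex-valued axial displacement.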
NASA Astrophysics Data System (ADS)
Jansen, Daniel J.
Teacher efficacy continues to be an important area of study in educational research. This study tested an instrument designed to assess the perceived efficacy of agricultural education teachers when engaged in lessons involving mathematics instruction. The study population of Oregon and Washington agricultural educators utilized in the validation of the instrument revealed important demographic findings and specific results related to teacher efficacy. An instrument was developed from the assimilation of three scales previously used and validated in efficacy research. Participants' mathematics teaching efficacy was assessed using a portion of the Mathematics Teaching Efficacy Beliefs Instrument (MTEBI), and personal mathematics efficacy was evaluated by the mathematics self-belief instrument derived from Betz and Hackett's Mathematics Self-Efficacy Scale. The final scale, the Teachers' Sense of Efficacy Scale (TSES) created by Tschannen-Moran and Woolfolk Hoy, examined perceived personal teaching efficacy. Structural equation modeling was used as the statistical analysis tool to validate the instrument and examine correlations between efficacy constructs used to determine potential professional development needs of the survey population. As part of the data required for validation of the Mathematics Enhancement Teaching Efficacy instrument, demographic information defining the population of Oregon and Washington agricultural educators was obtained and reported. A hypothetical model derived from the teacher efficacy literature was found to be an acceptable model to verify construct validity and determine the strength of correlations between the scales that defined the instrument. The instrument produced an alpha coefficient of .905 for reliability. Both exploratory and confirmatory factor analyses were used to verify construct and discriminant validity. Specific results related to the survey population of agricultural educators indicate that personal mathematics efficacy has a stronger correlation with mathematics teaching efficacy than does personal teaching efficacy for this population. The implications of these findings suggest that professional development and pre-service preparation should focus more on mathematics content knowledge than on pedagogical knowledge when the objective is to enhance mathematics in interdisciplinary lessons.
Meng, Qing-chun; Rong, Xiao-xia; Zhang, Yi-min; Wan, Xiao-le; Liu, Yuan-yuan; Wang, Yu-zhi
2016-01-01
CO2 emission influences not only global climate change but also international economic and political situations. Thus, reducing the emission of CO2, a major greenhouse gas, has become a major issue in China and around the world as regards preserving the environmental ecology. Energy consumption from coal, oil, and natural gas is primarily responsible for the production of greenhouse gases and air pollutants such as SO2 and NOX, which are the main air pollutants in China. In this study, a mathematical multi-objective optimization method was adopted to analyze the collaborative emission reduction of three kinds of gases on the basis of their common restraints in different ways of energy consumption to develop an economic, clean, and efficient scheme for energy distribution. The first part introduces the background research, the collaborative emission reduction for three kinds of gases, the multi-objective optimization, the main mathematical modeling, and the optimization method. The second part discusses the four mathematical tools utilized in this study, which include the Granger causality test to analyze the causality between air quality and pollutant emission, a function analysis to determine the quantitative relation between energy consumption and pollutant emission, a multi-objective optimization to set up the collaborative optimization model that considers energy consumption, and an optimality condition analysis for the multi-objective optimization model to design the optimal-pole algorithm and obtain an efficient collaborative reduction scheme. In the empirical analysis, the data of pollutant emission and final consumption of energies of Tianjin in 1996-2012 was employed to verify the effectiveness of the model and analyze the efficient solution and the corresponding dominant set. In the last part, several suggestions for collaborative reduction are recommended and the drawn conclusions are stated.
Zhang, Yi-min; Wan, Xiao-le; Liu, Yuan-yuan; Wang, Yu-zhi
2016-01-01
CO2 emission influences not only global climate change but also international economic and political situations. Thus, reducing the emission of CO2, a major greenhouse gas, has become a major issue in China and around the world as regards preserving the environmental ecology. Energy consumption from coal, oil, and natural gas is primarily responsible for the production of greenhouse gases and air pollutants such as SO2 and NOX, which are the main air pollutants in China. In this study, a mathematical multi-objective optimization method was adopted to analyze the collaborative emission reduction of three kinds of gases on the basis of their common restraints in different ways of energy consumption to develop an economic, clean, and efficient scheme for energy distribution. The first part introduces the background research, the collaborative emission reduction for three kinds of gases, the multi-objective optimization, the main mathematical modeling, and the optimization method. The second part discusses the four mathematical tools utilized in this study, which include the Granger causality test to analyze the causality between air quality and pollutant emission, a function analysis to determine the quantitative relation between energy consumption and pollutant emission, a multi-objective optimization to set up the collaborative optimization model that considers energy consumption, and an optimality condition analysis for the multi-objective optimization model to design the optimal-pole algorithm and obtain an efficient collaborative reduction scheme. In the empirical analysis, the data of pollutant emission and final consumption of energies of Tianjin in 1996–2012 was employed to verify the effectiveness of the model and analyze the efficient solution and the corresponding dominant set. In the last part, several suggestions for collaborative reduction are recommended and the drawn conclusions are stated. PMID:27010658
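As a schematic of this kind of collaborative multi-objective energy-allocation problem (and not the paper's optimal-pole algorithm), the sketch below scalarizes three emission objectives for three fuels with a weighted sum and solves the resulting linear program; all coefficients are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical per-unit-energy emission factors for coal, oil, natural gas
co2 = np.array([95.0, 73.0, 56.0])     # CO2
so2 = np.array([0.60, 0.40, 0.01])     # SO2
nox = np.array([0.30, 0.25, 0.10])     # NOX
demand = 100.0                          # total energy that must be supplied
capacity = [(0, 70.0), (0, 50.0), (0, 40.0)]   # per-fuel supply limits (invented)

def solve(weights):
    """Weighted-sum scalarization of the three emission objectives."""
    c = weights[0] * co2 + weights[1] * so2 + weights[2] * nox
    res = linprog(c, A_ub=[[-1.0, -1.0, -1.0]], b_ub=[-demand],
                  bounds=capacity, method="highs")
    return res.x, (co2 @ res.x, so2 @ res.x, nox @ res.x)

# sweeping the weight vector traces out part of the Pareto-efficient set of trade-offs
for w in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 50, 100)]:
    x, (e1, e2, e3) = solve(np.array(w, dtype=float))
    print(f"weights {w}: mix = {np.round(x, 1)}, CO2 = {e1:.0f}, SO2 = {e2:.1f}, NOX = {e3:.1f}")
```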
Optimized planning methodologies of ASON implementation
NASA Astrophysics Data System (ADS)
Zhou, Michael M.; Tamil, Lakshman S.
2005-02-01
Advanced network planning concerns effective network-resource allocation for dynamic and open business environment. Planning methodologies of ASON implementation based on qualitative analysis and mathematical modeling are presented in this paper. The methodology includes method of rationalizing technology and architecture, building network and nodal models, and developing dynamic programming for multi-period deployment. The multi-layered nodal architecture proposed here can accommodate various nodal configurations for a multi-plane optical network and the network modeling presented here computes the required network elements for optimizing resource allocation.
NASA Astrophysics Data System (ADS)
Shaw, Jeremy A.; Daescu, Dacian N.
2017-08-01
This article presents the mathematical framework to evaluate the sensitivity of a forecast error aspect to the input parameters of a weak-constraint four-dimensional variational data assimilation system (w4D-Var DAS), extending the established theory from strong-constraint 4D-Var. Emphasis is placed on the derivation of the equations for evaluating the forecast sensitivity to parameters in the DAS representation of the model error statistics, including bias, standard deviation, and correlation structure. A novel adjoint-based procedure for adaptive tuning of the specified model error covariance matrix is introduced. Results from numerical convergence tests establish the validity of the model error sensitivity equations. Preliminary experiments providing a proof-of-concept are performed using the Lorenz multi-scale model to illustrate the theoretical concepts and potential benefits for practical applications.
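The weak-constraint derivations are beyond the scope of an abstract; for orientation only, the underlying adjoint chain rule can be stated generically (symbols defined here, not taken from the paper): for a scalar forecast-error aspect J evaluated on the forecast x_f = M(x_a), the sensitivity to a DAS input parameter p acting through the analysis x_a is

```latex
\frac{\partial J}{\partial p}
  = \left(\frac{\partial \mathbf{x}_{a}}{\partial p}\right)^{\!\mathsf{T}}
    \mathbf{M}^{\mathsf{T}}\,\nabla_{\mathbf{x}_{f}} J ,
```

where M is the tangent-linear forecast model and M^T its adjoint; the article extends relations of this type to the parameters of the model-error statistics in w4D-Var.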
Sigala, Rodrigo; Haufe, Sebastian; Roy, Dipanjan; Dinse, Hubert R.; Ritter, Petra
2014-01-01
During the past two decades growing evidence indicates that brain oscillations in the alpha band (~10 Hz) not only reflect an “idle” state of cortical activity, but also take a more active role in the generation of complex cognitive functions. A recent study shows that more than 60% of the observed inter-subject variability in perceptual learning can be ascribed to ongoing alpha activity. This evidence indicates a significant role of alpha oscillations for perceptual learning and hence motivates to explore the potential underlying mechanisms. Hence, it is the purpose of this review to highlight existent evidence that ascribes intrinsic alpha oscillations a role in shaping our ability to learn. In the review, we disentangle the alpha rhythm into different neural signatures that control information processing within individual functional building blocks of perceptual learning. We further highlight computational studies that shed light on potential mechanisms regarding how alpha oscillations may modulate information transfer and connectivity changes relevant for learning. To enable testing of those model based hypotheses, we emphasize the need for multidisciplinary approaches combining assessment of behavior and multi-scale neuronal activity, active modulation of ongoing brain states and computational modeling to reveal the mathematical principles of the complex neuronal interactions. In particular we highlight the relevance of multi-scale modeling frameworks such as the one currently being developed by “The Virtual Brain” project. PMID:24772077
Hybrid stochastic simplifications for multiscale gene networks
Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu
2009-01-01
Background: Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. Results: We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene networks dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3] which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Conclusion: Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach. PMID:19735554
Microphysics in the Multi-Scale Modeling Systems with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2011-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented. In particular, the results from using the multi-scale modeling system to study heavy precipitation processes will be presented.
A Multi-scale Modeling System with Unified Physics to Study Precipitation Processes
NASA Astrophysics Data System (ADS)
Tao, W. K.
2017-12-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), and (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF). The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the results from using the multi-scale modeling system to study precipitation processes and their sensitivity to model resolution and microphysics schemes will be presented. Also, how to use the multi-satellite simulator to improve the representation of precipitation processes will be discussed.
Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2011-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the recent developments and applications of the multi-scale modeling system will be presented. In particular, the results from using the multi-scale modeling system to study precipitating systems and hurricanes/typhoons will be presented. High-resolution spatial and temporal visualization will be utilized to show the evolution of precipitation processes. Also, how to use the multi-satellite simulator to improve the representation of precipitation processes will be discussed.
Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2010-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 sq km in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the results from using multi-scale modeling systems to study the interactions between clouds, precipitation, and aerosols will be presented. Also, how to use the multi-satellite simulator to improve the representation of precipitation processes will be discussed.
Using Multi-Scale Modeling Systems to Study the Precipitation Processes
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2010-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols will be presented. Also, how to use the multi-satellite simulator to improve the representation of precipitation processes will be discussed.
Dynamic cellular manufacturing system considering machine failure and workload balance
NASA Astrophysics Data System (ADS)
Rabbani, Masoud; Farrokhi-Asl, Hamed; Ravanbakhsh, Mohammad
2018-02-01
Machines are a key element in production systems, and their failure causes irreparable effects in terms of cost and time. In this paper, a new multi-objective mathematical model for a dynamic cellular manufacturing system (DCMS) is provided with consideration of machine reliability and alternative process routes. In this dynamic model, we attempt to resolve the problem of integrated family (part/machine cell) formation as well as the assignment of operators to cells. The first objective minimizes the costs associated with the DCMS. The second objective optimizes labor utilization and, finally, a minimum value of the variance of workload between different cells is obtained by the third objective function. Due to the NP-hard nature of the cellular manufacturing problem, the model is first validated using the GAMS software on small-sized problems, and then solved by two well-known meta-heuristic methods, the non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, on large-scale problems. Finally, the results of the two algorithms are compared with respect to five different comparison metrics.
NASA Technical Reports Server (NTRS)
Davis, Brynmor; Kim, Edward; Piepmeier, Jeffrey; Hildebrand, Peter H. (Technical Monitor)
2001-01-01
Many new Earth remote-sensing instruments are embracing both the advantages and added complexity that result from interferometric or fully polarimetric operation. To increase instrument understanding and functionality a model of the signals these instruments measure is presented. A stochastic model is used as it recognizes the non-deterministic nature of any real-world measurements while also providing a tractable mathematical framework. A stationary, Gaussian-distributed model structure is proposed. Temporal and spectral correlation measures provide a statistical description of the physical properties of coherence and polarization-state. From this relationship the model is mathematically defined. The model is shown to be unique for any set of physical parameters. A method of realizing the model (necessary for applications such as synthetic calibration-signal generation) is given and computer simulation results are presented. The signals are constructed using the output of a multi-input multi-output linear filter system, driven with white noise.
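As a minimal, hypothetical sketch of the signal construction described above (the filter taps below are invented for illustration and do not belong to any actual instrument), a pair of stationary, correlated Gaussian signals can be generated by driving a two-input, two-output linear filter bank with independent white noise:

```python
# Minimal sketch: two correlated, stationary Gaussian signals produced by a
# 2-input/2-output FIR filter bank driven with independent white noise.
# The filter taps h[i][j] are hypothetical placeholders.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
n = 100_000
w = rng.standard_normal((2, n))            # independent white-noise inputs

h = [[np.array([1.0, 0.5, 0.25]), np.array([0.3, 0.2])],
     [np.array([0.4]),            np.array([1.0, -0.6, 0.2])]]

x = np.zeros((2, n))
for i in range(2):                         # output channel
    for j in range(2):                     # input channel
        x[i] += lfilter(h[i][j], [1.0], w[j])

# The coherence/polarization-like structure is set entirely by the filters;
# the zero-lag sample covariance gives one summary of that structure.
print("sample covariance matrix:\n", np.cov(x))
```

Changing the cross-channel filters changes the temporal and spectral correlation between the two outputs, which is the property the model uses to encode coherence and polarization state.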
Diffusion-Based Design of Multi-Layered Ophthalmic Lenses for Controlled Drug Release
Pimenta, Andreia F. R.; Serro, Ana Paula; Paradiso, Patrizia; Saramago, Benilde
2016-01-01
The study of ocular drug delivery systems has been one of the most covered topics in drug delivery research. One potential drug carrier solution is the use of materials that are already commercially available in ophthalmic lenses for the correction of refractive errors. In this study, we present a diffusion-based mathematical model in which the parameters can be adjusted based on experimental results obtained under controlled conditions. The model allows for the design of multi-layered therapeutic ophthalmic lenses for controlled drug delivery. We show that the proper combination of materials with adequate drug diffusion coefficients, thicknesses and interfacial transport characteristics allows for the control of the delivery of drugs from multi-layered ophthalmic lenses, such that drug bursts can be minimized, and the release time can be maximized. As far as we know, this combination of a mathematical modelling approach with experimental validation of non-constant activity source lamellar structures, made of layers of different materials, accounting for the interface resistance to the drug diffusion, is a novel approach to the design of drug loaded multi-layered contact lenses. PMID:27936138
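The abstract does not reproduce the model equations, but the underlying idea can be illustrated with a minimal one-dimensional, two-layer Fickian diffusion sketch (all geometry and parameter values below are hypothetical, and the interfacial transport resistance included in the paper is replaced here by perfect layer contact):

```python
# Illustrative two-layer 1-D diffusion sketch (hypothetical thicknesses and
# diffusivities; perfect interfacial contact assumed, unlike the paper's
# model, which also accounts for interfacial resistance). Drug starts in the
# inner layer and is released through the outer face into a perfect sink.
import numpy as np

L1, L2 = 50e-6, 50e-6              # layer thicknesses [m]
D1, D2 = 1e-12, 5e-14              # layer diffusion coefficients [m^2/s]
nx = 101
x = np.linspace(0.0, L1 + L2, nx)
dx = x[1] - x[0]
D = np.where(x < L1, D1, D2)       # piecewise-constant diffusivity
Dh = 0.5 * (D[:-1] + D[1:])        # diffusivity at cell interfaces

c = np.where(x < L1, 1.0, 0.0)     # normalized initial drug loading
m0 = c.sum() * dx
dt = 0.25 * dx**2 / D.max()        # respects the explicit-scheme stability limit

t = 0.0
while t < 24 * 3600:               # one day of release
    flux = Dh * (c[1:] - c[:-1]) / dx
    c[1:-1] += dt / dx * (flux[1:] - flux[:-1])
    c[0] += dt / dx * flux[0]      # inner face: zero flux (sealed)
    c[-1] = 0.0                    # outer face: perfect sink (tear film)
    t += dt

print(f"fraction released after 24 h: {1.0 - c.sum() * dx / m0:.3f}")
```

In this toy setting, lowering the outer-layer diffusivity flattens the initial burst and stretches the release time, which is the qualitative design lever the abstract describes.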
NASA Astrophysics Data System (ADS)
Lucas, Iris; Cotsaftis, Michel; Bertelle, Cyrille
2017-12-01
Multiagent systems (MAS) provide a useful tool for exploring the complex dynamics and behavior of financial markets, and the MAS approach has now been widely implemented and documented in the empirical literature. This paper introduces the implementation of an innovative multi-scale mathematical model for a computational agent-based financial market. The paper develops a method to quantify the degree of self-organization which emerges in the system and shows that the capacity for self-organization is maximized when the agent behaviors are heterogeneous. Numerical results are presented and analyzed, showing how the global market behavior emerges from specific individual behavior interactions.
Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale
NASA Astrophysics Data System (ADS)
Kreibich, Heidi; Schröter, Kai; Merz, Bruno
2016-05-01
Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities that were affected by the 2002 flood of the River Mulde in Saxony, Germany, by comparison with official loss data provided by the Saxon Relief Bank (SAB). In this meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling on the meso-scale as well. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like the BT-FLEMO model used in this study, which inherently provide uncertainty information, is the way forward.
Modeling Flow in Porous Media with Double Porosity/Permeability.
NASA Astrophysics Data System (ADS)
Seyed Joodat, S. H.; Nakshatrala, K. B.; Ballarini, R.
2016-12-01
Although several continuum models are available to study the flow of fluids in porous media with two pore-networks [1], they lack a firm theoretical basis. In this poster presentation, we will present a mathematical model with firm thermodynamic basis and a robust computational framework for studying flow in porous media that exhibit double porosity/permeability. The mathematical model will be derived by appealing to the maximization of rate of dissipation hypothesis, which ensures that the model is in accord with the second law of thermodynamics. We will also present important properties that the solutions under the model satisfy, along with an analytical solution procedure based on the Green's function method. On the computational front, a stabilized mixed finite element formulation will be derived based on the variational multi-scale formalism. The equal-order interpolation, which is computationally the most convenient, is stable under this formulation. The performance of this formulation will be demonstrated using patch tests, numerical convergence study, and representative problems. It will be shown that the pressure and velocity profiles under the double porosity/permeability model are qualitatively and quantitatively different from the corresponding ones under the classical Darcy equations. Finally, it will be illustrated that the surface pore-structure is not sufficient in characterizing the flow through a complex porous medium, which pitches a case for using advanced characterization tools like micro-CT. References [1] G. I. Barenblatt, I. P. Zheltov, and I. N. Kochina, "Basic concepts in the theory of seepage of homogeneous liquids in fissured rocks [strata]," Journal of Applied Mathematics and Mechanics, vol. 24, pp. 1286-1303, 1960.
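The poster abstract does not reproduce the governing equations; for orientation, one commonly quoted form of a double porosity/permeability (Barenblatt-type) system for incompressible flow is sketched below in notation chosen here for illustration (sign conventions vary in the literature, and the thermodynamically derived model in the abstract may differ in detail):

```latex
% Illustrative double porosity/permeability system (notation chosen here;
% the thermodynamically derived model in the abstract may differ in detail).
\begin{align}
  \mathbf{v}_i &= -\frac{k_i}{\mu}\,\nabla p_i, \qquad i = 1,2,
      && \text{Darcy velocity in pore network } i,\\
  \nabla\!\cdot\mathbf{v}_1 &= -\frac{\beta}{\mu}\,(p_1 - p_2),
      && \text{mass balance, macro-pore network},\\
  \nabla\!\cdot\mathbf{v}_2 &= +\frac{\beta}{\mu}\,(p_1 - p_2),
      && \text{mass balance, micro-pore network},
\end{align}
% where k_1, k_2 are the network permeabilities, \mu the viscosity, p_1, p_2
% the network pressures, and \beta an inter-network mass-transfer coefficient;
% the coupling term vanishes when the two pressure fields coincide.
```

The distinguishing feature relative to classical Darcy flow is the inter-network exchange term, which is what produces pressure and velocity profiles that differ qualitatively from the single-porosity case.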
On dependency properties of the ISIs generated by a two-compartmental neuronal model.
Benedetto, Elisa; Sacerdote, Laura
2013-02-01
One-dimensional leaky integrate-and-fire neuronal models describe the interspike intervals (ISIs) of a neuron as a renewal process, disregarding the neuron geometry. Many multi-compartment models account for the geometrical features of the neuron but are too complex to be mathematically tractable. Leaky integrate-and-fire two-compartment models seem a good compromise between mathematical tractability and improved realism. They allow one to relax the renewal hypothesis, typical of one-dimensional models, without introducing excessive mathematical difficulties. Here, we pursue the analysis of the two-compartment model studied by Lansky and Rodriguez (Phys D 132:267-286, 1999), aiming to introduce specific mathematical results used together with simulation techniques. With the aid of these methods, we investigate dependency properties of ISIs for different values of the model parameters. We show that an increase of the input increases the strength of the dependence between successive ISIs.
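For readers unfamiliar with this model class, the following is a generic two-compartment leaky integrate-and-fire sketch in dimensionless form (illustrative parameter values, not the Lansky-Rodriguez parametrization or their analytical results). Because only the somatic compartment is reset at a spike while the dendritic compartment keeps its value, successive ISIs become correlated, which is the dependence the paper studies:

```python
# Generic two-compartment leaky integrate-and-fire sketch (dimensionless,
# illustrative parameters): the dendrite receives noisy input and couples to
# the soma, which spikes and resets at a threshold; ISIs and their lag-1
# serial correlation are collected.
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-3, 1000.0        # time in units of the membrane time constant
c = 2.0                     # dendro-somatic coupling strength
mu, sigma = 3.0, 0.5        # mean input to the dendrite and noise amplitude
v_th, v_reset = 1.0, 0.0    # somatic firing threshold and reset value

v_d = v_s = 0.0
spike_times = []
for k in range(int(T / dt)):
    xi = rng.standard_normal() * np.sqrt(dt)
    v_d += dt * (-v_d + c * (v_s - v_d) + mu) + sigma * xi
    v_s += dt * (-v_s + c * (v_d - v_s))
    if v_s >= v_th:
        spike_times.append(k * dt)
        v_s = v_reset       # only the soma resets; the dendrite keeps its
                            # state, which is what breaks the renewal property

isi = np.diff(spike_times)
rho1 = np.corrcoef(isi[:-1], isi[1:])[0, 1]
print(f"{isi.size} ISIs, mean = {isi.mean():.3f}, lag-1 serial correlation = {rho1:.3f}")
```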
A Multi-scale Cognitive Approach to Intrusion Detection and Response
2015-12-28
the behavior of the traffic on the network, either by using mathematical formulas or by replaying packet streams. As a result, simulators depend...large scale. Summary of the most important results: We obtained a powerful machine, which has 768 cores and 1.25 TB of memory. RBG has been...time. Each client is configured with 1 GB memory, 10 GB disk space, and one 100M Ethernet interface. The server nodes include web servers
A Multi-Scale Approach to Airway Hyperresponsiveness: From Molecule to Organ
Lauzon, Anne-Marie; Bates, Jason H. T.; Donovan, Graham; Tawhai, Merryn; Sneyd, James; Sanderson, Michael J.
2012-01-01
Airway hyperresponsiveness (AHR), a characteristic of asthma that involves an excessive reduction in airway caliber, is a complex mechanism reflecting multiple processes that manifest over a large range of length and time scales. At one extreme, molecular interactions determine the force generated by airway smooth muscle (ASM). At the other, the spatially distributed constriction of the branching airways leads to breathing difficulties. Similarly, asthma therapies act at the molecular scale while clinical outcomes are determined by lung function. These extremes are linked by events operating over intermediate scales of length and time. Thus, AHR is an emergent phenomenon that limits our understanding of asthma and confounds the interpretation of studies that address physiological mechanisms over a limited range of scales. A solution is a modular computational model that integrates experimental and mathematical data from multiple scales. This includes, at the molecular scale, the kinetics and force production of actin-myosin contractile proteins during cross-bridge and latch-state cycling; at the cellular scale, Ca2+ signaling mechanisms that regulate ASM force production; at the tissue scale, forces acting between contracting ASM and opposing viscoelastic tissue that determine airway narrowing; at the organ scale, the topographic distribution of ASM contraction dynamics that determine mechanical impedance of the lung. At each scale, models are constructed with iterations between theory and experimentation to identify the parameters that link adjacent scales. This modular model establishes algorithms for modeling over a wide range of scales and provides a framework for the inclusion of other responses such as inflammation or therapeutic regimes. The goal is to develop this lung model so that it can make predictions about bronchoconstriction and identify the pathophysiologic mechanisms having the greatest impact on AHR and its therapy. PMID:22701430
NASA Astrophysics Data System (ADS)
Bai, Wei-wei; Ren, Jun-sheng; Li, Tie-shan
2018-06-01
This paper explores a highly accurate identification modeling approach for ship maneuvering motion using full-scale trial data. A multi-innovation gradient iterative (MIGI) approach is proposed to optimize the distance metric of locally weighted learning (LWL), and a novel non-parametric modeling technique is developed for a nonlinear ship maneuvering system. The proposed method's advantages are as follows: first, it can avoid the unmodeled dynamics and multicollinearity inherent to the conventional parametric model; second, it eliminates over-learning or under-learning and obtains the optimal distance metric; and third, the MIGI is not sensitive to the initial parameter value and requires less time during the training phase. These advantages result in a highly accurate mathematical modeling technique that can be conveniently implemented in applications. To verify the characteristics of this mathematical model, two examples are used as model platforms to study ship maneuvering.
NASA Astrophysics Data System (ADS)
Anku, Sitsofe E.
1997-09-01
Using the reform documents of the National Council of Teachers of Mathematics (NCTM) (NCTM, 1989, 1991, 1995), a theory-based multi-dimensional assessment framework (the "SEA" framework) which should help expand the scope of assessment in mathematics is proposed. This framework uses a context based on mathematical reasoning and has components that comprise mathematical concepts, mathematical procedures, mathematical communication, mathematical problem solving, and mathematical disposition.
Tuncer, Necibe; Gulbudak, Hayriye; Cannataro, Vincent L; Martcheva, Maia
2016-09-01
In this article, we discuss the structural and practical identifiability of a nested immuno-epidemiological model of arbovirus diseases, where host-vector transmission rate, host recovery, and disease-induced death rates are governed by the within-host immune system. We incorporate the newest ideas and the most up-to-date features of numerical methods to fit multi-scale models to multi-scale data. For an immunological model, we use Rift Valley Fever Virus (RVFV) time-series data obtained from livestock under laboratory experiments, and for an epidemiological model we incorporate a human compartment to the nested model and use the number of human RVFV cases reported by the CDC during the 2006-2007 Kenya outbreak. We show that the immunological model is not structurally identifiable for the measurements of time-series viremia concentrations in the host. Thus, we study the non-dimensionalized and scaled versions of the immunological model and prove that both are structurally globally identifiable. After fixing estimated parameter values for the immunological model derived from the scaled model, we develop a numerical method to fit observable RVFV epidemiological data to the nested model for the remaining parameter values of the multi-scale system. For the given (CDC) data set, Monte Carlo simulations indicate that only three parameters of the epidemiological model are practically identifiable when the immune model parameters are fixed. Alternatively, we fit the multi-scale data to the multi-scale model simultaneously. Monte Carlo simulations for the simultaneous fitting suggest that the parameters of the immunological model and the parameters of the immuno-epidemiological model are practically identifiable. We suggest that analytic approaches for studying the structural identifiability of nested models are a necessity, so that identifiable parameter combinations can be derived to reparameterize the nested model to obtain an identifiable one. This is a crucial step in developing multi-scale models which explain multi-scale data.
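The Monte Carlo notion of practical identifiability used in the paper can be illustrated on a deliberately simple stand-in model (two-parameter logistic growth, not the RVFV immuno-epidemiological system): simulate noisy data sets from known parameter values, refit each one, and examine the spread of the estimates.

```python
# Generic Monte Carlo practical-identifiability check on a toy logistic model
# (a stand-in, not the nested RVFV model): a parameter with a small spread
# (coefficient of variation) across repeated refits is deemed practically
# identifiable from data of this quality.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, r, K, x0=1.0):
    return K / (1.0 + (K / x0 - 1.0) * np.exp(-r * t))

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 15)
true = (0.9, 50.0)
clean = logistic(t, *true)

estimates = []
for _ in range(200):
    noisy = clean * (1.0 + 0.1 * rng.standard_normal(t.size))   # 10% noise
    popt, _ = curve_fit(logistic, t, noisy, p0=(0.5, 30.0), maxfev=10_000)
    estimates.append(popt)
estimates = np.array(estimates)

cv = estimates.std(axis=0) / estimates.mean(axis=0)
for name, v in zip(("r", "K"), cv):
    print(f"{name}: coefficient of variation = {100 * v:.1f}%")
```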
A dynamic multi-scale Markov model based methodology for remaining life prediction
NASA Astrophysics Data System (ADS)
Yan, Jihong; Guo, Chaozhong; Wang, Xing
2011-05-01
The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severity of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model in order to avoid the uncertainty of state division caused by hard division approaches. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model through a weighting coefficient. Multi-scale theory is employed to solve the state division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment based on a Bently-RK4 rotor testbed is designed to validate the dynamic multi-scale Markov model; the experimental results illustrate the effectiveness of the methodology.
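A stripped-down sketch of the state-based remaining-life idea follows (crisp quantile binning stands in for fuzzy C-means, the degradation histories are synthetic, and the paper's dynamic weighting of historical versus real-time data is omitted):

```python
# Stripped-down sketch: estimate a Markov transition matrix from degradation
# histories and compute the expected remaining life from each state as the
# expected first-passage time to the failure state.
import numpy as np

rng = np.random.default_rng(2)
# Synthetic monotone degradation-index histories (run-to-failure).
histories = [np.cumsum(rng.gamma(0.5, 0.02, size=300)) for _ in range(20)]

n_states = 5                                   # state n_states-1 = failure
edges = np.quantile(np.concatenate(histories),
                    np.linspace(0, 1, n_states + 1)[1:-1])

# Maximum-likelihood estimate of the one-step transition matrix.
P = np.zeros((n_states, n_states))
for h in histories:
    s = np.searchsorted(edges, h)              # degradation value -> state
    for a, b in zip(s[:-1], s[1:]):
        P[a, b] += 1
P /= P.sum(axis=1, keepdims=True)

# Expected remaining life (in inspection steps) from each healthy state:
# solve (I - Q) t = 1 on the transient (non-failure) states.
Q = P[:-1, :-1]
t_exp = np.linalg.solve(np.eye(n_states - 1) - Q, np.ones(n_states - 1))
for i, t in enumerate(t_exp):
    print(f"state {i}: expected remaining life ~ {t:.0f} steps")
```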
Brankaer, Carmen; Ghesquière, Pol; De Smedt, Bert
2017-08-01
The ability to compare symbolic numerical magnitudes correlates with children's concurrent and future mathematics achievement. We developed and evaluated a quick timed paper-and-pencil measure that can easily be used, for example in large-scale research, in which children have to cross out the numerically larger of two Arabic one- and two-digit numbers (SYMP Test). We investigated performance on this test in 1,588 primary school children (Grades 1-6) and examined in each grade its associations with mathematics achievement. The SYMP Test had satisfactory test-retest reliability. The SYMP Test showed significant and stable correlations with mathematics achievement for both one-digit and two-digit comparison, across all grades. This replicates the previously observed association between symbolic numerical magnitude processing and mathematics achievement, but extends it by showing that the association is observed in all grades in primary education and occurs for single- as well as multi-digit processing. Children with mathematical learning difficulties performed significantly lower on one-digit comparison and two-digit comparison in all grades. This all suggests satisfactory construct and criterion-related validity of the SYMP Test, which can be used in research, when performing large-scale (intervention) studies, and by practitioners, as screening measure to identify children at risk for mathematical difficulties or dyscalculia.
Multiscale modelling and nonlinear simulation of vascular tumour growth
Macklin, Paul; Anderson, Alexander R. A.; Chaplain, Mark A. J.; Cristini, Vittorio
2011-01-01
In this article, we present a new multiscale mathematical model for solid tumour growth which couples an improved model of tumour invasion with a model of tumour-induced angiogenesis. We perform nonlinear simulations of the multi-scale model that demonstrate the importance of the coupling between the development and remodeling of the vascular network, the blood flow through the network and the tumour progression. Consistent with clinical observations, the hydrostatic stress generated by tumour cell proliferation shuts down large portions of the vascular network, dramatically affecting the flow, the subsequent network remodeling, the delivery of nutrients to the tumour and the subsequent tumour progression. In addition, extracellular matrix degradation by tumour cells is seen to have a dramatic effect on both the development of the vascular network and the growth response of the tumour. In particular, the newly developing vessels tend to encapsulate, rather than penetrate, the tumour and are thus less effective in delivering nutrients. PMID:18781303
Prototype Biology-Based Radiation Risk Module Project
NASA Technical Reports Server (NTRS)
Terrier, Douglas; Clayton, Ronald G.; Patel, Zarana; Hu, Shaowen; Huff, Janice
2015-01-01
Biological effects of space radiation and risk mitigation are strategic knowledge gaps for the Evolvable Mars Campaign. The current epidemiology-based NASA Space Cancer Risk (NSCR) model contains large uncertainties (HAT #6.5a) due to lack of information on the radiobiology of galactic cosmic rays (GCR) and lack of human data. The use of experimental models that most accurately replicate the response of human tissues is critical for precision in risk projections. Our proposed study will compare DNA damage, histological, and cell kinetic parameters after irradiation in normal 2D human cells versus 3D tissue models, and it will use a multi-scale computational model (CHASTE) to investigate various biological processes that may contribute to carcinogenesis, including radiation-induced cellular signaling pathways. This cross-disciplinary work, with biological validation of an evolvable mathematical computational model, will help reduce uncertainties within NSCR and aid risk mitigation for radiation-induced carcinogenesis.
Multi-scale modeling in cell biology
Meier-Schellersheim, Martin; Fraser, Iain D. C.; Klauschen, Frederick
2009-01-01
Biomedical research frequently involves performing experiments and developing hypotheses that link different scales of biological systems such as, for instance, the scales of intracellular molecular interactions to the scale of cellular behavior and beyond to the behavior of cell populations. Computational modeling efforts that aim at exploring such multi-scale systems quantitatively with the help of simulations have to incorporate several different simulation techniques due to the different time and space scales involved. Here, we provide a non-technical overview of how different scales of experimental research can be combined with the appropriate computational modeling techniques. We also show that current modeling software permits building and simulating multi-scale models without having to become involved with the underlying technical details of computational modeling. PMID:20448808
Chiu, Yuan-Shyi Peter; Chou, Chung-Li; Chang, Huei-Hsin; Chiu, Singa Wang
2016-01-01
A multi-customer finite production rate (FPR) model with quality assurance and a discontinuous delivery policy was investigated in a recent paper (Chiu et al. in J Appl Res Technol 12(1):5-13, 2014) using a differential calculus approach. This study employs mathematical modeling along with a two-phase algebraic method to solve this specific multi-customer FPR model. As a result, the optimal replenishment lot size and number of shipments can be derived without using differential calculus. Such a straightforward method may assist practitioners with insufficient knowledge of calculus in learning and managing real multi-customer FPR systems more effectively.
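To make concrete what an "algebraic" (calculus-free) derivation means here, the classic single-customer, perfect-quality finite production rate special case can be handled by completing the square (the paper's multi-customer model with quality assurance and discontinuous delivery generalizes this; the notation below is the textbook one, not the paper's):

```latex
% Classic single-customer FPR/EPQ cost per unit time, with demand rate D,
% production rate P > D, setup cost K and unit holding cost h. Completing
% the square isolates a non-negative term plus a constant:
\begin{align}
  TCU(Q) &= \frac{K D}{Q} + \frac{h}{2}\Bigl(1-\frac{D}{P}\Bigr) Q \notag\\
         &= \Bigl(\sqrt{\tfrac{K D}{Q}}
               - \sqrt{\tfrac{h}{2}\bigl(1-\tfrac{D}{P}\bigr) Q}\,\Bigr)^{2}
            + 2\sqrt{\tfrac{K D\,h}{2}\Bigl(1-\tfrac{D}{P}\Bigr)},
\end{align}
% so the cost is minimized, without differentiation, when the squared term
% vanishes, i.e. at
\begin{equation}
  Q^{*} = \sqrt{\frac{2 K D}{h\,\bigl(1-D/P\bigr)}}.
\end{equation}
```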
Using the Partial Credit Model to Evaluate the Student Engagement in Mathematics Scale
ERIC Educational Resources Information Center
Leis, Micela; Schmidt, Karen M.; Rimm-Kaufman, Sara E.
2015-01-01
The Student Engagement in Mathematics Scale (SEMS) is a self-report measure that was created to assess three dimensions of student engagement (social, emotional, and cognitive) in mathematics based on a single day of class. In the current study, the SEMS was administered to a sample of 360 fifth graders from a large Mid-Atlantic district. The…
Multiple Scales in Fluid Dynamics and Meteorology: The DFG Priority Programme 1276 MetStröm
NASA Astrophysics Data System (ADS)
von Larcher, Th; Klein, R.
2012-04-01
Geophysical fluid motions are characterized by a very wide range of length and time scales, and by a rich collection of varying physical phenomena. The mathematical description of these motions reflects this multitude of scales and mechanisms in that it involves strong non-linearities and various scale-dependent singular limit regimes. Considerable progress has been made in recent years in the mathematical modelling and numerical simulation of such flows in detailed process studies, numerical weather forecasting, and climate research. One task of outstanding importance in this context has been, and will remain for the foreseeable future, the subgrid-scale parameterization of the net effects of non-resolved processes that take place on spatio-temporal scales not resolvable even by the largest, most recent supercomputers. Since the advent of numerical weather forecasting some 60 years ago, one simple but efficient means to achieve improved forecasting skill has been increased spatio-temporal resolution. At first glance, this seems quite consistent with the concept of convergence of numerical methods in Applied Mathematics and Computational Fluid Dynamics (CFD). Yet, the very notion of increased resolution in atmosphere-ocean science is very different from the one used in Applied Mathematics: for the mathematician, increased resolution provides the benefit of getting closer to the ideal of a converged solution of some given partial differential equations. The atmosphere-ocean scientist, on the other hand, would naturally refine the computational grid and adjust the mathematical model so that it better represents the relevant physical processes that occur at smaller scales. This conceptual contradiction remains largely irrelevant as long as geophysical flow models operate with fixed computational grids and time steps and with subgrid-scale parameterizations optimized accordingly. The picture changes fundamentally when modern techniques from CFD involving spatio-temporal grid adaptivity are invoked in order to further improve the net efficiency in exploiting the given computational resources. In the setting of geophysical flow simulation one must then employ subgrid-scale parameterizations that dynamically adapt to the changing grid sizes and time steps, implement ways to judiciously control and steer the newly available flexibility of resolution, and invent novel ways of quantifying the remaining errors. The DFG priority programme MetStröm combines the expertise of Meteorology, Fluid Dynamics, and Applied Mathematics to develop model- as well as grid-adaptive numerical simulation concepts in multidisciplinary projects. The goal of this priority programme is to provide simulation models which combine scale-dependent (mathematical) descriptions of key physical processes with adaptive flow discretization schemes. Deterministic continuous approaches and discrete and/or stochastic closures, and their possible interplay, are taken into consideration. Research focuses on the theory and methodology of multiscale meteorological-fluid mechanics modelling. Accompanying reference experiments support model validation.
Brad C. Timm; Kevin McGarigal; Samuel A. Cushman; Joseph L. Ganey
2016-01-01
Efficacy of future habitat selection studies will benefit by taking a multi-scale approach. In addition to potentially providing increased explanatory power and predictive capacity, multi-scale habitat models enhance our understanding of the scales at which species respond to their environment, which is critical knowledge required to implement effective...
Dick, Thomas E.; Molkov, Yaroslav I.; Nieman, Gary; Hsieh, Yee-Hsee; Jacono, Frank J.; Doyle, John; Scheff, Jeremy D.; Calvano, Steve E.; Androulakis, Ioannis P.; An, Gary; Vodovotz, Yoram
2012-01-01
Acute inflammation leads to organ failure by engaging catastrophic feedback loops in which stressed tissue evokes an inflammatory response and, in turn, inflammation damages tissue. Manifestations of this maladaptive inflammatory response include cardio-respiratory dysfunction that may be reflected in reduced heart rate and ventilatory pattern variabilities. We have developed signal-processing algorithms that quantify non-linear deterministic characteristics of variability in biologic signals. Now, coalescing under the aegis of the NIH Computational Biology Program and the Society for Complexity in Acute Illness, two research teams performed iterative experiments and computational modeling on inflammation and cardio-pulmonary dysfunction in sepsis as well as on neural control of respiration and ventilatory pattern variability. These teams, with additional collaborators, have recently formed a multi-institutional, interdisciplinary consortium, whose goal is to delineate the fundamental interrelationship between the inflammatory response and physiologic variability. Multi-scale mathematical modeling and complementary physiological experiments will provide insight into autonomic neural mechanisms that may modulate the inflammatory response to sepsis and simultaneously reduce heart rate and ventilatory pattern variabilities associated with sepsis. This approach integrates computational models of neural control of breathing and cardio-respiratory coupling with models that combine inflammation, cardiovascular function, and heart rate variability. The resulting integrated model will provide mechanistic explanations for the phenomena of respiratory sinus-arrhythmia and cardio-ventilatory coupling observed under normal conditions, and the loss of these properties during sepsis. This approach holds the potential of modeling cross-scale physiological interactions to improve both basic knowledge and clinical management of acute inflammatory diseases such as sepsis and trauma. PMID:22783197
Dick, Thomas E; Molkov, Yaroslav I; Nieman, Gary; Hsieh, Yee-Hsee; Jacono, Frank J; Doyle, John; Scheff, Jeremy D; Calvano, Steve E; Androulakis, Ioannis P; An, Gary; Vodovotz, Yoram
2012-01-01
Acute inflammation leads to organ failure by engaging catastrophic feedback loops in which stressed tissue evokes an inflammatory response and, in turn, inflammation damages tissue. Manifestations of this maladaptive inflammatory response include cardio-respiratory dysfunction that may be reflected in reduced heart rate and ventilatory pattern variabilities. We have developed signal-processing algorithms that quantify non-linear deterministic characteristics of variability in biologic signals. Now, coalescing under the aegis of the NIH Computational Biology Program and the Society for Complexity in Acute Illness, two research teams performed iterative experiments and computational modeling on inflammation and cardio-pulmonary dysfunction in sepsis as well as on neural control of respiration and ventilatory pattern variability. These teams, with additional collaborators, have recently formed a multi-institutional, interdisciplinary consortium, whose goal is to delineate the fundamental interrelationship between the inflammatory response and physiologic variability. Multi-scale mathematical modeling and complementary physiological experiments will provide insight into autonomic neural mechanisms that may modulate the inflammatory response to sepsis and simultaneously reduce heart rate and ventilatory pattern variabilities associated with sepsis. This approach integrates computational models of neural control of breathing and cardio-respiratory coupling with models that combine inflammation, cardiovascular function, and heart rate variability. The resulting integrated model will provide mechanistic explanations for the phenomena of respiratory sinus-arrhythmia and cardio-ventilatory coupling observed under normal conditions, and the loss of these properties during sepsis. This approach holds the potential of modeling cross-scale physiological interactions to improve both basic knowledge and clinical management of acute inflammatory diseases such as sepsis and trauma.
Multi-Scale Computational Models for Electrical Brain Stimulation
Seo, Hyeon; Jun, Sung C.
2017-01-01
Electrical brain stimulation (EBS) is an appealing method to treat neurological disorders. To achieve optimal stimulation effects and a better understanding of the underlying brain mechanisms, neuroscientists have proposed computational modeling studies for a decade. Recently, multi-scale models that combine a volume conductor head model and multi-compartmental models of cortical neurons have been developed to predict stimulation effects on the macroscopic and microscopic levels more precisely. As the need for better computational models continues to increase, we overview here recent multi-scale modeling studies; we focused on approaches that coupled a simplified or high-resolution volume conductor head model and multi-compartmental models of cortical neurons, and constructed realistic fiber models using diffusion tensor imaging (DTI). Further implications for achieving better precision in estimating cellular responses are discussed. PMID:29123476
Quinn, T. Alexander; Kohl, Peter
2013-01-01
Since the development of the first mathematical cardiac cell model 50 years ago, computational modelling has become an increasingly powerful tool for the analysis of data and for the integration of information related to complex cardiac behaviour. Current models build on decades of iteration between experiment and theory, representing a collective understanding of cardiac function. All models, whether computational, experimental, or conceptual, are simplified representations of reality and, like tools in a toolbox, suitable for specific applications. Their range of applicability can be explored (and expanded) by iterative combination of ‘wet’ and ‘dry’ investigation, where experimental or clinical data are used to first build and then validate computational models (allowing integration of previous findings, quantitative assessment of conceptual models, and projection across relevant spatial and temporal scales), while computational simulations are utilized for plausibility assessment, hypotheses-generation, and prediction (thereby defining further experimental research targets). When implemented effectively, this combined wet/dry research approach can support the development of a more complete and cohesive understanding of integrated biological function. This review illustrates the utility of such an approach, based on recent examples of multi-scale studies of cardiac structure and mechano-electric function. PMID:23334215
Echevarria, Desarae; Gutfraind, Alexander; Boodram, Basmattee; ...
2015-08-21
New direct-acting antivirals (DAAs) provide an opportunity to combat hepatitis C virus (HCV) infection in persons who inject drugs (PWID). In our paper, we use a mathematical model to predict the impact of a DAA-treatment scale-up on HCV prevalence among PWID and the estimated cost in metropolitan Chicago.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Echevarria, Desarae; Gutfraind, Alexander; Boodram, Basmattee
New direct-acting antivirals (DAAs) provide an opportunity to combat hepatitis C virus (HCV) infection in persons who inject drugs (PWID). In our paper, we use a mathematical model to predict the impact of a DAA-treatment scale-up on HCV prevalence among PWID and the estimated cost in metropolitan Chicago.
NASA Astrophysics Data System (ADS)
Schulz, Wolfgang; Hermanns, Torsten; Al Khawli, Toufik
2017-07-01
Decision making for competitive production in high-wage countries is a daily challenge in which rational and irrational methods are used. The design of decision making processes is an intriguing, discipline-spanning science. However, there are gaps in understanding the impact of the known mathematical and procedural methods on the usage of rational choice theory. Following Benjamin Franklin's rule for decision making, formulated in London in 1772 and called "Prudential Algebra" in the sense of weighing prudential reasons, one of the major ingredients of Meta-Modelling can be identified: a single algebraic value labelling the results (criteria settings) of alternative decisions (parameter settings). This work describes advances in Meta-Modelling techniques applied to multi-dimensional and multi-criterial optimization by identifying the persistence level of the corresponding Morse-Smale complex. Implementations for laser cutting and laser drilling are presented, including the generation of fast and frugal Meta-Models with controlled error based on mathematical model reduction. Reduced Models are derived to avoid any unnecessary complexity. Both model reduction and analysis of the multi-dimensional parameter space are used to enable interactive communication between Discovery Finders and Invention Makers. Emulators and visualizations of a metamodel are introduced as components of Virtual Production Intelligence, making the methods of Scientific Design Thinking applicable and getting the developer as well as the operator more skilled.
Forest and Agricultural Sector Optimization Model Greenhouse Gas Version (FASOM-GHG)
FASOM-GHG is a dynamic, multi-period, intertemporal, price-endogenous, mathematical programming model depicting land transfers and other resource allocations between and within the agricultural and forest sectors in the US. The model solution portrays simultaneous market equilibr...
Mathematical model for logarithmic scaling of velocity fluctuations in wall turbulence.
Mouri, Hideaki
2015-12-01
For wall turbulence, moments of velocity fluctuations are known to be logarithmic functions of the height from the wall. This logarithmic scaling is due to the existence of a characteristic velocity and to the nonexistence of any characteristic height in the range of the scaling. By using the mathematics of random variables, we obtain its necessary and sufficient conditions. They are compared with characteristics of a phenomenological model of eddies attached to the wall and also with those of the logarithmic scaling of the mean velocity.
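In standard notation (not reproduced in the abstract), the scalings in question are the classical log law for the mean velocity and a logarithmic law for the even-order moments of the streamwise fluctuations, with empirical coefficients:

```latex
% Standard notation: z is the height above the wall, \delta the outer length
% scale, u_\tau the friction velocity, \nu the kinematic viscosity and
% \kappa the von Karman constant; A_p and B_p are empirical coefficients.
\begin{align}
  \frac{U(z)}{u_\tau} &= \frac{1}{\kappa}\,\ln\!\Bigl(\frac{z\,u_\tau}{\nu}\Bigr) + B
      && \text{(log law for the mean velocity)},\\[4pt]
  \frac{\langle u'^{\,2p}\rangle^{1/p}}{u_\tau^{2}} &= B_p - A_p\,\ln\!\Bigl(\frac{z}{\delta}\Bigr)
      && \text{(logarithmic scaling of the even-order moments)}.
\end{align}
```

The paper's point is that laws of this form follow from the existence of a characteristic velocity ($u_\tau$) together with the absence of any characteristic height within the scaling range.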
NASA Astrophysics Data System (ADS)
Polosin, A. N.; Chistyakova, T. B.
2018-05-01
In this article, the authors describe mathematical modeling of polymer processing in extruders of various types used in the extrusion and calender production of film materials. The method consists of the synthesis of a static model for calculating the throughput, the energy consumption of the extruder and the extrudate quality indices, together with a dynamic model for evaluating the polymer residence time in the extruder, on which the quality indices depend. The models are adjusted according to the extruder type (single-screw, reciprocating, twin-screw), its screw and head configuration, the extruder's operating temperature conditions, and the type of polymer processed. The models enable creating extruder screw configurations and determining control action values that yield extrudate of the required quality while satisfying throughput and energy consumption requirements. Model adequacy has been verified using processing data for polyolefins and polyvinyl chloride in different extruders. A program complex based on the mathematical models has been developed to control extruders of various types and to ensure resource and energy savings in multi-assortment production of polymeric films. Using the program complex in the control system for the extrusion stage of polymeric film production improves film quality, reduces spoilage, and shortens the time required for production-line change-over to a different throughput and film type.
Ammari, Habib; Boulier, Thomas; Garnier, Josselin; Wang, Han
2017-01-31
Understanding active electrolocation in weakly electric fish remains a challenging issue. In this article we propose a mathematical formulation of this problem, in terms of partial differential equations. This allows us to detail two algorithms: one for localizing a target using the multi-frequency aspect of the signal, and another one for identifying the shape of this target. Shape recognition is designed in a machine learning point of view, and takes advantage of both the multi-frequency setup and the movement of the fish around its prey. Numerical simulations are shown for the computation of the electric field emitted and sensed by the fish; they are then used as an input for the two algorithms.
Scaling for Dynamical Systems in Biology.
Ledder, Glenn
2017-11-01
Asymptotic methods can greatly simplify the analysis of all but the simplest mathematical models and should therefore be commonplace in such biological areas as ecology and epidemiology. One essential difficulty that limits their use is that they can only be applied to a suitably scaled dimensionless version of the original dimensional model. Many books discuss nondimensionalization, but with little attention given to the problem of choosing the right scales and dimensionless parameters. In this paper, we illustrate the value of using asymptotics on a properly scaled dimensionless model, develop a set of guidelines that can be used to make good scaling choices, and offer advice for teaching these topics in differential equations or mathematical biology courses.
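A standard textbook illustration of the scaling step (chosen here for brevity, not taken from the paper) is logistic growth with proportional harvesting: choosing the carrying capacity and the inverse growth rate as reference scales collapses three dimensional parameters into a single dimensionless one.

```latex
% Logistic growth with proportional harvesting, dN/dt = r N (1 - N/K) - h N.
% Scaling the state by the carrying capacity K and time by 1/r leaves a
% single dimensionless parameter \varepsilon = h/r.
\begin{align}
  n = \frac{N}{K}, \qquad \tau = r\,t
  \quad\Longrightarrow\quad
  \frac{dn}{d\tau} = n\,(1 - n) - \varepsilon\,n,
  \qquad \varepsilon = \frac{h}{r}.
\end{align}
```

When $\varepsilon$ is small, the harvesting term is a natural candidate for asymptotic treatment, which is exactly the kind of judgement the scaling guidelines in the paper are meant to support.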
DOT National Transportation Integrated Search
2014-02-01
A mathematical model was developed for the purpose of providing students with data : acquisition and engine modeling experience at the University of Idaho. In developing the : model, multiple heat transfer and emissions models were researched and com...
Collaboration of Miniature Multi-Modal Mobile Smart Robots over a Network
2015-08-14
theoretical research on mathematics of failures in sensor-network-based miniature multimodal mobile robots and electromechanical systems. The views...independently evolving research directions based on physics-based models of mechanical, electromechanical and electronic devices, operational constraints
Quantitative reconstructions in multi-modal photoacoustic and optical coherence tomography imaging
NASA Astrophysics Data System (ADS)
Elbau, P.; Mindrinos, L.; Scherzer, O.
2018-01-01
In this paper we perform quantitative reconstruction of the electric susceptibility and the Grüneisen parameter of a non-magnetic linear dielectric medium using measurements from a multi-modal photoacoustic and optical coherence tomography system. We consider the mathematical model presented in Elbau et al (2015 Handbook of Mathematical Methods in Imaging ed O Scherzer (New York: Springer) pp 1169-204), where a Fredholm integral equation of the first kind for the Grüneisen parameter was derived. For the numerical solution of the integral equation we consider a Galerkin-type method.
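In generic form (notation chosen here for illustration, not taken from the cited derivation), the problem type is a Fredholm integral equation of the first kind for the unknown parameter, which a Galerkin method turns into a finite linear system:

```latex
% Generic first-kind Fredholm equation for an unknown \gamma, and its
% Galerkin discretization in a finite basis {\phi_j} (illustrative notation).
\begin{align}
  \int_\Omega K(x,y)\,\gamma(y)\,dy &= g(x), \qquad x \in \Omega,\\
  \gamma(y) &\approx \sum_{j=1}^{N} c_j\,\phi_j(y),\\
  \sum_{j=1}^{N} A_{ij}\,c_j &= b_i, \qquad
  A_{ij} = \int_\Omega\!\!\int_\Omega \phi_i(x)\,K(x,y)\,\phi_j(y)\,dy\,dx, \quad
  b_i = \int_\Omega \phi_i(x)\,g(x)\,dx.
\end{align}
% First-kind equations are typically ill-posed, so the resulting system
% A c = b is usually solved with some form of regularization.
```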
Parallel multiscale simulations of a brain aneurysm
Grinberg, Leopold; Fedosov, Dmitry A.; Karniadakis, George Em
2012-01-01
Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multi-scale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier-Stokes solver NεκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NεκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300K computer processors. Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in future work. PMID:23734066
Parallel multiscale simulations of a brain aneurysm.
Grinberg, Leopold; Fedosov, Dmitry A; Karniadakis, George Em
2013-07-01
Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multi-scale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier-Stokes solver NεκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NεκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300K computer processors. Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in future work.
Simeonov, Plamen L
2017-12-01
The goal of this paper is to advance an extensible theory of living systems using an approach to biomathematics and biocomputation that suitably addresses self-organized, self-referential and anticipatory systems with multi-temporal multi-agents. Our first step is to provide foundations for modelling of emergent and evolving dynamic multi-level organic complexes and their sustentative processes in artificial and natural life systems. Main applications are in life sciences, medicine, ecology and astrobiology, as well as robotics, industrial automation, man-machine interface and creative design. Since 2011 over 100 scientists from a number of disciplines have been exploring a substantial set of theoretical frameworks for a comprehensive theory of life known as Integral Biomathics. That effort identified the need for a robust core model of organisms as dynamic wholes, using advanced and adequately computable mathematics. The work described here for that core combines the advantages of a situation and context aware multivalent computational logic for active self-organizing networks, Wandering Logic Intelligence (WLI), and a multi-scale dynamic category theory, Memory Evolutive Systems (MES), hence WLIMES. This is presented to the modeller via a formal augmented reality language as a first step towards practical modelling and simulation of multi-level living systems. Initial work focuses on the design and implementation of this visual language and calculus (VLC) and its graphical user interface. The results will be integrated within the current methodology and practices of theoretical biology and (personalized) medicine to deepen and to enhance the holistic understanding of life. Copyright © 2017 Elsevier B.V. All rights reserved.
Suryawanshi, Gajendra W.; Hoffmann, Alexander
2015-01-01
Human immunodeficiency virus-1 (HIV-1) employs accessory proteins to evade innate immune responses by neutralizing the anti-viral activity of host restriction factors. Apolipoprotein B mRNA-editing enzyme 3G (APOBEC3G, A3G) and bone marrow stromal cell antigen 2 (BST2) are host resistance factors that potentially inhibit HIV-1 infection. BST2 reduces viral production by tethering budding HIV-1 particles to virus producing cells, while A3G inhibits the reverse transcription (RT) process and induces viral genome hypermutation through cytidine deamination, generating fewer replication competent progeny virus. Two HIV-1 proteins counter these cellular restriction factors: Vpu, which reduces surface BST2, and Vif, which degrades cellular A3G. The contest between these host and viral proteins influences whether HIV-1 infection is established and progresses towards AIDS. In this work, we present an age-structured multi-scale viral dynamics model of in vivo HIV-1 infection. We integrated the intracellular dynamics of anti-viral activity of the host factors and their neutralization by HIV-1 accessory proteins into the virus/cell population dynamics model. We calculate the basic reproductive ratio (Ro) as a function of host-viral protein interaction coefficients, and numerically simulated the multi-scale model to understand HIV-1 dynamics following host factor-induced perturbations. We found that reducing the influence of Vpu triggers a drop in Ro, revealing the impact of BST2 on viral infection control. Reducing Vif’s effect reveals the restrictive efficacy of A3G in blocking RT and in inducing lethal hypermutations, however, neither of these factors alone is sufficient to fully restrict HIV-1 infection. Interestingly, our model further predicts that BST2 and A3G function synergistically, and delineates their relative contribution in limiting HIV-1 infection and disease progression. We provide a robust modeling framework for devising novel combination therapies that target HIV-1 accessory proteins and boost antiviral activity of host factors. PMID:26385832
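For orientation, the basic reproductive ratio of the standard (unstructured) target-cell-limited within-host model is recalled below; the paper's $R_0$ is derived from the full age-structured model, in which the corresponding rate constants are themselves functions of the Vpu-BST2 and Vif-A3G interaction coefficients.

```latex
% Standard target-cell-limited within-host model (a much simpler special case
% than the age-structured model of the paper):
%   dT/dt = \lambda - d T - \beta T V,   dI/dt = \beta T V - \delta I,
%   dV/dt = p I - c V,
% with target cells T, infected cells I and free virus V. Its basic
% reproductive ratio is
\begin{equation}
  R_0 = \frac{\beta\,p}{c\,\delta}\,\frac{\lambda}{d},
\end{equation}
% and within-host infection takes off when R_0 > 1.
```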
NASA Astrophysics Data System (ADS)
Aalaei, Amin; Davoudpour, Hamid
2012-11-01
This article presents a new mathematical model for integrating dynamic cellular manufacturing into a supply chain system, with extensive coverage of important manufacturing features: multiple plant locations, multi-market allocation, multi-period planning horizons with demand and part-mix variation, and machine capacity. The main constraints are satisfaction of market demand in each period, machine availability, machine time capacity, worker assignment, available worker time, production volume for each plant, and the amounts allocated to each market. The aim of the proposed model is to minimize holding and outsourcing costs, inter-cell material handling cost, external transportation cost, procurement, maintenance and overhead costs of machines, setup cost, reconfiguration costs of machine installation and removal, and worker hiring, firing and salary costs. To demonstrate the potential benefits of such a design, an example is presented using the proposed model.
Snyder, Jessica; Son, Ae Rin; Hamid, Qudus; Wu, Honglu; Sun, Wei
2016-01-13
Bottom-up tissue engineering requires methodological progress in biofabrication to capture key design facets of anatomical arrangements across micro-, meso- and macro-scales. The diffusive mass transfer properties necessary to elicit stability and functionality require hetero-typic contact, cell-to-cell signaling and uniform nutrient diffusion. Bioprinting techniques successfully build mathematically defined porous architecture to diminish resistance to mass transfer. Current limitations of bioprinted cell assemblies include poor micro-scale formability of cell-laden soft gels and asymmetrical macro-scale diffusion through 3D volumes. The objective of this work is to engineer a synchronized multi-material bioprinter (SMMB) system which improves the resolution and expands the capability of existing bioprinting systems by packaging multiple cell types in heterotypic arrays prior to deposition. This unit cell approach to arranging multiple cell-laden solutions is integrated with a motion system to print heterogeneous filaments as tissue engineered scaffolds and nanoliter droplets. The set of SMMB process parameters controls the geometric arrangement of the combined flow's internal features and the constituent materials' volume fractions. SMMB-printed hepatocyte-endothelial-laden 200 nl droplets are cultured in a rotary cell culture system (RCCS) to study the effect of microgravity on an in vitro model of the human hepatic lobule. RCCS conditioning for 48 h increased hepatocyte cytoplasm diameter by 2 μm, increased metabolic rate, and decreased drug half-life. SMMB hetero-cellular models present a 10-fold increase in metabolic rate compared to SMMB mono-culture models. Improved bioprinting resolution due to process control of cell-laden matrix packaging, as well as nanoliter droplet printing capability, identify SMMB as a viable technique to improve in vitro model efficacy.
Math Snacks: Using Animations and Games to Fill the Gaps in Mathematics
ERIC Educational Resources Information Center
Valdiz, Alfred; Trujillo, Karen; Wiburg, Karin
2013-01-01
Math Snacks animations and support materials were developed for use on the web and mobile technologies to teach ratio, proportion, scale factor, and number line concepts using a multi-modal approach. Included in Math Snacks are: Animations which promote the visualization of a concept image; written lessons which provide cognitive complexity for…
Multi-model approach to characterize human handwriting motion.
Chihi, I; Abdelkrim, A; Benrejeb, M
2016-02-01
This paper deals with characterization and modelling of human handwriting motion from two forearm muscle activity signals, called electromyography signals (EMG). In this work, an experimental approach was used to record the coordinates of a pen tip moving on the (x, y) plane and EMG signals during the handwriting act. The main purpose is to design a new mathematical model which characterizes this biological process. Based on a multi-model approach, this system was originally developed to generate letters and geometric forms written by different writers. A Recursive Least Squares algorithm is used to estimate the parameters of each sub-model of the multi-model basis. Simulations show good agreement between predicted results and the recorded data.
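The abstract names the estimator but not its equations; a standard recursive least squares (RLS) update for one linear-in-parameters sub-model is sketched below on synthetic data (the construction of regressors from EMG signals and the multi-model switching logic are not reproduced here):

```python
# Standard recursive least squares (RLS) update for one linear-in-parameters
# sub-model y_k = phi_k^T theta + e_k; the multi-model approach applies one
# such estimator per sub-model of the basis. Synthetic, illustrative data only.
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One RLS step with forgetting factor lam."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)        # gain vector
    theta = theta + (K * (y - phi.T @ theta)).ravel()
    P = (P - K @ phi.T @ P) / lam
    return theta, P

rng = np.random.default_rng(3)
true_theta = np.array([0.8, -0.4, 1.5])          # hypothetical sub-model parameters
theta = np.zeros(3)
P = 1e3 * np.eye(3)                              # large initial covariance
for _ in range(500):
    phi = rng.standard_normal(3)                 # regressor (e.g. lagged EMG samples)
    y = phi @ true_theta + 0.05 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)

print("estimated parameters:", np.round(theta, 3))
```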
CellML and associated tools and techniques.
Garny, Alan; Nickerson, David P; Cooper, Jonathan; Weber dos Santos, Rodrigo; Miller, Andrew K; McKeever, Steve; Nielsen, Poul M F; Hunter, Peter J
2008-09-13
We have, in the last few years, witnessed the development and availability of an ever increasing number of computer models that describe complex biological structures and processes. The multi-scale and multi-physics nature of these models makes their development particularly challenging, not only from a biological or biophysical viewpoint but also from a mathematical and computational perspective. In addition, the issue of sharing and reusing such models has proved to be particularly problematic, with the published models often lacking information that is required to accurately reproduce the published results. The International Union of Physiological Sciences Physiome Project was launched in 1997 with the aim of tackling the aforementioned issues by providing a framework for the modelling of the human body. As part of this initiative, the specifications of the CellML mark-up language were released in 2001. Now, more than 7 years later, the time has come to assess the situation, in particular with regard to the tools and techniques that are now available to the modelling community. Thus, after introducing CellML, we review and discuss existing editors, validators, online repository, code generators and simulation environments, as well as the CellML Application Program Interface. We also address possible future directions including the need for additional mark-up languages.
Mathematical modeling of nitrous oxide (N2O) emissions from full-scale wastewater treatment plants.
Ni, Bing-Jie; Ye, Liu; Law, Yingyu; Byers, Craig; Yuan, Zhiguo
2013-07-16
Mathematical modeling of N2O emissions is of great importance toward understanding the whole environmental impact of wastewater treatment systems. However, information on modeling of N2O emissions from full-scale wastewater treatment plants (WWTP) is still sparse. In this work, a mathematical model based on currently known or hypothesized metabolic pathways for N2O productions by heterotrophic denitrifiers and ammonia-oxidizing bacteria (AOB) is developed and calibrated to describe the N2O emissions from full-scale WWTPs. The model described well the dynamic ammonium, nitrite, nitrate, dissolved oxygen (DO) and N2O data collected from both an open oxidation ditch (OD) system with surface aerators and a sequencing batch reactor (SBR) system with bubbling aeration. The obtained kinetic parameters for N2O production are found to be reasonable as the 95% confidence regions of the estimates are all small with mean values approximately at the center. The model is further validated with independent data sets collected from the same two WWTPs. This is the first time that mathematical modeling of N2O emissions is conducted successfully for full-scale WWTPs. While clearly showing that the NH2OH related pathways could well explain N2O production and emission in the two full-scale plants studied, the modeling results do not prove the dominance of the NH2OH pathways in these plants, nor rule out the possibility of AOB denitrification being a potentially dominating pathway in other WWTPs that are designed or operated differently.
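Kinetic expressions in models of this type are typically Monod-type products of substrate-limitation terms; a generic, illustrative form (not the authors' calibrated kinetics) for N2O production by AOB via the NH2OH-related pathway is
\[
r_{\mathrm{N_2O}} = r_{\max}\,\frac{S_{\mathrm{NH_2OH}}}{K_{\mathrm{NH_2OH}} + S_{\mathrm{NH_2OH}}}\,\frac{S_{\mathrm{O_2}}}{K_{\mathrm{O_2}} + S_{\mathrm{O_2}}}\,X_{\mathrm{AOB}},
\]
where the \(S\) terms are substrate concentrations, the \(K\) terms half-saturation constants, and \(X_{\mathrm{AOB}}\) the AOB biomass concentration.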
Theoretical Foundations of Study of Cartography
NASA Astrophysics Data System (ADS)
Talhofer, Václav; Hošková-Mayerová, Šárka
2018-05-01
Cartography and geoinformatics are technically oriented fields that deal with the modelling and visualization of the landscape in the form of a map. The theoretical foundation, based mainly on mathematics, must be acquired during the study of cartography and geoinformatics. In these subjects, mathematics is needed to understand many procedures connected with modelling the Earth as a celestial body, with projecting it onto a plane, and with methods for modelling the landscape and societal phenomena and visualizing these models as electronic as well as classic paper maps. Not only general mathematics, but also the differential geometry of curves and surfaces, methods for approximating lines and functional surfaces, mathematical statistics, and multi-criteria analyses are suitable and necessary. Underestimating the significance of mathematical education in cartography and geoinformatics lowers the competence of cartographers and of professionals in geographic information science and technology to solve problems.
Biomaterial science meets computational biology.
Hutmacher, Dietmar W; Little, J Paige; Pettet, Graeme J; Loessner, Daniela
2015-05-01
There is a pressing need for a predictive tool capable of revealing a holistic understanding of fundamental elements in the normal and pathological cell physiology of organoids in order to decipher the mechanoresponse of cells. Therefore, the integration of a systems bioengineering approach into a validated mathematical model is necessary to develop a new simulation tool. This tool can only be innovative by combining biomaterials science with computational biology. Systems-level and multi-scale experimental data are incorporated into a single framework, thus representing both single cells and collective cell behaviour. Such a computational platform needs to be validated in order to discover key mechano-biological factors associated with cell-cell and cell-niche interactions.
Multi-scale habitat selection modeling: A review and outlook
Kevin McGarigal; Ho Yi Wan; Kathy A. Zeller; Brad C. Timm; Samuel A. Cushman
2016-01-01
Scale is the lens that focuses ecological relationships. Organisms select habitat at multiple hierarchical levels and at different spatial and/or temporal scales within each level. Failure to properly address scale dependence can result in incorrect inferences in multi-scale habitat selection modeling studies.
Multi-Decadal Oscillations of the Ocean Active Upper-Layer Heat Content
NASA Astrophysics Data System (ADS)
Byshev, Vladimir I.; Neiman, Victor G.; Anisimov, Mikhail V.; Gusev, Anatoly V.; Serykh, Ilya V.; Sidorova, Alexandra N.; Figurkin, Alexander L.; Anisimov, Ivan M.
2017-07-01
Spatial patterns in multi-decadal variability of upper ocean heat content over the last 60 years are examined using a numerical model developed at the Institute of Numerical Mathematics of Russia (INM Model) and sea water temperature-salinity data from the World Ocean Database (in: Levitus, NOAA Atlas NESDIS 66, U.S. Wash.: Gov. Printing Office, 2009). Both the model and the observational data show that the heat content of the Active Upper Layer (AUL) in particular regions of the Atlantic, Pacific and Southern oceans has experienced prominent simultaneous variations on multi-decadal (25-35 years) time scales. These variations are compared with the climatic alternations previously revealed in the North Atlantic region during the last century (Byshev et al. in Doklady Earth Sci 438(2):887-892, 2011). We found that from the middle of the 1970s to the end of the 1990s the AUL heat content decreased in several oceanic regions, while the mean surface temperature increased over Northern Hemisphere continents according to IPCC (in: Stocker et al. Contribution of working group I to the fifth assessment report of the intergovernmental panel on climate change, Cambridge University Press, Cambridge, 2013). This means that the climate-forcing effect of the ocean-atmosphere interaction in certain energy-active areas determines not only local climatic processes but also has an influence on global-scale climate phenomena. Here we show that specific regional features of the AUL thermal structure are in good agreement with climatic conditions on the adjacent continents. Further, the ocean AUL in the five distinctive regions identified in our study has resumed warming in the first decade of this century. By analogy with previous climate scenarios, this may signal the onset of a more continental climate over the mainlands.
A mathematical model for active contraction in healthy and failing myocytes and left ventricles.
Cai, Li; Wang, Yongheng; Gao, Hao; Li, Yiqiang; Luo, Xiaoyu
2017-01-01
Cardiovascular disease is one of the leading causes of death worldwide, in particular myocardial dysfunction, which may eventually lead to heart failure. Understanding the electro-mechanics of the heart will help in developing more effective clinical treatments. In this paper, we present a multi-scale electro-mechanics model of the left ventricle (LV). The Holzapfel-Ogden constitutive law was used to describe the passive myocardial response at the tissue level, a modified Grandi-Pasqualini-Bers model was adopted to model calcium dynamics in individual myocytes, and the active tension was described using the Niederer-Hunter-Smith myofilament model. We first studied the electro-mechanics coupling in a single myocyte in the healthy and diseased left ventricle, and then the single-cell model was embedded in a dynamic LV model to investigate the compensation mechanism of LV pump function due to myocardial dysfunction caused by abnormality in cellular calcium dynamics. The multi-scale LV model was solved using an in-house developed hybrid immersed boundary method with finite element extension. The predictions of the healthy LV model agreed well with the clinical measurements and other studies, and likewise, the results in the failing states were also consistent with clinical observations. In particular, we found that a low level of intracellular Ca2+ transient in myocytes can result in LV pump function failure even with increased myocardial contractility, decreased systolic blood pressure, and increased diastolic filling pressure, even though these will increase LV stroke volume. Our work suggests that treatments targeted at increasing contractility and lowering the systolic blood pressure alone are not sufficient to prevent LV pump dysfunction; restoring a balanced physiological Ca2+ handling mechanism is necessary.
Multiscale modeling and simulation of brain blood flow
NASA Astrophysics Data System (ADS)
Perdikaris, Paris; Grinberg, Leopold; Karniadakis, George Em
2016-02-01
The aim of this work is to present an overview of recent advances in multi-scale modeling of brain blood flow. In particular, we present some approaches that enable the in silico study of multi-scale and multi-physics phenomena in the cerebral vasculature. We discuss the formulation of continuum and atomistic modeling approaches, present a consistent framework for their concurrent coupling, and list some of the challenges that one needs to overcome in achieving a seamless and scalable integration of heterogeneous numerical solvers. The effectiveness of the proposed framework is demonstrated in a realistic case involving modeling the thrombus formation process taking place on the wall of a patient-specific cerebral aneurysm. This highlights the ability of multi-scale algorithms to resolve important biophysical processes that span several spatial and temporal scales, potentially yielding new insight into the key aspects of brain blood flow in health and disease. Finally, we discuss open questions in multi-scale modeling and emerging topics of future research.
Multiphase flow models for hydraulic fracturing technology
NASA Astrophysics Data System (ADS)
Osiptsov, Andrei A.
2017-10-01
The technology of hydraulic fracturing of a hydrocarbon-bearing formation is based on pumping a fluid with particles into a well to create fractures in a porous medium. After the end of pumping, the fractures filled with closely packed proppant particles create highly conductive channels for hydrocarbon flow from the far-field reservoir through the well to the surface. The design of the hydraulic fracturing treatment is carried out with a simulator. Those simulators are based on mathematical models, which need to be accurate and close to physical reality. The entire process of fracture placement and flowback/cleanup can be conventionally split into the following four stages: (i) quasi-steady state, effectively single-phase suspension flow down the wellbore, (ii) particle transport in an open vertical fracture, (iii) displacement of fracturing fluid by hydrocarbons from the closed fracture filled with a random close pack of proppant particles, and, finally, (iv) highly transient gas-liquid flow in a well during cleanup. Stage (i) is relatively well described by the existing hydraulics models, while the models for the other three stages of the process need revisiting and considerable improvement, which was the focus of the author's research presented in this review paper. For stage (ii), we consider the derivation of a multi-fluid model for suspension flow in a narrow vertical hydraulic fracture at moderate Re on the scale of fracture height and length, and also the migration of particles across the flow on the scale of fracture width. At the stage of fracture cleanup (iii), a novel multi-continua model for suspension filtration is developed. To provide closure relationships for the permeability of proppant packings to be used in this model, a 3D direct numerical simulation of single-phase flow is carried out using the lattice-Boltzmann method. For wellbore cleanup (iv), we present a combined 1D model for highly transient gas-liquid flow based on the combination of multi-fluid and drift-flux approaches. The derivation of the drift-flux model from conservation laws is critically revisited in order to define the list of underlying assumptions and to mark the applicability margins of the model. All these fundamental problems share the same technological application (hydraulic fracturing) and the same method of research, namely, the multi-fluid approach to multiphase flow modeling and the consistent use of asymptotic methods. Multi-fluid models are then discussed in comparison with semi-empirical (often postulated) models widely used in the industry.
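For reference, the drift-flux closure mentioned above relates the gas-phase velocity to the mixture volumetric flux; in its standard form (the profile parameter \(C_0\) and drift velocity \(u_d\) are flow-regime-dependent closures),
\[
u_g = C_0\, j + u_d, \qquad j = \alpha_g u_g + \left(1 - \alpha_g\right) u_l ,
\]
where \(\alpha_g\) is the gas volume fraction and \(u_l\) the liquid velocity.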
ERIC Educational Resources Information Center
Dunekacke, Simone; Jenßen, Lars; Eilerts, Katja; Blömeke, Sigrid
2016-01-01
Teacher competence is a multi-dimensional construct that includes beliefs as well as knowledge. The present study investigated the structure of prospective preschool teachers' mathematics-related beliefs and their relation to content knowledge and pedagogical content knowledge. In addition, prospective preschool teachers' perception and planning…
Multi-scale signed envelope inversion
NASA Astrophysics Data System (ADS)
Chen, Guo-Xin; Wu, Ru-Shan; Wang, Yu-Qing; Chen, Sheng-Chang
2018-06-01
Envelope inversion based on a modulation signal model was proposed to reconstruct large-scale structures of underground media. To overcome the shortcomings of conventional envelope inversion, multi-scale envelope inversion was proposed, using a new envelope Fréchet derivative and a multi-scale inversion strategy to invert strong-contrast models. In multi-scale envelope inversion, amplitude demodulation is used to extract the low-frequency information from the envelope data. However, using the amplitude demodulation method alone discards wavefield polarity information, increasing the possibility that the inversion yields multiple solutions. In this paper we propose a new demodulation method which retains both the amplitude and the polarity information of the envelope data. We then introduce this demodulation method into multi-scale envelope inversion and propose a new misfit functional: multi-scale signed envelope inversion. In the numerical tests, we applied the new inversion method to the salt layer model and the SEG/EAGE 2-D Salt model using a low-cut source (frequency components below 4 Hz were truncated). The numerical results demonstrate the effectiveness of this method.
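As a rough illustration of the demodulation step, the envelope of a trace can be computed from its analytic signal, and a polarity-preserving ("signed") envelope can be formed by reattaching the sign of the trace; this particular weighting is an assumption for illustration, not necessarily the authors' demodulation operator.

```python
import numpy as np
from scipy.signal import hilbert

def envelope(trace):
    """Amplitude envelope of a trace via the analytic signal."""
    return np.abs(hilbert(trace))

def signed_envelope(trace):
    """Envelope weighted by the local polarity of the trace (illustrative choice)."""
    return np.sign(trace) * np.abs(hilbert(trace))

# Example: a negative-polarity, band-limited wavelet keeps its sign information.
t = np.linspace(0.0, 1.0, 1000)
trace = -np.exp(-200.0 * (t - 0.5) ** 2) * np.cos(2 * np.pi * 30 * (t - 0.5))
print(envelope(trace).max(), signed_envelope(trace).min())
```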
On uncertainty quantification in hydrogeology and hydrogeophysics
NASA Astrophysics Data System (ADS)
Linde, Niklas; Ginsbourger, David; Irving, James; Nobile, Fabio; Doucet, Arnaud
2017-12-01
Recent advances in sensor technologies, field methodologies, numerical modeling, and inversion approaches have contributed to unprecedented imaging of hydrogeological properties and detailed predictions at multiple temporal and spatial scales. Nevertheless, imaging results and predictions will always remain imprecise, which calls for appropriate uncertainty quantification (UQ). In this paper, we outline selected methodological developments together with pioneering UQ applications in hydrogeology and hydrogeophysics. The applied mathematics and statistics literature is not easy to penetrate and this review aims at helping hydrogeologists and hydrogeophysicists to identify suitable approaches for UQ that can be applied and further developed to their specific needs. To bypass the tremendous computational costs associated with forward UQ based on full-physics simulations, we discuss proxy-modeling strategies and multi-resolution (Multi-level Monte Carlo) methods. We consider Bayesian inversion for non-linear and non-Gaussian state-space problems and discuss how Sequential Monte Carlo may become a practical alternative. We also describe strategies to account for forward modeling errors in Bayesian inversion. Finally, we consider hydrogeophysical inversion, where petrophysical uncertainty is often ignored leading to overconfident parameter estimation. The high parameter and data dimensions encountered in hydrogeological and geophysical problems make UQ a complicated and important challenge that has only been partially addressed to date.
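A minimal two-level sketch of the multi-level Monte Carlo idea mentioned above, with placeholder functions standing in for cheap and expensive forward simulators.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_coarse(theta):
    """Placeholder cheap (low-resolution) forward model."""
    return theta ** 2

def forward_fine(theta):
    """Placeholder expensive (high-resolution) forward model."""
    return theta ** 2 + 0.01 * np.sin(10.0 * theta)

def mlmc_two_level(n_coarse=100_000, n_fine=1_000):
    """E[fine] is estimated as E[coarse] + E[fine - coarse].

    The cheap level uses many samples; the correction term, whose variance is
    small because both levels are evaluated on the same inputs, uses few.
    """
    th_coarse = rng.normal(size=n_coarse)
    th_fine = rng.normal(size=n_fine)
    level0 = forward_coarse(th_coarse).mean()
    correction = (forward_fine(th_fine) - forward_coarse(th_fine)).mean()
    return level0 + correction

print(mlmc_two_level())  # estimate of E[forward_fine(theta)] for theta ~ N(0, 1)
```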
Modelling and control system of multi motor conveyor
NASA Astrophysics Data System (ADS)
Kovalchuk, M. S.; Baburin, S. V.
2018-03-01
The paper addresses the topical problem of developing a mathematical model of an electromechanical system: a conveyor driven by a multi-motor electric drive with a frequency converter. The model is implemented in Simulink/MATLAB and allows studies of conveyor operating modes that take into account the specifics of the mechanism under different electric drive control algorithms. The authors designed mathematical models of the conveyor and of a control system that provides more uniform load distribution between the drive motors and limits dynamic loads on the belt (overshoot of up to 15%).
Sriyudthsak, Kansuporn; Iwata, Michio; Hirai, Masami Yokota; Shiraishi, Fumihide
2014-06-01
The availability of large-scale datasets has led to more effort being made to understand characteristics of metabolic reaction networks. However, because the large-scale data are semi-quantitative, and may contain biological variations and/or analytical errors, it remains a challenge to construct a mathematical model with precise parameters using only these data. The present work proposes a simple method, referred to as PENDISC (Parameter Estimation in a Non-DImensionalized S-system with Constraints), to assist the complex process of parameter estimation in the construction of a mathematical model for a given metabolic reaction system. The PENDISC method was evaluated using two simple mathematical models: a linear metabolic pathway model with inhibition and a branched metabolic pathway model with inhibition and activation. The results indicate that a smaller number of data points and rate constant parameters enhances the agreement between calculated values and time-series data of metabolite concentrations, and leads to faster convergence when the same initial estimates are used for the fitting. This method is also shown to be applicable to noisy time-series data and to unmeasurable metabolite concentrations in a network, and to have a potential to handle metabolome data of a relatively large-scale metabolic reaction system. Furthermore, it was applied to aspartate-derived amino acid biosynthesis in the plant Arabidopsis thaliana. The result confirms that the constructed mathematical model agrees satisfactorily with the time-series datasets of seven metabolite concentrations.
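For reference, the canonical (power-law) S-system form on which the PENDISC estimation operates is
\[
\frac{dX_i}{dt} = \alpha_i \prod_{j=1}^{n} X_j^{\,g_{ij}} \;-\; \beta_i \prod_{j=1}^{n} X_j^{\,h_{ij}}, \qquad i = 1,\dots,n,
\]
with rate constants \(\alpha_i, \beta_i\) and kinetic orders \(g_{ij}, h_{ij}\); the method non-dimensionalizes this system and imposes constraints among the parameters to ease their estimation.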
Pohlheim, Hartmut
2006-01-01
Multidimensional scaling as a technique for the presentation of high-dimensional data with standard visualization techniques is presented. The technique used is often known as Sammon mapping. We explain the mathematical foundations of multidimensional scaling and its robust calculation. We also demonstrate the use of this technique in the area of evolutionary algorithms. First, we present the visualization of the path through the search space of the best individuals during an optimization run. We then apply multidimensional scaling to the comparison of multiple runs regarding the variables of individuals and multi-criteria objective values (path through the solution space).
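A minimal sketch of Sammon mapping by direct numerical minimization of Sammon's stress; this is a generic implementation for illustration, not the authors' code, and the random population data are placeholders.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist

def sammon_stress(flat_y, d_high, n, dim):
    """Sammon's stress between original distances d_high and the low-dimensional layout."""
    d_low = np.maximum(pdist(flat_y.reshape(n, dim)), 1e-12)
    d_high = np.maximum(d_high, 1e-12)
    return np.sum((d_high - d_low) ** 2 / d_high) / d_high.sum()

def sammon_map(X, dim=2, seed=0):
    """Project the rows of X to `dim` dimensions by minimizing Sammon's stress."""
    n = X.shape[0]
    d_high = pdist(X)
    y0 = np.random.default_rng(seed).normal(scale=1e-2, size=n * dim)
    res = minimize(sammon_stress, y0, args=(d_high, n, dim), method="L-BFGS-B")
    return res.x.reshape(n, dim)

# Example: project placeholder "best individuals" from several runs to 2-D.
X = np.random.default_rng(2).normal(size=(50, 10))
Y = sammon_map(X)
print(Y.shape)  # (50, 2)
```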
Vickers, T. Winston; Ernest, Holly B.; Boyce, Walter M.
2017-01-01
The importance of examining multiple hierarchical levels when modeling resource use for wildlife has been acknowledged for decades. Multi-level resource selection functions have recently been promoted as a method to synthesize resource use across nested organizational levels into a single predictive surface. Analyzing multiple scales of selection within each hierarchical level further strengthens multi-level resource selection functions. We extend this multi-level, multi-scale framework to modeling resistance for wildlife by combining multi-scale resistance surfaces from two data types, genetic and movement. Resistance estimation has typically been conducted with one of these data types, or compared between the two. However, we contend it is not an either/or issue and that resistance may be better-modeled using a combination of resistance surfaces that represent processes at different hierarchical levels. Resistance surfaces estimated from genetic data characterize temporally broad-scale dispersal and successful breeding over generations, whereas resistance surfaces estimated from movement data represent fine-scale travel and contextualized movement decisions. We used telemetry and genetic data from a long-term study on pumas (Puma concolor) in a highly developed landscape in southern California to develop a multi-level, multi-scale resource selection function and a multi-level, multi-scale resistance surface. We used these multi-level, multi-scale surfaces to identify resource use patches and resistant kernel corridors. Across levels, we found puma avoided urban, agricultural areas, and roads and preferred riparian areas and more rugged terrain. For other landscape features, selection differed among levels, as did the scales of selection for each feature. With these results, we developed a conservation plan for one of the most isolated puma populations in the U.S. Our approach captured a wide spectrum of ecological relationships for a population, resulted in effective conservation planning, and can be readily applied to other wildlife species. PMID:28609466
Zeller, Katherine A; Vickers, T Winston; Ernest, Holly B; Boyce, Walter M
2017-01-01
The importance of examining multiple hierarchical levels when modeling resource use for wildlife has been acknowledged for decades. Multi-level resource selection functions have recently been promoted as a method to synthesize resource use across nested organizational levels into a single predictive surface. Analyzing multiple scales of selection within each hierarchical level further strengthens multi-level resource selection functions. We extend this multi-level, multi-scale framework to modeling resistance for wildlife by combining multi-scale resistance surfaces from two data types, genetic and movement. Resistance estimation has typically been conducted with one of these data types, or compared between the two. However, we contend it is not an either/or issue and that resistance may be better-modeled using a combination of resistance surfaces that represent processes at different hierarchical levels. Resistance surfaces estimated from genetic data characterize temporally broad-scale dispersal and successful breeding over generations, whereas resistance surfaces estimated from movement data represent fine-scale travel and contextualized movement decisions. We used telemetry and genetic data from a long-term study on pumas (Puma concolor) in a highly developed landscape in southern California to develop a multi-level, multi-scale resource selection function and a multi-level, multi-scale resistance surface. We used these multi-level, multi-scale surfaces to identify resource use patches and resistant kernel corridors. Across levels, we found puma avoided urban, agricultural areas, and roads and preferred riparian areas and more rugged terrain. For other landscape features, selection differed among levels, as did the scales of selection for each feature. With these results, we developed a conservation plan for one of the most isolated puma populations in the U.S. Our approach captured a wide spectrum of ecological relationships for a population, resulted in effective conservation planning, and can be readily applied to other wildlife species.
Automated Image Registration Using Morphological Region of Interest Feature Extraction
NASA Technical Reports Server (NTRS)
Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.
2005-01-01
With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords-Automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.
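A simplified, isotropic sketch of building a morphological scale profile with grey-scale openings and closings at increasing structuring-element sizes; the orientation dimension and the subsequent dissimilarity-based chip extraction are omitted, and the function below is illustrative rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def morphological_profile(band, sizes=(3, 5, 7, 9)):
    """Per-pixel stack of grey-scale openings and closings at increasing sizes.

    Openings suppress bright structures smaller than the structuring element,
    closings suppress dark ones, so the stack encodes a coarse scale profile.
    """
    feats = []
    for s in sizes:
        feats.append(grey_opening(band, size=(s, s)))
        feats.append(grey_closing(band, size=(s, s)))
    return np.stack(feats, axis=-1)

# Example on random data standing in for one band of a remotely sensed image.
band = np.random.default_rng(3).random((128, 128))
profile = morphological_profile(band)
print(profile.shape)  # (128, 128, 8)
```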
NASA Astrophysics Data System (ADS)
Yang, Xiang
2017-11-01
The sizes of fluid motions in wall-bounded flows scale approximately as their distances from the wall. At high Reynolds numbers, resolving the near-wall, small-scale, yet momentum-transferring eddies is computationally intensive, and to alleviate the strict near-wall grid resolution requirement, a wall model is usually used. The wall model of interest here is the integral wall model. This model parameterizes the near-wall sub-grid velocity profile as comprising a linear inner layer and a logarithmic meso-layer, with one additional term that accounts for the effects of flow acceleration, pressure gradients, etc. We use the integral wall model for wall-modeled large-eddy simulations (WMLES) of turbulent boundary layers over rough walls. The effects of rough-wall topology on drag forces are investigated. A rough-wall model is then developed based on considerations of such effects, which are now known as mutual sheltering among roughness elements. Last, we briefly discuss a new interpretation of the Townsend attached-eddy hypothesis: the hierarchical random additive process (HRAP) model. The analogy between the energy cascade and the momentum cascade is mathematically formal, as HRAP follows the multi-fractal formalism, which has been used extensively for the energy cascade.
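Schematically, the parameterized sub-grid profile described above combines a linear viscous layer with a logarithmic layer plus a linear correction; the form below is only a sketch of that structure under simplifying assumptions, not the exact parameterization or matching procedure of the integral wall model:
\[
u(y) \approx
\begin{cases}
\dfrac{u_\tau^2\, y}{\nu}, & 0 \le y \le \delta_i,\\[8pt]
u_\tau\!\left[\dfrac{1}{\kappa}\ln\!\left(\dfrac{y}{\delta_i}\right) + \dfrac{u_\tau \delta_i}{\nu}\right] + C\,\dfrac{y}{\Delta}, & \delta_i < y \le \Delta,
\end{cases}
\]
where \(\delta_i\) is the inner-layer thickness, \(\Delta\) the wall-model height, and the coefficient \(C\) schematically absorbs acceleration and pressure-gradient effects.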
Multi-parametric centrality method for graph network models
NASA Astrophysics Data System (ADS)
Ivanov, Sergei Evgenievich; Gorlushkina, Natalia Nikolaevna; Ivanova, Lubov Nikolaevna
2018-04-01
Graph network models are investigated to determine the centrality, weights, and significance of vertices. Centrality analysis typically applies a method based on a single property of the graph vertices. In graph theory, centrality is commonly analyzed in terms of degree, closeness, betweenness, radiality, eccentricity, PageRank, status, Katz centrality, and eigenvector centrality. We propose a new multi-parametric centrality method that combines several basic properties of a network member. A mathematical model of the multi-parametric centrality method is developed. The results of the presented method are compared with those of the standard centrality methods. To evaluate the multi-parametric centrality method, a graph model with hundreds of vertices is analyzed. The comparative analysis confirms the accuracy of the presented method, which simultaneously accounts for several basic properties of the vertices.
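A minimal sketch of combining several standard centrality measures into one multi-parametric score using networkx; the equal-weight, normalized-sum aggregation is an illustrative assumption, not the paper's specific rule.

```python
import networkx as nx

def multi_parametric_centrality(G, weights=None):
    """Combine several standard centrality measures into one score per vertex.

    Each measure is normalized by its maximum and the normalized values are
    summed with the given weights (equal weights by default).
    """
    measures = {
        "degree": nx.degree_centrality(G),
        "closeness": nx.closeness_centrality(G),
        "betweenness": nx.betweenness_centrality(G),
        "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
        "pagerank": nx.pagerank(G),
    }
    weights = weights or {name: 1.0 / len(measures) for name in measures}
    scores = {}
    for v in G.nodes:
        scores[v] = sum(
            weights[name] * vals[v] / max(vals.values())
            for name, vals in measures.items()
        )
    return scores

G = nx.karate_club_graph()  # small example graph
scores = multi_parametric_centrality(G)
print(max(scores, key=scores.get))  # vertex ranked highest by the combined score
```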
Muir, Ryan D.; Pogranichney, Nicholas R.; Muir, J. Lewis; Sullivan, Shane Z.; Battaile, Kevin P.; Mulichak, Anne M.; Toth, Scott J.; Keefe, Lisa J.; Simpson, Garth J.
2014-01-01
Experiments and modeling are described to perform spectral fitting of multi-threshold counting measurements on a pixel-array detector. An analytical model was developed for describing the probability density function of detected voltage in X-ray photon-counting arrays, utilizing fractional photon counting to account for edge/corner effects from voltage plumes that spread across multiple pixels. Each pixel was mathematically calibrated by fitting the detected voltage distributions to the model at both 13.5 keV and 15.0 keV X-ray energies. The model and established pixel responses were then exploited to statistically recover images of X-ray intensity as a function of X-ray energy in a simulated multi-wavelength and multi-counting threshold experiment. PMID:25178010
Muir, Ryan D; Pogranichney, Nicholas R; Muir, J Lewis; Sullivan, Shane Z; Battaile, Kevin P; Mulichak, Anne M; Toth, Scott J; Keefe, Lisa J; Simpson, Garth J
2014-09-01
Experiments and modeling are described to perform spectral fitting of multi-threshold counting measurements on a pixel-array detector. An analytical model was developed for describing the probability density function of detected voltage in X-ray photon-counting arrays, utilizing fractional photon counting to account for edge/corner effects from voltage plumes that spread across multiple pixels. Each pixel was mathematically calibrated by fitting the detected voltage distributions to the model at both 13.5 keV and 15.0 keV X-ray energies. The model and established pixel responses were then exploited to statistically recover images of X-ray intensity as a function of X-ray energy in a simulated multi-wavelength and multi-counting threshold experiment.
NASA Astrophysics Data System (ADS)
Chen, Miawjane; Yan, Shangyao; Wang, Sin-Siang; Liu, Chiu-Lan
2015-02-01
An effective project schedule is essential for enterprises to increase their efficiency of project execution, to maximize profit, and to minimize wastage of resources. Heuristic algorithms have been developed to efficiently solve the complicated multi-mode resource-constrained project scheduling problem with discounted cash flows (MRCPSPDCF) that characterize real problems. However, the solutions obtained in past studies have been approximate and are difficult to evaluate in terms of optimality. In this study, a generalized network flow model, embedded in a time-precedence network, is proposed to formulate the MRCPSPDCF with the payment at activity completion times. Mathematically, the model is formulated as an integer network flow problem with side constraints, which can be efficiently solved for optimality, using existing mathematical programming software. To evaluate the model performance, numerical tests are performed. The test results indicate that the model could be a useful planning tool for project scheduling in the real world.
Quality by design: scale-up of freeze-drying cycles in pharmaceutical industry.
Pisano, Roberto; Fissore, Davide; Barresi, Antonello A; Rastelli, Massimo
2013-09-01
This paper shows the application of mathematical modeling to scale-up a cycle developed with lab-scale equipment on two different production units. The above method is based on a simplified model of the process parameterized with experimentally determined heat and mass transfer coefficients. In this study, the overall heat transfer coefficient between product and shelf was determined by using the gravimetric procedure, while the dried product resistance to vapor flow was determined through the pressure rise test technique. Once model parameters were determined, the freeze-drying cycle of a parenteral product was developed via dynamic design space for a lab-scale unit. Then, mathematical modeling was used to scale-up the above cycle in the production equipment. In this way, appropriate values were determined for processing conditions, which allow the replication, in the industrial unit, of the product dynamics observed in the small scale freeze-dryer. This study also showed how inter-vial variability, as well as model parameter uncertainty, can be taken into account during scale-up calculations.
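The simplified process model used in this line of work balances the shelf-to-product heat flux against the heat consumed by sublimation through the dried layer; in its usual pseudo-steady form,
\[
J_q = K_v\left(T_{\mathrm{shelf}} - T_{\mathrm{product}}\right), \qquad
J_w = \frac{p_{w,i} - p_{w,c}}{R_p}, \qquad
J_q = \Delta H_s\, J_w ,
\]
where \(K_v\) is the vial heat-transfer coefficient (here measured gravimetrically), \(R_p\) the dried-product resistance to vapor flow (here obtained from the pressure rise test), \(p_{w,i}\) the ice vapor pressure at the sublimation interface, \(p_{w,c}\) the chamber water vapor partial pressure, and \(\Delta H_s\) the heat of sublimation. This is a sketch of the standard balance, not necessarily the authors' complete model.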
Optimal harvesting for a predator-prey agent-based model using difference equations.
Oremland, Matthew; Laubenbacher, Reinhard
2015-03-01
In this paper, a method known as Pareto optimization is applied in the solution of a multi-objective optimization problem. The system in question is an agent-based model (ABM) wherein global dynamics emerge from local interactions. A system of discrete mathematical equations is formulated in order to capture the dynamics of the ABM; while the original model is built up analytically from the rules of the model, the paper shows how minor changes to the ABM rule set can have a substantial effect on model dynamics. To address this issue, we introduce parameters into the equation model that track such changes. The equation model is amenable to mathematical theory—we show how stability analysis can be performed and validated using ABM data. We then reduce the equation model to a simpler version and implement changes to allow controls from the ABM to be tested using the equations. Cohen's weighted κ is proposed as a measure of similarity between the equation model and the ABM, particularly with respect to the optimization problem. The reduced equation model is used to solve a multi-objective optimization problem via a technique known as Pareto optimization, a heuristic evolutionary algorithm. Results show that the equation model is a good fit for ABM data; Pareto optimization provides a suite of solutions to the multi-objective optimization problem that can be implemented directly in the ABM.
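A brute-force sketch of extracting the non-dominated (Pareto) set from a collection of objective vectors, adequate for the modest candidate sets a heuristic search produces; this is a generic illustration, not the authors' specific evolutionary algorithm.

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset of objective vectors (all objectives minimized)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is no worse in every objective
        # and strictly better in at least one.
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]

# Example with two objectives (e.g. harvesting cost and negative final prey population).
candidates = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 3.5], [4.0, 1.0], [2.5, 2.5]])
print(pareto_front(candidates))  # [3.0, 3.5] is dominated and dropped
```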
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perdikaris, Paris, E-mail: parisp@mit.edu; Grinberg, Leopold, E-mail: leopoldgrinberg@us.ibm.com; Karniadakis, George Em, E-mail: george-karniadakis@brown.edu
The aim of this work is to present an overview of recent advances in multi-scale modeling of brain blood flow. In particular, we present some approaches that enable the in silico study of multi-scale and multi-physics phenomena in the cerebral vasculature. We discuss the formulation of continuum and atomistic modeling approaches, present a consistent framework for their concurrent coupling, and list some of the challenges that one needs to overcome in achieving a seamless and scalable integration of heterogeneous numerical solvers. The effectiveness of the proposed framework is demonstrated in a realistic case involving modeling the thrombus formation process taking place on the wall of a patient-specific cerebral aneurysm. This highlights the ability of multi-scale algorithms to resolve important biophysical processes that span several spatial and temporal scales, potentially yielding new insight into the key aspects of brain blood flow in health and disease. Finally, we discuss open questions in multi-scale modeling and emerging topics of future research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schunert, Sebastian; Schwen, Daniel; Ghassemi, Pedram
This work presents a multi-physics, multi-scale approach to modeling the Transient Test Reactor (TREAT) currently prepared for restart at the Idaho National Laboratory. TREAT fuel is made up of microscopic fuel grains (r ≈ 20 µm) dispersed in a graphite matrix. The novelty of this work is in coupling a binary collision Monte-Carlo (BCMC) model to the Finite Element based code Moose for solving a microscopic heat-conduction problem whose driving source is provided by the BCMC model tracking fission fragment energy deposition. This microscopic model is driven by a transient, engineering-scale neutronics model coupled to an adiabatic heating model. The macroscopic model provides local power densities and neutron energy spectra to the microscopic model. Currently, no feedback from the microscopic to the macroscopic model is considered. TREAT transient 15 is used to exemplify the capabilities of the multi-physics, multi-scale model, and it is found that the average fuel grain temperature differs from the average graphite temperature by 80 K despite the low-power transient. The large temperature difference has strong implications for the Doppler feedback a potential LEU TREAT core would see, and it underpins the need for multi-physics, multi-scale modeling of a TREAT LEU core.
Advanced core-analyses for subsurface characterization
NASA Astrophysics Data System (ADS)
Pini, R.
2017-12-01
The heterogeneity of geological formations varies over a wide range of length scales and represents a major challenge for predicting the movement of fluids in the subsurface. Although they are inherently limited in the accessible length-scale, laboratory measurements on reservoir core samples still represent the only way to make direct observations on key transport properties. Yet, properties derived on these samples are of limited use and should be regarded as sample-specific (or `pseudos'), if the presence of sub-core scale heterogeneities is not accounted for in data processing and interpretation. The advent of imaging technology has significantly reshaped the landscape of so-called Special Core Analysis (SCAL) by providing unprecedented insight on rock structure and processes down to the scale of a single pore throat (i.e. the scale at which all reservoir processes operate). Accordingly, improved laboratory workflows are needed that make use of such wealth of information by e.g., referring to the internal structure of the sample and in-situ observations, to obtain accurate parameterisation of both rock- and flow-properties that can be used to populate numerical models. We report here on the development of such workflow for the study of solute mixing and dispersion during single- and multi-phase flows in heterogeneous porous systems through a unique combination of two complementary imaging techniques, namely X-ray Computed Tomography (CT) and Positron Emission Tomography (PET). The experimental protocol is applied to both synthetic and natural porous media, and it integrates (i) macroscopic observations (tracer effluent curves), (ii) sub-core scale parameterisation of rock heterogeneities (e.g., porosity, permeability and capillary pressure), and direct 3D observation of (iii) fluid saturation distribution and (iv) the dynamic spreading of the solute plumes. Suitable mathematical models are applied to reproduce experimental observations, including both 1D and 3D numerical schemes populated with the parameterisation above. While it validates the core-flooding experiments themselves, the calibrated mathematical model represents a key element for extending them to conditions prevalent in the subsurface, which would be otherwise not attainable in the laboratory.
Constructing Rigorous and Broad Biosurveillance Networks for Detecting Emerging Zoonotic Outbreaks
Brown, Mac; Moore, Leslie; McMahon, Benjamin; Powell, Dennis; LaBute, Montiago; Hyman, James M.; Rivas, Ariel; Jankowski, Mark; Berendzen, Joel; Loeppky, Jason; Manore, Carrie; Fair, Jeanne
2015-01-01
Determining optimal surveillance networks for an emerging pathogen is difficult since it is not known beforehand what the characteristics of a pathogen will be or where it will emerge. The resources for surveillance of infectious diseases in animals and wildlife are often limited, and mathematical modeling can play a supporting role in examining a wide range of scenarios of pathogen spread. We demonstrate how a hierarchy of mathematical and statistical tools can be used in surveillance planning to help guide successful surveillance and mitigation policies for a wide range of zoonotic pathogens. The model forecasts can help clarify the complexities of potential scenarios, and optimize biosurveillance programs for rapidly detecting infectious diseases. Using the highly pathogenic zoonotic H5N1 avian influenza 2006-2007 epidemic in Nigeria as an example, we determined the risk for infection for localized areas in an outbreak and designed biosurveillance stations that are effective for different pathogen strains and a range of possible outbreak locations. We created a general multi-scale, multi-host stochastic SEIR epidemiological network model, with both short- and long-range movement, to simulate the spread of an infectious disease through Nigerian human, poultry, backyard duck, and wild bird populations. We chose parameter ranges specific to avian influenza (but not to a particular strain) and used a Latin hypercube sample experimental design to investigate epidemic predictions in a thousand simulations. We ranked the risk of local regions by the number of times they became infected in the ensemble of simulations. These spatial statistics were then compiled into a potential risk map of infection. Finally, we validated the results with a known outbreak, using spatial analysis of all the simulation runs to show that the progression matched closely the observed locations of the farms infected in the 2006-2007 epidemic. PMID:25946164
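A single-population sketch of the discrete-time stochastic SEIR dynamics that underlie such a network model; the multi-host structure, spatial coupling through short- and long-range movement, and the Latin hypercube parameter sampling are omitted, and the parameter values are placeholders.

```python
import numpy as np

def stochastic_seir(beta=0.4, sigma=0.2, gamma=0.1, n=10_000, i0=5, days=200, seed=4):
    """Chain-binomial stochastic SEIR for a single, well-mixed population."""
    rng = np.random.default_rng(seed)
    S, E, I, R = n - i0, 0, i0, 0
    history = []
    for _ in range(days):
        p_inf = 1.0 - np.exp(-beta * I / n)            # per-susceptible infection probability
        new_E = rng.binomial(S, p_inf)                 # S -> E
        new_I = rng.binomial(E, 1.0 - np.exp(-sigma))  # E -> I after incubation
        new_R = rng.binomial(I, 1.0 - np.exp(-gamma))  # I -> R (recovery/removal)
        S, E, I, R = S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R
        history.append((S, E, I, R))
    return history

print(stochastic_seir()[-1])  # final (S, E, I, R) of one realization
```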
Basic numerical competences in large-scale assessment data: Structure and long-term relevance.
Hirsch, Stefa; Lambert, Katharina; Coppens, Karien; Moeller, Korbinian
2018-03-01
Basic numerical competences are seen as building blocks for later numerical and mathematical achievement. The current study aimed at investigating the structure of early numeracy reflected by different basic numerical competences in kindergarten and its predictive value for mathematical achievement 6 years later using data from large-scale assessment. This allowed analyses based on considerably large sample sizes (N > 1700). A confirmatory factor analysis indicated that a model differentiating five basic numerical competences at the end of kindergarten fitted the data better than a one-factor model of early numeracy representing a comprehensive number sense. In addition, these basic numerical competences were observed to reliably predict performance in a curricular mathematics test in Grade 6 even after controlling for influences of general cognitive ability. Thus, our results indicated a differentiated view on early numeracy considering basic numerical competences in kindergarten reflected in large-scale assessment data. Consideration of different basic numerical competences allows for evaluating their specific predictive value for later mathematical achievement but also mathematical learning difficulties. Copyright © 2017 Elsevier Inc. All rights reserved.
Tackling some of the most intricate geophysical challenges via high-performance computing
NASA Astrophysics Data System (ADS)
Khosronejad, A.
2016-12-01
Recently, the world has been witnessing significant enhancements in the computing power of supercomputers. Computer clusters, in conjunction with advanced mathematical algorithms, have set the stage for developing and applying powerful numerical tools to tackle some of the most intricate geophysical challenges that today's engineers face. One such challenge is to understand how turbulent flows, in real-world settings, interact with (a) rigid and/or mobile complex bed bathymetry of waterways and sea-beds in coastal areas; (b) objects with complex geometry that are fully or partially immersed; and (c) the free surface of waterways and water-surface waves in coastal areas. This understanding is especially important because turbulent flows in real-world environments are often bounded by geometrically complex boundaries, which dynamically deform and give rise to multi-scale and multi-physics transport phenomena, and are characterized by multi-lateral interactions among various phases (e.g., air/water/sediment). Herein, I present some of the multi-scale and multi-physics geophysical fluid mechanics processes that I have attempted to study using an in-house high-performance computational model, the so-called VFS-Geophysics. More specifically, I will present the simulation results of turbulence/sediment/solute/turbine interactions in real-world settings. Parts of the simulations I present are performed to gain scientific insights into processes such as sand wave formation (A. Khosronejad and F. Sotiropoulos (2014), Numerical simulation of sand waves in a turbulent open channel flow, Journal of Fluid Mechanics, 753:150-216), while others are carried out to predict the effects of climate change and large flood events on societal infrastructures (A. Khosronejad et al. (2016), Large eddy simulation of turbulence and solute transport in a forested headwater stream, Journal of Geophysical Research, doi: 10.1002/2014JF003423).
Time-Series INSAR: An Integer Least-Squares Approach For Distributed Scatterers
NASA Astrophysics Data System (ADS)
Samiei-Esfahany, Sami; Hanssen, Ramon F.
2012-01-01
The objective of this research is to extend the geodetic mathematical model which was developed for persistent scatterers to a model which can exploit distributed scatterers (DS). The main focus is on the integer least-squares framework, and the main challenge is to include the decorrelation effect in the mathematical model. In order to adapt the integer least-squares mathematical model for DS we altered the model from a single-master to a multi-master configuration and introduced the decorrelation effect stochastically. This effect is described in our model by a full covariance matrix. We propose to derive this covariance matrix by numerical integration of the (joint) probability distribution function (PDF) of interferometric phases. This PDF is a function of coherence values and can be directly computed from radar data. We show that the use of this model can improve the performance of temporal phase unwrapping of distributed scatterers.
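A rough sketch of the numerical-integration step: the single-look interferometric phase PDF (the standard closed form quoted in the InSAR literature) is integrated to obtain per-interferogram phase variances. Only a diagonal covariance is assembled here, whereas the model described above derives a full covariance matrix from the joint PDF, and multilooked data follow a different distribution.

```python
import numpy as np

def phase_pdf(phi, coherence, phi0=0.0):
    """Single-look interferometric phase PDF for a given coherence (standard closed form)."""
    b = coherence * np.cos(phi - phi0)
    return ((1.0 - coherence**2) / (2.0 * np.pi)) / (1.0 - b**2) * (
        1.0 + b * np.arccos(-b) / np.sqrt(1.0 - b**2)
    )

def phase_variance(coherence, n=20_000):
    """Phase variance about phi0 = 0 by numerical integration over (-pi, pi)."""
    phi = np.linspace(-np.pi, np.pi, n)
    p = phase_pdf(phi, coherence)
    dphi = phi[1] - phi[0]
    return float(np.sum(phi**2 * p) * dphi)

# Diagonal (per-interferogram) part of a phase covariance matrix from coherence values.
coherences = [0.9, 0.6, 0.4]
print(np.diag([phase_variance(g) for g in coherences]))
```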
This paper presents the formulation and evaluation of a mechanistic mathematical model of fathead minnow ovarian steroidogenesis. The model presented in the present study was adapted from other models developed as part of an integrated, multi-disciplinary computational toxicolog...
Jardine, Bartholomew; Raymond, Gary M; Bassingthwaighte, James B
2015-01-01
The Modular Program Constructor (MPC) is an open-source Java based modeling utility, built upon JSim's Mathematical Modeling Language (MML) ( http://www.physiome.org/jsim/) that uses directives embedded in model code to construct larger, more complicated models quickly and with less error than manually combining models. A major obstacle in writing complex models for physiological processes is the large amount of time it takes to model the myriad processes taking place simultaneously in cells, tissues, and organs. MPC replaces this task with code-generating algorithms that take model code from several different existing models and produce model code for a new JSim model. This is particularly useful during multi-scale model development where many variants are to be configured and tested against data. MPC encodes and preserves information about how a model is built from its simpler model modules, allowing the researcher to quickly substitute or update modules for hypothesis testing. MPC is implemented in Java and requires JSim to use its output. MPC source code and documentation are available at http://www.physiome.org/software/MPC/.
A Goddard Multi-Scale Modeling System with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, W.K.; Anderson, D.; Atlas, R.; Chern, J.; Houser, P.; Hou, A.; Lang, S.; Lau, W.; Peters-Lidard, C.; Kakar, R.;
2008-01-01
Numerical cloud resolving models (CRMs), which are based on the non-hydrostatic equations of motion, have been extensively applied to cloud-scale and mesoscale processes during the past four decades. Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that CRMs agree with observations in simulating various types of clouds and cloud systems from different geographic locations. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that numerical weather prediction (NWP) and regional-scale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. Using these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF). The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellites and field campaigns can provide initial conditions as well as validation through the use of Earth satellite simulators. At Goddard, we have developed a multi-scale modeling system with unified physics. The modeling system consists of a coupled GCM-CRM (or MMF), a state-of-the-art weather research forecast model (WRF), and a cloud-resolving model (the Goddard Cumulus Ensemble model). In these models, the same microphysical schemes (2ICE, several 3ICE), radiation (including explicitly calculated cloud optical properties), and surface models are applied. In addition, a comprehensive unified Earth satellite simulator has been developed at GSFC, which is designed to fully utilize the multi-scale modeling system. A brief review of the multi-scale modeling system with unified physics/simulator and examples is presented in this article.
Localized Scale Coupling and New Educational Paradigms in Multiscale Mathematics and Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
LEAL, L. GARY
2013-06-30
One of the most challenging multi-scale simulation problems in the area of multi-phase materials is to develop effective computational techniques for the prediction of coalescence and related phenomena involving rupture of a thin liquid film due to the onset of instability driven by van der Waals or other micro-scale attractive forces. Accurate modeling of this process is critical to prediction of the outcome of milling processes for immiscible polymer blends, one of the most important routes to new advanced polymeric materials. In typical situations, the blend evolves into an 'emulsion' of dispersed-phase drops in a continuous matrix fluid. Coalescence is then a critical factor in determining the size distribution of the dispersed phase, but is extremely difficult to predict from first principles. The thin film separating two drops may only achieve rupture at dimensions of approximately 10 nm while the drop sizes are O(10 µm). It is essential to achieve very accurate solutions for the flow and for the interface shape at both the macroscale of the full drops, and within the thin film (where the destabilizing disjoining pressure due to van der Waals forces is approximately proportional to the inverse third power of the local film thickness, h^-3). Furthermore, the fluids of interest are polymeric (though Newtonian) and the classical continuum description begins to fail as the film thins, requiring incorporation of molecular effects, such as a hybrid code that incorporates a version of coarse-grain molecular dynamics within the thin film coupled with a classical continuum description elsewhere in the flow domain. Finally, the presence of surface-active additives, either surfactants (in the form of di-block copolymers) or surface-functionalized micro- or nano-scale particles, adds an additional level of complexity, requiring development of a distinct numerical method to predict the nonuniform concentration gradients of these additives that are responsible for Marangoni stresses at the interface. Again, the physical dimensions of these additives may become comparable to the thin film dimensions, requiring an additional layer of multi-scale modeling.
Gardner, Shea Nicole
2007-10-23
A method and system for tailoring treatment regimens to individual patients with diseased cells exhibiting evolution of resistance to such treatments. A mathematical model is provided which models rates of population change of proliferating and quiescent diseased cells using cell kinetics and evolution of resistance of the diseased cells, and pharmacokinetic and pharmacodynamic models. Cell kinetic parameters are obtained from an individual patient and applied to the mathematical model to solve for a plurality of treatment regimens, each having a quantitative efficacy value associated therewith. A treatment regimen may then be selected from the plurality of treatment regimens based on the efficacy value.
Beyond theories of plant invasions: Lessons from natural landscapes
Stohlgren, Thomas J.
2002-01-01
There are a growing number of contrasting theories about plant invasions, but most are only weakly supported by small-scale field experiments, observational studies, and mathematical models. Among the most contentious theories is that species-rich habitats should be less vulnerable to plant invasion than species-poor sites, stemming from earlier theories that competition is a major force in structuring plant communities. Early ecologists such as Charles Darwin (1859) and Charles Elton (1958) suggested that a lack of intense interspecific competition on islands made these low-diversity habitats vulnerable to invasion. Small-scale field experiments have supported and contradicted this theory, as have various mathematical models. In contrast, many large-scale observational studies and detailed vegetation surveys in continental areas often report that species-rich areas are more heavily invaded than species-poor areas, but there are exceptions here as well. In this article, I show how these seemingly contrasting patterns converge once appropriate spatial and temporal scales are considered in complex natural environments. I suggest ways in which small-scale experiments, mathematical models, and large- scale observational studies can be improved and better integrated to advance a theoretically based understanding of plant invasions.
Objective biofidelity rating of a numerical human occupant model in frontal to lateral impact.
de Lange, Ronald; van Rooij, Lex; Mooi, Herman; Wismans, Jac
2005-11-01
Both hardware crash dummies and mathematical human models have been developed largely using the same biomechanical data. For both, biofidelity is a main requirement. Since numerical modeling is not bound to hardware crash dummy design constraints, it allows more detailed modeling of the human and offers biofidelity in multiple directions. In this study the multi-directional biofidelity of the MADYMO human occupant model is assessed, with the aim of protecting occupants under various impact conditions. To evaluate the model's biofidelity, generally accepted requirements were used for frontal and lateral impact: tests proposed by EEVC and NHTSA and tests specified by ISO TR9790, respectively. A subset of the specified experiments was simulated with the human model. For lateral impact, the results were objectively rated according to the ISO protocol. Since no rating protocol was available for frontal impact, the ISO rating scheme for lateral impact was used for frontal impact, as far as possible. As a result, two scores show the overall model biofidelity for frontal and lateral impact, while individual ratings provide insight into the quality at the body-segment level. The results were compared with the results published for the THOR and WorldSID dummies, showing that the mathematical model exhibits a high level of multi-directional biofidelity. In addition, the performance of the human model in the NBDL 11G oblique test indicates a valid behavior of the model in intermediate directions as well. A new aspect of this study is the objective assessment of the multi-directional biofidelity of the mathematical human model according to accepted requirements. Although hardware dummies may always be used in regulations, it is expected that virtual testing with human models will serve in extrapolating outside the hardware test environment. This study was a first step towards simulating a wider range of impact conditions, such as angled impact and rollover.
Program Helps Generate Boundary-Element Mathematical Models
NASA Technical Reports Server (NTRS)
Goldberg, R. K.
1995-01-01
Composite Model Generation-Boundary Element Method (COM-GEN-BEM) computer program significantly reduces time and effort needed to construct boundary-element mathematical models of continuous-fiber composite materials at micro-mechanical (constituent) scale. Generates boundary-element models compatible with BEST-CMS boundary-element code for anlaysis of micromechanics of composite material. Written in PATRAN Command Language (PCL).
NASA Astrophysics Data System (ADS)
Taher, M.; Hamidah, I.; Suwarma, I. R.
2017-09-01
This paper outlines the results of an experimental study on the effects of a multi-representation approach to learning Archimedes' Law on the improvement of students' mental models. The multi-representation techniques implemented in the study were verbal, pictorial, mathematical, and graphical representations. Students' mental models were classified into three levels, i.e. scientific, synthetic, and initial, based on the students' level of understanding. The study employed a pre-experimental methodology using a one-group pretest-posttest design. The subjects of the study were 32 eleventh-grade students in a public senior high school in Riau Province. The research instrument was a mental model test on the hydrostatic pressure concept, in the form of an essay test judged by experts. The findings showed a positive change in students' mental models, indicating that the multi-representation approach was effective in improving students' mental models.
Kirkilionis, Markus; Janus, Ulrich; Sbano, Luca
2011-09-01
We model in detail a simple synthetic genetic clock that was engineered in Atkinson et al. (Cell 113(5):597-607, 2003) using Escherichia coli as a host organism. The theoretical description of this engineered clock uses the modelling framework presented in Kirkilionis et al. (Theory Biosci. doi: 10.1007/s12064-011-0125-0 , 2011, this volume). The main goal of this accompanying article is to illustrate that parts of the modelling process can be algorithmically automatised once the model framework we called 'average dynamics' is accepted (Sbano and Kirkilionis, WMI Preprint 7/2007, 2008c; Kirkilionis and Sbano, Adv Complex Syst 13(3):293-326, 2010). The advantage of the 'average dynamics' framework is that system components (especially in genetics) can be represented more easily in the model. In particular, specific molecular players, once discovered and characterised, can be incorporated together with their function. This means that the 'gene' concept becomes clearer, for example in the way a genetic component would react under different regulatory conditions. Using the framework, it has become a realistic aim to link mathematical modelling to novel tools of bioinformatics in the future, at least if the number of regulatory units can be estimated. This should hold in any case in synthetic environments, because the different synthetic genetic components are simply known (Elowitz and Leibler, Nature 403(6767):335-338, 2000; Gardner et al., Nature 403(6767):339-342, 2000; Hasty et al., Nature 420(6912):224-230, 2002). The paper therefore illustrates, as a necessary first step, how a detailed modelling of molecular interactions with known molecular components leads to a dynamic mathematical model that can be compared to experimental results at various levels or scales. The different genetic modules or components are represented in different detail by model variants. We explain how the framework can be used to investigate other, more complex genetic systems in terms of regulation and feedback.
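To make the kind of dynamic model discussed above concrete, the sketch below integrates a generic two-gene activator-repressor oscillator with Hill-type regulation. It is purely illustrative: the equations, parameter values, and variable names are assumptions for demonstration and are not the 'average dynamics' model or the Atkinson et al. clock.

```python
# Illustrative sketch only: a generic activator-repressor oscillator with
# Hill-type regulation, NOT the 'average dynamics' model described in the paper.
import numpy as np
from scipy.integrate import solve_ivp

def clock(t, y, alpha=50.0, beta=5.0, n=2, K=1.0, delta_a=1.0, delta_r=0.2):
    a, r = y  # activator and repressor concentrations (arbitrary units)
    activation = a**n / (K**n + a**n)
    da = alpha * activation / (1.0 + (r / K)**n) - delta_a * a + 0.1  # basal leak 0.1
    dr = beta * activation - delta_r * r
    return [da, dr]

sol = solve_ivp(clock, (0, 200), [0.1, 0.0], dense_output=True, max_step=0.1)
t = np.linspace(0, 200, 2000)
a, r = sol.sol(t)
# sustained oscillations arise only for suitable parameter choices
print("activator range over the simulation:", a.min(), a.max())
```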
DOE Office of Scientific and Technical Information (OSTI.GOV)
Philip, Bobby
2012-06-01
The Advanced Multi-Physics (AMP) code, in its present form, will allow a user to build a multi-physics application code for existing mechanics and diffusion operators and extend them with user-defined material models and new physics operators. There are examples that demonstrate mechanics, thermo-mechanics, coupled diffusion, and mechanical contact. The AMP code is designed to leverage a variety of mathematical solvers (PETSc, Trilinos, SUNDIALS, and AMP solvers) and mesh databases (LibMesh and AMP) in a consistent interchangeable approach.
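To illustrate the general idea of composing physics operators into a coupled application, the following sketch runs a simple fixed-point (Picard) iteration between a toy thermal operator and a toy mechanics operator. All names, equations, and the coupling structure are hypothetical and do not reflect the actual AMP API or its numerics.

```python
# Hypothetical sketch of operator-based multi-physics coupling via Picard
# iteration; names and physics are toy placeholders, NOT the AMP interface.
import numpy as np

def thermal_operator(T, u, dx=0.01, k=1.0, q=1.0):
    """One damped Jacobi sweep of a 1D steady heat equation with a strain-dependent source."""
    Tn = T.copy()
    strain = np.abs(np.gradient(u))[1:-1]
    Tn[1:-1] = 0.5 * (T[:-2] + T[2:]) + 0.5 * dx**2 * (q + 0.1 * strain) / k
    return Tn

def mechanics_operator(u, T, alpha=1e-3):
    """Toy thermo-elastic displacement from integrated thermal expansion."""
    return np.cumsum(alpha * (T - T[0])) * 0.01

T, u = np.zeros(101), np.zeros(101)
for it in range(200):                      # fixed-point coupling loop
    T_new = thermal_operator(T, u)
    u_new = mechanics_operator(u, T_new)
    delta = max(np.abs(T_new - T).max(), np.abs(u_new - u).max())
    T, u = T_new, u_new
    if delta < 1e-8:
        break
print(f"Picard loop stopped after {it + 1} iterations, last update size {delta:.2e}")
```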
Research and application of multi-agent genetic algorithm in tower defense game
NASA Astrophysics Data System (ADS)
Jin, Shaohua
2018-04-01
In this paper, a new multi-agent genetic algorithm based on orthogonal experiments is proposed, building on multi-agent systems, genetic algorithms, and orthogonal experimental design. The design includes a neighborhood competition operator, an orthogonal crossover operator, a mutation operator, and a self-learning operator. The new algorithm is applied to a mobile tower defense game: mathematical models are established according to the characteristics of the game, and the algorithm ultimately increases the value of the game's monsters.
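The sketch below conveys the core of a multi-agent GA lattice with a neighborhood competition operator: each agent is compared with its best von Neumann neighbor and, if beaten, replaced by a perturbed copy of the winner. It is a generic MAGA-style illustration on a toy objective, not the paper's exact operator design or game model.

```python
# Schematic multi-agent GA with a neighborhood competition operator on a
# toroidal lattice; toy fitness function, illustrative only.
import numpy as np
rng = np.random.default_rng(0)

L, D = 10, 5                        # lattice size and genome dimension
pop = rng.uniform(-5, 5, size=(L, L, D))
fitness = lambda x: -np.sum(x**2)   # maximize -||x||^2 (optimum at the origin)

for gen in range(100):
    for i in range(L):
        for j in range(L):
            # best of the four von Neumann neighbors
            nbrs = [pop[(i-1) % L, j], pop[(i+1) % L, j],
                    pop[i, (j-1) % L], pop[i, (j+1) % L]]
            best = max(nbrs, key=fitness)
            if fitness(best) > fitness(pop[i, j]):
                # losing agent is replaced by a mutated copy of the winner
                pop[i, j] = best + rng.normal(0, 0.1, D)

best_agent = max(pop.reshape(-1, D), key=fitness)
print("best fitness after 100 generations:", fitness(best_agent))
```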
Construction of multi-scale consistent brain networks: methods and applications.
Ge, Bao; Tian, Yin; Hu, Xintao; Chen, Hanbo; Zhu, Dajiang; Zhang, Tuo; Han, Junwei; Guo, Lei; Liu, Tianming
2015-01-01
Mapping human brain networks provides a basis for studying brain function and dysfunction, and thus has gained significant interest in recent years. However, modeling human brain networks still faces several challenges including constructing networks at multiple spatial scales and finding common corresponding networks across individuals. As a consequence, many previous methods were designed for a single resolution or scale of brain network, though the brain networks are multi-scale in nature. To address this problem, this paper presents a novel approach to constructing multi-scale common structural brain networks from DTI data via an improved multi-scale spectral clustering applied on our recently developed and validated DICCCOLs (Dense Individualized and Common Connectivity-based Cortical Landmarks). Since the DICCCOL landmarks possess intrinsic structural correspondences across individuals and populations, we employed the multi-scale spectral clustering algorithm to group the DICCCOL landmarks and their connections into sub-networks, meanwhile preserving the intrinsically-established correspondences across multiple scales. Experimental results demonstrated that the proposed method can generate multi-scale consistent and common structural brain networks across subjects, and its reproducibility has been verified by multiple independent datasets. As an application, these multi-scale networks were used to guide the clustering of multi-scale fiber bundles and to compare the fiber integrity in schizophrenia and healthy controls. In general, our methods offer a novel and effective framework for brain network modeling and tract-based analysis of DTI data.
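As a minimal illustration of grouping network nodes into sub-networks at several scales, the sketch below applies spectral clustering to a precomputed connectivity (affinity) matrix for a few cluster counts. The data are random placeholders, and the paper's method additionally enforces cross-scale and cross-subject consistency, which this sketch does not attempt.

```python
# Multi-scale grouping of network nodes with spectral clustering on a
# precomputed affinity matrix; synthetic data, illustrative only.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
n_nodes = 60                                   # e.g., structural landmarks
A = rng.random((n_nodes, n_nodes))
A = (A + A.T) / 2                              # symmetric, non-negative affinity
np.fill_diagonal(A, 0)

labels_by_scale = {}
for k in (4, 8, 16):                           # coarse-to-fine network scales
    sc = SpectralClustering(n_clusters=k, affinity="precomputed", random_state=0)
    labels_by_scale[k] = sc.fit_predict(A)
    print(f"scale k={k}: sub-network sizes", np.bincount(labels_by_scale[k]))
```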
Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy
NASA Astrophysics Data System (ADS)
Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li
2018-03-01
In the process of dendritic growth simulation, the computational efficiency and the problem scales have an extremely important influence on the simulation efficiency of the three-dimensional phase-field model. Thus, seeking a high-performance calculation method to improve the computational efficiency and to expand the problem scales is of great significance for research on the microstructure of the material. A high-performance calculation method based on the MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of a three-dimensional phase-field model in a binary alloy under the condition of coupled multi-physical processes. The acceleration effect of different GPU nodes on different calculation scales is explored. On the foundation of the multi-GPU calculation model introduced, two optimization schemes, non-blocking communication optimization and overlap of MPI and GPU computing, are proposed. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the multi-GPU calculation model can obviously improve the computational efficiency of the three-dimensional phase-field simulation, which is 13 times that of a single GPU, and the problem scales have been expanded to 8193. The feasibility of the two optimization schemes is shown, and the overlap of MPI and GPU computing optimization has better performance, which is 1.7 times that of the basic multi-GPU model when 21 GPUs are used.
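A minimal sketch of the "overlap communication with computation" idea is shown below using non-blocking MPI halo exchange (mpi4py): ghost-cell messages are posted, the interior stencil update proceeds while they are in flight, and the boundary cells are updated after the exchange completes. A NumPy update stands in for the CUDA phase-field kernel; the decomposition and stencil are assumptions for illustration.

```python
# Overlap of halo communication and computation with non-blocking MPI.
# Run with, e.g.: mpiexec -n 4 python overlap_demo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
n = 256
phi = np.random.rand(n + 2)                      # local 1D slab with 2 ghost cells

left, right = (rank - 1) % size, (rank + 1) % size
send_l, send_r = phi[1:2].copy(), phi[-2:-1].copy()
recv_l, recv_r = np.empty(1), np.empty(1)

reqs = [comm.Isend(send_l, dest=left),  comm.Isend(send_r, dest=right),
        comm.Irecv(recv_l, source=left), comm.Irecv(recv_r, source=right)]

# interior update proceeds while ghost-cell messages are in flight
phi[2:-2] += 0.1 * (phi[1:-3] - 2 * phi[2:-2] + phi[3:-1])

MPI.Request.Waitall(reqs)                        # complete the halo exchange
phi[0], phi[-1] = recv_l[0], recv_r[0]           # fill ghosts, then update boundary cells
phi[1]  += 0.1 * (phi[0]  - 2 * phi[1]  + phi[2])
phi[-2] += 0.1 * (phi[-3] - 2 * phi[-2] + phi[-1])
print(f"rank {rank}: interior mean = {phi[1:-1].mean():.4f}")
```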
The interactions between vegetation and hydrology in mountainous terrain are difficult to represent in mathematical models. There are at least three primary reasons for this difficulty. First, expanding plot-scale measurements to the watershed scale requires finding the balance...
ERIC Educational Resources Information Center
Cuenca-Carlino, Yojanna; Freeman-Green, Shaqwana; Stephenson, Grant W.; Hauth, Clara
2016-01-01
Six middle school students identified as having a specific learning disability or at risk for mathematical difficulties were taught how to solve multi-step equations by using the self-regulated strategy development (SRSD) model of instruction. A multiple-probe-across-pairs design was used to evaluate instructional effects. Instruction was provided…
Svetlana A. (Kushch) Schroder; Sandor F. Toth; Robert L. Deal; Gregory J. Ettl
2016-01-01
Forest owners worldwide are increasingly interested in managing forests to provide a broad suite of ecosystem services, balancing multiple objectives and evaluating management activities in terms of potential tradeoffs. We describe a multi-objective mathematical programming model to quantify tradeoffs in expected sediment delivery and the preservation of Northern...
NASA Astrophysics Data System (ADS)
Mansor, Zakwan; Zakaria, Mohd Zakimi; Nor, Azuwir Mohd; Saad, Mohd Sazli; Ahmad, Robiah; Jamaluddin, Hishamuddin
2017-09-01
This paper presents the black-box modelling of a palm oil biodiesel engine (POB) using the multi-objective optimization differential evolution (MOODE) algorithm. Two objective functions are considered in the algorithm for optimization: minimizing the number of terms of a model structure and minimizing the mean square error between actual and predicted outputs. The mathematical model used in this study to represent the POB system is the nonlinear auto-regressive moving average with exogenous input (NARMAX) model. Finally, model validity tests are applied in order to validate the possible models obtained from the MOODE algorithm, leading to the selection of an optimal model.
The Onset of Type 2 Diabetes: Proposal for a Multi-Scale Model
Tieri, Paolo; De Graaf, Albert; Franceschi, Claudio; Liò, Pietro; Van Ommen, Ben; Mazzà, Claudia; Tuchel, Alexander; Bernaschi, Massimo; Samson, Clare; Colombo, Teresa; Castellani, Gastone C; Capri, Miriam; Garagnani, Paolo; Salvioli, Stefano; Nguyen, Viet Anh; Bobeldijk-Pastorova, Ivana; Krishnan, Shaji; Cappozzo, Aurelio; Sacchetti, Massimo; Morettini, Micaela; Ernst, Marc
2013-01-01
Background Type 2 diabetes mellitus (T2D) is a common age-related disease, and is a major health concern, particularly in developed countries where the population is aging, including Europe. The multi-scale immune system simulator for the onset of type 2 diabetes (MISSION-T2D) is a European Union-funded project that aims to develop and validate an integrated, multilevel, and patient-specific model, incorporating genetic, metabolic, and nutritional data for the simulation and prediction of metabolic and inflammatory processes in the onset and progression of T2D. The project will ultimately provide a tool for diagnosis and clinical decision making that can estimate the risk of developing T2D and predict its progression in response to possible therapies. Recent data showed that T2D and its complications, specifically in the heart, kidney, retina, and feet, should be considered a systemic disease that is sustained by a pervasive, metabolically-driven state of inflammation. Accordingly, there is an urgent need (1) to understand the complex mechanisms underpinning the onset of this disease, and (2) to identify early patient-specific diagnostic parameters and related inflammatory indicators. Objective We aim to accomplish this mission by setting up a multi-scale model to study the systemic interactions of the biological mechanisms involved in response to a variety of nutritional and metabolic stimuli and stressors. Methods Specifically, we will be studying the biological mechanisms of immunological/inflammatory processes, energy intake/expenditure ratio, and cell cycle rate. The overall architecture of the model will exploit an already established immune system simulator as well as several discrete and continuous mathematical methods for modeling of the processes critically involved in the onset and progression of T2D. We aim to validate the predictions of our models using actual biological and clinical data. Results This study was initiated in March 2013 and is expected to be completed by February 2016. Conclusions MISSION-T2D aims to pave the way for translating validated multilevel immune-metabolic models into the clinical setting of T2D. This approach will eventually generate predictive biomarkers for this disease from the integration of clinical data with metabolic, nutritional, immune/inflammatory, genetic, and gut microbiota profiles. Eventually, it should prove possible to translate these into cost-effective and mobile-based diagnostic tools. PMID:24176906
High-resolution time-frequency representation of EEG data using multi-scale wavelets
NASA Astrophysics Data System (ADS)
Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina
2017-09-01
An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto multi-scale wavelet basis functions is presented for modelling nonstationary signals, with applications to time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented by using a novel multi-scale wavelet decomposition scheme, which can capture the smooth trends and simultaneously track the abrupt changes of time-varying parameters. A forward orthogonal least squares (FOLS) algorithm aided by mutual information criteria is then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the proposed multi-scale wavelet basis functions outperform single-scale wavelet basis functions and the Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates that the new approach can provide highly time-dependent spectral resolution capability.
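The toy example below illustrates the basic expansion idea: the time-varying AR coefficients are written as combinations of a small multi-scale (piecewise-constant, Haar-like) basis and the expansion weights are estimated by ordinary least squares. The paper instead uses a forward orthogonal least squares term selection with mutual information criteria, which is omitted here; the basis, signal, and parameters are assumptions for illustration.

```python
# Toy TVAR estimation: expand a_i(t) on a crude multi-scale basis, fit by OLS.
import numpy as np

rng = np.random.default_rng(1)
N, p = 1000, 2
a1 = np.where(np.arange(N) < N // 2, 0.6, -0.4)      # abruptly changing AR coefficient
y = np.zeros(N)
for t in range(2, N):
    y[t] = a1[t] * y[t - 1] + 0.3 * y[t - 2] + 0.1 * rng.standard_normal()

def haar_basis(N, levels=4):
    """Constant plus dyadic indicator functions (deliberately redundant; lstsq copes)."""
    cols = [np.ones(N)]
    for lev in range(1, levels + 1):
        for k in range(2 ** lev):
            b = np.zeros(N)
            b[k * N // 2 ** lev:(k + 1) * N // 2 ** lev] = 1.0
            cols.append(b)
    return np.column_stack(cols)

B = haar_basis(N)                                     # (N, M) basis matrix
# regression matrix: each lagged sample modulated by every basis function
X = np.column_stack([B[p:] * y[p - i:N - i, None] for i in range(1, p + 1)])
w, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
a_hat = B[p:] @ w.reshape(p, -1).T                    # recovered a_i(t); column 0 ~ a1(t)
print("estimated a1 before/after change:",
      a_hat[:N // 2 - 5, 0].mean(), a_hat[N // 2 + 5:, 0].mean())
```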
Hankins, Catherine; Warren, Mitchell; Njeuhmeli, Emmanuel
2016-01-01
Over 11 million voluntary medical male circumcisions (VMMC) have been performed of the projected 20.3 million needed to reach 80% adult male circumcision prevalence in priority sub-Saharan African countries. Striking numbers of adolescent males, outside the 15-49-year-old age target, have been accessing VMMC services. What are the implications of overall progress in scale-up to date? Can mathematical modeling provide further insights on how to efficiently reach the male circumcision coverage levels needed to create and sustain further reductions in HIV incidence to make AIDS no longer a public health threat by 2030? Considering ease of implementation and cultural acceptability, decision makers may also value the estimates that mathematical models can generate of immediacy of impact, cost-effectiveness, and magnitude of impact resulting from different policy choices. This supplement presents the results of mathematical modeling using the Decision Makers' Program Planning Tool Version 2.0 (DMPPT 2.0), the Actuarial Society of South Africa (ASSA2008) model, and the age structured mathematical (ASM) model. These models are helping countries examine the potential effects on program impact and cost-effectiveness of prioritizing specific subpopulations for VMMC services, for example, by client age, HIV-positive status, risk group, and geographical location. The modeling also examines long-term sustainability strategies, such as adolescent and/or early infant male circumcision, to preserve VMMC coverage gains achieved during rapid scale-up. The 2016-2021 UNAIDS strategy target for VMMC is an additional 27 million VMMC in high HIV-prevalence settings by 2020, as part of access to integrated sexual and reproductive health services for men. To achieve further scale-up, a combination of evidence, analysis, and impact estimates can usefully guide strategic planning and funding of VMMC services and related demand-creation strategies in priority countries. Mid-course corrections now can improve cost-effectiveness and scale to achieve the impact needed to help turn the HIV pandemic on its head within 15 years.
Large-scale budget applications of mathematical programming in the Forest Service
Malcolm Kirby
1978-01-01
Mathematical programming applications in the Forest Service, U.S. Department of Agriculture, are growing. They are being used for widely varying problems: budgeting, land use planning, timber transport, road maintenance and timber harvest planning. Large-scale applications are being made in budgeting. The model that is described can be used by developing economies....
Powathil, Gibin G.; Adamson, Douglas J. A.; Chaplain, Mark A. J.
2013-01-01
In this paper we use a hybrid multiscale mathematical model that incorporates both individual cell behaviour through the cell-cycle and the effects of the changing microenvironment through oxygen dynamics to study the multiple effects of radiation therapy. The oxygenation status of the cells is considered as one of the important prognostic markers for determining radiation therapy, as hypoxic cells are less radiosensitive. Another factor that critically affects radiation sensitivity is cell-cycle regulation. The effects of radiation therapy are included in the model using a modified linear quadratic model for the radiation damage, incorporating the effects of hypoxia and cell-cycle in determining the cell-cycle phase-specific radiosensitivity. Furthermore, after irradiation, an individual cell's cell-cycle dynamics are intrinsically modified through the activation of pathways responsible for repair mechanisms, often resulting in a delay/arrest in the cell-cycle. The model is then used to study various combinations of multiple doses of cell-cycle dependent chemotherapies and radiation therapy, as radiation may work better by the partial synchronisation of cells in the most radiosensitive phase of the cell-cycle. Moreover, using this multi-scale model, we investigate the optimum sequencing and scheduling of these multi-modality treatments, and the impact of internal and external heterogeneity on the spatio-temporal patterning of the distribution of tumour cells and their response to different treatment schedules. PMID:23874170
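The snippet below sketches the kind of radiation-damage scoring described above: a linear-quadratic survival fraction whose effective dose is scaled down under hypoxia by an oxygen modification factor, with phase-specific alpha/beta values. The functional form of the oxygen factor and all parameter values are placeholders, not the calibrated quantities of the paper.

```python
# Illustrative modified linear-quadratic (LQ) survival with hypoxia and
# cell-cycle phase dependence; parameter values are placeholders.
import numpy as np

def oxygen_modification(p_o2, omf_max=3.0, k_mmHg=3.0):
    """~1 when well oxygenated, dropping toward 1/omf_max when severely hypoxic."""
    return (p_o2 + k_mmHg / omf_max) / (p_o2 + k_mmHg)

def surviving_fraction(dose_gy, phase, p_o2):
    # hypothetical phase-specific LQ parameters (G2/M assumed most radiosensitive)
    alpha, beta = {"G1": (0.20, 0.020), "S": (0.15, 0.015), "G2M": (0.35, 0.035)}[phase]
    d_eff = dose_gy * oxygen_modification(p_o2)      # hypoxia reduces effective dose
    return np.exp(-(alpha * d_eff + beta * d_eff**2))

for phase in ("G1", "S", "G2M"):
    print(phase,
          "hypoxic SF(2 Gy):", round(surviving_fraction(2.0, phase, 2.0), 3),
          "normoxic SF(2 Gy):", round(surviving_fraction(2.0, phase, 60.0), 3))
```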
Mathematical modelling in developmental biology.
Vasieva, Olga; Rasolonjanahary, Manan'Iarivo; Vasiev, Bakhtier
2013-06-01
In recent decades, molecular and cellular biology has benefited from numerous fascinating developments in experimental technique, generating an overwhelming amount of data on various biological objects and processes. This, in turn, has led biologists to look for appropriate tools to facilitate systematic analysis of data. Thus, the need for mathematical techniques, which can be used to aid the classification and understanding of this ever-growing body of experimental data, is more profound now than ever before. Mathematical modelling is becoming increasingly integrated into biological studies in general and into developmental biology particularly. This review outlines some achievements of mathematics as applied to developmental biology and demonstrates the mathematical formulation of basic principles driving morphogenesis. We begin by describing a mathematical formalism used to analyse the formation and scaling of morphogen gradients. Then we address a problem of interplay between the dynamics of morphogen gradients and movement of cells, referring to mathematical models of gastrulation in the chick embryo. In the last section, we give an overview of various mathematical models used in the study of the developmental cycle of Dictyostelium discoideum, which is probably the best example of successful mathematical modelling in developmental biology.
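For readers unfamiliar with the morphogen-gradient formalism mentioned above, the canonical production-diffusion-degradation description and its exponential steady state are summarised below as a generic illustration (symbols are the usual textbook ones, not necessarily the review's notation).

```latex
% Production at x=0, diffusion coefficient D, linear degradation rate k.
\[
\frac{\partial C}{\partial t} = D\,\frac{\partial^2 C}{\partial x^2} - kC,
\qquad
C_{\mathrm{ss}}(x) = C_0\, e^{-x/\lambda},
\qquad
\lambda = \sqrt{D/k},
\]
% so the gradient's characteristic length scale lambda is set by the ratio of
% diffusion to degradation, which is central to scaling arguments.
```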
Mathematical Models for Immunology: Current State of the Art and Future Research Directions.
Eftimie, Raluca; Gillard, Joseph J; Cantrell, Doreen A
2016-10-01
The advances in genetics and biochemistry that have taken place over the last 10 years have led to significant advances in experimental and clinical immunology. In turn, this has led to the development of new mathematical models to investigate qualitatively and quantitatively various open questions in immunology. In this study we present a review of some research areas in mathematical immunology that evolved over the last 10 years. To this end, we take a step-by-step approach in discussing a range of models derived to study the dynamics of both the innate and adaptive immune responses at the molecular, cellular and tissue scales. To emphasise the use of mathematics in modelling in this area, we also review some of the mathematical tools used to investigate these models. Finally, we discuss some future trends in both experimental immunology and mathematical immunology for the upcoming years.
visPIG--a web tool for producing multi-region, multi-track, multi-scale plots of genetic data.
Scales, Matthew; Jäger, Roland; Migliorini, Gabriele; Houlston, Richard S; Henrion, Marc Y R
2014-01-01
We present VISual Plotting Interface for Genetics (visPIG; http://vispig.icr.ac.uk), a web application to produce multi-track, multi-scale, multi-region plots of genetic data. visPIG has been designed to allow users not well versed with mathematical software packages and/or programming languages such as R, Matlab®, Python, etc., to integrate data from multiple sources for interpretation and to easily create publication-ready figures. While web tools such as the UCSC Genome Browser or the WashU Epigenome Browser allow custom data uploads, such tools are primarily designed for data exploration. This is also true for the desktop-run Integrative Genomics Viewer (IGV). Other locally run data visualisation software such as Circos requires significant computer skills from the user. The visPIG web application is a menu-based interface that allows users to upload custom data tracks and set track-specific parameters. Figures can be downloaded as PDF or PNG files. For sensitive data, the underlying R code can also be downloaded and run locally. visPIG is multi-track: it can display many different data types (e.g. association, functional annotation, intensity, interaction, heat map data,…). It also allows annotation of genes and other custom features in the plotted region(s). Data tracks can be plotted individually or on a single figure. visPIG is multi-region: it supports plotting multiple regions, be they kilo- or megabases apart or even on different chromosomes. Finally, visPIG is multi-scale: a sub-region of particular interest can be 'zoomed' in on. We describe the various features of visPIG and illustrate its utility with examples. visPIG is freely available through http://vispig.icr.ac.uk under a GNU General Public License (GPLv3).
NASA Astrophysics Data System (ADS)
Qiu, J. P.; Niu, D. X.
Micro-grids are one of the key technologies for future energy supply. Taking the economic planning, reliability, and environmental protection of the micro-grid as the basis, multi-strategy objective programming problems are analysed for a micro-grid containing wind power, solar power, battery storage, and a micro gas turbine. Mathematical models of the generation characteristics and energy dissipation of each source are established, and the multi-objective planning functions of the micro-grid under different operating strategies are converted to a single-objective model based on the AHP method. An example analysis shows that, in combination with a hybrid dynamic-ant genetic algorithm, the optimal power output of this model can be obtained.
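The sketch below shows the standard way such a conversion can be done: AHP weights are taken as the principal eigenvector of a pairwise-comparison matrix and used to collapse the objectives into a weighted sum. The comparison judgments and criterion values are hypothetical examples, not the paper's data.

```python
# AHP weighting and weighted-sum scalarization of a multi-objective problem.
import numpy as np

# pairwise importance of (economy, reliability, environment); example judgments
P = np.array([[1.0, 3.0, 2.0],
              [1/3, 1.0, 1/2],
              [1/2, 2.0, 1.0]])
eigvals, eigvecs = np.linalg.eig(P)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                                   # AHP weights, sum to 1

def single_objective(cost, outage, emissions, weights=w):
    # in a real application each criterion would first be normalized to [0, 1]
    return weights @ np.array([cost, outage, emissions])

print("AHP weights:", np.round(w, 3))
print("scalarized objective for one candidate plan:", single_objective(0.4, 0.2, 0.3))
```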
Towards Personalized Cardiology: Multi-Scale Modeling of the Failing Heart
Amr, Ali; Neumann, Dominik; Georgescu, Bogdan; Seegerer, Philipp; Kamen, Ali; Haas, Jan; Frese, Karen S.; Irawati, Maria; Wirsz, Emil; King, Vanessa; Buss, Sebastian; Mereles, Derliz; Zitron, Edgar; Keller, Andreas; Katus, Hugo A.; Comaniciu, Dorin; Meder, Benjamin
2015-01-01
Background Despite modern pharmacotherapy and advanced implantable cardiac devices, overall prognosis and quality of life of HF patients remain poor. This is in part due to insufficient patient stratification and lack of individualized therapy planning, resulting in less effective treatments and a significant number of non-responders. Methods and Results State-of-the-art clinical phenotyping was acquired, including magnetic resonance imaging (MRI) and biomarker assessment. An individualized, multi-scale model of heart function covering cardiac anatomy, electrophysiology, biomechanics and hemodynamics was estimated using a robust framework. The model was computed on n=46 HF patients, showing for the first time that advanced multi-scale models can be fitted consistently on large cohorts. Novel multi-scale parameters derived from the model of all cases were analyzed and compared against clinical parameters, cardiac imaging, lab tests and survival scores to evaluate the explicative power of the model and its potential for better patient stratification. Model validation was pursued by comparing clinical parameters that were not used in the fitting process against model parameters. Conclusion This paper illustrates how advanced multi-scale models can complement cardiovascular imaging and how they could be applied in patient care. Based on obtained results, it becomes conceivable that, after thorough validation, such heart failure models could be applied for patient management and therapy planning in the future, as we illustrate in one patient of our cohort who received CRT-D implantation. PMID:26230546
Tawhai, Merryn H.; Clark, Alys R.; Burrowes, Kelly S.
2011-01-01
Biophysically-based computational models provide a tool for integrating and explaining experimental data, observations, and hypotheses. Computational models of the pulmonary circulation have evolved from minimal and efficient constructs that have been used to study individual mechanisms that contribute to lung perfusion, to sophisticated multi-scale and -physics structure-based models that predict integrated structure-function relationships within a heterogeneous organ. This review considers the utility of computational models in providing new insights into the function of the pulmonary circulation, and their application in clinically motivated studies. We review mathematical and computational models of the pulmonary circulation based on their application; we begin with models that seek to answer questions in basic science and physiology and progress to models that aim to have clinical application. In looking forward, we discuss the relative merits and clinical relevance of computational models: what important features are still lacking; and how these models may ultimately be applied to further increasing our understanding of the mechanisms occurring in disease of the pulmonary circulation. PMID:22034608
Capturing remote mixing due to internal tides using multi-scale modeling tool: SOMAR-LES
NASA Astrophysics Data System (ADS)
Santilli, Edward; Chalamalla, Vamsi; Scotti, Alberto; Sarkar, Sutanu
2016-11-01
Internal tides that are generated during the interaction of an oscillating barotropic tide with the bottom bathymetry dissipate only a fraction of their energy near the generation region. The rest is radiated away in the form of low- and high-mode internal tides. These internal tides dissipate energy at remote locations when they interact with the upper-ocean pycnocline, the continental slope, and large-scale eddies. Capturing the wide range of length and time scales involved during the life cycle of internal tides is computationally very expensive. A recently developed multi-scale modeling tool called SOMAR-LES combines the adaptive grid refinement features of SOMAR with the turbulence modeling features of a Large Eddy Simulation (LES) to capture multi-scale processes at a reduced computational cost. Numerical simulations of internal tide generation at idealized bottom bathymetries are performed to demonstrate this multi-scale modeling technique. Although each of the remote mixing phenomena has been considered independently in previous studies, this work aims to capture remote mixing processes during the life cycle of an internal tide in more realistic settings, by allowing multi-level (coarse and fine) grids to co-exist and exchange information during the time stepping process.
The mathematical and theoretical biology institute--a model of mentorship through research.
Camacho, Erika T; Kribs-Zaleta, Christopher M; Wirkus, Stephen
2013-01-01
This article details the history, logistical operations, and design philosophy of the Mathematical and Theoretical Biology Institute (MTBI), a nationally recognized research program with an 18-year history of mentoring researchers at every level from high school through university faculty, increasing the number of researchers from historically underrepresented minorities, and motivating them to pursue research careers by allowing them to work on problems of interest to them and supporting them in this endeavor. This mosaic profile highlights how MTBI provides a replicable multi-level model for research mentorship.
1990-12-01
methods are implemented in MATRIXx with the programs SISOTF and MIMOTF respectively. Following the mathematical development, the application of these... intent is not to teach any of the methods, it has been written in a manner to significantly assist an individual attempting follow-on work. I would... equivalent plant models. A detailed mathematical development of the method used to develop these equivalent LTI plant models is provided. After this inner
Chen, Hai; Liang, Xiaoying; Li, Rui
2013-01-01
Multi-Agent Systems (MAS) offer a conceptual approach to include multi-actor decision making in models of land use change. Through simulation based on MAS, this paper shows the application of MAS to micro-scale LUCC and reveals the transformation mechanisms between different scales. The paper starts with a description of the context of MAS research. It then adopts the Nested Spatial Choice (NSC) method to construct a multi-scale LUCC decision-making model, and a case study for Mengcha village, Mizhi County, Shaanxi Province is reported. Finally, the potential and drawbacks of the approach are discussed. From our design and implementation of MAS in the multi-scale model, a number of observations and conclusions can be drawn on the implementation and future research directions. (1) The use of the LUCC decision-making and multi-scale transformation framework provides, in our view, a more realistic modeling of the multi-scale decision-making process. (2) Using continuous functions, rather than discrete functions, to construct household decision-making reflects the effects more realistically. (3) Attempts have been made to give a quantitative analysis of household interaction, which provides the premise and foundation for researching communication and learning among households. (4) The scale-transformation architecture constructed in this paper helps to accumulate theory and experience for research on the interaction between micro land use decision-making and the macro land use landscape pattern. Our future research will focus on: (1) how to rationally use the risk-aversion principle and incorporate the rule on rotation between household parcels into the model; (2) exploring methods for researching household decision-making over long periods, which would allow us to bridge long-term LUCC data and short-term household decision-making; and (3) researching quantitative methods and models, especially scenario analysis models that can reflect the interaction among different household types.
Development of mathematical models of environmental physiology
NASA Technical Reports Server (NTRS)
Stolwijk, J. A. J.; Mitchell, J. W.; Nadel, E. R.
1971-01-01
Selected articles concerned with mathematical or simulation models of human thermoregulation are presented. The articles presented include: (1) development and use of simulation models in medicine, (2) model of cardio-vascular adjustments during exercise, (3) effective temperature scale based on simple model of human physiological regulatory response, (4) behavioral approach to thermoregulatory set point during exercise, and (5) importance of skin temperature in sweat regulation.
Carbonell, Felix; Iturria-Medina, Yasser; Evans, Alan C
2018-01-01
Protein misfolding refers to a process where proteins become structurally abnormal and lose their specific 3-dimensional spatial configuration. The histopathological presence of misfolded protein (MP) aggregates has been regarded as primary evidence of multiple neurological diseases, including Prion diseases, Alzheimer's disease, Parkinson's disease, and Creutzfeldt-Jakob disease. However, the exact mechanisms of MP aggregation and propagation, as well as their impact on the patient's long-term clinical condition, are still not well understood. With this aim, a variety of mathematical models has been proposed for a better insight into the kinetic rate laws that govern the microscopic processes of protein aggregation. Complementarily, another class of large-scale models relies on modern molecular imaging techniques for describing the phenomenological effects of MP propagation over the whole brain. Unfortunately, those neuroimaging-based studies do not take full advantage of the tremendous capabilities offered by the chemical kinetics modeling approach. In fact, it has been barely acknowledged that the vast majority of large-scale models have foundations in previous mathematical approaches that describe the chemical kinetics of protein replication and propagation. The purpose of the current manuscript is to present a historical review of the development of mathematical models describing both the microscopic processes that occur during MP aggregation and the large-scale events that characterize the progression of neurodegenerative MP-mediated diseases.
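A minimal example of the kinetic rate-law style of model discussed above is a templated-conversion scheme in which normal protein is converted to the misfolded form at a rate proportional to the amount of misfolded protein already present. The equations, rate constants, and species names below are generic illustrations, not any specific published model.

```python
# Generic templated-misfolding kinetics: synthesis, clearance, and autocatalytic
# conversion of normal protein N into misfolded protein M.
import numpy as np
from scipy.integrate import solve_ivp

def misfolding(t, y, k_syn=1.0, k_clear_n=0.1, k_conv=0.05, k_clear_m=0.01):
    N, M = y
    conv = k_conv * N * M                      # conversion is templated by M
    return [k_syn - k_clear_n * N - conv,      # dN/dt
            conv - k_clear_m * M]              # dM/dt

sol = solve_ivp(misfolding, (0, 400), [10.0, 0.01], max_step=0.5)
print("final normal vs misfolded protein:", sol.y[0, -1], sol.y[1, -1])
```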
A theoretical study of the initiation, maintenance and termination of gastric slow wave re-entry
Du, Peng; Paskaranandavadivel, Niranchan; O’Grady, Greg; Tang, Shou-Jiang; Cheng, Leo K.
2015-01-01
Gastric slow wave dysrhythmias are associated with motility disorders. Periods of tachygastria associated with slow wave re-entry were recently recognized as one important dysrhythmia mechanism, but factors promoting and sustaining gastric re-entry are currently unknown. This study reports two experimental forms of gastric re-entry and presents a series of multi-scale models that define criteria for slow wave re-entry initiation, maintenance and termination. High-resolution electrical mapping was conducted in porcine and canine models and two spatiotemporal patterns of re-entrant activities were captured: single-loop rotor and double-loop figure-of-eight. Two separate multi-scale mathematical models were developed to reproduce the velocity and entrainment frequency of these experimental recordings. A single-pulse stimulus was used to invoke a rotor re-entry in the porcine model and a figure-of-eight re-entry in the canine model. In both cases, the simulated re-entrant activities were found to be perpetuated by tachygastria that was accompanied by a reduction in the propagation velocity in the re-entrant pathways. The simulated re-entrant activities were terminated by a single-pulse stimulus targeted at the tip of re-entrant wave, after which normal antegrade propagation was restored by the underlying intrinsic frequency gradient. Main findings: (i) the stability of re-entry is regulated by stimulus timing, intrinsic frequency gradient and conductivity; (ii) tachygastria due to re-entry increases the frequency gradient while showing decreased propagation velocity; (iii) re-entry may be effectively terminated by a targeted stimulus at the core, allowing the intrinsic slow wave conduction system to re-establish itself. PMID:25552487
Differential renormalization-group generators for static and dynamic critical phenomena
NASA Astrophysics Data System (ADS)
Chang, T. S.; Vvedensky, D. D.; Nicoll, J. F.
1992-09-01
The derivation of differential renormalization-group (DRG) equations for applications to static and dynamic critical phenomena is reviewed. The DRG approach provides a self-contained closed-form representation of the Wilson renormalization group (RG) and should be viewed as complementary to the Callan-Symanzik equations used in field-theoretic approaches to the RG. The various forms of DRG equations are derived to illustrate the general mathematical structure of each approach and to point out the advantages and disadvantages for performing practical calculations. Otherwise, the review focuses upon the one-particle-irreducible DRG equations derived by Nicoll and Chang and by Chang, Nicoll, and Young; no attempt is made to provide a general treatise of critical phenomena. A few specific examples are included to illustrate the utility of the DRG approach: the large-n limit of the classical n-vector model (the spherical model), multi- or higher-order critical phenomena, and critical dynamics far from equilibrium. The large-n limit of the n-vector model is used to introduce the application of DRG equations to a well-known example, with exact solution obtained for the nonlinear trajectories, generating functions for nonlinear scaling fields, and the equation of state. Trajectory integrals and nonlinear scaling fields within the framework of ε-expansions are then discussed for tricritical crossover, and briefly for certain aspects of multi- or higher-order critical points, including the derivation of the Helmholtz free energy and the equation of state. The discussion then turns to critical dynamics with a development of the path integral formulation for general dynamic processes. This is followed by an application to a model far-from-equilibrium system that undergoes a phase transformation analogous to a second-order critical point, the Schlögl model for a chemical instability.
Multi-scale heat and mass transfer modelling of cell and tissue cryopreservation
Xu, Feng; Moon, Sangjun; Zhang, Xiaohui; Shao, Lei; Song, Young Seok; Demirci, Utkan
2010-01-01
Cells and tissues undergo complex physical processes during cryopreservation. Understanding the underlying physical phenomena is critical to improve current cryopreservation methods and to develop new techniques. Here, we describe multi-scale approaches for modelling cell and tissue cryopreservation including heat transfer at macroscale level, crystallization, cell volume change and mass transport across cell membranes at microscale level. These multi-scale approaches allow us to study cell and tissue cryopreservation. PMID:20047939
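For context, the microscale mass-transport component mentioned above is often built around a membrane water-flux law of the following generic form (presented here as a commonly used textbook expression, not the authors' specific formulation): the cell water volume changes in proportion to the osmolality difference across the membrane.

```latex
% L_p: hydraulic permeability, A: membrane area, R: gas constant, T: temperature,
% M^{ext}, M^{int}: extracellular and intracellular osmolalities.
\[
\frac{dV_w}{dt} \;=\; -\,L_p\, A\, R\, T\,\bigl(M^{\mathrm{ext}} - M^{\mathrm{int}}\bigr),
\]
% so water leaves the cell (dV_w/dt < 0) when the external solution is more
% concentrated, which is the basic driver of cell shrinkage during freezing.
```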
Description of bioremediation of soils using the model of a multistep system of microorganisms
NASA Astrophysics Data System (ADS)
Lubysheva, A. I.; Potashev, K. A.; Sofinskaya, O. A.
2018-01-01
The paper deals with the development of a mathematical model describing the interaction of a multi-step system of microorganisms in soil polluted with oil products. Each step in this system feeds on the products of the vital activity of the previous step. Six different models of the multi-step system are considered. The model coefficients were determined by minimizing the residual between calculated and experimental data, using an original algorithm based on the Levenberg-Marquardt method in combination with the Monte Carlo method for finding the initial approximation.
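The fitting strategy described above can be sketched as Levenberg-Marquardt local optimization restarted from Monte Carlo random initial guesses. The two-step toy model, data, and parameter ranges below are hypothetical stand-ins for the paper's system.

```python
# Levenberg-Marquardt fitting with Monte Carlo initial approximations.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_obs = np.linspace(0, 10, 20)

def model(theta, t_eval):
    k1, k2 = theta
    # toy two-step chain: pollutant S degraded to intermediate P, then consumed
    rhs = lambda t, y: [-k1 * y[0], k1 * y[0] - k2 * y[1]]
    return solve_ivp(rhs, (0, 10), [1.0, 0.0], t_eval=t_eval).y

rng = np.random.default_rng(0)
data = model([0.8, 0.3], t_obs) + 0.02 * rng.standard_normal((2, t_obs.size))

def residuals(theta):
    return (model(theta, t_obs) - data).ravel()

best = None
for _ in range(20):                                  # Monte Carlo restarts
    theta0 = rng.uniform(0.05, 2.0, size=2)          # random initial approximation
    fit = least_squares(residuals, theta0, method="lm")
    if best is None or fit.cost < best.cost:
        best = fit
print("recovered rate constants:", best.x)
```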
Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li
2017-03-01
The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.
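One small building block of the approach above is the extraction of the non-dominated (Pareto) set from candidate allocations scored on the two objectives. The sketch below shows only that sorting step on synthetic scores; it is not the full NSGA II search or the MOCBA simulation-budget layer.

```python
# Extract the Pareto (non-dominated) set for two minimized objectives.
import numpy as np

def non_dominated(objectives):
    """Return indices of rows not dominated by any other row (both objectives minimized)."""
    keep = []
    for i, oi in enumerate(objectives):
        dominated = any(np.all(oj <= oi) and np.any(oj < oi)
                        for j, oj in enumerate(objectives) if j != i)
        if not dominated:
            keep.append(i)
    return keep

rng = np.random.default_rng(0)
# each row: (expected length of stay in hours, resource waste cost) for one allocation
scores = rng.uniform([2.0, 10.0], [8.0, 50.0], size=(30, 2))
print("Pareto-efficient allocation indices:", non_dominated(scores))
```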
Numerical simulation of vortex pyrolysis reactors for condensable tar production from biomass
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, R.S.; Bellan, J.
1998-08-01
A numerical study is performed in order to evaluate the performance and optimal operating conditions of vortex pyrolysis reactors used for condensable tar production from biomass. A detailed mathematical model of porous biomass particle pyrolysis is coupled with a compressible Reynolds stress transport model for the turbulent reactor swirling flow. An initial evaluation of particle dimensionality effects is made through comparisons of single- (1D) and multi-dimensional particle simulations and reveals that the 1D particle model results in conservative estimates for total pyrolysis conversion times and tar collection. The observed deviations are due predominantly to geometry effects while directional effects from thermal conductivity and permeability variations are relatively small. Rapid ablative particle heating rates are attributed to a mechanical fragmentation of the biomass particles that is modeled using a critical porosity for matrix breakup. Optimal thermal conditions for tar production are observed for 900 K. Effects of biomass identity, particle size distribution, and reactor geometry and scale are discussed.
A Multi-Method Investigation of Mathematics Motivation for Elementary Age Students
ERIC Educational Resources Information Center
Linder, Sandra M.; Smart, Julie B.; Cribbs, Jennifer
2015-01-01
This paper presents the results of a multi-method study examining elementary students with high self-reported levels of mathematics motivation. Second- through fifth-grade students at a Title One school in the southeastern United States completed the Elementary Mathematics Motivation Instrument (EMMI), which examines levels of mathematics…
Interventions in Early Mathematics: Avoiding Pollution and Dilution.
Sarama, Julie; Clements, Douglas H
2017-01-01
Although specific interventions in early mathematics have been successful, few have been brought to scale successfully, especially across the challenging diversity of populations and contexts in the early childhood system in the United States. In this chapter, we analyze a theoretically based scale-up model for early mathematics that was designed to avoid the pollution and dilution that often plagues efforts to achieve broad success. We elaborate the theoretical framework by noting the junctures that are susceptible to dilution or pollution. Then we expatiate the model's guidelines to describe specifically how they were designed and implemented to mitigate pollution and dilution. Finally, we provide evidence regarding the success of these efforts. © 2017 Elsevier Inc. All rights reserved.
Mathematical Simulation for Integrated Linear Fresnel Spectrometer Chip
NASA Technical Reports Server (NTRS)
Park, Yeonjoon; Yoon, Hargoon; Lee, Uhn; King, Glen C.; Choi, Sang H.
2012-01-01
A miniaturized solid-state optical spectrometer chip was designed with a linear gradient-gap Fresnel grating mounted perpendicularly to a sensor array surface and simulated for its performance and functionality. Unlike common spectrometers, which are based on Fraunhofer diffraction with a regular periodic line grating, the new linear gradient grating Fresnel spectrometer chip can be miniaturized to a much smaller form factor into the Fresnel regime, exceeding the limit of conventional spectrometers. This mathematical calculation shows that building a tiny motionless multi-pixel microspectrometer chip smaller than 1 cubic millimeter of optical path volume is possible. The new Fresnel spectrometer chip is proportional to the energy scale (hc/lambda), while conventional spectrometers are proportional to the wavelength scale (lambda). We report the theoretical optical working principle and the new data collection algorithm of the Fresnel spectrometer for building a compact integrated optical chip.
Mathematical modeling of olive mill waste composting process.
Vasiliadou, Ioanna A; Muktadirul Bari Chowdhury, Abu Khayer Md; Akratos, Christos S; Tekerlekopoulou, Athanasia G; Pavlou, Stavros; Vayenas, Dimitrios V
2015-09-01
The present study aimed at developing an integrated mathematical model for the composting process of olive mill waste. The multi-component model was developed to simulate the composting of three-phase olive mill solid waste with olive leaves and different materials as bulking agents. The modeling system included heat transfer, organic substrate degradation, oxygen consumption, carbon dioxide production, water content change, and biological processes. First-order kinetics were used to describe the hydrolysis of insoluble organic matter, followed by formation of biomass. Microbial biomass growth was modeled with a double-substrate limitation by hydrolyzed available organic substrate and oxygen using Monod kinetics. The inhibitory factors of temperature and moisture content were included in the system. The production and consumption of nitrogen and phosphorous were also included in the model. In order to evaluate the kinetic parameters, and to validate the model, six pilot-scale composting experiments in controlled laboratory conditions were used. Low values of hydrolysis rates were observed (0.002841/d) coinciding with the high cellulose and lignin content of the composting materials used. Model simulations were in good agreement with the experimental results. Sensitivity analysis was performed and the modeling efficiency was determined to further evaluate the model predictions. Results revealed that oxygen simulations were more sensitive on the input parameters of the model compared to those of water, temperature and insoluble organic matter. Finally, the Nash and Sutcliff index (E), showed that the experimental data of insoluble organic matter (E>0.909) and temperature (E>0.678) were better simulated than those of water. Copyright © 2015 Elsevier Ltd. All rights reserved.
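The double-substrate Monod limitation with temperature and moisture inhibition described above can be summarised in the following generic growth-rate form (symbols are generic, not necessarily the paper's notation).

```latex
% mu_max: maximum specific growth rate; S: hydrolysed available substrate;
% O2: oxygen; K_S, K_O2: half-saturation constants; f(T), f(W): temperature
% and moisture inhibition factors in [0, 1].
\[
\mu \;=\; \mu_{\max}\;
\frac{S}{K_S + S}\;
\frac{O_2}{K_{O_2} + O_2}\;
f(T)\, f(W),
\]
```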
Mannan, Ahmad A.; Toya, Yoshihiro; Shimizu, Kazuyuki; McFadden, Johnjoe; Kierzek, Andrzej M.; Rocco, Andrea
2015-01-01
An understanding of the dynamics of the metabolic profile of a bacterial cell is sought from a dynamical systems analysis of kinetic models. This modelling formalism relies on a deterministic mathematical description of enzyme kinetics and their metabolite regulation. However, it is severely impeded by the lack of available kinetic information, limiting the size of the system that can be modelled. Furthermore, the subsystem of the metabolic network whose dynamics can be modelled is faced with three problems: how to parameterize the model with mostly incomplete steady state data, how to close what is now an inherently open system, and how to account for the impact on growth. In this study we address these challenges of kinetic modelling by capitalizing on multi-‘omics’ steady state data and a genome-scale metabolic network model. We use these to generate parameters that integrate knowledge embedded in the genome-scale metabolic network model, into the most comprehensive kinetic model of the central carbon metabolism of E. coli realized to date. As an application, we performed a dynamical systems analysis of the resulting enriched model. This revealed bistability of the central carbon metabolism and thus its potential to express two distinct metabolic states. Furthermore, since our model-informing technique ensures both stable states are constrained by the same thermodynamically feasible steady state growth rate, the ensuing bistability represents a temporal coexistence of the two states, and by extension, reveals the emergence of a phenotypically heterogeneous population. PMID:26469081
Plank, G; Prassl, AJ; Augustin, C
2014-01-01
Despite the evident multiphysics nature of the heart – it is an electrically controlled mechanical pump – most modeling studies have considered electrophysiology and mechanics in isolation. In no small part, this is due to the formidable modeling challenges involved in building strongly coupled, anatomically accurate and biophysically detailed multi-scale multi-physics models of cardiac electro-mechanics. Among the main challenges are the selection of model components and their adjustment to achieve integration into a consistent organ-scale model; dealing with technical difficulties such as the exchange of data between the electrophysiological and mechanical models, particularly when using different spatio-temporal grids for discretization; and, finally, the implementation of advanced numerical techniques to deal with the substantial computational burden. In this study we report on progress made in developing a novel modeling framework suited to tackle these challenges. PMID:24043050
Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition
Ong, Frank; Lustig, Michael
2016-01-01
We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such decomposition is well motivated in practice as data matrices often exhibit local correlations in multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of the dynamic contrast enhanced magnetic resonance imaging and collaborative filtering exploiting age information. PMID:28450978
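To convey the structure of the decomposition, the sketch below greedily extracts a rank-1 approximation within blocks of increasing size (small tiles first, then the full matrix) and subtracts each component from the residual. The actual method solves a joint convex program with block-wise nuclear norms; this greedy version is only an illustration of the multi-scale block structure.

```python
# Greedy illustration of a multi-scale block-wise low rank decomposition.
import numpy as np

def blockwise_lowrank(X, block):
    """Rank-1 approximation of each (block x block) tile of X."""
    n = X.shape[0]
    out = np.zeros_like(X)
    for i in range(0, n, block):
        for j in range(0, n, block):
            tile = X[i:i + block, j:j + block]
            U, s, Vt = np.linalg.svd(tile, full_matrices=False)
            out[i:i + block, j:j + block] = s[0] * np.outer(U[:, 0], Vt[0])
    return out

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 64))
residual, components = X.copy(), {}
for scale in (4, 16, 64):                 # coarse-to-fine block sizes
    comp = blockwise_lowrank(residual, scale)
    components[scale] = comp
    residual = residual - comp
print({k: round(float(np.linalg.norm(v)), 2) for k, v in components.items()},
      "remaining residual norm:", round(float(np.linalg.norm(residual)), 2))
```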
Djuris, Jelena; Djuric, Zorica
2017-11-30
Mathematical models can be used as an integral part of the quality by design (QbD) concept throughout the product lifecycle for variety of purposes, including appointment of the design space and control strategy, continual improvement and risk assessment. Examples of different mathematical modeling techniques (mechanistic, empirical and hybrid) in the pharmaceutical development and process monitoring or control are provided in the presented review. In the QbD context, mathematical models are predominantly used to support design space and/or control strategies. Considering their impact to the final product quality, models can be divided into the following categories: high, medium and low impact models. Although there are regulatory guidelines on the topic of modeling applications, review of QbD-based submission containing modeling elements revealed concerns regarding the scale-dependency of design spaces and verification of models predictions at commercial scale of manufacturing, especially regarding real-time release (RTR) models. Authors provide critical overview on the good modeling practices and introduce concepts of multiple-unit, adaptive and dynamic design space, multivariate specifications and methods for process uncertainty analysis. RTR specification with mathematical model and different approaches to multivariate statistical process control supporting process analytical technologies are also presented. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lin, Shian-Jiann; Harris, Lucas; Chen, Jan-Huey; Zhao, Ming
2014-05-01
A multi-scale High-Resolution Atmosphere Model (HiRAM) is being developed at NOAA/Geophysical Fluid Dynamics Laboratory. The model's dynamical framework is the non-hydrostatic extension of the vertically Lagrangian finite-volume dynamical core (Lin 2004, Monthly Wea. Rev.) constructed on a stretchable (via Schmidt transformation) cubed-sphere grid. Physical parametrizations originally designed for IPCC-type climate predictions are in the process of being modified and made more "scale-aware", in an effort to make the model suitable for multi-scale weather-climate applications, with horizontal resolution ranging from 1 km (near the target high-resolution region) to as coarse as 400 km (near the antipodal point). One of the main goals of this development is to enable simulation of high-impact weather phenomena (such as tornadoes, thunderstorms, category-5 hurricanes) within an IPCC-class climate modeling system previously thought impossible. We will present preliminary results, covering a very wide spectrum of temporal-spatial scales, ranging from simulation of tornado genesis (hours), Madden-Julian Oscillations (intra-seasonal), tropical cyclones (seasonal), to Quasi-Biennial Oscillations (intra-decadal), using the same global multi-scale modeling system.
Blanck, Oliver; Wang, Lei; Baus, Wolfgang; Grimm, Jimm; Lacornerie, Thomas; Nilsson, Joakim; Luchkovskyi, Sergii; Cano, Isabel Palazon; Shou, Zhenyu; Ayadi, Myriam; Treuer, Harald; Viard, Romain; Siebert, Frank-Andre; Chan, Mark K H; Hildebrandt, Guido; Dunst, Jürgen; Imhoff, Detlef; Wurster, Stefan; Wolff, Robert; Romanelli, Pantaleo; Lartigau, Eric; Semrau, Robert; Soltys, Scott G; Schweikard, Achim
2016-05-08
Stereotactic radiosurgery (SRS) is the accurate, conformal delivery of high-dose radiation to well-defined targets while minimizing normal structure doses via steep dose gradients. While inverse treatment planning (ITP) with computerized optimization algorithms is routine, many aspects of the planning process remain user-dependent. We performed an international, multi-institutional benchmark trial to study planning variability and to analyze preferable ITP practice for spinal robotic radiosurgery. 10 SRS treatment plans were generated for a complex-shaped spinal metastasis with 21 Gy in 3 fractions and tight constraints for spinal cord (V14Gy < 2 cc, V18Gy < 0.1 cc) and target (coverage > 95%). The resulting plans were rated on a scale from 1 to 4 (excellent-poor) in five categories (constraint compliance, optimization goals, low-dose regions, ITP complexity, and clinical acceptability) by a blinded review panel. Additionally, the plans were mathematically rated based on plan indices (critical structure and target doses, conformity, monitor units, normal tissue complication probability, and treatment time) and compared to the human rankings. The treatment plans and the reviewers' rankings varied substantially among the participating centers. The average mean overall rank was 2.4 (1.2-4.0) and 8/10 plans were rated excellent in at least one category by at least one reviewer. The mathematical rankings agreed with the mean overall human rankings in 9/10 cases, pointing toward the possibility for sole mathematical plan quality comparison. The final rankings revealed that a plan with a well-balanced trade-off among all planning objectives was preferred for treatment by most participants, reviewers, and the mathematical ranking system. Furthermore, this plan was generated with simple planning techniques. Our multi-institutional planning study found wide variability in ITP approaches for spinal robotic radiosurgery. The participants', reviewers', and mathematical match on preferable treatment plans and ITP techniques indicate that agreement on treatment planning and plan quality can be reached for spinal robotic radiosurgery.
NASA Astrophysics Data System (ADS)
Visayataksin, Noppharat; Sooklamai, Manon
2018-01-01
The bogie is the part that connects and transfers all the load from the vehicle body onto the railway track; the interaction between wheels and rails is the critical point for derailment of rail vehicles. However, observing or experimenting with real bogies on rail vehicles is impossible due to operational rules and safety concerns. Therefore, this research aimed to develop a vibration analysis set for a four-wheel railway bogie by constructing a four-wheel bogie at a scale of 1:4.5. The bogie structures, including wheels and axles, were made from an aluminium alloy and equipped with springs and dampers. The bogie was driven by an electric motor using 4 round wheels instead of 2 straight rails, with linear velocity between 0 and 11.22 m/s. The data collected from the vibration analysis set were compared to the mathematical simulation model to investigate the vibration behavior of the bogie, especially the hunting motion. The results showed that the vibration behavior of the scaled four-wheel railway bogie agreed closely with the mathematical simulation model in terms of displacement and hunting frequency. The critical speed of the wheelset was found to be 13 m/s by executing the mathematical simulation model.
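The hunting motion studied above has a classical kinematic description (Klingel's formula) relating hunting wavelength and frequency to wheel radius, contact spacing, and conicity. The sketch below uses that textbook relation with assumed parameter values; it is not the authors' simulation model, and the geometry of the 1:4.5 test rig is not reproduced here.

```python
import math

# Minimal sketch of the Klingel kinematic estimate for wheelset hunting,
# used only to illustrate how hunting frequency scales with speed.
# Parameter values are illustrative assumptions.

def klingel_hunting(v, r0, e, conicity):
    """Return (wavelength [m], frequency [Hz]) of kinematic hunting."""
    wavelength = 2.0 * math.pi * math.sqrt(r0 * e / conicity)
    return wavelength, v / wavelength

v = 11.22          # m/s, upper test speed reported in the abstract
r0 = 0.10          # m, assumed scaled wheel rolling radius
e = 0.167          # m, assumed half distance between contact points
conicity = 0.05    # assumed effective conicity

L, f = klingel_hunting(v, r0, e, conicity)
print(f"hunting wavelength ~ {L:.2f} m, frequency ~ {f:.2f} Hz")
```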
NASA Astrophysics Data System (ADS)
Lira, Matthew
This dissertation explores the Knowledge in Pieces (KiP) theory to account for how students learn to coordinate knowledge of mathematical and physical models in biology education. The KiP approach characterizes student knowledge as a fragmented collection of knowledge elements as opposed to stable and theory-like knowledge. This dissertation sought to use this theoretical lens to account for how students understand and learn with mathematical models and representations, such as equations. Cellular physiology provides a quantified discipline that leverages concepts from mathematics, physics, and chemistry to understand cellular functioning. Therefore, this discipline provides an exemplary context for assessing how biology students think and learn with mathematical models. In particular, the resting membrane potential provides an exemplary concept well defined by models of dynamic equilibrium borrowed from physics and chemistry. In brief, membrane potentials, or voltages, "rest" when the electrical and chemical driving forces for permeable ionic species are equal in magnitude but opposite in direction. To assess students' understandings of this concept, this dissertation employed three studies: the first study employed the cognitive clinical interview to assess student thinking in the absence and presence of equations. The second study employed an intervention to assess student learning and the affordances of an innovative assessment. The third study employed a human-computer-interaction paradigm to assess how students learn with a novel multi-representational technology. Study 1 revealed that students saw only one influence--the chemical gradient--and that students coordinated knowledge of only this gradient with the related equations. Study 2 revealed that students benefited from learning with the multi-representational technology and that the assessment detected performance gains across both calculation and explanation tasks. Last, Study 3 revealed how students shift from recognizing one influence to recognizing both the chemical and the electrical gradients as responsible for a cell's membrane potential reaching dynamic equilibrium. Together, the studies illustrate that to coordinate knowledge, students need opportunities to reflect upon relations between representations of mathematical and physical models as well as distinguish between physical quantities such as molarities for ions and transmembrane voltages.
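The resting-potential concept at the center of these studies is formalized by the Goldman-Hodgkin-Katz voltage equation, which balances the chemical and electrical gradients across the membrane. The sketch below evaluates that standard equation with textbook-style permeabilities and concentrations; the values are illustrative assumptions, not data from the dissertation.

```python
import math

# Illustrative sketch of the Goldman-Hodgkin-Katz voltage equation, which
# captures the "dynamic equilibrium" of chemical and electrical gradients
# discussed above. Permeabilities and concentrations are textbook-style
# assumptions.

R, T, F = 8.314, 310.0, 96485.0  # J/(mol*K), K, C/mol

def ghk_voltage(P, conc_out, conc_in):
    """Resting potential (volts) given relative permeabilities for K+, Na+, Cl-."""
    num = (P["K"] * conc_out["K"] + P["Na"] * conc_out["Na"]
           + P["Cl"] * conc_in["Cl"])    # Cl- terms swap sides (anion)
    den = (P["K"] * conc_in["K"] + P["Na"] * conc_in["Na"]
           + P["Cl"] * conc_out["Cl"])
    return (R * T / F) * math.log(num / den)

P = {"K": 1.0, "Na": 0.04, "Cl": 0.45}            # relative permeabilities
conc_out = {"K": 5.0, "Na": 145.0, "Cl": 110.0}   # mM, extracellular
conc_in = {"K": 140.0, "Na": 10.0, "Cl": 10.0}    # mM, intracellular

print(f"resting potential ~ {1000 * ghk_voltage(P, conc_out, conc_in):.1f} mV")
```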
Computational fluid dynamics modelling in cardiovascular medicine
Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P
2016-01-01
This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems and is increasingly being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards ‘digital patient’ or ‘virtual physiological human’ representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges. PMID:26512019
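Wall shear stress, mentioned above as a metric that cannot be measured directly, has a simple closed form only under idealized Poiseuille-flow assumptions; full CFD is needed precisely because real arterial geometries violate them. The sketch below gives that back-of-the-envelope estimate with assumed, non-patient values, purely to make the quantity concrete.

```python
import math

# Back-of-the-envelope sketch of wall shear stress for steady Poiseuille flow
# in a straight cylindrical vessel: tau_w = 4*mu*Q / (pi*R^3).
# The numbers below are illustrative assumptions, not patient data.

def poiseuille_wss(mu, q, r):
    """Wall shear stress [Pa] for viscosity mu [Pa*s], flow q [m^3/s], radius r [m]."""
    return 4.0 * mu * q / (math.pi * r ** 3)

mu = 3.5e-3          # Pa*s, typical blood viscosity assumption
q = 60.0e-6 / 60.0   # 60 mL/min assumed coronary flow, converted to m^3/s
r = 1.5e-3           # 1.5 mm assumed vessel radius

print(f"wall shear stress ~ {poiseuille_wss(mu, q, r):.2f} Pa")
```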
Pattern formation in diffusive excitable systems under magnetic flow effects
NASA Astrophysics Data System (ADS)
Mvogo, Alain; Takembo, Clovis N.; Ekobena Fouda, H. P.; Kofané, Timoléon C.
2017-07-01
We study the spatiotemporal formation of patterns in a diffusive FitzHugh-Nagumo network where the effect of electromagnetic induction has been introduced in the standard mathematical model by using magnetic flux, and the modulation of magnetic flux on membrane potential is realized by using memristor coupling. We use the multi-scale expansion to show that the system equations can be reduced to a single differential-difference nonlinear equation. The linear stability analysis is performed and discussed with emphasis on the impact of magnetic flux. It is observed that the effect of memristor coupling importantly modifies the features of modulational instability. Our analytical results are supported by numerical experiments, which reveal that the improved model can lead to nonlinear quasi-periodic spatiotemporal patterns with some features of synchronization. We also observe the generation of pulses and rhythmic behaviors, such as breathing or swimming patterns, which are important in brain research.
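To make the improved model concrete, the sketch below integrates a single FitzHugh-Nagumo unit augmented with a magnetic-flux variable and memristive feedback, in the spirit of the formulation described above. The exact functional form, coupling terms, and all parameter values are assumptions for illustration; the authors' network equations may differ.

```python
import numpy as np

# Single-node sketch of a FitzHugh-Nagumo unit with a magnetic-flux variable
# and memristive feedback (explicit Euler integration). Functional form and
# parameters are illustrative assumptions, not the paper's network model.

def memristive_fhn(T=500.0, dt=0.01, I_ext=0.5,
                   a=0.7, b=0.8, eps=0.08,
                   k0=0.5, k1=0.9, k2=0.5, alpha=0.1, beta=0.02):
    n = int(T / dt)
    v, w, phi = -1.0, 0.0, 0.0
    trace = np.empty(n)
    for i in range(n):
        rho = alpha + 3.0 * beta * phi ** 2          # memductance rho(phi)
        dv = v - v ** 3 / 3.0 - w + I_ext - k0 * rho * v
        dw = eps * (v + a - b * w)
        dphi = k1 * v - k2 * phi                     # flux driven by membrane potential
        v, w, phi = v + dt * dv, w + dt * dw, phi + dt * dphi
        trace[i] = v
    return trace

v_trace = memristive_fhn()
print("membrane variable range:", round(v_trace.min(), 2), "to", round(v_trace.max(), 2))
```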
3D braid scaffolds for regeneration of articular cartilage.
Ahn, Hyunchul; Kim, Kyoung Ju; Park, Sook Young; Huh, Jeong Eun; Kim, Hyun Jeong; Yu, Woong-Ryeol
2014-06-01
Regenerating articular cartilage in vivo from cultured chondrocytes requires that the cells be cultured and implanted within a biocompatible, biodegradable scaffold. Such scaffolds must be mechanically stable; otherwise chondrocytes would not be supported and patients would experience severe pain. Here we report a new 3D braid scaffold that matches the anisotropic (gradient) mechanical properties of natural articular cartilage and is permissive to cell cultivation. To design an optimal structure, the scaffold unit cell was mathematically modeled and imported into finite element analysis. Based on this analysis, a 3D braid structure with gradient axial yarn distribution was designed and manufactured using a custom-built braiding machine. The mechanical properties of the 3D braid scaffold were evaluated and compared with simulated results, demonstrating that a multi-scale approach consisting of unit cell modeling and continuum analysis facilitates design of scaffolds that meet the requirements for mechanical compatibility with tissues. Copyright © 2014 Elsevier Ltd. All rights reserved.
Multi-Scale Validation of a Nanodiamond Drug Delivery System and Multi-Scale Engineering Education
ERIC Educational Resources Information Center
Schwalbe, Michelle Kristin
2010-01-01
This dissertation has two primary concerns: (i) evaluating the uncertainty and prediction capabilities of a nanodiamond drug delivery model using Bayesian calibration and bias correction, and (ii) determining conceptual difficulties of multi-scale analysis from an engineering education perspective. A Bayesian uncertainty quantification scheme…
Determining the Supply of Material Resources for High-Rise Construction: Scenario Approach
NASA Astrophysics Data System (ADS)
Minnullina, Anna; Vasiliev, Vladimir
2018-03-01
This article presents a multi-criteria approach to determining the supply of material resources for high-rise construction under certain and uncertain conditions, which enables integrating a number of existing models into a fairly compact generalised economic and mathematical model developed for two extreme scenarios.
Multifunctional Collaborative Modeling and Analysis Methods in Engineering Science
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.; Broduer, Steve (Technical Monitor)
2001-01-01
Engineers are challenged to produce better designs in less time and for less cost. Hence, to investigate novel and revolutionary design concepts, accurate, high-fidelity results must be assimilated rapidly into the design, analysis, and simulation process. This assimilation should consider diverse mathematical modeling and multi-discipline interactions necessitated by concepts exploiting advanced materials and structures. Integrated high-fidelity methods with diverse engineering applications provide the enabling technologies to assimilate these high-fidelity, multi-disciplinary results rapidly at an early stage in the design. These integrated methods must be multifunctional, collaborative, and applicable to the general field of engineering science and mechanics. Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as most robust. The multiple-method approach is advantageous when interfacing diverse disciplines in which each of the method's strengths are utilized. The multifunctional methodology presented provides an effective mechanism by which domains with diverse idealizations are interfaced. This capability rapidly provides the high-fidelity results needed in the early design phase. Moreover, the capability is applicable to the general field of engineering science and mechanics. Hence, it provides a collaborative capability that accounts for interactions among engineering analysis methods.
Multi-level multi-task learning for modeling cross-scale interactions in nested geospatial data
Yuan, Shuai; Zhou, Jiayu; Tan, Pang-Ning; Fergus, Emi; Wagner, Tyler; Sorrano, Patricia
2017-01-01
Predictive modeling of nested geospatial data is a challenging problem as the models must take into account potential interactions among variables defined at different spatial scales. These cross-scale interactions, as they are commonly known, are particularly important to understand relationships among ecological properties at macroscales. In this paper, we present a novel, multi-level multi-task learning framework for modeling nested geospatial data in the lake ecology domain. Specifically, we consider region-specific models to predict lake water quality from multi-scaled factors. Our framework enables distinct models to be developed for each region using both its local and regional information. The framework also allows information to be shared among the region-specific models through their common set of latent factors. Such information sharing helps to create more robust models especially for regions with limited or no training data. In addition, the framework can automatically determine cross-scale interactions between the regional variables and the local variables that are nested within them. Our experimental results show that the proposed framework outperforms all the baseline methods in at least 64% of the regions for 3 out of 4 lake water quality datasets evaluated in this study. Furthermore, the latent factors can be clustered to obtain a new set of regions that is more aligned with the response variables than the original regions that were defined a priori from the ecology domain.
Scalable non-negative matrix tri-factorization.
Čopar, Andrej; Žitnik, Marinka; Zupan, Blaž
2017-01-01
Matrix factorization is a well established pattern discovery tool that has seen numerous applications in biomedical data analytics, such as gene expression co-clustering, patient stratification, and gene-disease association mining. Matrix factorization learns a latent data model that takes a data matrix and transforms it into a latent feature space enabling generalization, noise removal and feature discovery. However, factorization algorithms are numerically intensive, and hence there is a pressing challenge to scale current algorithms to work with large datasets. Our focus in this paper is matrix tri-factorization, a popular method that is not limited by the assumption of standard matrix factorization about data residing in one latent space. Matrix tri-factorization solves this by inferring a separate latent space for each dimension in a data matrix, and a latent mapping of interactions between the inferred spaces, making the approach particularly suitable for biomedical data mining. We developed a block-wise approach for latent factor learning in matrix tri-factorization. The approach partitions a data matrix into disjoint submatrices that are treated independently and fed into a parallel factorization system. An appealing property of the proposed approach is its mathematical equivalence with serial matrix tri-factorization. In a study on large biomedical datasets we show that our approach scales well on multi-processor and multi-GPU architectures. On a four-GPU system we demonstrate that our approach can be more than 100 times faster than its single-processor counterpart. A general approach for scaling non-negative matrix tri-factorization is proposed. The approach is especially useful for parallel matrix factorization implemented in a multi-GPU environment. We expect the new approach will be useful in emerging procedures for latent factor analysis, notably for data integration, where many large data matrices need to be collectively factorized.
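The factor model being scaled here is X ≈ U S Vᵀ with non-negative factors. The sketch below shows a minimal serial version using standard multiplicative updates for the Frobenius-norm objective; it illustrates the model only and is not the paper's block-wise, multi-GPU implementation (which is mathematically equivalent to such a serial factorization).

```python
import numpy as np

# Minimal serial sketch of non-negative matrix tri-factorization, X ~ U S V^T,
# with standard multiplicative updates. Illustrative only; not the paper's
# block-wise multi-GPU scheme.

def nmtf(X, k1, k2, n_iter=200, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U = rng.random((n, k1))
    S = rng.random((k1, k2))
    V = rng.random((m, k2))
    for _ in range(n_iter):
        U *= (X @ V @ S.T) / (U @ S @ V.T @ V @ S.T + eps)
        V *= (X.T @ U @ S) / (V @ S.T @ U.T @ U @ S + eps)
        S *= (U.T @ X @ V) / (U.T @ U @ S @ V.T @ V + eps)
    return U, S, V

X = np.abs(np.random.default_rng(1).random((60, 40)))
U, S, V = nmtf(X, k1=5, k2=4)
err = np.linalg.norm(X - U @ S @ V.T) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.3f}")
```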
Assessing Attitudes toward Mathematics across Teacher Education Contexts
ERIC Educational Resources Information Center
Jong, Cindy; Hodges, Thomas E.
2015-01-01
This article reports on the development of attitudes toward mathematics among pre-service elementary teachers (n = 146) in relation to their experiences as K-12 learners of mathematics and experiences within a teacher education program. Using a combination of the Rasch Rating Scale Model and traditional parametric analyses, results indicate that…
ERIC Educational Resources Information Center
Tarr, James E.; Ross, Daniel J.; McNaught, Melissa D.; Chavez, Oscar; Grouws, Douglas A.; Reys, Robert E.; Sears, Ruthmae; Taylan, R. Didem
2010-01-01
The Comparing Options in Secondary Mathematics: Investigating Curriculum (COSMIC) project is a longitudinal study of student learning from two types of mathematics curricula: integrated and subject-specific. Previous large-scale research studies such as the National Assessment of Educational Progress (NAEP) indicate that numerous variables are…
ERIC Educational Resources Information Center
Mourning, Erica
2014-01-01
Economically disadvantaged students are being outperformed by their non-disadvantaged peers in middle school mathematics. This problem is evidenced by 2013 data from a national middle school mathematics assessment which revealed an achievement gap of 27 scale score points. Closing this gap is important to schools with high populations of…
NASA Astrophysics Data System (ADS)
Liu, Q.
2011-09-01
At first, research advances in radiation transfer modeling for multi-scale remote sensing data are presented: after a general overview of remote sensing radiation transfer modeling, several recent research advances are described, including a leaf spectrum model (dPROSPECT), vegetation canopy BRDF models, directional thermal infrared emission models (TRGM, SLEC), rugged mountain area radiation models, and kernel-driven models. Then, new methodologies for land surface parameter inversion based on multi-source remote sensing data are proposed, with land surface albedo, leaf area index, temperature/emissivity, and surface net radiation taken as examples. A new synthetic land surface parameter quantitative remote sensing product generation system is designed and the software system prototype will be demonstrated. Finally, multi-scale field experiment campaigns, such as the field campaigns in Gansu and Beijing, China, are introduced briefly. Ground-based, tower-based, and airborne multi-angular measurement systems have been built to measure the directional reflectance, emission, and scattering characteristics from visible, near-infrared, thermal infrared, and microwave bands for model validation and calibration. A remote sensing pixel-scale "true value" measurement strategy has been designed to obtain ground "true values" of LST, albedo, LAI, soil moisture, ET, etc. at the 1-km² scale for remote sensing product validation.
Zhang, Zili; Gao, Chao; Lu, Yuxiao; Liu, Yuxin; Liang, Mingxin
2016-01-01
The Bi-objective Traveling Salesman Problem (bTSP) is an important problem in operations research, and its solutions can be widely applied in the real world. Many Multi-objective Ant Colony Optimization (MOACO) algorithms have been proposed to solve bTSPs. However, most MOACOs suffer from premature convergence. This paper proposes an optimization strategy for MOACOs that initializes the pheromone matrix with prior knowledge from a Physarum-inspired Mathematical Model (PMM). PMM can find the shortest route between two nodes based on a positive feedback mechanism. The optimized algorithms, named iPM-MOACOs, enhance the pheromone on short paths and promote the search ability of the ants. A series of experiments is conducted, and the results show that the proposed strategy can achieve a better compromise solution than the original MOACOs for solving bTSPs. PMID:26751562
Zhang, Zili; Gao, Chao; Lu, Yuxiao; Liu, Yuxin; Liang, Mingxin
2016-01-01
The Bi-objective Traveling Salesman Problem (bTSP) is an important problem in operations research, and its solutions can be widely applied in the real world. Many Multi-objective Ant Colony Optimization (MOACO) algorithms have been proposed to solve bTSPs. However, most MOACOs suffer from premature convergence. This paper proposes an optimization strategy for MOACOs that initializes the pheromone matrix with prior knowledge from a Physarum-inspired Mathematical Model (PMM). PMM can find the shortest route between two nodes based on a positive feedback mechanism. The optimized algorithms, named iPM-MOACOs, enhance the pheromone on short paths and promote the search ability of the ants. A series of experiments is conducted, and the results show that the proposed strategy can achieve a better compromise solution than the original MOACOs for solving bTSPs.
Supercomputers ready for use as discovery machines for neuroscience.
Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus
2012-01-01
NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience.
Supercomputers Ready for Use as Discovery Machines for Neuroscience
Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus
2012-01-01
NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience. PMID:23129998
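A memory-consumption model of the kind referenced above predicts per-process memory as a function of network size and process count, which is what guides how large a network fits on a given machine. The sketch below shows the general shape of such a model; the cost coefficients are invented placeholders, not the published model's fitted constants.

```python
# Toy sketch of a per-process memory model for a distributed spiking-network
# simulation. Coefficients are invented placeholders, not NEST's fitted values.

def memory_per_process_gb(n_neurons, synapses_per_neuron, n_procs,
                          b_base=2.0e9, b_neuron=1.2e3,
                          b_synapse=24.0, b_registry=8.0):
    """Estimated memory (GB) on one MPI process."""
    local_neurons = n_neurons / n_procs
    local_synapses = n_neurons * synapses_per_neuron / n_procs
    registry = n_neurons * b_registry          # bookkeeping that scales with total N
    bytes_total = (b_base + local_neurons * b_neuron
                   + local_synapses * b_synapse + registry)
    return bytes_total / 1e9

for p in (1024, 8192, 65536):
    print(p, "procs ->", round(memory_per_process_gb(1e8, 1e4, p), 1), "GB per process")
```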
Advancing Ecological Models to Compare Scale in Multi-Level Educational Change
ERIC Educational Resources Information Center
Woo, David James
2016-01-01
Education systems as units of analysis have been metaphorically likened to ecologies to model change. However, ecological models to date have been ineffective in modelling educational change that is multi-scale and occurs across multiple levels of an education system. Thus, this paper advances two innovative, ecological frameworks that improve on…
The Models-3 Community Multi-scale Air Quality (CMAQ) model, first released by the USEPA in 1999 (Byun and Ching. 1999), continues to be developed and evaluated. The principal components of the CMAQ system include a comprehensive emission processor known as the Sparse Matrix O...
On Multifunctional Collaborative Methods in Engineering Science
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.
2001-01-01
Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as most robust. The multiple method approach is advantageous when interfacing diverse disciplines in which each of the method's strengths are utilized.
A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.
Budinich, Marko; Bourdon, Jérémie; Larhlimi, Abdelhalim; Eveillard, Damien
2017-01-01
Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. Following models will provide insights about behaviors (including diversity) that take place at the ecosystem scale.
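One way to read MO-FBA is as a family of flux balance problems whose objectives (here, the growth of each community member) are traded off against one another. The toy sketch below scalarizes two growth objectives with a weighted sum and solves the resulting linear programs; the stoichiometry, bounds, and objectives are invented for demonstration and bear no relation to the hot spring mat ecosystem analysed in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy weighted-sum illustration of multi-objective FBA for a two-member
# community sharing one nutrient. All data are invented for demonstration.

S = np.array([  # rows: metabolites A, B1, B2; cols: reactions v1..v5
    [ 1, -1, -1,  0,  0],   # shared nutrient A (uptake, split to both members)
    [ 0,  1,  0, -1,  0],   # biomass precursor of organism 1
    [ 0,  0,  1,  0, -1],   # biomass precursor of organism 2
], dtype=float)
bounds = [(0, 10), (0, 100), (0, 100), (0, 100), (0, 100)]

def weighted_fba(w1, w2):
    c = np.zeros(5)
    c[3], c[4] = -w1, -w2          # linprog minimizes, so negate growth objectives
    res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
    return res.x[3], res.x[4]      # growth fluxes of the two members

for w in (0.1, 0.5, 0.9):
    g1, g2 = weighted_fba(w, 1.0 - w)
    print(f"w1={w:.1f}: growth1={g1:.1f}, growth2={g2:.1f}")
```

Sweeping the weights traces out the kind of trade-off between member growth rates that MO-FBA characterizes at the community level.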
Herrgård, Markus; Sukumara, Sumesh; Campodonico, Miguel; Zhuang, Kai
2015-12-01
In recent years, bio-based chemicals have gained interest as a renewable alternative to petrochemicals. However, there is a significant need to assess the technological, biological, economic and environmental feasibility of bio-based chemicals, particularly during the early research phase. Recently, the Multi-scale framework for Sustainable Industrial Chemicals (MuSIC) was introduced to address this issue by integrating modelling approaches at different scales ranging from cellular to ecological scales. This framework can be further extended by incorporating modelling of the petrochemical value chain and the de novo prediction of metabolic pathways connecting existing host metabolism to desirable chemical products. This multi-scale, multi-disciplinary framework for quantitative assessment of bio-based chemicals will play a vital role in supporting engineering, strategy and policy decisions as we progress towards a sustainable chemical industry. © 2015 Authors; published by Portland Press Limited.
Modeling of Semiconductor Optical Amplifier Gain Characteristics for Amplification and Switching
NASA Astrophysics Data System (ADS)
Mahad, Farah Diana; Sahmah, Abu; Supa'at, M.; Idrus, Sevia Mahdaliza; Forsyth, David
2011-05-01
The Semiconductor Optical Amplifier (SOA) is presently commonly used as a booster or pre-amplifier in some communication networks. However, SOAs are also a strong candidate for use as multi-functional elements in future all-optical switching, regeneration, and wavelength conversion schemes. With this in mind, the purpose of this paper is to simulate the performance of the SOA for improved amplification and switching functions. The SOA is modeled and simulated using OptSim software. In order to verify the simulated results, a MATLAB mathematical model is also used to aid the design of the SOA. Using the model, the gain difference between the simulated and mathematical results in the unsaturated region is less than 1 dB. The mathematical analysis is in good agreement with the simulation result, with only a small offset due to inherent software limitations in matching the gain dynamics of the SOA.
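The gain characteristics discussed above are commonly summarized by the steady-state saturation relation G = G0·exp(−(G−1)·Pin/Psat). The sketch below solves that generic textbook relation by bisection with illustrative parameters; it is not the specific OptSim or MATLAB model used in the paper.

```python
import math

# Sketch of the standard steady-state SOA gain-saturation relation,
# G = G0 * exp(-(G - 1) * Pin / Psat), solved by bisection.
# Generic textbook relation with illustrative parameters only.

def saturated_gain(g0_db, p_in_mw, p_sat_mw, tol=1e-9):
    g0 = 10 ** (g0_db / 10.0)
    x = p_in_mw / p_sat_mw
    lo, hi = 1.0, g0                      # the solution lies between 1 and G0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid - g0 * math.exp(-(mid - 1.0) * x) > 0.0:
            hi = mid                      # residual positive: gain too high
        else:
            lo = mid
    return 10.0 * math.log10(0.5 * (lo + hi))

for p_in in (0.001, 0.01, 0.1, 1.0):      # input power in mW
    print(f"Pin = {p_in:g} mW -> gain = {saturated_gain(25.0, p_in, 10.0):.1f} dB")
```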
Models that predict standing crop of stream fish from habitat variables: 1950-85.
K.D. Fausch; C.L. Hawkes; M.G. Parsons
1988-01-01
We reviewed mathematical models that predict standing crop of stream fish (number or biomass per unit area or length of stream) from measurable habitat variables and classified them by the types of independent habitat variables found significant, by mathematical structure, and by model quality. Habitat variables were of three types and were measured on different scales...
NASA Astrophysics Data System (ADS)
Fasni, Nurli; Fatimah, Siti; Yulanda, Syerli
2017-05-01
This research has several aims: to determine whether the mathematical problem-solving ability of students taught mathematics using a Multiple Intelligences-based teaching model is higher than that of students taught using cooperative learning; to determine the improvement in mathematical problem-solving ability of students taught using the Multiple Intelligences-based teaching model; to determine the improvement in mathematical problem-solving ability of students taught using cooperative learning; and to determine students' attitudes toward the Multiple Intelligences-based teaching model. The method employed is a quasi-experiment controlled by pre-test and post-test. The population of this research is all grade VII students at SMP Negeri 14 Bandung in the even term of 2013/2014, from which two classes were taken as samples. One class was taught using the Multiple Intelligences-based teaching model and the other was taught using cooperative learning. The data were obtained from a mathematical problem-solving test, an attitude scale questionnaire, and observation. The results show that the mathematical problem-solving ability of students taught using the Multiple Intelligences-based teaching model is higher than that of students taught using cooperative learning, that the problem-solving ability of both groups is at an intermediate level, and that students showed a positive attitude toward learning mathematics using the Multiple Intelligences-based teaching model. For future research, the Multiple Intelligences-based teaching model can be tested on other subjects and other abilities.
Detection of crossover time scales in multifractal detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Ge, Erjia; Leung, Yee
2013-04-01
Fractal analysis is employed in this paper as a scale-based method for the identification of the scaling behavior of time series. Many spatial and temporal processes exhibiting complex multi(mono)-scaling behaviors are fractals. One of the important concepts in fractals is the crossover time scale(s) that separates distinct regimes having different fractal scaling behaviors. A common method is multifractal detrended fluctuation analysis (MF-DFA). The detection of crossover time scale(s) is, however, relatively subjective since it has been made without rigorous statistical procedures and has generally been determined by eye-balling or subjective observation. Crossover time scales so determined may be spurious and problematic, and may not reflect the genuine underlying scaling behavior of a time series. The purpose of this paper is to propose a statistical procedure to model complex fractal scaling behaviors and reliably identify the crossover time scales under MF-DFA. The scaling-identification regression model, grounded on a solid statistical foundation, is first proposed to describe multi-scaling behaviors of fractals. Through the regression analysis and statistical inference, we can (1) identify the crossover time scales that cannot be detected by eye-balling observation, (2) determine the number and locations of the genuine crossover time scales, (3) give confidence intervals for the crossover time scales, and (4) establish the statistically significant regression model depicting the underlying scaling behavior of a time series. To substantiate our argument, the regression model is applied to analyze the multi-scaling behaviors of avian-influenza outbreaks, water consumption, daily mean temperature, and rainfall of Hong Kong. Through the proposed model, we can have a deeper understanding of fractals in general and a statistical approach to identify multi-scaling behavior under MF-DFA in particular.
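The core idea of crossover identification can be illustrated with a two-segment regression on the log-log fluctuation function: scan candidate breakpoints and keep the one that minimizes the residual sum of squares. The sketch below does this on synthetic data with a known crossover; it is a simplified stand-in for the paper's full procedure (no confidence intervals or model selection here).

```python
import numpy as np

# Simplified sketch of crossover detection: two-segment linear regression of
# log F(s) on log s, choosing the breakpoint with minimal residual SSE.
# Synthetic data with an assumed crossover; not the paper's full method.

rng = np.random.default_rng(0)
s = np.logspace(1, 3.5, 40)                      # time scales
log_s, true_break = np.log10(s), 2.2             # crossover near s ~ 158
h = np.where(log_s < true_break, 0.9, 0.4)       # two scaling exponents
log_F = np.cumsum(h * np.diff(np.concatenate(([log_s[0]], log_s))))
log_F += rng.normal(0, 0.02, size=log_F.size)    # measurement noise

def two_segment_sse(k):
    sse = 0.0
    for seg in (slice(0, k), slice(k, len(log_s))):
        coef = np.polyfit(log_s[seg], log_F[seg], 1)
        sse += np.sum((log_F[seg] - np.polyval(coef, log_s[seg])) ** 2)
    return sse

candidates = range(5, len(log_s) - 5)            # keep both segments non-trivial
k_best = min(candidates, key=two_segment_sse)
print(f"estimated crossover scale ~ {s[k_best]:.0f} (true ~ {10**true_break:.0f})")
```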
Evaluation of Turkish and Mathematics Curricula According to Value-Based Evaluation Model
ERIC Educational Resources Information Center
Duman, Serap Nur; Akbas, Oktay
2017-01-01
This study evaluated secondary school seventh-grade Turkish and mathematics programs using the Context-Input-Process-Product Evaluation Model based on student, teacher, and inspector views. The convergent parallel mixed method design was used in the study. Student values were identified using the scales for socio-level identification, traditional…
ERIC Educational Resources Information Center
Clements, Douglas H.
2011-01-01
The author and her colleagues' TRIAD model (Sarama, Clements, Starkey, Klein, & Wakeley, 2008), including the "Building Blocks" curriculum, have significantly and substantially increased preschooler's mathematical competence, both in previous studies (Clements & Sarama, 2008, g = 1.07) and in their present, largest implementation…
Growth trajectories of mathematics achievement: Longitudinal tracking of student academic progress.
Mok, Magdalena M C; McInerney, Dennis M; Zhu, Jinxin; Or, Anthony
2015-06-01
A number of methods to investigate growth have been reported in the literature, including hierarchical linear modelling (HLM), latent growth modelling (LGM), and multidimensional scaling applied to longitudinal profile analysis (LPAMS). This study aimed at modelling the mathematics growth of students over a span of 6 years from Grade 3 to Grade 9. The sample comprised secondary longitudinal data collected in three waves from n = 866 Hong Kong students when they were in Grade 3, Grade 6, and Grade 9. Mathematics achievement was measured thrice on a vertical scale linked with anchor items. Linear and nonlinear latent growth models were used to assess students' growth. Gender differences were also examined. A nonlinear latent growth curve with a decelerated rate had a good fit to the data. Initial achievement and growth rate were negatively correlated. No gender difference was found. Mathematics growth from Grade 6 to Grade 9 was slower than that from Grade 3 to Grade 6. Students with lower initial achievement improved at a faster rate than those who started at a higher level. Gender did not affect growth rate. © 2014 The British Psychological Society.
Components of Mathematics Anxiety: Factor Modeling of the MARS30-Brief
Pletzer, Belinda; Wood, Guilherme; Scherndl, Thomas; Kerschbaum, Hubert H.; Nuerk, Hans-Christoph
2016-01-01
Mathematics anxiety involves feelings of tension, discomfort, high arousal, and physiological reactivity interfering with number manipulation and mathematical problem solving. Several factor analytic models indicate that mathematics anxiety is rather a multidimensional than unique construct. However, the factor structure of mathematics anxiety has not been fully clarified by now. This issue shall be addressed in the current study. The Mathematics Anxiety Rating Scale (MARS) is a reliable measure of mathematics anxiety (Richardson and Suinn, 1972), for which several reduced forms have been developed. Most recently, a shortened version of the MARS (MARS30-brief) with comparable reliability was published. Different studies suggest that mathematics anxiety involves up to seven different factors. Here we examined the factor structure of the MARS30-brief by means of confirmatory factor analysis. The best model fit was obtained by a six-factor model, dismembering the known two general factors “Mathematical Test Anxiety” (MTA) and “Numerical Anxiety” (NA) in three factors each. However, a more parsimonious 5-factor model with two sub-factors for MTA and three for NA fitted the data comparably well. Factors were differentially susceptible to sex differences and differences between majors. Measurement invariance for sex was established. PMID:26924996
Components of Mathematics Anxiety: Factor Modeling of the MARS30-Brief.
Pletzer, Belinda; Wood, Guilherme; Scherndl, Thomas; Kerschbaum, Hubert H; Nuerk, Hans-Christoph
2016-01-01
Mathematics anxiety involves feelings of tension, discomfort, high arousal, and physiological reactivity interfering with number manipulation and mathematical problem solving. Several factor analytic models indicate that mathematics anxiety is rather a multidimensional than unique construct. However, the factor structure of mathematics anxiety has not been fully clarified by now. This issue shall be addressed in the current study. The Mathematics Anxiety Rating Scale (MARS) is a reliable measure of mathematics anxiety (Richardson and Suinn, 1972), for which several reduced forms have been developed. Most recently, a shortened version of the MARS (MARS30-brief) with comparable reliability was published. Different studies suggest that mathematics anxiety involves up to seven different factors. Here we examined the factor structure of the MARS30-brief by means of confirmatory factor analysis. The best model fit was obtained by a six-factor model, dismembering the known two general factors "Mathematical Test Anxiety" (MTA) and "Numerical Anxiety" (NA) in three factors each. However, a more parsimonious 5-factor model with two sub-factors for MTA and three for NA fitted the data comparably well. Factors were differentially susceptible to sex differences and differences between majors. Measurement invariance for sex was established.
NASA Astrophysics Data System (ADS)
Çiğdem Özcan, Zeynep
2016-04-01
Studies highlight that using appropriate strategies during problem solving is important to improve problem-solving skills and draw attention to the fact that using these skills is an important part of students' self-regulated learning ability. Studies on this matter view the self-regulated learning ability as key to improving problem-solving skills. The aim of this study is to investigate the relationship between mathematical problem-solving skills and the three dimensions of self-regulated learning (motivation, metacognition, and behaviour), and whether this relationship is of a predictive nature. The sample of this study consists of 323 students from two public secondary schools in Istanbul. In this study, the mathematics homework behaviour scale was administered to measure students' homework behaviours. For metacognition measurements, the mathematics metacognition skills test for students was administered to measure offline mathematical metacognitive skills, and the metacognitive experience scale was used to measure the online mathematical metacognitive experience. The internal and external motivational scales used in the Programme for International Student Assessment (PISA) test were administered to measure motivation. A hierarchic regression analysis was conducted to determine the relationship between the dependent and independent variables in the study. Based on the findings, a model was formed in which 24% of the total variance in students' mathematical problem-solving skills is explained by the three sub-dimensions of the self-regulated learning model: internal motivation (13%), willingness to do homework (7%), and post-problem retrospective metacognitive experience (4%).
NASA Astrophysics Data System (ADS)
Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire
2017-04-01
Nowadays, there is growing interest in small-scale rainfall information, provided by weather radars, for use in urban water management and decision-making. In parallel, increasing interest is devoted to the development of fully distributed, grid-based models, following the increase in computational capability and the availability of the high-resolution GIS information needed to implement such models. However, the choice of an appropriate implementation scale that integrates catchment heterogeneity and the full rainfall variability measured by high-resolution radar technologies remains an open issue. This work proposes a two-step investigation of scale effects in urban hydrology and their consequences for modeling. In the first step, fractal tools are used to highlight the scale dependency observed within the distributed data used to describe catchment heterogeneity; both the structure of the sewer network and the distribution of impervious areas are analyzed. Then an intensive multi-scale modeling exercise is carried out to understand scaling effects on hydrological model performance. Investigations were conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model was implemented at 17 spatial resolutions ranging from 100 m to 5 m, and modeling investigations were performed using both rain gauge rainfall information and high-resolution X-band radar data in order to assess the sensitivity of the model to small-scale rainfall variability. Results from this work demonstrate the challenges that scale effects pose for urban hydrological modeling. Fractal analysis highlights the scale dependency observed within the distributed data used to implement hydrological models: patterns of geophysical data change when the observation pixel size changes. The multi-scale modeling investigation performed with the Multi-Hydro model at 17 spatial resolutions confirms the effect of scale on hydrological model performance. Results were analyzed at the three ranges of scales identified in the fractal analysis and confirmed in the modeling work. The sensitivity of the model to small-scale rainfall variability is discussed as well.
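The fractal diagnostic used above quantifies how catchment descriptors (sewer network, impervious areas) fill space across observation scales. The sketch below shows a basic box-counting estimate of fractal dimension for a binary raster; the synthetic raster is a random placeholder, not the study catchment, and the study's actual fractal analysis may use a different estimator.

```python
import numpy as np

# Sketch of a box-counting estimate of fractal dimension for a binary raster
# (e.g. an impervious-area map). The raster below is a random placeholder.

def box_count_dimension(grid, box_sizes):
    counts = []
    n = grid.shape[0]
    for b in box_sizes:
        m = n // b
        trimmed = grid[:m * b, :m * b]
        blocks = trimmed.reshape(m, b, m, b).any(axis=(1, 3))  # occupied boxes
        counts.append(blocks.sum())
    # D is the slope of log N(b) versus log (1/b)
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)),
                          np.log(np.array(counts)), 1)
    return slope

rng = np.random.default_rng(3)
raster = rng.random((256, 256)) < 0.3          # 30% "impervious" pixels
sizes = [2, 4, 8, 16, 32]
print(f"box-counting dimension ~ {box_count_dimension(raster, sizes):.2f}")
```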
NASA Experimental Program to Stimulate Competitive Research: South Carolina
NASA Technical Reports Server (NTRS)
Sutton, Michael A.
2004-01-01
The use of an appropriate relationship model is critical for reliable prediction of future urban growth. Identification of proper variables and mathematical functions and determination of the weights or coefficients are the key tasks in building such a model. Although the conventional logistic regression model is appropriate for handling land use problems, it appears insufficient to address the issue of interdependency among the predictor variables. This study used an alternative approach to simulating and modeling urban growth using artificial neural networks. It developed an operational neural network model trained using a robust backpropagation method. The model was applied in the Myrtle Beach region of South Carolina, and tested with both global datasets and areal datasets to examine the strength of both regional models and areal models. The results indicate that the neural network model not only has theoretical advantages over conventional mathematical models in representing complex urban systems, but is also practically superior to the logistic model in its capability to predict urban growth with better accuracy and less variation. The neural network model is particularly effective in successfully identifying urban patterns in rural areas, where the logistic model often falls short. It was also found from the area-based tests that there are significant intra-regional differences in urban growth, with different rules and rates. This suggests that the global modeling approach, or one model for the entire region, may not be adequate for simulation of urban growth at the regional scale. Future research should develop methods for identification and subdivision of these areas and use a set of area-based models to address the issues of multi-centered, intra-regionally differentiated urban growth.
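The advantage claimed for the neural network over logistic regression is its ability to capture interdependencies among predictors. The sketch below illustrates that general point on synthetic data with an interaction term; the variables, data-generating rule, and network architecture are invented and do not reproduce the study's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Illustrative comparison of a logistic model and a small backpropagation
# network on synthetic "urban growth" data with interacting predictors.
# Everything here (variables, rule, architecture) is an assumption.

rng = np.random.default_rng(0)
n = 4000
X = rng.random((n, 3))                                   # e.g. slope, road distance, density
logit = 3 * X[:, 1] * X[:, 2] - 2 * X[:, 0] - 0.5        # interaction drives growth
y = (rng.random(n) < 1 / (1 + np.exp(-4 * logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)

print("logistic accuracy:  ", round(lr.score(X_te, y_te), 3))
print("neural net accuracy:", round(nn.score(X_te, y_te), 3))
```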
A new instrument to measure pre-service primary teachers' attitudes to teaching mathematics
NASA Astrophysics Data System (ADS)
Nisbet, Steven
1991-06-01
This article outlines the development of an instrument to measure pre-service primary teachers' attitudes to teaching mathematics. A trial questionnaire was devised using the set of Fennema-Sherman scales on students' attitudes to the subject mathematics as a model. Analysis of the responses to the questionnaire by 155 student teachers was carried out to develop meaningful attitude scales and to refine the instrument. The end-product is a new instrument which can be used to monitor the attitudes of student teachers. The attitude scales identified in the analysis and built into the final form of the questionnaire are (i) anxiety, (ii) confidence and enjoyment, (iii) desire for recognition and (iv) pressure to conform.
NASA Astrophysics Data System (ADS)
Liu, Q.; Li, J.; Du, Y.; Wen, J.; Zhong, B.; Wang, K.
2011-12-01
As remote sensing data accumulate, generating highly accurate and consistent land surface parameter products from multi-source remote observations is a challenging and significant issue, and radiation transfer modeling and inversion methodologies are its theoretical bases. In this paper, recent research advances and unresolved issues are presented. First, after a general overview, recent research advances in multi-scale remote sensing radiation transfer modeling are presented, including leaf spectrum models, vegetation canopy BRDF models, directional thermal infrared emission models, rugged mountain area radiation models, and kernel-driven models. Then, new methodologies for land surface parameter inversion based on multi-source remote sensing data are proposed, taking land surface albedo, leaf area index, temperature/emissivity, and surface net radiation as examples. A new synthetic land surface parameter quantitative remote sensing product generation system is suggested and the software system prototype will be demonstrated. Finally, multi-scale field experiment campaigns, such as the field campaigns in Gansu and Beijing, China, are introduced briefly. Ground-based, tower-based, and airborne multi-angular measurement systems have been built to measure the directional reflectance, emission, and scattering characteristics from visible, near-infrared, thermal infrared, and microwave bands for model validation and calibration. A remote sensing pixel-scale "true value" measurement strategy has been designed to obtain ground "true values" of LST, albedo, LAI, soil moisture, ET, etc. at the 1-km² scale for remote sensing product validation.
Action detection by double hierarchical multi-structure space-time statistical matching model
NASA Astrophysics Data System (ADS)
Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang
2018-03-01
Aimed at the complex information in videos and low detection efficiency, an actions detection model based on neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) in temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to achieve two similarity matrices on both large and small scales, which combines double hierarchical structural constraints in model by both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. Besides, the multi-scale composite template extends the model application into multi-view. Experimental results of DMSM on the complex visual tracker benchmark data sets and THUMOS 2014 data sets show the promising performance. Compared with other state-of-the-art algorithm, DMSM achieves superior performances.
Action detection by double hierarchical multi-structure space–time statistical matching model
NASA Astrophysics Data System (ADS)
Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang
2018-06-01
To address the complex information in videos and low detection efficiency, an action detection model based on neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) for temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to obtain two similarity matrices at both large and small scales, combining double hierarchical structural constraints in the model through both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. Besides, a multi-scale composite template extends the model to multi-view application. Experimental results of DMSM on the complex visual tracker benchmark data sets and the THUMOS 2014 data sets show promising performance. Compared with other state-of-the-art algorithms, DMSM achieves superior performance.
Assessment of municipal solid waste settlement models based on field-scale data analysis.
Bareither, Christopher A; Kwak, Seungbok
2015-08-01
An evaluation of municipal solid waste (MSW) settlement model performance and applicability was conducted based on analysis of two field-scale datasets: (1) Yolo and (2) Deer Track Bioreactor Experiment (DTBE). Twelve MSW settlement models were considered that included a range of compression behavior (i.e., immediate compression, mechanical creep, and biocompression) and a range of total (2-22) and optimized (2-7) model parameters. A multi-layer immediate settlement analysis developed for Yolo provides a framework to estimate initial waste thickness and waste thickness at the end of immediate compression. Model application to the Yolo test cells (conventional and bioreactor landfills) via least squares optimization yielded high coefficients of determination for all settlement models (R² > 0.83). However, empirical models (i.e., power creep, logarithmic, and hyperbolic models) are not recommended for use in MSW settlement modeling due to potentially non-representative long-term MSW behavior, limited physical significance of model parameters, and the settlement data required for model parameterization. Settlement models that combine mechanical creep and biocompression into a single mathematical function constrain time-dependent settlement to a single process with finite magnitude, which limits model applicability. Overall, all models evaluated that couple multiple compression processes (immediate, creep, and biocompression) provided accurate representations of both Yolo and DTBE datasets. A model presented in Gourc et al. (2010) included the lowest number of total and optimized model parameters and yielded high statistical performance for all model applications (R² ⩾ 0.97). Copyright © 2015 Elsevier Ltd. All rights reserved.
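Model application by least squares optimization, as described above, amounts to fitting a time-dependent settlement function to field data and reporting a coefficient of determination. The sketch below does this for a generic composite form with a creep term and a finite biocompression term; the functional form, parameters, and "field" data are all invented for illustration and do not reproduce any specific model evaluated in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of least-squares parameterization of a generic time-dependent MSW
# settlement (strain) model. Functional form, parameters, and data are
# illustrative assumptions only.

def strain(t, c_creep, eps_bio, k_bio, t_ref=1.0):
    # creep term (log of time) plus finite-magnitude biocompression term
    return c_creep * np.log10(1.0 + t / t_ref) + eps_bio * (1.0 - np.exp(-k_bio * t))

t_days = np.linspace(1, 1500, 60)
observed = strain(t_days, 0.03, 0.12, 0.004) \
    + np.random.default_rng(0).normal(0, 0.003, t_days.size)   # synthetic "field" data

popt, _ = curve_fit(strain, t_days, observed, p0=[0.01, 0.05, 0.001])
fitted = strain(t_days, *popt)
ss_res = np.sum((observed - fitted) ** 2)
ss_tot = np.sum((observed - observed.mean()) ** 2)
print("fitted parameters:", np.round(popt, 4), " R^2 =", round(1 - ss_res / ss_tot, 3))
```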
NASA Astrophysics Data System (ADS)
Wang, Jiangbo; Liu, Junhui; Li, Tiantian; Yin, Shuo; He, Xinhui
2018-01-01
Monthly electricity sales forecasting is basic work for ensuring the safe operation of the power system. This paper presents a monthly electricity sales forecasting method that comprehensively considers the coupled factors of temperature, economic growth, electric power replacement, and business expansion. The mathematical model is constructed using a regression method. The simulation results show that the proposed method is accurate and effective.
Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stander, Nielen; Basudhar, Anirban; Basu, Ushnish
2015-09-14
Ever-tightening regulations on fuel economy, and the likely future regulation of carbon emissions, demand persistent innovation in vehicle design to reduce vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials, by adding material diversity and composite materials, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing plate thickness while retaining sufficient strength and ductility required for durability and safety. A project to develop computational material models for advanced high strength steel is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP) funded by the US Department of Energy. Under this program, new Third Generation Advanced High Strength Steels (3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. The objectives of the project are to integrate atomistic, microstructural, forming and performance models to create an integrated computational materials engineering (ICME) toolkit for 3GAHSS. The mechanical properties of Advanced High Strength Steels (AHSS) are controlled by many factors, including phase composition and distribution in the overall microstructure, volume fraction, size and morphology of phase constituents as well as stability of the metastable retained austenite phase. The complex phase transformation and deformation mechanisms in these steels make the well-established traditional techniques obsolete, and a multi-scale microstructure-based modeling approach following the ICME strategy [0] was therefore chosen in this project. Multi-scale modeling as a major area of research and development is an outgrowth of the Comprehensive Test Ban Treaty of 1996, which banned surface testing of nuclear devices [1]. This had the effect that experimental work was reduced from large-scale tests to multi-scale experiments to provide material models with validation at different length scales. In the subsequent years industry realized that multi-scale modeling and simulation-based design were transferable to the design optimization of any structural system. Horstemeyer [1] lists a number of advantages of the use of multi-scale modeling. Among these are: the reduction of product development time by alleviating costly trial-and-error iterations as well as the reduction of product costs through innovations in material, product and process designs. Multi-scale modeling can reduce the number of costly large-scale experiments and can increase product quality by providing more accurate predictions. Research tends to be focussed on each particular length scale, which enhances accuracy in the long term. This paper serves as an introduction to the LS-OPT and LS-DYNA methodology for multi-scale modeling. It mainly focuses on an approach to integrate material identification using material models of different length scales. As an example, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a homogenized State Variable (SV) model, is discussed and the parameter identification of the individual material models of different length scales is demonstrated.
The paper concludes with thoughts on integrating the multi-scale methodology into the overall vehicle design.
Multi-scale simulations of space problems with iPIC3D
NASA Astrophysics Data System (ADS)
Lapenta, Giovanni; Bettarini, Lapo; Markidis, Stefano
The implicit Particle-in-Cell method for the computer simulation of space plasma, and its implementation in a three-dimensional parallel code, called iPIC3D, are presented. The implicit integration in time of the Vlasov-Maxwell system removes the numerical stability constraints and enables kinetic plasma simulations at magnetohydrodynamics scales. Simulations of magnetic reconnection in plasma are presented to show the effectiveness of the algorithm. In particular we will show a number of simulations done for large scale 3D systems using the physical mass ratio for Hydrogen. Most notably one simulation treats kinetically a box of tens of Earth radii in each direction and was conducted using about 16000 processors of the Pleiades NASA computer. The work is conducted in collaboration with the MMS-IDS theory team from University of Colorado (M. Goldman, D. Newman and L. Andersson). Reference: Stefano Markidis, Giovanni Lapenta, Rizwan-uddin, Multi-scale simulations of plasma with iPIC3D, Mathematics and Computers in Simulation, Available online 17 October 2009, http://dx.doi.org/10.1016/j.matcom.2009.08.038
2017-09-01
The objective of this project was to develop a multi-scale model, together with relevant supporting experimental data, to describe jet fuel exacerbated noise-induced hearing loss (NIHL).
From the experience of development of composite materials with desired properties
NASA Astrophysics Data System (ADS)
Garkina, I. A.; Danilov, A. M.
2017-04-01
Drawing on experience in the development of composite materials with desired properties, an algorithm for construction-material synthesis is given, based on representing the material as a complex system. The feasibility of creating a composite and meeting the original technical requirements is established at the cognitive modeling stage. From the cognitive map, hierarchical structures of quality criteria are defined, and for each identified large-scale level the corresponding block diagrams of the system are specified. Single-criterion optimization problems are solved first, and the optimum values found are then used to formalize and solve the multi-criteria task (defining the optimum organization and properties of the system). The emphasis is on methodological aspects of mathematical modeling (construction of generalized and partial models to optimize the properties and structure of materials, including those based on the concept of systemic homeostasis).
Estimating the Uncertain Mathematical Structure of Hydrological Model via Bayesian Data Assimilation
NASA Astrophysics Data System (ADS)
Bulygina, N.; Gupta, H.; O'Donell, G.; Wheater, H.
2008-12-01
The structure of a hydrological model at the macro scale (e.g., watershed) is inherently uncertain due to many factors, including the lack of a robust hydrological theory at the macro scale. In this work, we assume that a suitable conceptual model for the hydrologic system has already been determined - i.e., the system boundaries have been specified, the important state variables and input and output fluxes to be included have been selected, and the major hydrological processes and geometries of their interconnections have been identified. The structural identification problem then is to specify the mathematical form of the relationships between the inputs, state variables and outputs, so that a computational model can be constructed for making simulations and/or predictions of system input-state-output behaviour. We show how Bayesian data assimilation can be used to merge prior beliefs, in the form of pre-assumed model equations, with information derived from the data to construct a posterior model. The approach, entitled Bayesian Estimation of Structure (BESt), is used to estimate a hydrological model for a small basin in England, at hourly time scales, conditioned on the assumption of a three-dimensional state (soil moisture storage, fast and slow flow stores) conceptual model structure. Inputs to the system are precipitation and potential evapotranspiration, and outputs are actual evapotranspiration and streamflow discharge. Results show the difference between prior and posterior mathematical structures, and provide prediction confidence intervals that reflect three types of uncertainty: due to initial conditions, due to input, and due to mathematical structure.
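As a minimal illustration of the idea of merging pre-assumed model equations with data, the sketch below performs an importance-sampling Bayesian update of a single recession coefficient in a toy linear-reservoir model. It is a stand-in for the general concept only, not the BESt methodology; the data, prior and noise level are all assumed.

```python
# Toy illustration of merging a prior model structure with data via Bayes:
# importance sampling of a linear-reservoir recession coefficient k (Q = k*S).
# A deliberately simple stand-in for the BESt methodology; all data, priors
# and noise levels are assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)

def simulate_q(k, storage0, rain, n_steps):
    """Run a linear reservoir S_{t+1} = S_t + P_t - k*S_t and return Q_t = k*S_t."""
    s, q = storage0, []
    for t in range(n_steps):
        outflow = k * s
        q.append(outflow)
        s = s + rain[t] - outflow
    return np.array(q)

n = 100
rain = rng.exponential(2.0, n)                          # synthetic hourly precipitation
q_obs = simulate_q(0.15, 50.0, rain, n) + rng.normal(0.0, 0.3, n)

# Prior belief about k, expressed as samples
k_prior = rng.uniform(0.01, 0.5, 5000)

# Gaussian likelihood of the observed discharge under each prior sample
sigma = 0.3
log_w = np.array([-0.5 * np.sum((q_obs - simulate_q(k, 50.0, rain, n))**2) / sigma**2
                  for k in k_prior])
w = np.exp(log_w - log_w.max())
w /= w.sum()

k_post_mean = np.sum(w * k_prior)
print("prior mean k:", k_prior.mean(), "posterior mean k:", k_post_mean)
```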
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Byeong M.; Wang, Ju
This paper presents the mathematical modeling and analysis of a wide bandwidth bipolar power supply for the fast correctors in the APS Upgrade. A wide bandwidth current regulator with a combined PI and phase-lead compensator has been newly proposed, analyzed, and simulated through both a mathematical model and a physical electronic circuit model using MATLAB and PLECS. The proposed regulator achieves a bandwidth with a -1.23 dB attenuation and a 32.40° phase delay at 10 kHz for a small signal less than 1% of the DC scale. The mathematical modeling, design, and simulation results of a fast corrector power supply control system are presented in this paper.
Modeling small-scale dairy farms in central Mexico using multi-criteria programming.
Val-Arreola, D; Kebreab, E; France, J
2006-05-01
Milk supply from Mexican dairy farms does not meet demand and small-scale farms can contribute toward closing the gap. Two multi-criteria programming techniques, goal programming and compromise programming, were used in a study of small-scale dairy farms in central Mexico. To build the goal and compromise programming models, 4 ordinary linear programming models were also developed, which had objective functions to maximize metabolizable energy for milk production, to maximize margin of income over feed costs, to maximize metabolizable protein for milk production, and to minimize purchased feedstuffs. Neither multi-criteria approach was significantly better than the other; however, by applying both models it was possible to perform a more comprehensive analysis of these small-scale dairy systems. The multi-criteria programming models affirm findings from previous work and suggest that a forage strategy based on alfalfa, ryegrass, and corn silage would meet nutrient requirements of the herd. Both models suggested that there is an economic advantage in rescheduling the calving season to the second and third calendar quarters to better synchronize higher demand for nutrients with the period of high forage availability.
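The flavor of goal programming used in such studies can be sketched as a small linear program in which deviation variables measure the shortfall or overrun from each goal, and a weighted sum of the undesired deviations is minimized. The crops, coefficients, goal levels and weights below are invented for illustration and are not the paper's model.

```python
# Toy weighted goal program (not the paper's farm model; all numbers assumed).
# Variables: x = [ha_alfalfa, ha_corn_silage, d1-, d1+, d2-, d2+]
# Goal 1: 30*x1 + 45*x2 + d1- - d1+ = 420    (metabolizable energy target)
# Goal 2: 200*x1 + 150*x2 + d2- - d2+ = 1200 (feed cost target)
# Minimize 1.0*d1- (energy shortfall) + 0.5*d2+ (cost overrun), land <= 10 ha.
import numpy as np
from scipy.optimize import linprog

c = [0, 0, 1.0, 0, 0, 0.5]
A_eq = [[30, 45, 1, -1, 0, 0],
        [200, 150, 0, 0, 1, -1]]
b_eq = [420, 1200]
A_ub = [[1, 1, 0, 0, 0, 0]]          # total land constraint
b_ub = [10]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6, method="highs")
x1, x2, d1m, d1p, d2m, d2p = res.x
print(f"alfalfa {x1:.2f} ha, corn silage {x2:.2f} ha")
print(f"energy shortfall {d1m:.1f}, cost overrun {d2p:.1f}, objective {res.fun:.1f}")
```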
NASA Astrophysics Data System (ADS)
Niu, Jun; Chen, Ji; Wang, Keyi; Sivakumar, Bellie
2017-08-01
This paper examines the multi-scale streamflow variability responses to precipitation over 16 headwater catchments in the Pearl River basin, South China. The long-term daily streamflow data (1952-2000), obtained using a macro-scale hydrological model, the Variable Infiltration Capacity (VIC) model, and a routing scheme, are studied. Temporal features of streamflow variability at 10 different timescales, ranging from 6 days to 8.4 years, are revealed with the Haar wavelet transform. Principal component analysis (PCA) is performed to categorize the headwater catchments according to the coherent modes of their multi-scale wavelet spectra. The results indicate that three distinct modes, with different variability distributions at small timescales and seasonal scales, can explain 95% of the streamflow variability. A large majority of the catchments (i.e., 12 out of 16) exhibit a consistent mode of multi-scale variability throughout three sub-periods (1952-1968, 1969-1984, and 1985-2000). The multi-scale streamflow variability responses to precipitation are found to be associated with the regional flood and drought tendencies over the headwater catchments in southern China.
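The decomposition of variability by timescale with the Haar wavelet can be sketched with a few lines of numpy, computing the variance of Haar detail coefficients at successive dyadic scales of a synthetic daily series. The VIC-simulated streamflow, the paper's specific ten timescales and the PCA of wavelet spectra are not reproduced here.

```python
# Variance of a time series by dyadic timescale via the Haar wavelet transform.
# Illustrative only: synthetic daily "streamflow" stands in for the VIC output.
import numpy as np

def haar_variance_by_scale(x, max_level):
    """Return the variance of Haar detail coefficients at levels 1..max_level."""
    approx = np.asarray(x, dtype=float)
    variances = []
    for _ in range(max_level):
        n = len(approx) - len(approx) % 2          # drop an odd trailing sample
        pairs = approx[:n].reshape(-1, 2)
        detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
        variances.append(detail.var())
    return variances

rng = np.random.default_rng(2)
days = np.arange(365 * 40)
flow = (50 + 30 * np.sin(2 * np.pi * days / 365.25)      # seasonal cycle
        + rng.gamma(2.0, 5.0, days.size))                 # storm-scale noise

for level, var in enumerate(haar_variance_by_scale(flow, 10), start=1):
    print(f"scale ~{2**level:5d} days: Haar detail variance = {var:.2f}")
```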
Hankins, Catherine; Warren, Mitchell
2016-01-01
Over 11 million voluntary medical male circumcisions (VMMC) have been performed of the projected 20.3 million needed to reach 80% adult male circumcision prevalence in priority sub-Saharan African countries. Striking numbers of adolescent males, outside the 15-49-year-old age target, have been accessing VMMC services. What are the implications of overall progress in scale-up to date? Can mathematical modeling provide further insights on how to efficiently reach the male circumcision coverage levels needed to create and sustain further reductions in HIV incidence to make AIDS no longer a public health threat by 2030? Considering ease of implementation and cultural acceptability, decision makers may also value the estimates that mathematical models can generate of immediacy of impact, cost-effectiveness, and magnitude of impact resulting from different policy choices. This supplement presents the results of mathematical modeling using the Decision Makers’ Program Planning Tool Version 2.0 (DMPPT 2.0), the Actuarial Society of South Africa (ASSA2008) model, and the age structured mathematical (ASM) model. These models are helping countries examine the potential effects on program impact and cost-effectiveness of prioritizing specific subpopulations for VMMC services, for example, by client age, HIV-positive status, risk group, and geographical location. The modeling also examines long-term sustainability strategies, such as adolescent and/or early infant male circumcision, to preserve VMMC coverage gains achieved during rapid scale-up. The 2016–2021 UNAIDS strategy target for VMMC is an additional 27 million VMMC in high HIV-prevalence settings by 2020, as part of access to integrated sexual and reproductive health services for men. To achieve further scale-up, a combination of evidence, analysis, and impact estimates can usefully guide strategic planning and funding of VMMC services and related demand-creation strategies in priority countries. Mid-course corrections now can improve cost-effectiveness and scale to achieve the impact needed to help turn the HIV pandemic on its head within 15 years. PMID:27783613
Mathematical modeling of gene expression: a guide for the perplexed biologist
Ay, Ahmet; Arnosti, David N.
2011-01-01
The detailed analysis of transcriptional networks holds a key for understanding central biological processes, and interest in this field has exploded due to new large-scale data acquisition techniques. Mathematical modeling can provide essential insights, but the diversity of modeling approaches can be a daunting prospect to investigators new to this area. For those interested in beginning a transcriptional mathematical modeling project we provide here an overview of major types of models and their applications to transcriptional networks. In this discussion of recent literature on thermodynamic, Boolean and differential equation models we focus on considerations critical for choosing and validating a modeling approach that will be useful for quantitative understanding of biological systems. PMID:21417596
CFD Modeling of Flow, Temperature, and Concentration Fields in a Pilot-Scale Rotary Hearth Furnace
NASA Astrophysics Data System (ADS)
Liu, Ying; Su, Fu-Yong; Wen, Zhi; Li, Zhi; Yong, Hai-Quan; Feng, Xiao-Hong
2014-01-01
A three-dimensional mathematical model for simulation of flow, temperature, and concentration fields in a pilot-scale rotary hearth furnace (RHF) has been developed using a commercial computational fluid dynamics software, FLUENT. The layer of composite pellets under the hearth is assumed to be a porous media layer with CO source and energy sink calculated by an independent mathematical model. User-defined functions are developed and linked to FLUENT to process the reduction process of the layer of composite pellets. The standard k-ɛ turbulence model in combination with standard wall functions is used for modeling of gas flow. Turbulence-chemistry interaction is taken into account through the eddy-dissipation model. The discrete ordinates model is used for modeling of radiative heat transfer. A comparison is made between the predictions of the present model and the data from a test of the pilot-scale RHF, and a reasonable agreement is found. Finally, flow field, temperature, and CO concentration fields in the furnace are investigated by the model.
Yeo, David; Kiparissides, Alexandros; Cha, Jae Min; Aguilar-Gallardo, Cristobal; Polak, Julia M.; Tsiridis, Elefterios; Pistikopoulos, Efstratios N.; Mantalaris, Athanasios
2013-01-01
Background High proliferative and differentiation capacity renders embryonic stem cells (ESCs) a promising cell source for tissue engineering and cell-based therapies. Harnessing their potential, however, requires well-designed, efficient and reproducible expansion and differentiation protocols as well as avoiding hazardous by-products, such as teratoma formation. Traditional, standard culture methodologies are fragmented and limited in their fed-batch feeding strategies that afford a sub-optimal environment for cellular metabolism. Herein, we investigate the impact of metabolic stress as a result of inefficient feeding utilizing a novel perfusion bioreactor and a mathematical model to achieve bioprocess improvement. Methodology/Principal Findings To characterize nutritional requirements, the expansion of undifferentiated murine ESCs (mESCs) encapsulated in hydrogels was performed in batch and perfusion cultures using bioreactors. Despite sufficient nutrient and growth factor provision, the accumulation of inhibitory metabolites resulted in the unscheduled differentiation of mESCs and a decline in their cell numbers in the batch cultures. In contrast, perfusion cultures maintained metabolite concentration below toxic levels, resulting in the robust expansion (>16-fold) of high quality ‘naïve’ mESCs within 4 days. A multi-scale mathematical model describing population segregated growth kinetics, metabolism and the expression of selected pluripotency (‘stemness’) genes was implemented to maximize information from available experimental data. A global sensitivity analysis (GSA) was employed that identified significant (6/29) model parameters and enabled model validation. Predicting the preferential propagation of undifferentiated ESCs in perfusion culture conditions demonstrates synchrony between theory and experiment. Conclusions/Significance The limitations of batch culture highlight the importance of cellular metabolism in maintaining pluripotency, which necessitates the design of suitable ESC bioprocesses. We propose a novel investigational framework that integrates a novel perfusion culture platform (controlled metabolic conditions) with mathematical modeling (information maximization) to enhance ESC bioprocess productivity and facilitate bioprocess optimization. PMID:24339957
ERIC Educational Resources Information Center
George, Ann Cathrice; Robitzsch, Alexander
2018-01-01
This article presents a new perspective on measuring gender differences in the large-scale assessment study Trends in International Mathematics and Science Study (TIMSS). The suggested empirical model is directly based on the theoretical competence model of the mathematics domain and thus includes the interaction between content and cognitive sub-competencies.…
Tang, Jin-Yun; Riley, William J.
2017-09-05
Several land biogeochemical models used for studying carbon–climate feedbacks have begun explicitly representing microbial dynamics. However, to our knowledge, there has been no theoretical work on how to achieve a consistent scaling of the complex biogeochemical reactions from microbial individuals to populations, communities, and interactions with plants and mineral soils. We focus here on developing a mathematical formulation of the substrate–consumer relationships for consumer-mediated redox reactions of the form A + B → products (mediated by consumer E), where products could be, e.g., microbial biomass or bioproducts. Under the quasi-steady-state approximation, these substrate–consumer relationships can be formulated as the computationally difficult full equilibrium chemistry problem or approximated analytically with the dual Monod (DM) or synthesizing unit (SU) kinetics. We find that DM kinetics is scaling inconsistently for reaction networks because (1) substrate limitations are not considered, (2) contradictory assumptions are made regarding the substrate processing rate when transitioning from single- to multi-substrate redox reactions, and (3) the product generation rate cannot be scaled from one to multiple substrates. In contrast, SU kinetics consistently scales the product generation rate from one to multiple substrates but predicts unrealistic results as consumer abundances reach large values with respect to their substrates. We attribute this deficit to SU's failure to incorporate substrate limitation in its derivation. To address these issues, we propose SUPECA (SU plus the equilibrium chemistry approximation – ECA) kinetics, which consistently imposes substrate and consumer mass balance constraints. We show that SUPECA kinetics satisfies the partition principle, i.e., scaling invariance across a network of an arbitrary number of reactions (e.g., as in Newton's law of motion and Dalton's law of partial pressures). We tested SUPECA kinetics with the equilibrium chemistry solution for some simple problems and found SUPECA outperformed SU kinetics. As an example application, we show that a steady-state SUPECA-based approach predicted an aerobic soil respiration moisture response function that agreed well with laboratory observations. We conclude that, as an extension to SU and ECA kinetics, SUPECA provides a robust mathematical representation of complex soil substrate–consumer interactions and can be applied to improve Earth system model (ESM) land models.
NASA Astrophysics Data System (ADS)
Tang, Jin-Yun; Riley, William J.
2017-09-01
Several land biogeochemical models used for studying carbon-climate feedbacks have begun explicitly representing microbial dynamics. However, to our knowledge, there has been no theoretical work on how to achieve a consistent scaling of the complex biogeochemical reactions from microbial individuals to populations, communities, and interactions with plants and mineral soils. We focus here on developing a mathematical formulation of the substrate-consumer relationships for consumer-mediated redox reactions of the form A + B → products (mediated by consumer E), where products could be, e.g., microbial biomass or bioproducts. Under the quasi-steady-state approximation, these substrate-consumer relationships can be formulated as the computationally difficult full equilibrium chemistry problem or approximated analytically with the dual Monod (DM) or synthesizing unit (SU) kinetics. We find that DM kinetics is scaling inconsistently for reaction networks because (1) substrate limitations are not considered, (2) contradictory assumptions are made regarding the substrate processing rate when transitioning from single- to multi-substrate redox reactions, and (3) the product generation rate cannot be scaled from one to multiple substrates. In contrast, SU kinetics consistently scales the product generation rate from one to multiple substrates but predicts unrealistic results as consumer abundances reach large values with respect to their substrates. We attribute this deficit to SU's failure to incorporate substrate limitation in its derivation. To address these issues, we propose SUPECA (SU plus the equilibrium chemistry approximation - ECA) kinetics, which consistently imposes substrate and consumer mass balance constraints. We show that SUPECA kinetics satisfies the partition principle, i.e., scaling invariance across a network of an arbitrary number of reactions (e.g., as in Newton's law of motion and Dalton's law of partial pressures). We tested SUPECA kinetics with the equilibrium chemistry solution for some simple problems and found SUPECA outperformed SU kinetics. As an example application, we show that a steady-state SUPECA-based approach predicted an aerobic soil respiration moisture response function that agreed well with laboratory observations. We conclude that, as an extension to SU and ECA kinetics, SUPECA provides a robust mathematical representation of complex soil substrate-consumer interactions and can be applied to improve Earth system model (ESM) land models.
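To make the kinetic closures being compared above concrete, the sketch below contrasts dual Monod kinetics with an ECA-style rate for a single substrate-consumer pair, which saturates as the consumer becomes abundant. It is a simplified illustration under assumed parameter values, not the SUPECA formulation itself.

```python
# Sketch comparing two substrate-consumer closures mentioned in the abstract.
# Not the SUPECA formulation: the ECA-style single-pair rate and all parameter
# values here are simplified assumptions for illustration.
import numpy as np

def dual_monod(E, A, B, k=1.0, KA=1.0, KB=1.0):
    """Dual Monod: multiplicative limitation, linear in consumer abundance E."""
    return k * E * (A / (KA + A)) * (B / (KB + B))

def eca_single_pair(E, S, k=1.0, K=1.0):
    """ECA-style rate for one substrate and one consumer: v = k*E*S/(K+S+E).
    The rate saturates when the consumer becomes abundant relative to substrate."""
    return k * E * S / (K + S + E)

A = B = S = 1.0                      # fixed substrate concentrations
for E in [0.1, 1.0, 10.0, 100.0]:
    print(f"E={E:6.1f}  dual Monod: {dual_monod(E, A, B):8.2f}   "
          f"ECA pair: {eca_single_pair(E, S):6.3f}")
# Dual Monod grows without bound in E; the ECA rate is capped by substrate supply.
```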
Ni, Bing-Jie; Yuan, Zhiguo
2015-12-15
Nitrous oxide (N2O) can be emitted from wastewater treatment, contributing significantly to its greenhouse gas footprint. Mathematical modeling of N2O emissions is of great importance toward the understanding and reduction of the environmental impact of wastewater treatment systems. This article reviews the current status of the modeling of N2O emissions from wastewater treatment. The existing mathematical models describing all the known microbial pathways for N2O production are reviewed and discussed. These include N2O production by ammonia-oxidizing bacteria (AOB) through the hydroxylamine oxidation pathway and the AOB denitrification pathway, N2O production by heterotrophic denitrifiers through the denitrification pathway, and the integration of these pathways in single N2O models. The calibration and validation of these models using lab-scale and full-scale experimental data is also reviewed. We conclude that the mathematical modeling of N2O production, while still being enhanced by new knowledge development, has reached a maturity that facilitates the estimation of site-specific N2O emissions and the development of mitigation strategies for a wastewater treatment plant, taking into account the specific design and operational conditions of the plant. Copyright © 2015 Elsevier Ltd. All rights reserved.
Mathematical and physical modeling of thermal stratification phenomena in steel ladles
NASA Astrophysics Data System (ADS)
Putan, V.; Vilceanu, L.; Socalici, A.; Putan, A.
2018-01-01
By means of CFD numerical modeling, a systematic analysis of the similarity between steel ladles and a hot-water model regarding natural convection phenomena was carried out. The key similarity criteria were found to depend on the dimensionless numbers Fr and βΔT. These similarity criteria suggested that hot-water models with a scale in the range between 1/5 and 1/3, using hot water with a temperature of 45 °C or higher, are appropriate for simulating natural convection in steel ladles. With this physical model, thermal stratification phenomena due to natural convection in steel ladles were investigated. By controlling the cooling intensity of the water model to correspond to the heat loss rate of steel ladles, which is governed by Fr and βΔT, the temperature profiles measured in the water bath of the model were used to deduce the extent of thermal stratification in the liquid steel bath in the ladles. Comparisons between mathematically simulated temperature profiles in the prototype steel ladles and those physically simulated by scaling up the measured temperature profiles in the water model showed good agreement. This proved that it is feasible to use a 1/5 scale water model with 45 °C hot water to simulate natural convection in steel ladles. Therefore, besides mathematical CFD models, the physical hot-water model provided an additional means of studying fluid flow and heat transfer in steel ladles.
Health Risk of Exposure to Atmospheric Pollutant Particles
In relation to the multi-component mixture nature of atmospheric PM, this presentation will discuss methods for estimating the respiratory internal dose by experiment and mathematical modeling, limitations of each method, and interpretations of the results in the context of health ris...
Voluntary EMG-to-force estimation with a multi-scale physiological muscle model
2013-01-01
Background EMG-to-force estimation based on muscle models, for voluntary contraction has many applications in human motion analysis. The so-called Hill model is recognized as a standard model for this practical use. However, it is a phenomenological model whereby muscle activation, force-length and force-velocity properties are considered independently. Perreault reported Hill modeling errors were large for different firing frequencies, level of activation and speed of contraction. It may be due to the lack of coupling between activation and force-velocity properties. In this paper, we discuss EMG-force estimation with a multi-scale physiology based model, which has a link to underlying crossbridge dynamics. Differently from the Hill model, the proposed method provides dual dynamics of recruitment and calcium activation. Methods The ankle torque was measured for the plantar flexion along with EMG measurements of the medial gastrocnemius (GAS) and soleus (SOL). In addition to Hill representation of the passive elements, three models of the contractile parts have been compared. Using common EMG signals during isometric contraction in four able-bodied subjects, torque was estimated by the linear Hill model, the nonlinear Hill model and the multi-scale physiological model that refers to Huxley theory. The comparison was made in normalized scale versus the case in maximum voluntary contraction. Results The estimation results obtained with the multi-scale model showed the best performances both in fast-short and slow-long term contraction in randomized tests for all the four subjects. The RMS errors were improved with the nonlinear Hill model compared to linear Hill, however it showed limitations to account for the different speed of contractions. Average error was 16.9% with the linear Hill model, 9.3% with the modified Hill model. In contrast, the error in the multi-scale model was 6.1% while maintaining a uniform estimation performance in both fast and slow contractions schemes. Conclusions We introduced a novel approach that allows EMG-force estimation based on a multi-scale physiology model integrating Hill approach for the passive elements and microscopic cross-bridge representations for the contractile element. The experimental evaluation highlights estimation improvements especially a larger range of contraction conditions with integration of the neural activation frequency property and force-velocity relationship through cross-bridge dynamics consideration. PMID:24007560
ERIC Educational Resources Information Center
Donabella, Mark A.; Rule, Audrey C.
2008-01-01
This article describes the positive impact of Montessori manipulative materials on four seventh grade students who qualified for academic intervention services because of previous low state test scores in mathematics. This mathematics technique for teaching multi-digit multiplication uses a placemat-sized quilt with different color-coded squares…
Stand-alone hybrid wind-photovoltaic power generation systems optimal sizing
NASA Astrophysics Data System (ADS)
Crǎciunescu, Aurelian; Popescu, Claudia; Popescu, Mihai; Florea, Leonard Marin
2013-10-01
Wind and photovoltaic energy resources have attracted the energy sector to large-scale power generation. A drawback common to these options is their unpredictable nature and dependence on time of day and meteorological conditions. Fortunately, the problems caused by the variable nature of these resources can be partially overcome by integrating the two resources in proper combination, using the strengths of one source to overcome the weakness of the other. Hybrid systems that combine wind and solar generating units with battery backup can attenuate their individual fluctuations and can match the power requirements of the beneficiaries. In order to utilize the hybrid energy system efficiently and economically, an optimal sizing method is necessary. To this end, the literature offers a variety of methods for multi-objective optimal design of hybrid wind/photovoltaic (WG/PV) generating systems, among the most recent being genetic algorithms (GA) and particle swarm optimization (PSO). In this paper, mathematical models of hybrid WG/PV components and a short description of the recently proposed multi-objective optimization algorithms are given.
A thermal scale modeling study for Apollo and Apollo applications, volume 2
NASA Technical Reports Server (NTRS)
Shannon, R. L.
1972-01-01
The development and demonstration of practical thermal scale modeling techniques applicable to systems involving radiation, conduction, and convection, with emphasis on the cabin atmosphere/cabin wall thermal interface, are discussed. The Apollo spacecraft environment is used as the model. Four possible scaling techniques were considered: (1) modified material preservation, (2) temperature preservation, (3) scaling compromises, and (4) Nusselt number preservation. A thermal mathematical model was developed for use with the Nusselt number preservation technique.
ERIC Educational Resources Information Center
Taljaard, Johann
2016-01-01
This article reviews the literature on multi-sensory technology and, in particular, looks at answering the question: "What multi-sensory technologies are available to use in a science, technology, engineering, arts and mathematics (STEAM) classroom, and do they affect student engagement and learning outcomes?" Here engagement is defined…
Multi-scale Modeling and Analysis of Nano-RFID Systems on HPC Setup
NASA Astrophysics Data System (ADS)
Pathak, Rohit; Joshi, Satyadhar
In this paper we have worked on some of the complex modeling aspects, such as multi-scale modeling and MATLAB SUGAR-based modeling, and have shown the complexities involved in the analysis of Nano RFID (Radio Frequency Identification) systems. We present the modeling and simulation and demonstrate some novel ideas and library development for Nano RFID. Multi-scale modeling plays a very important role in nanotech-enabled devices, whose properties sometimes cannot be explained by abstraction-level theories. Reliability and packaging still remain among the major hindrances to the practical implementation of Nano RFID based devices, and modeling and simulation will play a very important role in addressing them. CNTs are the future low-power material that will replace CMOS, and their integration with CMOS and MEMS circuitry will play an important role in realizing the true power of Nano RFID systems. RFID based on innovations in nanotechnology is discussed. MEMS modeling of the antenna and sensors, and their integration in the circuitry, is shown. Incorporating these elements, a Nano-RFID can be designed for use in areas such as human implantation and complex banking applications. We propose modeling of RFID using the concept of multi-scale modeling to accurately predict its properties, and we also give the modeling of recently proposed MEMS devices that may see application in RFID. We also cover the applications and advantages of Nano RFID in various areas. RF MEMS has matured and its devices are being successfully commercialized, but taking it to the limits of the nano domain and integrating it with single-chip RFID requires a novel approach, which is proposed here. We have modeled a MEMS-based transponder and shown the distribution for multi-scale modeling of Nano RFID.
Cold-Cap Temperature Profile Comparison between the Laboratory and Mathematical Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dixon, Derek R.; Schweiger, Michael J.; Riley, Brian J.
2015-06-01
The rate of waste vitrification in an electric melter is connected to the feed-to-glass conversion process, which occurs in the cold cap, a layer of reacting feed on top of molten glass. The cold cap consists of two layers: a low temperature (~100°C – ~800°C) region of unconnected feed and a high temperature (~800°C – ~1100°C) region of foam with gas bubbles and cavities mixed in the connected glass melt. A recently developed mathematical model describes the effect of the cold cap on glass production. For verification of the mathematical model, a laboratory-scale melter was used to produce a cold cap that could be cross-sectioned and polished in order to determine the temperature profile related to position in the cold cap. The cold cap from the laboratory-scale melter exhibited an accumulation of feed at ~400°C due to radiant heat from the molten glass creating dry feed conditions in the melter, which was not the case in the mathematical model where wet feed conditions were calculated. Through the temperature range from ~500°C to ~1100°C, there was good agreement between the model and the laboratory cold cap. Differences were observed between the two temperature profiles due to the temperature of the glass melts and the lack of secondary foam, large cavities, and shrinkage of the primary foam bubbles upon the cooling of the laboratory-scale cold cap.
MultiMetEval: Comparative and Multi-Objective Analysis of Genome-Scale Metabolic Models
Gevorgyan, Albert; Kierzek, Andrzej M.; Breitling, Rainer; Takano, Eriko
2012-01-01
Comparative metabolic modelling is emerging as a novel field, supported by the development of reliable and standardized approaches for constructing genome-scale metabolic models in high throughput. New software solutions are needed to allow efficient comparative analysis of multiple models in the context of multiple cellular objectives. Here, we present the user-friendly software framework Multi-Metabolic Evaluator (MultiMetEval), built upon SurreyFBA, which allows the user to compose collections of metabolic models that together can be subjected to flux balance analysis. Additionally, MultiMetEval implements functionalities for multi-objective analysis by calculating the Pareto front between two cellular objectives. Using a previously generated dataset of 38 actinobacterial genome-scale metabolic models, we show how these approaches can lead to exciting novel insights. Firstly, after incorporating several pathways for the biosynthesis of natural products into each of these models, comparative flux balance analysis predicted that species like Streptomyces that harbour the highest diversity of secondary metabolite biosynthetic gene clusters in their genomes do not necessarily have the metabolic network topology most suitable for compound overproduction. Secondly, multi-objective analysis of biomass production and natural product biosynthesis in these actinobacteria shows that the well-studied occurrence of discrete metabolic switches during the change of cellular objectives is inherent to their metabolic network architecture. Comparative and multi-objective modelling can lead to insights that could not be obtained by normal flux balance analyses. MultiMetEval provides a powerful platform that makes these analyses straightforward for biologists. Sources and binaries of MultiMetEval are freely available from https://github.com/PiotrZakrzewski/MetEval/downloads. PMID:23272111
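The core computation behind such analyses, flux balance analysis with a trade-off between two objectives, can be sketched with a toy three-reaction network and an epsilon-constraint sweep. This is not the MultiMetEval or SurreyFBA API; the network, bounds and objectives are invented for illustration.

```python
# Toy flux balance analysis with a two-objective (biomass vs. product) trade-off,
# traced with an epsilon-constraint sweep. A minimal stand-in for the kind of
# analysis MultiMetEval automates, not its API; all numbers are assumed.
import numpy as np
from scipy.optimize import linprog

# Network: uptake -> A ; A -> biomass ; A -> product
# Fluxes v = [v_uptake, v_biomass, v_product]; steady state S @ v = 0.
S = np.array([[1.0, -1.0, -1.0]])     # single internal metabolite A
bounds = [(0, 10), (0, None), (0, None)]

pareto = []
for product_min in np.linspace(0.0, 10.0, 6):
    # maximize biomass (minimize -v_biomass) subject to v_product >= product_min
    res = linprog(c=[0.0, -1.0, 0.0],
                  A_ub=[[0.0, 0.0, -1.0]], b_ub=[-product_min],
                  A_eq=S, b_eq=[0.0], bounds=bounds, method="highs")
    pareto.append((product_min, res.x[1]))

for product, biomass in pareto:
    print(f"product flux >= {product:4.1f}  ->  max biomass flux {biomass:5.2f}")
```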
eDNAoccupancy: An R package for multi-scale occupancy modeling of environmental DNA data
Dorazio, Robert; Erickson, Richard A.
2017-01-01
In this article we describe eDNAoccupancy, an R package for fitting Bayesian, multi-scale occupancy models. These models are appropriate for occupancy surveys that include three, nested levels of sampling: primary sample units within a study area, secondary sample units collected from each primary unit, and replicates of each secondary sample unit. This design is commonly used in occupancy surveys of environmental DNA (eDNA). eDNAoccupancy allows users to specify and fit multi-scale occupancy models with or without covariates, to estimate posterior summaries of occurrence and detection probabilities, and to compare different models using Bayesian model-selection criteria. We illustrate these features by analyzing two published data sets: eDNA surveys of a fungal pathogen of amphibians and eDNA surveys of an endangered fish species.
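The three-level sampling design these models address can be made concrete with a short simulation: site-level occupancy, sample-level eDNA occurrence, and replicate-level detection. The sketch below does not use the eDNAoccupancy package; the probabilities and survey dimensions are assumed values.

```python
# Simulation sketch of the three-level (multi-scale) occupancy design used for
# eDNA surveys: site occupancy -> sample-level occurrence -> replicate detection.
# psi, theta, p and the survey dimensions are assumed values for illustration.
import numpy as np

rng = np.random.default_rng(3)
n_sites, n_samples, n_reps = 50, 4, 3
psi, theta, p = 0.6, 0.4, 0.5        # occupancy, availability, detection probs

z = rng.binomial(1, psi, n_sites)                              # site occupied?
a = rng.binomial(1, theta, (n_sites, n_samples)) * z[:, None]  # eDNA in sample?
y = rng.binomial(1, p, (n_sites, n_samples, n_reps)) * a[:, :, None]  # PCR hit?

naive_occupancy = (y.sum(axis=(1, 2)) > 0).mean()
print(f"true occupancy psi = {psi}, naive detected-at-site rate = {naive_occupancy:.2f}")
# The gap between psi and the naive rate is what hierarchical occupancy models
# (e.g., those fitted by eDNAoccupancy) are designed to correct for.
```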
NASA Astrophysics Data System (ADS)
Silvis, Maurits H.; Remmerswaal, Ronald A.; Verstappen, Roel
2017-01-01
We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach of constructing subgrid-scale models, based on the idea that it is desirable that subgrid-scale models are consistent with the mathematical and physical properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is partly due to incompatibilities between model constraints and limitations of velocity-gradient-based subgrid-scale models. However, we also reason that the current framework shows that there is room for improvement in the properties and, hence, the behavior of existing subgrid-scale models. We furthermore show how compatible model constraints can be combined to construct new subgrid-scale models that have desirable properties built into them. We provide a few examples of such new models, of which a new model of eddy viscosity type, that is based on the vortex stretching magnitude, is successfully tested in large-eddy simulations of decaying homogeneous isotropic turbulence and turbulent plane-channel flow.
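As a concrete instance of the velocity-gradient-based class of subgrid-scale models analyzed above, the sketch below evaluates the classical Smagorinsky eddy viscosity from a local velocity gradient tensor. The paper's new vortex-stretching-based model is not reproduced; the gradient values and constants are assumed.

```python
# Example of the velocity-gradient-based class of subgrid models discussed above:
# the classical Smagorinsky eddy viscosity nu_t = (C_s * Delta)^2 * |S|, computed
# from a local velocity gradient tensor. Values below are arbitrary.
import numpy as np

def smagorinsky_viscosity(grad_u, delta, c_s=0.17):
    """Eddy viscosity from the resolved strain rate S = 0.5*(grad_u + grad_u^T)."""
    S = 0.5 * (grad_u + grad_u.T)
    S_mag = np.sqrt(2.0 * np.sum(S * S))        # |S| = sqrt(2 S_ij S_ij)
    return (c_s * delta) ** 2 * S_mag

grad_u = np.array([[0.1, 0.4, 0.0],
                   [-0.2, -0.1, 0.3],
                   [0.0, 0.1, 0.0]])            # du_i/dx_j at one grid point
print("nu_t =", smagorinsky_viscosity(grad_u, delta=0.05))
```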
Scale effect challenges in urban hydrology highlighted with a distributed hydrological model
NASA Astrophysics Data System (ADS)
Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire
2018-01-01
Hydrological models are extensively used in urban water management, development and evaluation of future scenarios and research activities. There is a growing interest in the development of fully distributed and grid-based models. However, some complex questions related to scale effects are not yet fully understood and still remain open issues in urban hydrology. In this paper we propose a two-step investigation framework to illustrate the extent of scale effects in urban hydrology. First, fractal tools are used to highlight the scale dependence observed within distributed data input into urban hydrological models. Then an intensive multi-scale modelling work is carried out to understand scale effects on hydrological model performance. Investigations are conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model is implemented at 17 spatial resolutions ranging from 100 to 5 m. Results clearly exhibit scale effect challenges in urban hydrology modelling. The applicability of fractal concepts highlights the scale dependence observed within distributed data. Patterns of geophysical data change when the size of the observation pixel changes. The multi-scale modelling investigation confirms scale effects on hydrological model performance. Results are analysed over three ranges of scales identified in the fractal analysis and confirmed through modelling. This work also discusses some remaining issues in urban hydrology modelling related to the availability of high-quality data at high resolutions, and model numerical instabilities as well as the computation time requirements. The main findings of this paper enable a replacement of traditional methods of model calibration by innovative methods of model resolution alteration based on the spatial data variability and scaling of flows in urban hydrology.
Wang, Qi; Xie, Zhiyi; Li, Fangbai
2015-11-01
This study aims to identify and apportion multi-source and multi-phase heavy metal pollution from natural and anthropogenic inputs using ensemble models that include stochastic gradient boosting (SGB) and random forest (RF) in agricultural soils on the local scale. The heavy metal pollution sources were quantitatively assessed, and the results illustrated the suitability of the ensemble models for the assessment of multi-source and multi-phase heavy metal pollution in agricultural soils on the local scale. The results of SGB and RF consistently demonstrated that anthropogenic sources contributed the most to the concentrations of Pb and Cd in agricultural soils in the study region and that SGB performed better than RF. Copyright © 2015 Elsevier Ltd. All rights reserved.
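The two ensemble learners named above are available in common machine-learning libraries; the hedged sketch below applies them to a synthetic soil data set and reads off cross-validated skill and feature importances. The study's covariates, data and apportionment procedure are not reproduced.

```python
# Sketch of the two ensemble learners named above (stochastic gradient boosting
# and random forest) applied to a synthetic soil data set. The covariates, data
# and apportionment procedure of the study are not reproduced.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 300
# Synthetic covariates: parent material index, distance to road, irrigation index
parent_rock = rng.normal(0, 1, n)
dist_to_road = rng.uniform(0, 5, n)
irrigation = rng.uniform(0, 1, n)
X = np.column_stack([parent_rock, dist_to_road, irrigation])
# Synthetic soil Pb: natural background + traffic + irrigation inputs + noise
pb = (20 + 5 * parent_rock + 30 * np.exp(-dist_to_road) + 15 * irrigation
      + rng.normal(0, 3, n))

for name, model in [("SGB", GradientBoostingRegressor(subsample=0.7, random_state=0)),
                    ("RF", RandomForestRegressor(n_estimators=300, random_state=0))]:
    r2 = cross_val_score(model, X, pb, cv=5, scoring="r2").mean()
    importances = model.fit(X, pb).feature_importances_
    print(name, "CV R2 = %.2f" % r2, "importances:", np.round(importances, 2))
```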
NASA Astrophysics Data System (ADS)
Lee, H.
2016-12-01
Precipitation is one of the most important climate variables that are taken into account in studying regional climate. Nevertheless, how precipitation will respond to a changing climate and even its mean state in the current climate are not well represented in regional climate models (RCMs). Hence, comprehensive and mathematically rigorous methodologies to evaluate precipitation and related variables in multiple RCMs are required. The main objective of the current study is to evaluate the joint variability of climate variables related to model performance in simulating precipitation and condense multiple evaluation metrics into a single summary score. We use multi-objective optimization, a mathematical process that provides a set of optimal tradeoff solutions based on a range of evaluation metrics, to characterize the joint representation of precipitation, cloudiness and insolation in RCMs participating in the North American Regional Climate Change Assessment Program (NARCCAP) and Coordinated Regional Climate Downscaling Experiment-North America (CORDEX-NA). We also leverage ground observations, NASA satellite data and the Regional Climate Model Evaluation System (RCMES). Overall, the quantitative comparison of joint probability density functions between the three variables indicates that performance of each model differs markedly between sub-regions and also shows strong seasonal dependence. Because of the large variability across the models, it is important to evaluate models systematically and make future projections using only models showing relatively good performance. Our results indicate that the optimized multi-model ensemble always shows better performance than the arithmetic ensemble mean and may guide reliable future projections.
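One simple, assumed way to score the joint behaviour of two variables against observations, in the spirit of the joint probability density comparison described above, is a Hellinger distance between joint histograms. The metric, variables and synthetic data below are illustrative only and are not the study's actual evaluation metrics.

```python
# Illustrative metric for comparing the joint distribution of two variables
# between a model and observations: Hellinger distance of joint histograms.
# Synthetic data; not the study's evaluation metrics or optimization procedure.
import numpy as np

def hellinger_2d(x1, y1, x2, y2, bins=20):
    """Hellinger distance between the joint (x, y) distributions of two data sets."""
    extent = [[min(x1.min(), x2.min()), max(x1.max(), x2.max())],
              [min(y1.min(), y2.min()), max(y1.max(), y2.max())]]
    p, _, _ = np.histogram2d(x1, y1, bins=bins, range=extent)
    q, _, _ = np.histogram2d(x2, y2, bins=bins, range=extent)
    p, q = p / p.sum(), q / q.sum()
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

rng = np.random.default_rng(5)
precip_obs, cloud_obs = rng.gamma(2.0, 2.0, 5000), rng.beta(2, 2, 5000)
precip_mod, cloud_mod = rng.gamma(2.5, 2.0, 5000), rng.beta(2, 3, 5000)
print("Hellinger distance (model vs obs):",
      round(hellinger_2d(precip_obs, cloud_obs, precip_mod, cloud_mod), 3))
```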
Computational fluid dynamics modelling in cardiovascular medicine.
Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P
2016-01-01
This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, which increasingly is being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards 'digital patient' or 'virtual physiological human' representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Modelling an industrial anaerobic granular reactor using a multi-scale approach.
Feldman, H; Flores-Alsina, X; Ramin, P; Kjellberg, K; Jeppsson, U; Batstone, D J; Gernaey, K V
2017-12-01
The objective of this paper is to show the results of an industrial project dealing with modelling of anaerobic digesters. A multi-scale mathematical approach is developed to describe reactor hydrodynamics, granule growth/distribution and microbial competition/inhibition for substrate/space within the biofilm. The main biochemical and physico-chemical processes in the model are based on the Anaerobic Digestion Model No 1 (ADM1) extended with the fate of phosphorus (P), sulfur (S) and ethanol (Et-OH). Wastewater dynamic conditions are reproduced and data frequency increased using the Benchmark Simulation Model No 2 (BSM2) influent generator. All models are tested using two plant data sets corresponding to different operational periods (#D1, #D2). Simulation results reveal that the proposed approach can satisfactorily describe the transformation of organics, nutrients and minerals, the production of methane, carbon dioxide and sulfide and the potential formation of precipitates within the bulk (average deviation between computer simulations and measurements for both #D1, #D2 is around 10%). Model predictions suggest a stratified structure within the granule which is the result of: 1) applied loading rates, 2) mass transfer limitations and 3) specific (bacterial) affinity for substrate. Hence, inerts (X I ) and methanogens (X ac ) are situated in the inner zone, and this fraction lowers as the radius increases favouring the presence of acidogens (X su ,X aa , X fa ) and acetogens (X c4 ,X pro ). Additional simulations show the effects on the overall process performance when operational (pH) and loading (S:COD) conditions are modified. Lastly, the effect of intra-granular precipitation on the overall organic/inorganic distribution is assessed at: 1) different times; and, 2) reactor heights. Finally, the possibilities and opportunities offered by the proposed approach for conducting engineering optimization projects are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Predicting introductory programming performance: A multi-institutional multivariate study
NASA Astrophysics Data System (ADS)
Bergin, Susan; Reilly, Ronan
2006-12-01
A model for predicting student performance on introductory programming modules is presented. The model uses attributes identified in a study carried out at four third-level institutions in the Republic of Ireland. Four instruments were used to collect the data and over 25 attributes were examined. A data reduction technique was applied and a logistic regression model using 10-fold stratified cross validation was developed. The model used three attributes: Leaving Certificate Mathematics result (final mathematics examination at second level), number of hours playing computer games while taking the module and programming self-esteem. Prediction success was significant with 80% of students correctly classified. The model also works well on a per-institution level. A discussion on the implications of the model is provided and future work is outlined.
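The modelling approach described above, a logistic regression over three predictors scored with 10-fold stratified cross-validation, can be sketched as follows on synthetic data; the study's real attributes, data and coefficients are not used.

```python
# Sketch of the modelling approach described above: logistic regression over
# three predictors, scored with 10-fold stratified cross-validation.
# The data are synthetic; the study's real attributes are not used.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n = 200
maths_result = rng.normal(60, 15, n)          # prior mathematics examination mark
gaming_hours = rng.exponential(5, n)          # weekly hours playing computer games
prog_self_esteem = rng.normal(0, 1, n)        # programming self-esteem score
X = np.column_stack([maths_result, gaming_hours, prog_self_esteem])

# Synthetic pass/fail outcome loosely driven by the three attributes
logit = 0.05 * (maths_result - 60) - 0.1 * gaming_hours + 0.8 * prog_self_esteem
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print(f"10-fold stratified CV accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```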
Computer-aided decision making.
Keith M. Reynolds; Daniel L. Schmoldt
2006-01-01
Several major classes of software technologies have been used in decisionmaking for forest management applications over the past few decades. These computer-based technologies include mathematical programming, expert systems, network models, multi-criteria decisionmaking, and integrated systems. Each technology possesses unique advantages and disadvantages, and has...
The trend of the multi-scale temporal variability of precipitation in Colorado River Basin
NASA Astrophysics Data System (ADS)
Jiang, P.; Yu, Z.
2011-12-01
Hydrological problems like estimation of flood and drought frequencies under future climate change are not well addressed as a result of the inability of current climate models to provide reliable predictions (especially for precipitation) at timescales shorter than 1 month. In order to assess the possible impacts that the multi-scale temporal distribution of precipitation may have on the hydrological processes in the Colorado River Basin (CRB), a comparative analysis of the multi-scale temporal variability of precipitation as well as the trend of extreme precipitation is conducted in four regions controlled by different climate systems. Multi-scale precipitation variability, including within-storm patterns and intra-annual, inter-annual and decadal variabilities, will be analyzed to explore the possible trends of storm durations, inter-storm periods, average storm precipitation intensities and extremes under both long-term natural climate variability and human-induced warming. Furthermore, we will examine the ability of current climate models to simulate the multi-scale temporal variability and extremes of precipitation. On the basis of these analyses, a statistical downscaling method will be developed to disaggregate the future precipitation scenarios, which will provide a more reliable and finer temporal-scale precipitation time series for hydrological modeling. Analysis results and downscaling results will be presented.
Multi-Scale Computational Modeling of Two-Phased Metal Using GMC Method
NASA Technical Reports Server (NTRS)
Moghaddam, Masoud Ghorbani; Achuthan, A.; Bednacyk, B. A.; Arnold, S. M.; Pineda, E. J.
2014-01-01
A multi-scale computational model for determining plastic behavior in two-phased CMSX-4 Ni-based superalloys is developed on a finite element analysis (FEA) framework employing crystal plasticity constitutive model that can capture the microstructural scale stress field. The generalized method of cells (GMC) micromechanics model is used for homogenizing the local field quantities. At first, GMC as stand-alone is validated by analyzing a repeating unit cell (RUC) as a two-phased sample with 72.9% volume fraction of gamma'-precipitate in the gamma-matrix phase and comparing the results with those predicted by finite element analysis (FEA) models incorporating the same crystal plasticity constitutive model. The global stress-strain behavior and the local field quantity distributions predicted by GMC demonstrated good agreement with FEA. High computational saving, at the expense of some accuracy in the components of local tensor field quantities, was obtained with GMC. Finally, the capability of the developed multi-scale model linking FEA and GMC to solve real life sized structures is demonstrated by analyzing an engine disc component and determining the microstructural scale details of the field quantities.
Constructing rigorous and broad biosurveillance networks for detecting emerging zoonotic outbreaks
Brown, Mac; Moore, Leslie; McMahon, Benjamin; ...
2015-05-06
Determining optimal surveillance networks for an emerging pathogen is difficult since it is not known beforehand what the characteristics of a pathogen will be or where it will emerge. The resources for surveillance of infectious diseases in animals and wildlife are often limited and mathematical modeling can play a supporting role in examining a wide range of scenarios of pathogen spread. We demonstrate how a hierarchy of mathematical and statistical tools can be used in surveillance planning to help guide successful surveillance and mitigation policies for a wide range of zoonotic pathogens. The model forecasts can help clarify the complexities of potential scenarios, and optimize biosurveillance programs for rapidly detecting infectious diseases. Using the highly pathogenic zoonotic H5N1 avian influenza 2006-2007 epidemic in Nigeria as an example, we determined the risk for infection for localized areas in an outbreak and designed biosurveillance stations that are effective for different pathogen strains and a range of possible outbreak locations. We created a general multi-scale, multi-host stochastic SEIR epidemiological network model, with both short and long-range movement, to simulate the spread of an infectious disease through Nigerian human, poultry, backyard duck, and wild bird populations. We chose parameter ranges specific to avian influenza (but not to a particular strain) and used a Latin hypercube sample experimental design to investigate epidemic predictions in a thousand simulations. We ranked the risk of local regions by the number of times they became infected in the ensemble of simulations. These spatial statistics were then compiled into a potential risk map of infection. Finally, we validated the results with a known outbreak, using spatial analysis of all the simulation runs to show the progression matched closely with the observed location of the farms infected in the 2006-2007 epidemic.
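The ensemble design described above, many model runs over a Latin hypercube sample of uncertain parameters, can be sketched with a deliberately simple, single-population deterministic SEIR model. The paper's multi-host, spatial, stochastic network model and its avian influenza parameter ranges are not reproduced; the ranges below are assumed.

```python
# Minimal sketch of the ensemble idea: Latin hypercube sampling of
# epidemiological parameters feeding a simple deterministic SEIR model.
# Parameter ranges and population size are assumed for illustration.
import numpy as np
from scipy.stats import qmc

def seir_attack_size(beta, sigma, gamma, n_days=365, dt=0.1, n_pop=1e6):
    """Integrate a simple SEIR model with forward Euler; return final attack fraction."""
    s, e, i, r = n_pop - 1.0, 0.0, 1.0, 0.0
    for _ in range(int(n_days / dt)):
        new_exp = beta * s * i / n_pop * dt
        new_inf = sigma * e * dt
        new_rec = gamma * i * dt
        s, e, i, r = s - new_exp, e + new_exp - new_inf, i + new_inf - new_rec, r + new_rec
    return r / n_pop

sampler = qmc.LatinHypercube(d=3, seed=7)
# beta (transmission), sigma (1/incubation period), gamma (1/infectious period)
lower, upper = [0.2, 1 / 5.0, 1 / 10.0], [1.0, 1 / 1.0, 1 / 2.0]
params = qmc.scale(sampler.random(n=200), lower, upper)

attack = np.array([seir_attack_size(*p) for p in params])
print(f"fraction of parameter draws with >10% attack size: {(attack > 0.1).mean():.2f}")
```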
Sustaining GHz oscillation of carbon nanotube based oscillators via a MHz frequency excitation
NASA Astrophysics Data System (ADS)
Motevalli, Benyamin; Taherifar, Neda; Zhe Liu, Jefferson
2016-05-01
There have been intensive studies to investigate the properties of gigahertz nano-oscillators based on multi-walled carbon nanotubes (MWCNTs). Many of these studies, however, revealed that the unique telescopic translational oscillations in such devices would damp quickly due to various energy dissipation mechanisms. This challenge remains the primary obstacle against its practical applications. Herein, we propose a design concept in which a GHz oscillation could be re-excited by a MHz mechanical motion. This design involves a triple-walled CNT, in which sliding of the longer inner tube at a MHz frequency can re-excite and sustain a GHz oscillation of the shorter middle tube. Our molecular dynamics (MD) simulations prove this design concept at ˜10 nm scale. A mathematical model is developed to explore the feasibility at a larger size scale. As an example, in an oscillatory system with the CNT’s length above 100 nm, the high oscillatory frequency range of 1.8-3.3 GHz could be excited by moving the inner tube at a much lower frequency of 53.4 MHz. This design concept together with the mechanical model could energize the development of GHz nano-oscillators in miniaturized electro-mechanical devices.
A strain energy filter for 3D vessel enhancement with application to pulmonary CT images.
Xiao, Changyan; Staring, Marius; Shamonin, Denis; Reiber, Johan H C; Stolk, Jan; Stoel, Berend C
2011-02-01
The traditional Hessian-related vessel filters often suffer from detecting complex structures like bifurcations due to an over-simplified cylindrical model. To solve this problem, we present a shape-tuned strain energy density function to measure vessel likelihood in 3D medical images. This method is initially inspired by established stress-strain principles in mechanics. By considering the Hessian matrix as a stress tensor, the three invariants from orthogonal tensor decomposition are used independently or combined to formulate distinctive functions for vascular shape discrimination, brightness contrast and structure strength measuring. Moreover, a mathematical description of Hessian eigenvalues for general vessel shapes is obtained, based on an intensity continuity assumption, and a relative Hessian strength term is presented to ensure the dominance of second-order derivatives as well as suppress undesired step-edges. Finally, we adopt the multi-scale scheme to find an optimal solution through scale space. The proposed method is validated in experiments with a digital phantom and non-contrast-enhanced pulmonary CT data. It is shown that our model performed more effectively in enhancing vessel bifurcations and preserving details, compared to three existing filters. Copyright © 2010 Elsevier B.V. All rights reserved.
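For orientation, the following sketch illustrates the generic multi-scale Hessian idea this filter builds on, not the strain-energy formulation itself: scale-normalized Hessian eigenvalues are computed with Gaussian derivatives and combined into a crude tubularity score maximized over scale. The score, scales, and toy volume are assumptions chosen for the example.

```python
# Generic multi-scale Hessian "tubularity" sketch (assumed score, not the
# paper's strain-energy filter).
import numpy as np
from scipy import ndimage

def hessian_eigenvalues(vol, sigma):
    """Eigenvalues (ascending) of the scale-normalized Gaussian Hessian."""
    H = np.zeros(vol.shape + (3, 3))
    for i in range(3):
        for j in range(i, 3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            d = sigma**2 * ndimage.gaussian_filter(vol, sigma, order=order)
            H[..., i, j] = H[..., j, i] = d
    return np.linalg.eigvalsh(H)

def tubularity(vol, sigmas=(1.0, 2.0, 4.0)):
    """Crude bright-tube score: two strongly negative eigenvalues, one near zero."""
    best = np.zeros(vol.shape)
    for s in sigmas:
        lam = hessian_eigenvalues(vol, s)
        l3, l2, l1 = lam[..., 0], lam[..., 1], lam[..., 2]   # most negative first
        score = np.where((l2 < 0) & (l3 < 0), np.abs(l2 * l3) / (1.0 + np.abs(l1)), 0.0)
        best = np.maximum(best, score)
    return best

# toy volume: a bright tube along the first axis
vol = np.zeros((40, 40, 40))
vol[:, 18:22, 18:22] = 1.0
v = tubularity(ndimage.gaussian_filter(vol, 0.5))
print("tube voxels score higher than background:", v[20, 20, 20] > v[2, 2, 2])
```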
Vakalis, Stergios; Patuzzi, Francesco; Baratieri, Marco
2016-04-01
Modeling can be a powerful tool for designing and optimizing gasification systems. Modeling applications for small scale/fixed bed biomass gasifiers have attracted increasing interest due to their growing commercial use. Fixed bed gasifiers are characterized by a wide range of operational conditions and are multi-zoned processes. The reactants are distributed in different phases and the products from each zone influence the following process steps and thus the composition of the final products. The present study aims to improve the conventional 'Black-Box' thermodynamic modeling by means of developing multiple intermediate 'boxes' that calculate two-phase (solid-vapor) equilibria in small scale gasifiers. The model is therefore named 'Multi-Box'. Experimental data from a small scale gasifier have been used for the validation of the model. The returned results are significantly closer to the actual case-study measurements than those of single-stage thermodynamic modeling. Copyright © 2016 Elsevier Ltd. All rights reserved.
The role of zonal flows in the saturation of multi-scale gyrokinetic turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Staebler, G. M.; Candy, J.; Howard, N. T.
2016-06-15
The 2D spectrum of the saturated electric potential from gyrokinetic turbulence simulations that include both ion and electron scales (multi-scale) in axisymmetric tokamak geometry is analyzed. The paradigm that the turbulence is saturated when the zonal (axisymmetric) ExB flow shearing rate competes with linear growth is shown to not apply to the electron scale turbulence. Instead, it is the mixing rate by the zonal ExB velocity spectrum with the turbulent distribution function that competes with linear growth. A model of this mechanism is shown to be able to capture the suppression of electron-scale turbulence by ion-scale turbulence and the threshold for the increase in electron scale turbulence when the ion-scale turbulence is reduced. The model computes the strength of the zonal flow velocity and the saturated potential spectrum from the linear growth rate spectrum. The model for the saturated electric potential spectrum is applied to a quasilinear transport model and shown to accurately reproduce the electron and ion energy fluxes of the non-linear gyrokinetic multi-scale simulations. The zonal flow mixing saturation model is also shown to reproduce the non-linear upshift in the critical temperature gradient caused by zonal flows in ion-scale gyrokinetic simulations.
Quantifying Astronaut Tasks: Robotic Technology and Future Space Suit Design
NASA Technical Reports Server (NTRS)
Newman, Dava
2003-01-01
The primary aim of this research effort was to advance the current understanding of astronauts' capabilities and limitations in space-suited EVA by developing models of the constitutive and compatibility relations of a space suit, based on experimental data gained from human test subjects as well as a 12 degree-of-freedom human-sized robot, and utilizing these fundamental relations to estimate a human factors performance metric for space suited EVA work. The three specific objectives are to: 1) Compile a detailed database of torques required to bend the joints of a space suit, using realistic, multi- joint human motions. 2) Develop a mathematical model of the constitutive relations between space suit joint torques and joint angular positions, based on experimental data and compare other investigators' physics-based models to experimental data. 3) Estimate the work envelope of a space suited astronaut, using the constitutive and compatibility relations of the space suit. The body of work that makes up this report includes experimentation, empirical and physics-based modeling, and model applications. A detailed space suit joint torque-angle database was compiled with a novel experimental approach that used space-suited human test subjects to generate realistic, multi-joint motions and an instrumented robot to measure the torques required to accomplish these motions in a space suit. Based on the experimental data, a mathematical model is developed to predict joint torque from the joint angle history. Two physics-based models of pressurized fabric cylinder bending are compared to experimental data, yielding design insights. The mathematical model is applied to EVA operations in an inverse kinematic analysis coupled to the space suit model to calculate the volume in which space-suited astronauts can work with their hands, demonstrating that operational human factors metrics can be predicted from fundamental space suit information.
NASA Astrophysics Data System (ADS)
Khalilpourazari, Soheyl; Khalilpourazary, Saman
2017-05-01
In this article a multi-objective mathematical model is developed to minimize total time and cost while maximizing the production rate and surface finish quality in the grinding process. The model aims to determine optimal values of the decision variables considering process constraints. A lexicographic weighted Tchebycheff approach is developed to obtain efficient Pareto-optimal solutions of the problem in both rough and finished conditions. Utilizing a polyhedral branch-and-cut algorithm, the lexicographic weighted Tchebycheff model of the proposed multi-objective model is solved using GAMS software. The Pareto-optimal solutions provide a proper trade-off between conflicting objective functions which helps the decision maker to select the best values for the decision variables. Sensitivity analyses are performed to determine the effect of change in the grain size, grinding ratio, feed rate, labour cost per hour, length of workpiece, wheel diameter and downfeed of grinding parameters on each value of the objective function.
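A minimal sketch of the core scalarization (plain weighted Tchebycheff on a toy bi-objective problem, without the lexicographic refinement or the branch-and-cut MILP machinery used in the paper) might look as follows; the objective functions, ideal point, and weights are placeholders.

```python
# Weighted Tchebycheff scalarization on a toy bi-objective problem: Pareto
# points are traced by minimizing the weighted distance to an ideal point
# for a sweep of weights. Purely illustrative, not the grinding model.
import numpy as np
from scipy.optimize import minimize

def f1(x):  # stand-in for "total time" as a function of one decision variable
    return (x[0] - 1.0) ** 2 + 0.1

def f2(x):  # stand-in for "cost", conflicting with f1
    return (x[0] - 3.0) ** 2 + 0.2

ideal = np.array([0.1, 0.2])          # per-objective minima of the toy problem

def tchebycheff(x, w):
    return np.max(w * (np.array([f1(x), f2(x)]) - ideal))

pareto = []
for w1 in np.linspace(0.05, 0.95, 10):
    w = np.array([w1, 1.0 - w1])
    res = minimize(lambda x: tchebycheff(x, w), x0=[2.0],
                   bounds=[(0.0, 4.0)], method="Powell")
    pareto.append((f1(res.x), f2(res.x)))

for p in pareto:
    print(f"f1={p[0]:.3f}  f2={p[1]:.3f}")
```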
2016-07-15
AFRL-AFOSR-JP-TR-2016-0068: Multi-scale Computational Electromagnetics for Phenomenology and Saliency Characterization in Remote Sensing. Hean-Teik... ...electromagnetics to the application in microwave remote sensing as well as extension of modelling capability with computational flexibility to study
FLBEIA : A simulation model to conduct Bio-Economic evaluation of fisheries management strategies
NASA Astrophysics Data System (ADS)
Garcia, Dorleta; Sánchez, Sonia; Prellezo, Raúl; Urtizberea, Agurtzane; Andrés, Marga
Fishery systems are complex systems that need to be managed in order to ensure a sustainable and efficient exploitation of marine resources. Traditionally, fisheries management has relied on biological models. However, in recent years the focus on mathematical models which incorporate economic and social aspects has increased. Here, we present FLBEIA, a flexible software to conduct bio-economic evaluation of fisheries management strategies. The model is multi-stock, multi-fleet, stochastic and seasonal. The fishery system is described as a sum of processes, which are internally assembled in a predetermined way. There are several functions available to describe the dynamic of each process and new functions can be added to satisfy specific requirements.
Wang, Bao-Zhen; Chen, Zhi
2013-01-01
This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants on local and regional scales. In this method, an extended multi-box model combined with a multi-source and multi-grid Gaussian model is developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, a large amount of data including emission sources, air quality monitoring, meteorological data, and spatial location information required for air quality modeling are brought into an integrated modeling environment. This allows spatial variations in source distribution and meteorological conditions to be analyzed quantitatively and in greater detail. The developed modeling approach has been used to predict the spatial concentration distribution of four air pollutants (CO, NO(2), SO(2) and PM(2.5)) for the State of California. The modeling results are compared with the monitoring data. Good agreement is obtained, which demonstrates that the developed modeling approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.
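As a hedged illustration of the point-source building block that multi-source Gaussian approaches combine (not the GMSMB implementation), the sketch below superposes two hypothetical sources on a receptor grid with crude, assumed dispersion-parameter growth.

```python
# Point-source Gaussian dispersion sketch with superposition of sources.
# Emission rates, stack heights, and dispersion coefficients are illustrative.
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Concentration from one point source at crosswind offset y and height z.

    q: emission rate (g/s), u: wind speed (m/s), h: effective stack height (m),
    sigma_y, sigma_z: dispersion parameters (m) at the receptor's downwind distance.
    """
    lateral = np.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = (np.exp(-0.5 * ((z - h) / sigma_z) ** 2)
                + np.exp(-0.5 * ((z + h) / sigma_z) ** 2))   # ground reflection
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

sources = [(10.0, 0.0, 0.0, 30.0), (5.0, 200.0, 100.0, 20.0)]   # (q, x0, y0, h)
xs, ys = np.meshgrid(np.linspace(50, 2000, 100), np.linspace(-500, 500, 100))
conc = np.zeros_like(xs)
for q, x0, y0, h in sources:
    dx, dy = xs - x0, ys - y0
    # crude, assumed growth of dispersion parameters with downwind distance
    sy, sz = 0.08 * np.clip(dx, 1, None), 0.06 * np.clip(dx, 1, None)
    conc += np.where(dx > 0, gaussian_plume(q, 3.0, dy, 0.0, h, sy, sz), 0.0)

print("peak ground-level concentration (g/m^3):", conc.max())
```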
Modeling evolution of the mind and cultures: emotional Sapir-Whorf hypothesis
NASA Astrophysics Data System (ADS)
Perlovsky, Leonid I.
2009-05-01
Evolution of cultures is ultimately determined by mechanisms of the human mind. The paper discusses the mechanisms of evolution of language from primordial undifferentiated animal cries to contemporary conceptual contents. In parallel with the differentiation of conceptual contents, the conceptual contents were differentiated from the emotional contents of languages. The paper suggests the neural brain mechanisms involved in these processes. Experimental evidence and theoretical arguments are discussed, including mathematical approaches to cognition and language: modeling fields theory, the knowledge instinct, and the dual model connecting language and cognition. Mathematical results are related to cognitive science, linguistics, and psychology. The paper gives an initial mathematical formulation and mean-field equations for the hierarchical dynamics of both the human mind and culture. In the mind heterarchy, operation of the knowledge instinct manifests through mechanisms of differentiation and synthesis. The emotional contents of language are related to language grammar. The conclusion is an emotional version of the Sapir-Whorf hypothesis. Cultural advantages of "conceptual" pragmatic cultures, in which emotionality of language is diminished and differentiation overtakes synthesis, resulting in fast evolution at the price of self-doubt and internal crises, are compared to those of traditional cultures, where differentiation lags behind synthesis, resulting in cultural stability at the price of stagnation. A multi-language, multi-ethnic society might combine the benefits of stability and fast differentiation. Unsolved problems and future theoretical and experimental directions are discussed.
Scalable multi-objective control for large scale water resources systems under uncertainty
NASA Astrophysics Data System (ADS)
Giuliani, Matteo; Quinn, Julianne; Herman, Jonathan; Castelletti, Andrea; Reed, Patrick
2016-04-01
The use of mathematical models to support the optimal management of environmental systems has expanded rapidly in recent years due to advances in scientific knowledge of the natural processes, the efficiency of optimization techniques, and the availability of computational resources. However, ongoing changes in climate and society introduce additional challenges for controlling these systems, ultimately motivating the emergence of complex models to explore key causal relationships and dependencies on uncontrolled sources of variability. In this work, we contribute a novel implementation of the evolutionary multi-objective direct policy search (EMODPS) method for controlling environmental systems under uncertainty. The proposed approach combines direct policy search (DPS) with hierarchical parallelization of multi-objective evolutionary algorithms (MOEAs) and offers a threefold advantage: the DPS simulation-based optimization can be combined with any simulation model and does not add any constraint on modeled information, allowing the use of exogenous information in conditioning the decisions. Moreover, the combination of DPS and MOEAs prompts the generation of Pareto-approximate sets of solutions for up to 10 objectives, thus overcoming the decision biases produced by cognitive myopia, where narrow or restrictive definitions of optimality strongly limit the discovery of decision-relevant alternatives. Finally, the use of large-scale MOEA parallelization improves the ability of the designed solutions to handle the uncertainty due to severe natural variability. The proposed approach is demonstrated on a challenging water resources management problem represented by the optimal control of a network of four multipurpose water reservoirs in the Red River basin (Vietnam). As part of the medium-long term energy and food security national strategy, four large reservoirs have been constructed on the Red River tributaries, which are mainly operated for hydropower production, flood control, and water supply. Numerical results under historical as well as synthetically generated hydrologic conditions show that our approach is able to discover key system tradeoffs in the operations of the system. The ability of the algorithm to find near-optimal solutions increases with the number of islands in the adopted hierarchical parallelization scheme. In addition, although significant performance degradation is observed when the solutions designed over history are re-evaluated over synthetically generated inflows, we successfully reduced these vulnerabilities by identifying alternative solutions that are more robust to hydrologic uncertainties, while also addressing the tradeoffs across the Red River multi-sector services.
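The sketch below conveys the direct policy search idea on a deliberately tiny problem: one toy reservoir, a radial-basis release policy, synthetic inflows, a single scalarized objective, and a (1+5) evolution strategy in place of the parallel MOEA. None of it reflects the Red River study's actual policies, objectives, or algorithms.

```python
# Toy direct policy search: tune a radial-basis release rule by simulation.
import numpy as np

rng = np.random.default_rng(1)
inflows = 5.0 + 2.0 * np.sin(np.linspace(0, 8 * np.pi, 400)) + rng.normal(0, 0.5, 400)
demand, capacity = 5.0, 60.0

def rbf_policy(storage, theta):
    """Release as a sum of radial basis functions of normalized storage."""
    centers, widths, weights = theta[:3], np.abs(theta[3:6]) + 0.1, theta[6:9]
    s = storage / capacity
    release = np.sum(weights * np.exp(-((s - centers) / widths) ** 2))
    return float(np.clip(release, 0.0, storage))

def simulate(theta):
    storage, deficit, flood = capacity / 2, 0.0, 0.0
    for q in inflows:
        release = rbf_policy(storage, theta)
        storage = storage + q - release
        if storage > capacity:                  # spill counts as flooding
            flood += storage - capacity
            storage = capacity
        deficit += max(0.0, demand - release) ** 2
    return deficit + 10.0 * flood               # scalarized objective for the sketch

theta = rng.normal(0, 1, 9)
best = simulate(theta)
for _ in range(300):                             # simple (1+5) evolution strategy
    trials = theta + rng.normal(0, 0.2, (5, 9))
    scores = [simulate(t) for t in trials]
    if min(scores) < best:
        best, theta = min(scores), trials[int(np.argmin(scores))]
print("best scalarized objective:", round(best, 1))
```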
Multi-Scale Characterization of Orthotropic Microstructures
2008-04-01
D. Valiveti, S. J. Harris, J. Boileau, A domain partitioning based pre-processor for multi-scale modelling of cast aluminium alloys, Modelling and... Journal article submitted to Modeling and Simulation in Materials Science and Engineering. PAO Case Number: WPAFB 08-3362. ...element for characterization or simulation to avoid misleading predictions of macroscopic deformation, fracture, or transport behavior. Likewise
Multi-fluid Dynamics for Supersonic Jet-and-Crossflows and Liquid Plug Rupture
NASA Astrophysics Data System (ADS)
Hassan, Ezeldin A.
Multi-fluid dynamics simulations require appropriate numerical treatments based on the main flow characteristics, such as flow speed, turbulence, thermodynamic state, and time and length scales. In this thesis, two distinct problems are investigated: supersonic jet and crossflow interactions; and liquid plug propagation and rupture in an airway. Gaseous non-reactive ethylene jet and air crossflow simulation represents essential physics for fuel injection in SCRAMJET engines. The regime is highly unsteady, involving shocks, turbulent mixing, and large-scale vortical structures. An eddy-viscosity-based multi-scale turbulence model is proposed to resolve turbulent structures consistent with grid resolution and turbulence length scales. Predictions of the time-averaged fuel concentration from the multi-scale model are improved over Reynolds-averaged Navier-Stokes models originally derived from stationary flow. The improvement from the multi-scale model alone is, however, limited in cases where the vortical structures are small and scattered, which would require prohibitively expensive grids to resolve the flow field accurately. Statistical information related to turbulent fluctuations is utilized to estimate an effective turbulent Schmidt number, which is shown to be highly varying in space. Accordingly, an adaptive turbulent Schmidt number approach is proposed, by allowing the resolved field to adaptively influence the value of the turbulent Schmidt number in the multi-scale turbulence model. The proposed model estimates a time-averaged turbulent Schmidt number adapted to the computed flowfield, instead of the constant value common to the eddy-viscosity-based Navier-Stokes models. This approach is assessed using a grid-refinement study for the normal injection case, and tested with 30 degree injection, showing improved results over the constant turbulent Schmidt model in both the mean and variance of fuel concentration predictions. For the incompressible liquid plug propagation and rupture study, numerical simulations are conducted using an Eulerian-Lagrangian approach with a continuous-interface method. A reconstruction scheme is developed to allow topological changes during plug rupture by altering the connectivity information of the interface mesh. Rupture time is shown to be delayed as the initial precursor film thickness increases. During the plug rupture process, a sudden increase of mechanical stresses on the tube wall is recorded, which can cause tissue damage.
Modeling of Bulk Evaporation and Condensation
NASA Technical Reports Server (NTRS)
Anghaie, S.; Ding, Z.
1996-01-01
This report describes the modeling and mathematical formulation of the bulk evaporation and condensation involved in liquid-vapor phase change processes. An internal energy formulation, for these phase change processes that occur under the constraint of constant volume, was studied. Compared to the enthalpy formulation, the internal energy formulation has a more concise and compact form. The velocity and time scales of the interface movement were obtained through scaling analysis and verified by performing detailed numerical experiments. The convection effect induced by the density change was analyzed and found to be negligible compared to the conduction effect. Two iterative methods for updating the value of the vapor phase fraction, the energy based (E-based) and temperature based (T-based) methods, were investigated. Numerical experiments revealed that for the evaporation and condensation problems the E-based method is superior to the T-based method in terms of computational efficiency. The internal energy formulation and the E-based method were used to compute the bulk evaporation and condensation processes under different conditions. The evolution of the phase change processes was investigated. This work provided a basis for the modeling of thermal performance of multi-phase nuclear fuel elements under variable gravity conditions, in which the buoyancy convection due to gravity effects and internal heating are involved.
Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biros, George
Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model as well as the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a central challenge in UQ, especially for large-scale models. We propose to develop the mathematical tools to address these challenges in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we will create OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin's own 10 petaflops Stampede system, ANL's Mira system, and ORNL's Titan system. While our focus is on fundamental mathematical/computational methods and algorithms, we will assess our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.
Parasuram, Harilal; Nair, Bipin; D'Angelo, Egidio; Hines, Michael; Naldi, Giovanni; Diwakar, Shyam
2016-01-01
Local Field Potentials (LFPs) are population signals generated by complex spatiotemporal interaction of current sources and dipoles. Mathematical computations of LFPs allow the study of circuit functions and dysfunctions via simulations. This paper introduces LFPsim, a NEURON-based tool for computing population LFP activity and single neuron extracellular potentials. LFPsim was developed to be used on existing cable compartmental neuron and network models. Point source, line source, and RC based filter approximations can be used to compute extracellular activity. As a demonstration of efficient implementation, we showcase LFPs from mathematical models of electrotonically compact cerebellum granule neurons and morphologically complex neurons of the neocortical column. LFPsim reproduced neocortical LFP at 8, 32, and 56 Hz via current injection, in vitro post-synaptic N2a, N2b waves and in vivo T-C waves in cerebellum granular layer. LFPsim also includes a simulation of multi-electrode array of LFPs in network populations to aid computational inference between biophysical activity in neural networks and corresponding multi-unit activity resulting in extracellular and evoked LFP signals.
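For context, a minimal sketch of the point-source approximation such tools implement, V = I / (4*pi*sigma*r) summed over compartmental currents, is given below; the conductivity value, currents, and geometry are illustrative assumptions rather than LFPsim code.

```python
# Point-source extracellular potential sketch: each compartmental membrane
# current contributes I / (4*pi*sigma*r) at the electrode.
import numpy as np

SIGMA = 0.3   # extracellular conductivity (S/m), a commonly assumed value

def point_source_lfp(currents_nA, positions_um, electrode_um):
    """Sum compartmental currents into an extracellular potential (mV)."""
    r = np.linalg.norm(positions_um - electrode_um, axis=1) * 1e-6   # um -> m
    r = np.maximum(r, 1e-6)                                          # avoid singularity
    return np.sum(currents_nA * 1e-9 / (4.0 * np.pi * SIGMA * r)) * 1e3  # V -> mV

# toy example: a dipole-like pair of compartments and a nearby electrode
currents = np.array([1.0, -1.0])                              # nA, net current conserved
positions = np.array([[0.0, 0.0, 0.0], [0.0, 100.0, 0.0]])    # um
electrode = np.array([50.0, 50.0, 30.0])                      # um
print(point_source_lfp(currents, positions, electrode), "mV")
```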
Magrans de Abril, Ildefons; Yoshimoto, Junichiro; Doya, Kenji
2018-06-01
This article presents a review of computational methods for connectivity inference from neural activity data derived from multi-electrode recordings or fluorescence imaging. We first identify biophysical and technical challenges in connectivity inference along the data processing pipeline. We then review connectivity inference methods based on two major mathematical foundations, namely, descriptive model-free approaches and generative model-based approaches. We investigate representative studies in both categories and clarify which challenges have been addressed by which method. We further identify critical open issues and possible research directions. Copyright © 2018 The Author(s). Published by Elsevier Ltd.. All rights reserved.
Ascarrunz, F G; Kisley, M A; Flach, K A; Hamilton, R W; MacGregor, R J
1995-07-01
This paper applies a general mathematical system for characterizing and scaling functional connectivity and information flow across the diffuse (EC) and discrete (DG) input junctions to the CA3 hippocampus. Both gross connectivity and coordinated multiunit informational firing patterns are quantitatively characterized in terms of 32 defining parameters interrelated by 17 equations, and then scaled down according to rules for uniformly proportional scaling and for partial representation. The diffuse EC-CA3 junction is shown to be uniformly scalable with realistic representation of both essential spatiotemporal cooperativity and coordinated firing patterns down to populations of a few hundred neurons. Scaling of the discrete DG-CA3 junction can be effected with a two-step process, which necessarily deviates from uniform proportionality but nonetheless produces a valuable and readily interpretable reduced model, also utilizing a few hundred neurons in the receiving population. Partial representation produces a reduced model of only a portion of the full network where each model neuron corresponds directly to a biological neuron. The mathematical analysis illustrated here shows that although omissions and distortions are inescapable in such an application, satisfactorily complete and accurate models the size of pattern modules are possible. Finally, the mathematical characterization of these junctions generates a theory which sees the DG as a definer of the fine structure of embedded traces in the hippocampus and entire coordinated patterns of sequences of 14-cell links in CA3 as triggered by the firing of sequences of individual neurons in DG.
Enhancing Manufacturing Process Education via Computer Simulation and Visualization
ERIC Educational Resources Information Center
Manohar, Priyadarshan A.; Acharya, Sushil; Wu, Peter
2014-01-01
Industrially significant metal manufacturing processes such as melting, casting, rolling, forging, machining, and forming are multi-stage, complex processes that are labor, time, and capital intensive. Academic research develops mathematical modeling of these processes that provide a theoretical framework for understanding the process variables…
Hazardous waste management system design under population and environmental impact considerations.
Yilmaz, Ozge; Kara, Bahar Y; Yetis, Ulku
2017-12-01
This paper presents a multi-objective mixed-integer location/routing model that aims to minimize transportation cost and risks for large-scale hazardous waste management systems (HWMSs). Risks induced by hazardous wastes (HWs) on both the public and the environment are addressed. For this purpose, a new environmental impact definition is proposed that considers the environmentally vulnerable elements including water bodies, agricultural areas, coastal regions and forestlands located within a certain bandwidth around transportation routes. The solution procedure yields a Pareto optimal curve for the two conflicting objectives. The conceptual model developed prior to mathematical formulation addresses waste-to-technology compatibility and HW processing residues to assure applicability of the model to real-life HWMSs. The suggested model was used in a case study targeting HWMS in Turkey. Based on the proposed solution, it was possible to identify not only the transportation routes but also a set of information on HW handling facilities including the types, locations, capacities, and investment/operational cost. The HWMS of this study can be utilized both by public authorities and private sector investors for planning purposes. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Krishnamurthy, Narayanan; Maddali, Siddharth; Romanov, Vyacheslav; Hawk, Jeffrey
We present some structural properties of multi-component steel alloys as predicted by a random forest machine-learning model. These non-parametric models are trained on high-dimensional data sets defined by features such as chemical composition, pre-processing temperatures and environmental influences, the latter of which are based upon standardized testing procedures for tensile, creep and rupture properties as defined by the American Society of Testing and Materials (ASTM). We quantify the goodness of fit of these models as well as the inferred relative importance of each of these features, all with a conveniently defined metric and scale. The models are tested with synthetic data points, generated subject to the appropriate mathematical constraints for the various features. By this we highlight possible trends in the increase or degradation of the structural properties with perturbations in the features of importance. This work is presented as part of the Data Science Initiative at the National Energy Technology Laboratory, directed specifically towards the computational design of steel alloys.
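A hedged sketch of the general modeling pattern (a random forest on tabular alloy features, with out-of-bag validation and feature importances) is shown below; the data are synthetic and the feature names are placeholders, not the laboratory's dataset or trained models.

```python
# Random forest on synthetic tabular "alloy" data with feature importances.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0.0, 0.3, n),      # placeholder: Cr fraction
    rng.uniform(0.0, 0.1, n),      # placeholder: Mo fraction
    rng.uniform(800, 1200, n),     # placeholder: processing temperature (C)
    rng.uniform(20, 700, n),       # placeholder: test temperature (C)
])
# synthetic target loosely mimicking a strength-like property with noise
y = 500 + 800 * X[:, 0] + 300 * X[:, 1] - 0.4 * X[:, 3] + rng.normal(0, 20, n)

model = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=0)
model.fit(X, y)
print("out-of-bag R^2:", round(model.oob_score_, 3))
for name, imp in zip(["Cr", "Mo", "proc_T", "test_T"], model.feature_importances_):
    print(f"{name:8s} importance {imp:.2f}")
```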
Center for Technology for Advanced Scientific Component Software (TASCS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostadin, Damevski
A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.
Research in Applied Mathematics Related to Mathematical System Theory.
1977-06-01
This report deals with research results obtained in the field of mathematical system theory. Special emphasis was given to the following areas: (1) Linear system theory over a field: parametrization of multi-input, multi-output systems and the geometric structure of classes of systems of constant dimension. (2) Linear systems over a ring: development of the theory for very general classes of rings. (3) Nonlinear system theory: basic
What We Know About the Brain Structure-Function Relationship.
Batista-García-Ramó, Karla; Fernández-Verdecia, Caridad Ivette
2018-04-18
How the human brain works is still an open question, as is its relation to brain architecture: the non-trivial structure–function relationship. The main hypothesis is that the anatomic architecture conditions, but does not determine, the neural network dynamics. The functional connectivity cannot be explained by considering the anatomical substrate alone. This involves complex and controversial aspects of neuroscience, in which the methods and methodologies for obtaining structural and functional connectivity are not always rigorously applied. The goal of the present article is to discuss the progress made in elucidating the structure–function relationship of the Central Nervous System, particularly at the brain level, based on results from human and animal studies. Current novel systems and neuroimaging techniques with high resolutive physio-structural capacity have brought about the development of an integral framework of different structural and morphometric tools such as image processing, computational modeling and graph theory. Different laboratories have contributed in vivo, in vitro and computational/mathematical models to study intrinsic neural activity patterns based on anatomical connections. We conclude that multi-modal neuroimaging techniques are required, together with improved methodologies for obtaining structural and functional connectivity. Even though simulations of intrinsic neural activity based on anatomical connectivity can reproduce much of the observed patterns of empirical functional connectivity, future models should be multifactorial in order to elucidate multi-scale relationships and to infer disorder mechanisms.
NASA Astrophysics Data System (ADS)
Siegert, Stefan
2017-04-01
Initialised climate forecasts on seasonal time scales, run several months or even years ahead, are now an integral part of the battery of products offered by climate services world-wide. The availability of seasonal climate forecasts from various modeling centres gives rise to multi-model ensemble forecasts. Post-processing such seasonal-to-decadal multi-model forecasts is challenging 1) because the cross-correlation structure between multiple models and observations can be complicated, 2) because the amount of training data to fit the post-processing parameters is very limited, and 3) because the forecast skill of numerical models tends to be low on seasonal time scales. In this talk I will review new statistical post-processing frameworks for multi-model ensembles. I will focus particularly on Bayesian hierarchical modelling approaches, which are flexible enough to capture commonly made assumptions about collective and model-specific biases of multi-model ensembles. Despite the advances in statistical methodology, it turns out to be very difficult to out-perform the simplest post-processing method, which just recalibrates the multi-model ensemble mean by linear regression. I will discuss reasons for this, which are closely linked to the specific characteristics of seasonal multi-model forecasts. I explore possible directions for improvements, for example using informative priors on the post-processing parameters, and jointly modelling forecasts and observations.
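A minimal sketch of the simple benchmark mentioned, recalibrating the multi-model ensemble mean against observations by linear regression over a short training period, is given below; the forecasts and observations are synthetic, and the skill level and sample sizes are assumptions.

```python
# Linear-regression recalibration of a synthetic multi-model ensemble mean.
import numpy as np

rng = np.random.default_rng(2)
n_years, n_models = 30, 4
truth = rng.normal(0, 1, n_years)                                  # observed anomaly
forecasts = 0.4 * truth + rng.normal(0, 1, (n_models, n_years))    # weakly skillful models
ens_mean = forecasts.mean(axis=0)

# fit y = a + b * ens_mean on the first 20 years, verify on the last 10
train, test = slice(0, 20), slice(20, None)
b, a = np.polyfit(ens_mean[train], truth[train], deg=1)            # slope, intercept
calibrated = a + b * ens_mean

def rmse(x, y):
    return float(np.sqrt(np.mean((x - y) ** 2)))

print("raw RMSE         :", round(rmse(ens_mean[test], truth[test]), 2))
print("recalibrated RMSE:", round(rmse(calibrated[test], truth[test]), 2))
```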
The oxygen uptake slow component at submaximal intensities in breaststroke swimming
Oliveira, Diogo R.; Gonçalves, Lio F.; Reis, António M.; Fernandes, Ricardo J.; Garrido, Nuno D.
2016-01-01
The present work proposed to study the oxygen uptake slow component (VO2 SC) of breaststroke swimmers at four different intensities of submaximal exercise, via mathematical modeling of a multi-exponential function. The slow component (SC) was also assessed with two different fixed interval methods and the three methods were compared. Twelve male swimmers performed a test comprising four submaximal 300 m bouts at different intensities where all expired gases were collected breath by breath. Multi-exponential modeling showed values above 450 ml·min−1 of the SC in the two last bouts of exercise (those with intensities above the lactate threshold). A significant effect of the method that was used to calculate the VO2 SC was revealed. Higher mean values were observed when using mathematical modeling compared with the fixed interval 3rd min method (F=7.111; p=0.012; η²=0.587); furthermore, differences were detected between the two fixed interval methods. No significant relationship was found between the SC determined by any method and the blood lactate measured at each of the four exercise intensities. In addition, no significant association between the SC and peak oxygen uptake was found. It was concluded that in trained breaststroke swimmers, the presence of the VO2 SC may be observed at intensities above that corresponding to the 3.5 mM threshold. Moreover, mathematical modeling of the oxygen uptake on-kinetics tended to show a higher slow component as compared to fixed interval methods. PMID:28149379
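For illustration only, the sketch below fits a generic primary-plus-slow-component bi-exponential with onset delays to synthetic breath-by-breath data; it is not the study's exact model, parameters, or measurements.

```python
# Fit a primary + slow-component exponential model to synthetic VO2 data.
import numpy as np
from scipy.optimize import curve_fit

def vo2_model(t, base, a1, tau1, td1, a2, tau2, td2):
    prim = np.where(t >= td1, a1 * (1 - np.exp(-np.maximum(t - td1, 0.0) / tau1)), 0.0)
    slow = np.where(t >= td2, a2 * (1 - np.exp(-np.maximum(t - td2, 0.0) / tau2)), 0.0)
    return base + prim + slow

t = np.arange(0, 300, 5.0)                                     # s
true = vo2_model(t, 800, 1500, 25, 15, 450, 90, 120)           # ml/min, assumed values
obs = true + np.random.default_rng(3).normal(0, 60, t.size)    # noisy "breath" data

p0 = [700, 1200, 20, 10, 300, 80, 100]                         # rough initial guess
popt, _ = curve_fit(vo2_model, t, obs, p0=p0, maxfev=20000)
print("estimated slow-component amplitude (ml/min):", round(popt[4]))
```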
NASA Astrophysics Data System (ADS)
Huang, Shiquan; Yi, Youping; Li, Pengchuan
2011-05-01
In recent years, multi-scale simulation techniques for metal forming have gained significant attention for predicting the whole deformation process and the microstructure evolution of the product. Advances in macro-scale numerical simulation of metal forming are remarkable, and commercial FEM software, such as Deform 2D/3D, has found wide application in the field. However, multi-scale simulation has seen little application due to the non-linearity of microstructure evolution during forming and the difficulty of modeling at the micro-scale level. This work deals with the modeling of microstructure evolution and a new method of multi-scale simulation of the forging process. The aviation material 7050 aluminum alloy has been used as an example for modeling of microstructure evolution. The corresponding thermal simulation experiments were performed on a Gleeble 1500 machine, and the tested specimens were analyzed for modeling of dislocation density and of nucleation and growth during recrystallization (DRX). A source program using the cellular automaton (CA) method has been developed to simulate grain nucleation and growth, in which the change of grain topology caused by the metal deformation is considered. The physical fields at the macro-scale level, such as the temperature, stress and strain fields obtained with the commercial software Deform 3D, are coupled with the stored deformation energy at the micro-scale level through a dislocation model to realize the multi-scale simulation. The method is illustrated by simulation of an aircraft wheel hub forging. By coupling the Deform 3D results with the CA results, the forging deformation process and the microstructure evolution at any point of the forging could be simulated. To verify the simulation, aircraft wheel hub forging experiments were performed in the laboratory, and the comparison of simulation and experimental results is discussed in detail.
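A heavily simplified 2D cellular automaton sketch of the nucleation-and-growth step (without the dislocation-density coupling or the Deform 3D fields) is given below; the nucleation and growth probabilities are arbitrary illustrative values.

```python
# Toy 2D cellular automaton for nucleation and grain growth on a stored-energy field.
import numpy as np

rng = np.random.default_rng(4)
N, steps = 100, 40
stored_energy = rng.uniform(0.5, 1.5, (N, N))     # stand-in for dislocation energy
grain = np.zeros((N, N), dtype=int)               # 0 = unrecrystallized
next_id = 1

for _ in range(steps):
    # nucleation: probability proportional to local stored energy
    nucleate = (grain == 0) & (rng.random((N, N)) < 0.0005 * stored_energy)
    for i, j in zip(*np.nonzero(nucleate)):
        grain[i, j] = next_id
        next_id += 1
    # growth: an unrecrystallized cell joins a recrystallized neighbour
    new = grain.copy()
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = np.roll(grain, (di, dj), axis=(0, 1))
        take = (new == 0) & (nb > 0) & (rng.random((N, N)) < 0.3)
        new[take] = nb[take]
    grain = new

frac = np.mean(grain > 0)
print(f"recrystallized fraction after {steps} steps: {frac:.2f}, grains: {next_id - 1}")
```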
NASA Astrophysics Data System (ADS)
Dağlarli, Evren; Temeltaş, Hakan
2007-04-01
This paper presents an autonomous robot control architecture based on an artificial emotional system. A hidden Markov model is developed as the mathematical background for stochastic emotional and behavioral transitions. The motivation module of the architecture acts as a behavioral gain-effect generator for achieving multi-objective robot tasks. According to the emotional and behavioral state transition probabilities, artificial emotions determine sequences of behaviors. The motivational gain effects of the proposed architecture can also be observed on the executing behaviors during simulation.
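As a rough illustration of the underlying idea (a Markov chain over discrete emotional states driving behaviour selection), the sketch below samples a behaviour sequence from a hypothetical transition matrix; the states and probabilities are invented for the example and are not the paper's trained model.

```python
# Sample a behaviour sequence from a hypothetical emotional transition matrix.
import numpy as np

rng = np.random.default_rng(5)
emotions = ["calm", "curious", "fearful"]
behaviours = {"calm": "patrol", "curious": "explore", "fearful": "retreat"}

# row-stochastic emotional transition matrix (hypothetical values)
T = np.array([[0.80, 0.15, 0.05],
              [0.20, 0.70, 0.10],
              [0.30, 0.10, 0.60]])

state = 0
for step in range(10):
    state = rng.choice(3, p=T[state])
    print(step, emotions[state], "->", behaviours[emotions[state]])
```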
NASA Astrophysics Data System (ADS)
Phillips, M.; Denning, A. S.; Randall, D. A.; Branson, M.
2016-12-01
Multi-scale models of the atmosphere provide an opportunity to investigate processes that are unresolved by traditional Global Climate Models while at the same time remaining viable in terms of computational resources for climate-length time scales. The MMF represents a shift away from large horizontal grid spacing in traditional GCMs that leads to overabundant light precipitation and lack of heavy events, toward a model where precipitation intensity is allowed to vary over a much wider range of values. Resolving atmospheric motions on the scale of 4 km makes it possible to recover features of precipitation, such as intense downpours, that were previously only obtained by computationally expensive regional simulations. These heavy precipitation events may have little impact on large-scale moisture and energy budgets, but are outstanding in terms of interaction with the land surface and potential impact on human life. Three versions of the Community Earth System Model were used in this study; the standard CESM, the multi-scale `Super-Parameterized' CESM where large-scale parameterizations have been replaced with a 2D cloud-permitting model, and a multi-instance land version of the SP-CESM where each column of the 2D CRM is allowed to interact with an individual land unit. These simulations were carried out using prescribed Sea Surface Temperatures for the period from 1979-2006 with daily precipitation saved for all 28 years. Comparisons of the statistical properties of precipitation between model architectures and against observations from rain gauges were made, with specific focus on detection and evaluation of extreme precipitation events.
Stability of distributed MPC in an intersection scenario
NASA Astrophysics Data System (ADS)
Sprodowski, T.; Pannek, J.
2015-11-01
The research topic of autonomous cars and the communication among them has attracted much attention in recent years and is developing quickly. Among others, this research area spans fields such as image recognition, mathematical control theory, communication networks, and sensor fusion. We consider an intersection scenario where we divide the shared road space into different cells. These cells form a grid. The cars are modelled as an autonomous multi-agent system based on the Distributed Model Predictive Control algorithm (DMPC). We prove that the overall system reaches stability using optimal control for each agent and demonstrate this with numerical results.
The role of zonal flows in the saturation of multi-scale gyrokinetic turbulence
Staebler, Gary M.; Candy, John; Howard, Nathan T.; ...
2016-06-29
The 2D spectrum of the saturated electric potential from gyrokinetic turbulence simulations that include both ion and electron scales (multi-scale) in axisymmetric tokamak geometry is analyzed. The paradigm that the turbulence is saturated when the zonal (axisymmetric) ExB flow shearing rate competes with linear growth is shown to not apply to the electron scale turbulence. Instead, it is the mixing rate by the zonal ExB velocity spectrum with the turbulent distribution function that competes with linear growth. A model of this mechanism is shown to be able to capture the suppression of electron-scale turbulence by ion-scale turbulence and the threshold for the increase in electron scale turbulence when the ion-scale turbulence is reduced. The model computes the strength of the zonal flow velocity and the saturated potential spectrum from the linear growth rate spectrum. The model for the saturated electric potential spectrum is applied to a quasilinear transport model and shown to accurately reproduce the electron and ion energy fluxes of the non-linear gyrokinetic multi-scale simulations. Finally, the zonal flow mixing saturation model is also shown to reproduce the non-linear upshift in the critical temperature gradient caused by zonal flows in ion-scale gyrokinetic simulations.
Mathematical and Numerical Techniques in Energy and Environmental Modeling
NASA Astrophysics Data System (ADS)
Chen, Z.; Ewing, R. E.
Mathematical models have been widely used to predict, understand, and optimize many complex physical processes, from semiconductor or pharmaceutical design to large-scale applications such as global weather models to astrophysics. In particular, simulation of environmental effects of air pollution is extensive. Here we address the need for using similar models to understand the fate and transport of groundwater contaminants and to design in situ remediation strategies. Three basic problem areas need to be addressed in the modeling and simulation of the flow of groundwater contamination. First, one obtains an effective model to describe the complex fluid/fluid and fluid/rock interactions that control the transport of contaminants in groundwater. This includes the problem of obtaining accurate reservoir descriptions at various length scales and modeling the effects of this heterogeneity in the reservoir simulators. Next, one develops accurate discretization techniques that retain the important physical properties of the continuous models. Finally, one develops efficient numerical solution algorithms that utilize the potential of the emerging computing architectures. We will discuss recent advances and describe the contribution of each of the papers in this book in these three areas. Keywords: reservoir simulation, mathematical models, partial differential equations, numerical algorithms
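A minimal sketch of one contaminant-transport building block discussed here, an explicit finite-difference solution of 1D advection-dispersion, follows; the velocity, dispersion coefficient, and boundary handling are illustrative assumptions, not a calibrated groundwater model.

```python
# Explicit upwind/central finite differences for 1D advection-dispersion:
# c_t + v c_x = D c_xx, with illustrative parameters.
import numpy as np

nx, L = 200, 100.0                        # cells, domain length (m)
dx = L / nx
v, D = 0.5, 0.05                          # velocity (m/day), dispersion (m^2/day)
dt = 0.4 * min(dx / v, dx**2 / (2 * D))   # step well inside explicit stability limits

c = np.zeros(nx)
c[:10] = 1.0                              # initial contaminant slug near the inlet
x = (np.arange(nx) + 0.5) * dx            # cell centres

for _ in range(int(60.0 / dt)):           # simulate 60 days
    adv = -v * (c - np.roll(c, 1)) / dx                             # first-order upwind (v > 0)
    disp = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2     # central diffusion
    c = c + dt * (adv + disp)
    c[0], c[-1] = 0.0, c[-2]              # crude inflow/outflow boundaries

centre = float(np.sum(c * x) / np.sum(c))
print("plume centre of mass (m):", round(centre, 1))
```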
Using the Fennema-Sherman Mathematics Attitude Scales with lower-primary teachers
NASA Astrophysics Data System (ADS)
Ren, Lixin; Green, Jennifer L.; Smith, Wendy M.
2016-06-01
The Fennema-Sherman Mathematics Attitude Scales (FSMAS) are among the most popular instruments used in studies of attitudes toward mathematics. However, the FSMAS has been mainly used among student populations and rarely used with teachers. In the present study, three scales from the FSMAS— Confidence, Effectance Motivation, and Anxiety—were revised and used with lower-primary (kindergarten to third grade) teachers. This study includes three parts: (1) a pilot study to ensure the modifications made to the FSMAS were appropriate to use with teachers, (2) confirmatory factor analyses to assess the factor structure of the revised FSMAS with 225 lower-primary teachers, and (3) measurement invariance analyses using data from a similar sample of 171 lower-primary teachers to examine whether the revised FSMAS measures each construct in the same way as in the previous sample. The final three-factor model, after removing three problematic items, achieves acceptable model fit, with each construct meeting all conditions for strict measurement invariance. Additionally, repeated measures analyses were performed on data collected from 39 in-service lower-primary teachers who participated in an elementary mathematics specialist program to examine the use of the revised FSMAS in program evaluation. Overall results suggest that researchers and program evaluators may use the revised FSMAS to reliably measure lower-primary teachers' mathematical attitudes, and it can be a valuable tool for evaluating the effectiveness of professional development programs.
2013-03-01
of coarser-scale materials and structures containing Kevlar fibers (e.g., yarns, fabrics, plies, lamina, and laminates ). Journal of Materials...Multi-Length Scale-Enriched Continuum-Level Material Model for Kevlar -Fiber-Reinforced Polymer-Matrix Composites M. Grujicic, B. Pandurangan, J.S...extensive set of molecular-level computational analyses regarding the role of various microstructural/morphological defects on the Kevlar fiber
Modelling strategies to predict the multi-scale effects of rural land management change
NASA Astrophysics Data System (ADS)
Bulygina, N.; Ballard, C. E.; Jackson, B. M.; McIntyre, N.; Marshall, M.; Reynolds, B.; Wheater, H. S.
2011-12-01
Changes to the rural landscape due to agricultural land management are ubiquitous, yet predicting the multi-scale effects of land management change on hydrological response remains an important scientific challenge. Much empirical research has been of little generic value due to inadequate design and funding of monitoring programmes, while the modelling issues challenge the capability of data-based, conceptual and physics-based modelling approaches. In this paper we report on a major UK research programme, motivated by a national need to quantify effects of agricultural intensification on flood risk. Working with a consortium of farmers in upland Wales, a multi-scale experimental programme (from experimental plots to 2nd order catchments) was developed to address issues of upland agricultural intensification. This provided data support for a multi-scale modelling programme, in which highly detailed physics-based models were conditioned on the experimental data and used to explore effects of potential field-scale interventions. A meta-modelling strategy was developed to represent detailed modelling in a computationally-efficient manner for catchment-scale simulation; this allowed catchment-scale quantification of potential management options. For more general application to data-sparse areas, alternative approaches were needed. Physics-based models were developed for a range of upland management problems, including the restoration of drained peatlands, afforestation, and changing grazing practices. Their performance was explored using literature and surrogate data; although subject to high levels of uncertainty, important insights were obtained, of practical relevance to management decisions. In parallel, regionalised conceptual modelling was used to explore the potential of indices of catchment response, conditioned on readily-available catchment characteristics, to represent ungauged catchments subject to land management change. Although based in part on speculative relationships, significant predictive power was derived from this approach. Finally, using a formal Bayesian procedure, these different sources of information were combined with local flow data in a catchment-scale conceptual model application , i.e. using small-scale physical properties, regionalised signatures of flow and available flow measurements.
Martin, Natasha K.; Skaathun, Britt; Vickerman, Peter; Stuart, David
2017-01-01
Background People who inject drugs (PWID) and HIV-infected men who have sex with men (MSM) are key risk groups for hepatitis C virus (HCV) transmission. Mathematical modeling studies can help elucidate what level and combination of prevention intervention scale-up is required to control or eliminate epidemics among these key populations. Methods We discuss the evidence surrounding HCV prevention interventions and provide an overview of the mathematical modeling literature projecting the impact of scaled-up HCV prevention among PWID and HIV-infected MSM. Results Harm reduction interventions such as opiate substitution therapy and needle and syringe programs are effective in reducing HCV incidence among PWID. Modeling and limited empirical data indicate HCV treatment could additionally be used for prevention. No studies have evaluated the effectiveness of behavior change interventions to reduce HCV incidence among MSM, but existing interventions to reduce HIV risk could be effective. Mathematical modeling and empirical data indicates that scale-up of harm reduction could reduce HCV transmission, but in isolation is unlikely to eliminate HCV among PWID. By contrast, elimination is possibly achievable through combination scale-up of harm reduction and HCV treatment. Similarly, among HIV-infected MSM, eliminating the emerging epidemics will likely require HCV treatment scale-up in combination with additional interventions to reduce HCV-related risk behaviors. Conclusions Elimination of HCV will likely require combination prevention efforts among both PWID and HIV-infected MSM populations. Further empirical research is required to validate HCV treatment as prevention among these populations, and to identify effective behavioral interventions to reduce HCV incidence among MSM. PMID:28534885
Nonlinear modelling of cancer: bridging the gap between cells and tumours
Lowengrub, J S; Frieboes, H B; Jin, F; Chuang, Y-L; Li, X; Macklin, P; Wise, S M; Cristini, V
2010-01-01
Despite major scientific, medical and technological advances over the last few decades, a cure for cancer remains elusive. The disease progression is complex, including initiation and avascular growth, onset of hypoxia and acidosis due to accumulation of cells beyond normal physiological conditions, inducement of angiogenesis from the surrounding vasculature, tumour vascularization and further growth, and invasion of surrounding tissue and metastasis. Although the focus historically has been to study these events through experimental and clinical observations, mathematical modelling and simulation that enable analysis at multiple time and spatial scales have also complemented these efforts. Here, we provide an overview of this multiscale modelling, focusing on the growth phase of tumours and bypassing the initial stage of tumourigenesis. While we briefly review discrete modelling, our focus is on the continuum approach. We limit the scope further by considering models of tumour progression that do not distinguish tumour cells by their age. We also do not consider immune system interactions nor do we describe models of therapy. We do discuss hybrid-modelling frameworks, where the tumour tissue is modelled using both discrete (cell-scale) and continuum (tumour-scale) elements, thus connecting the micrometre to the centimetre tumour scale. We review recent examples that incorporate experimental data into model parameters. We show that recent mathematical modelling predicts that transport limitations of cell nutrients, oxygen and growth factors may result in cell death that leads to morphological instability, providing a mechanism for invasion via tumour fingering and fragmentation. These conditions induce selection pressure for cell survivability, and may lead to additional genetic mutations. Mathematical modelling further shows that parameters that control the tumour mass shape also control its ability to invade. Thus, tumour morphology may serve as a predictor of invasiveness and treatment prognosis. PMID:20808719
Nonlinear modelling of cancer: bridging the gap between cells and tumours
NASA Astrophysics Data System (ADS)
Lowengrub, J. S.; Frieboes, H. B.; Jin, F.; Chuang, Y.-L.; Li, X.; Macklin, P.; Wise, S. M.; Cristini, V.
2010-01-01
Despite major scientific, medical and technological advances over the last few decades, a cure for cancer remains elusive. The disease progression is complex, including initiation and avascular growth, onset of hypoxia and acidosis due to accumulation of cells beyond normal physiological conditions, inducement of angiogenesis from the surrounding vasculature, tumour vascularization and further growth, and invasion of surrounding tissue and metastasis. Although the focus historically has been to study these events through experimental and clinical observations, mathematical modelling and simulation that enable analysis at multiple time and spatial scales have also complemented these efforts. Here, we provide an overview of this multiscale modelling, focusing on the growth phase of tumours and bypassing the initial stage of tumourigenesis. While we briefly review discrete modelling, our focus is on the continuum approach. We limit the scope further by considering models of tumour progression that do not distinguish tumour cells by their age. We also do not consider immune system interactions nor do we describe models of therapy. We do discuss hybrid-modelling frameworks, where the tumour tissue is modelled using both discrete (cell-scale) and continuum (tumour-scale) elements, thus connecting the micrometre to the centimetre tumour scale. We review recent examples that incorporate experimental data into model parameters. We show that recent mathematical modelling predicts that transport limitations of cell nutrients, oxygen and growth factors may result in cell death that leads to morphological instability, providing a mechanism for invasion via tumour fingering and fragmentation. These conditions induce selection pressure for cell survivability, and may lead to additional genetic mutations. Mathematical modelling further shows that parameters that control the tumour mass shape also control its ability to invade. Thus, tumour morphology may serve as a predictor of invasiveness and treatment prognosis.
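To make one continuum ingredient of such models concrete, the sketch below integrates a 1D Fisher-KPP-type reaction-diffusion equation for tumour cell density with logistic growth; the parameters are purely illustrative and the model is far simpler than the frameworks reviewed above.

```python
# 1D reaction-diffusion (Fisher-KPP-type) sketch of tumour cell density.
import numpy as np

nx, L = 200, 20.0                     # grid points, domain size (mm)
dx = L / nx
D, rho = 0.01, 0.5                    # diffusivity (mm^2/day), growth rate (1/day)
dt = 0.2 * dx**2 / D                  # comfortably inside explicit stability limit

u = np.zeros(nx)
u[nx // 2 - 2: nx // 2 + 2] = 0.1     # small initial lesion (fraction of capacity)

for _ in range(int(60.0 / dt)):       # 60 days
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    u = u + dt * (D * lap + rho * u * (1.0 - u))   # diffusion + logistic growth
    u[0], u[-1] = u[1], u[-2]         # zero-flux boundaries

print("invaded length (mm):", round(float(np.sum(u > 0.5) * dx), 1))
```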
NASA Astrophysics Data System (ADS)
Cardall, Christian Y.; Budiardja, Reuben D.
2018-01-01
The large-scale computer simulation of a system of physical fields governed by partial differential equations requires some means of approximating the mathematical limit of continuity. For example, conservation laws are often treated with a 'finite-volume' approach in which space is partitioned into a large number of small 'cells,' with fluxes through cell faces providing an intuitive discretization modeled on the mathematical definition of the divergence operator. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of simple meshes and the evolution of generic conserved currents thereon, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes inaugurate the Mathematics division of our developing astrophysics simulation code GENASIS (General Astrophysical Simulation System), which will be expanded over time to include additional meshing options, mathematical operations, solver types, and solver variations appropriate for many multiphysics applications.
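The flux-through-cell-faces idea described above can be sketched in a few lines of generic code; the snippet below is a first-order upwind finite-volume update for 1-D linear advection on a periodic domain, written in Python purely for illustration, and is not GENASIS code.

```python
import numpy as np

# Generic first-order upwind finite-volume scheme for 1-D linear advection
# (illustration of the cell/face-flux idea only; unrelated to GENASIS itself).
n_cells = 200
a = 1.0                                  # advection speed (> 0)
dx = 1.0 / n_cells
dt = 0.4 * dx / a                        # CFL number 0.4
x = (np.arange(n_cells) + 0.5) * dx
u = np.exp(-200.0 * (x - 0.3) ** 2)      # cell-averaged conserved quantity

for _ in range(200):
    flux_right = a * u                   # F_{i+1/2}, upwind value for a > 0
    flux_left = np.roll(flux_right, 1)   # F_{i-1/2}, periodic wrap-around
    u = u - dt / dx * (flux_right - flux_left)

print("total 'mass' after advection:", u.sum() * dx)   # conserved by construction
```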
USDA-ARS?s Scientific Manuscript database
Soil moisture plays an integral role in various aspects, ranging from multi-scale hydrologic modeling to agricultural decision analysis, and from climate change assessments to drought prediction and prevention. The broad availability of soil moisture estimates has only...
Multi-scale modelling of elastic moduli of trabecular bone
Hamed, Elham; Jasiuk, Iwona; Yoo, Andrew; Lee, YikHan; Liszka, Tadeusz
2012-01-01
We model trabecular bone as a nanocomposite material with hierarchical structure and predict its elastic properties at different structural scales. The analysis involves a bottom-up multi-scale approach, starting with nanoscale (mineralized collagen fibril) and moving up the scales to sub-microscale (single lamella), microscale (single trabecula) and mesoscale (trabecular bone) levels. Continuum micromechanics methods, composite materials laminate theory and finite-element methods are used in the analysis. Good agreement is found between theoretical and experimental results. PMID:22279160
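As a greatly simplified stand-in for the bottom-up stiffness homogenization described above (not the micromechanics, laminate-theory, or finite-element calculations the authors actually perform), the classical Voigt and Reuss bounds for a two-phase mineral-collagen mixture can be evaluated directly; the moduli and volume fraction below are assumptions for illustration, not values from the paper.

```python
# Voigt (iso-strain) and Reuss (iso-stress) bounds for a two-phase composite.
# Moduli (GPa) and the mineral volume fraction are illustrative assumptions only.
E_mineral, E_collagen, vf = 100.0, 2.5, 0.42

E_voigt = vf * E_mineral + (1.0 - vf) * E_collagen              # upper bound
E_reuss = 1.0 / (vf / E_mineral + (1.0 - vf) / E_collagen)      # lower bound

print(f"Voigt bound: {E_voigt:.1f} GPa, Reuss bound: {E_reuss:.1f} GPa")
```

Any realistic homogenization scheme for the fibril-to-trabecula hierarchy falls between these two bounds, which is why they are a convenient sanity check at each structural scale.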
Characteristic Energy Scales of Quantum Systems.
ERIC Educational Resources Information Center
Morgan, Michael J.; Jakovidis, Greg
1994-01-01
Provides a particle-in-a-box model to help students understand and estimate the magnitude of the characteristic energy scales of a number of quantum systems. Also discusses the mathematics involved with general computations. (MVL)
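The estimate this resource walks students through follows from the particle-in-a-box energy levels E_n = n^2 h^2 / (8 m L^2); below is a quick numerical check for an electron confined to a 1 nm box, an assumed example rather than one taken from the article.

```python
# Particle-in-a-box energy levels E_n = n^2 h^2 / (8 m L^2);
# illustrative case: an electron confined to a 1 nm box.
h = 6.626e-34        # Planck constant, J s
m_e = 9.109e-31      # electron mass, kg
L = 1e-9             # box width, m
eV = 1.602e-19       # J per eV

for n in (1, 2, 3):
    E = n**2 * h**2 / (8 * m_e * L**2)
    print(f"E_{n} = {E / eV:.2f} eV")    # E_1 is roughly 0.4 eV
```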
NASA Astrophysics Data System (ADS)
Ravi, Sathish Kumar; Gawad, Jerzy; Seefeldt, Marc; Van Bael, Albert; Roose, Dirk
2017-10-01
A numerical multi-scale model is being developed to predict the anisotropic macroscopic material response of multi-phase steel. The embedded microstructure is given by a meso-scale Representative Volume Element (RVE), which holds the most relevant features like phase distribution, grain orientation, morphology etc., in sufficient detail to describe the multi-phase behavior of the material. A Finite Element (FE) mesh of the RVE is constructed using statistical information from individual phases such as grain size distribution and ODF. The material response of the RVE is obtained for selected loading/deformation modes through numerical FE simulations in Abaqus. For the elasto-plastic response of the individual grains, single crystal plasticity based plastic potential functions are proposed as Abaqus material definitions. The plastic potential functions are derived using the Facet method for individual phases in the microstructure at the level of single grains. The proposed method is a new modeling framework and the results presented in terms of macroscopic flow curves are based on the building blocks of the approach, while the model would eventually facilitate the construction of an anisotropic yield locus of the underlying multi-phase microstructure derived from a crystal plasticity based framework.
Multi-objective optimization for generating a weighted multi-model ensemble
NASA Astrophysics Data System (ADS)
Lee, H.
2017-12-01
Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, the approach confronts a big challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is obvious that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies the multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for the global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
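A hedged sketch of the trade-off idea (not the study's actual optimizer, metrics, or data): given several error metrics per climate model, keep the Pareto-efficient models and assign them weights, here simply inverse to their mean error. All numbers below are invented for illustration.

```python
import numpy as np

# Rows = models, columns = error metrics (lower is better); values are invented.
errors = np.array([[0.8, 1.2, 0.9],
                   [1.1, 0.7, 1.0],
                   [1.3, 1.4, 1.2],
                   [0.9, 0.9, 1.1]])

def pareto_mask(e):
    """True for models not dominated by any other model on every metric."""
    keep = np.ones(len(e), dtype=bool)
    for i in range(len(e)):
        for j in range(len(e)):
            if i != j and np.all(e[j] <= e[i]) and np.any(e[j] < e[i]):
                keep[i] = False
    return keep

mask = pareto_mask(errors)
# Illustrative weighting rule: inverse mean error, restricted to the Pareto set.
weights = np.where(mask, 1.0 / errors.mean(axis=1), 0.0)
weights /= weights.sum()
print("Pareto-efficient models:", np.flatnonzero(mask), "weights:", np.round(weights, 3))
```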
NASA Astrophysics Data System (ADS)
Rout, Bapin Kumar; Brooks, Geoff; Rhamdhani, M. Akbar; Li, Zushu; Schrama, Frank N. H.; Sun, Jianjun
2018-04-01
A multi-zone kinetic model coupled with a dynamic slag generation model was developed for the simulation of hot metal and slag composition during basic oxygen furnace (BOF) operation. Three reaction zones, (i) the jet impact zone, (ii) the slag-bulk metal zone, and (iii) the slag-metal-gas emulsion zone, were considered for the calculation of the overall refining kinetics. In the rate equations, the transient rate parameters were mathematically described as a function of process variables. Microscopic and macroscopic rate calculation methodologies (micro-kinetics and macro-kinetics) were developed to estimate the total refining contributed by the recirculating metal droplets passing through the slag-metal emulsion zone. The micro-kinetics involves developing the rate equations for individual droplets in the emulsion. Mathematical models for the size distribution of initial droplets, the kinetics of simultaneous refining of elements, the residence time in the emulsion, and the dynamic change of interfacial area were established in the micro-kinetic model. In the macro-kinetics calculation, a droplet generation model was employed and the total amount of refining by the emulsion was calculated by summing the refining from the entire population of returning droplets. A dynamic FetO generation model based on an oxygen mass balance was developed and coupled with the multi-zone kinetic model. The effect of post-combustion on the evolution of slag and metal composition was investigated. The model was applied to a 200-ton top-blowing converter, and the simulated metal and slag compositions were found to be in good agreement with the measured data. The post-combustion ratio was found to be an important factor in controlling the FetO content in the slag and the kinetics of Mn and P in the BOF process.
Goal-oriented robot navigation learning using a multi-scale space representation.
Llofriu, M; Tejera, G; Contreras, M; Pelc, T; Fellous, J M; Weitzenfeld, A
2015-12-01
There has been extensive research in recent years on the multi-scale nature of hippocampal place cell and entorhinal grid cell encoding, which has led to many speculations on their role in spatial cognition. In this paper we focus on the multi-scale nature of place cells and how they contribute to faster learning during goal-oriented navigation when compared to a spatial cognition system composed of single-scale place cells. The task consists of a circular arena with a fixed goal location, in which a robot is trained to find the shortest path to the goal after a number of learning trials. Synaptic connections are modified using a reinforcement learning paradigm adapted to the multi-scale place cell architecture. The model is evaluated in both simulation and physical robots. We find that larger-scale and combined multi-scale representations favor goal-oriented navigation task learning. Copyright © 2015 Elsevier Ltd. All rights reserved.
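The ingredient the abstract highlights, place-cell features at more than one spatial scale feeding a reinforcement-learning update, can be sketched very roughly as follows. This is a toy 1-D TD(0) value learner with Gaussian "place cells" at two widths, not the authors' robot architecture; every parameter and the task itself are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian "place cell" features at two spatial scales on a 1-D track [0, 1].
centres = np.linspace(0.0, 1.0, 12)
scales = (0.05, 0.25)                        # small and large place fields (assumed)

def features(pos):
    acts = [np.exp(-((pos - centres) ** 2) / (2 * s**2)) for s in scales]
    return np.concatenate(acts)

goal, alpha, gamma = 1.0, 0.1, 0.95
w = np.zeros(len(centres) * len(scales))     # linear value-function weights

# TD(0) learning of state values along random walks that drift toward the goal.
for episode in range(200):
    pos = 0.0
    while pos < goal:
        new_pos = min(goal, pos + abs(rng.normal(0.0, 0.05)))
        reward = 1.0 if new_pos >= goal else 0.0
        v = w @ features(pos)
        v_next = 0.0 if new_pos >= goal else w @ features(new_pos)
        w += alpha * (reward + gamma * v_next - v) * features(pos)
        pos = new_pos

print("learned value near start vs. near goal:",
      round(w @ features(0.05), 3), round(w @ features(0.95), 3))
```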
NASA Astrophysics Data System (ADS)
Herath, Narmada; Del Vecchio, Domitilla
2018-03-01
Biochemical reaction networks often involve reactions that take place on different time scales, giving rise to "slow" and "fast" system variables. This property is widely used in the analysis of systems to obtain dynamical models with reduced dimensions. In this paper, we consider stochastic dynamics of biochemical reaction networks modeled using the Linear Noise Approximation (LNA). Under time-scale separation conditions, we obtain a reduced-order LNA that approximates both the slow and fast variables in the system. We mathematically prove that the first and second moments of this reduced-order model converge to those of the full system as the time-scale separation becomes large. These mathematical results, in particular, provide a rigorous justification to the accuracy of LNA models derived using the stochastic total quasi-steady state approximation (tQSSA). Since, in contrast to the stochastic tQSSA, our reduced-order model also provides approximations for the fast variable stochastic properties, we term our method the "stochastic tQSSA+". Finally, we demonstrate the application of our approach on two biochemical network motifs found in gene-regulatory and signal transduction networks.
Multi-scale computational modeling of developmental biology.
Setty, Yaki
2012-08-01
Normal development of multicellular organisms is regulated by a highly complex process in which a set of precursor cells proliferate, differentiate and move, forming over time a functioning tissue. To handle their complexity, developmental systems can be studied over distinct scales. The dynamics of each scale is determined by the collective activity of entities at the scale below it. I describe a multi-scale computational approach for modeling developmental systems and detail the methodology through a synthetic example of a developmental system that retains key features of real developmental systems. I discuss the simulation of the system as it emerges from cross-scale and intra-scale interactions and describe how an in silico study can be carried out by modifying these interactions in a way that mimics in vivo experiments. I highlight biological features of the results through a comparison with findings in Caenorhabditis elegans germline development, and finally discuss the applications of the approach to real developmental systems and propose future extensions. The source code of the model of the synthetic developmental system can be found at www.wisdom.weizmann.ac.il/~yaki/MultiScaleModel. yaki.setty@gmail.com Supplementary data are available at Bioinformatics online.
A method for testing railway wheel sets on a full-scale roller rig
NASA Astrophysics Data System (ADS)
Liu, Binbin; Bruni, Stefano
2015-09-01
Full-scale roller rigs for tests on a single axle enable the investigation of several dynamics and durability problems related to the design and operation of railway rolling stock. In order to exploit the best potential of this test equipment, appropriate test procedures need to be defined, particularly in terms of actuators' references, to make sure that meaningful wheel-rail contact conditions can be reproduced. The aim of this paper is to propose a new methodology to define the forces to be generated by the actuators in the rig in order to best reproduce the behaviour of a wheel set, and especially the wheel-rail contact forces, in a running condition of interest as obtained either from multi-body system (MBS) simulation or from on-track measurements. The method is supported by the use of a mathematical model of the roller rig and uses an iterative correction scheme, comparing the time histories of the contact force components from the roller rig test as predicted by the mathematical model to a set of target contact force time histories. Two methods are introduced, the first one considering a standard arrangement of the roller rig, the second one assuming that a differential gear is introduced in the rig, allowing different rolling speeds of the two rollers. Results are presented showing that the deviation of the roller rig test results from the considered targets can be kept within low tolerances (approximately 1%) as far as the vertical and lateral contact forces on both wheels are concerned. For the longitudinal forces, larger deviations are obtained except in the case where a differential gear is introduced.
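The iterative correction scheme can be pictured generically as a fixed-point loop that nudges the actuator references by a gain times the mismatch between target and model-predicted contact forces. The snippet below is only a schematic of that loop: rig_model, the response matrix, the targets and the gain are invented placeholders, not the authors' roller-rig model or algorithm.

```python
import numpy as np

# Schematic iterative reference correction. 'rig_model' is a hypothetical
# stand-in for the mathematical model of the rig; a linear map is used here
# only so the sketch runs end to end.
A = np.array([[1.0, 0.2], [0.1, 0.9]])          # assumed rig response matrix
rig_model = lambda refs: A @ refs               # placeholder for the real model

target_forces = np.array([50.0, 12.0])          # kN, illustrative targets
refs = np.zeros(2)                              # actuator references
gain = 0.5                                      # correction gain (assumed)

for iteration in range(100):
    mismatch = target_forces - rig_model(refs)
    refs += gain * mismatch
    if np.max(np.abs(mismatch)) < 0.01 * np.max(np.abs(target_forces)):  # ~1% tolerance
        break

print(f"converged in {iteration} iterations; predicted forces:", rig_model(refs))
```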
Multi-scale diffuse interface modeling of multi-component two-phase flow with partial miscibility
NASA Astrophysics Data System (ADS)
Kou, Jisheng; Sun, Shuyu
2016-08-01
In this paper, we introduce a diffuse interface model to simulate multi-component two-phase flow with partial miscibility based on a realistic equation of state (e.g. Peng-Robinson equation of state). Because of partial miscibility, thermodynamic relations are used to model not only interfacial properties but also bulk properties, including density, composition, pressure, and realistic viscosity. As far as we know, this effort is the first time to use diffuse interface modeling based on equation of state for modeling of multi-component two-phase flow with partial miscibility. In numerical simulation, the key issue is to resolve the high contrast of scales from the microscopic interface composition to macroscale bulk fluid motion since the interface has a nanoscale thickness only. To efficiently solve this challenging problem, we develop a multi-scale simulation method. At the microscopic scale, we deduce a reduced interfacial equation under reasonable assumptions, and then we propose a formulation of capillary pressure, which is consistent with macroscale flow equations. Moreover, we show that Young-Laplace equation is an approximation of this capillarity formulation, and this formulation is also consistent with the concept of Tolman length, which is a correction of Young-Laplace equation. At the macroscopical scale, the interfaces are treated as discontinuous surfaces separating two phases of fluids. Our approach differs from conventional sharp-interface two-phase flow model in that we use the capillary pressure directly instead of a combination of surface tension and Young-Laplace equation because capillarity can be calculated from our proposed capillarity formulation. A compatible condition is also derived for the pressure in flow equations. Furthermore, based on the proposed capillarity formulation, we design an efficient numerical method for directly computing the capillary pressure between two fluids composed of multiple components. Finally, numerical tests are carried out to verify the effectiveness of the proposed multi-scale method.
Multi-scale diffuse interface modeling of multi-component two-phase flow with partial miscibility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kou, Jisheng; Sun, Shuyu, E-mail: shuyu.sun@kaust.edu.sa; School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049
2016-08-01
In this paper, we introduce a diffuse interface model to simulate multi-component two-phase flow with partial miscibility based on a realistic equation of state (e.g. Peng–Robinson equation of state). Because of partial miscibility, thermodynamic relations are used to model not only interfacial properties but also bulk properties, including density, composition, pressure, and realistic viscosity. As far as we know, this effort is the first time to use diffuse interface modeling based on equation of state for modeling of multi-component two-phase flow with partial miscibility. In numerical simulation, the key issue is to resolve the high contrast of scales from the microscopic interface composition to macroscale bulk fluid motion since the interface has a nanoscale thickness only. To efficiently solve this challenging problem, we develop a multi-scale simulation method. At the microscopic scale, we deduce a reduced interfacial equation under reasonable assumptions, and then we propose a formulation of capillary pressure, which is consistent with macroscale flow equations. Moreover, we show that Young–Laplace equation is an approximation of this capillarity formulation, and this formulation is also consistent with the concept of Tolman length, which is a correction of Young–Laplace equation. At the macroscopical scale, the interfaces are treated as discontinuous surfaces separating two phases of fluids. Our approach differs from conventional sharp-interface two-phase flow model in that we use the capillary pressure directly instead of a combination of surface tension and Young–Laplace equation because capillarity can be calculated from our proposed capillarity formulation. A compatible condition is also derived for the pressure in flow equations. Furthermore, based on the proposed capillarity formulation, we design an efficient numerical method for directly computing the capillary pressure between two fluids composed of multiple components. Finally, numerical tests are carried out to verify the effectiveness of the proposed multi-scale method.
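For readers unfamiliar with the equation of state named in both copies of this abstract, the standard Peng-Robinson pressure-volume relation for a pure component is easy to evaluate; the snippet uses methane critical properties purely as an example and does not reproduce the paper's multi-component mixing rules or its diffuse-interface formulation.

```python
import numpy as np

# Standard Peng-Robinson equation of state for a pure component,
#   P = R T / (v - b) - a(T) / (v^2 + 2 b v - b^2),
# with methane critical properties used purely as an illustration.
R = 8.314                                 # J / (mol K)
Tc, Pc, omega = 190.56, 4.599e6, 0.011    # methane: K, Pa, acentric factor

a_c = 0.45724 * R**2 * Tc**2 / Pc
b = 0.07780 * R * Tc / Pc
kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2

def pr_pressure(T, v):
    """Pressure (Pa) at temperature T (K) and molar volume v (m^3/mol)."""
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2
    return R * T / (v - b) - a_c * alpha / (v**2 + 2.0 * b * v - b**2)

print(f"{pr_pressure(300.0, 1e-3):.3e} Pa at T = 300 K, v = 1e-3 m^3/mol")
```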
BASIN-SCALE ASSESSMENTS FOR SUSTAINABLE ECOSYSTEMS (BASE)
The need for multi-media, multi-stressor, and multi-response models for ecological assessment is widely acknowledged. Assessments at this level of complexity have not been conducted, and therefore pilot assessments are required to identify the critical concepts, models, data, and...
A hybrid prognostic model for multistep ahead prediction of machine condition
NASA Astrophysics Data System (ADS)
Roulias, D.; Loutas, T. H.; Kostopoulos, V.
2012-05-01
Prognostics are the future trend in condition-based maintenance. In the current framework a data-driven prognostic model is developed. The typical procedure of developing such a model comprises a) the selection of features which correlate well with the gradual degradation of the machine and b) the training of a mathematical tool. In this work the data are taken from a laboratory-scale single-stage gearbox under multi-sensor monitoring. Tests monitoring the condition of the gear pair from a healthy state until total breakdown, following several days of continuous operation, were conducted. After basic pre-processing of the derived data, an indicator that correlated well with the gearbox condition was obtained. Subsequently, the time series is split into a few distinguishable time regions via an intelligent data clustering scheme. Each operating region is modelled with a feed-forward artificial neural network (FFANN) scheme. The performance of the proposed model is tested by applying the system to predict the machine degradation level on unseen data. The results show the plausibility and effectiveness of the model in following the trend of the time series even in the case that a sudden change occurs. Moreover, the model shows the ability to generalise for application to similar mechanical assets.
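A hedged sketch of the data-driven core described above (sliding-window features feeding a feed-forward network that is then applied recursively for multistep-ahead prediction), using scikit-learn's MLPRegressor on a synthetic degradation indicator. The regime clustering and the real gearbox data are not reproduced, and all settings are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic degradation indicator standing in for the gearbox health feature.
t = np.linspace(0.0, 1.0, 400)
indicator = 0.2 * t + 0.8 * t**4 + 0.01 * rng.normal(size=t.size)

# Sliding windows: predict the next value from the previous `lag` values.
lag = 10
X = np.array([indicator[i:i + lag] for i in range(len(indicator) - lag)])
y = indicator[lag:]

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
model.fit(X[:300], y[:300])

# Multistep-ahead prediction: feed each prediction back in as an input.
window = list(indicator[300 - lag:300])
forecast = []
for _ in range(50):
    nxt = model.predict(np.array(window[-lag:]).reshape(1, -1))[0]
    forecast.append(nxt)
    window.append(nxt)

print("last forecast value vs. last observed value:", forecast[-1], indicator[349])
```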
Multi-Length Scale-Enriched Continuum-Level Material Model for Kevlar®-Fiber-Reinforced Polymer-Matrix Composites
2012-08-03
Keywords: ballistics, composites, Kevlar, material models, microstructural defects
Fiber-reinforced polymer matrix composite materials display quite complex deformation...
NASA Astrophysics Data System (ADS)
Falco, N.; Pedersen, G. B. M.; Vilmunandardóttir, O. K.; Belart, J. M. M. C.; Sigurmundsson, F. S.; Benediktsson, J. A.
2016-12-01
The project "Environmental Mapping and Monitoring of Iceland by Remote Sensing (EMMIRS)" aims at providing fast and reliable mapping and monitoring techniques on a big spatial scale with a high temporal resolution of the Icelandic landscape. Such mapping and monitoring will be crucial to both mitigate and understand the scale of processes and their often complex interlinked feedback mechanisms.In the EMMIRS project, the Hekla volcano area is one of the main sites under study, where the volcanic eruptions, extreme weather and human activities had an extensive impact on the landscape degradation. The development of innovative remote sensing approaches to compute earth observation variables as automatically as possible is one of the main tasks of the EMMIRS project. Furthermore, a temporal remote sensing archive is created and composed by images acquired by different sensors (Landsat, RapidEye, ASTER and SPOT5). Moreover, historical aerial stereo photos allowed decadal reconstruction of the landscape by reconstruction of digital elevation models. Here, we propose a novel architecture for automatic unsupervised change detection analysis able to ingest multi-source data in order to detect landscape changes in the Hekla area. The change detection analysis is based on multi-scale analysis, which allows the identification of changes at different level of abstraction, from pixel-level to region-level. For this purpose, operators defined in mathematical morphology framework are implemented to model the contextual information, represented by the neighbour system of a pixel, allowing the identification of changes related to both geometrical and spectral domains. Automatic radiometric normalization strategy is also implemented as pre-processing step, aiming at minimizing the effect of different acquisition conditions. The proposed architecture is tested on multi-temporal data sets acquired over different time periods coinciding with the last three eruptions (1980-1981, 1991, 2000) occurred on Hekla volcano. The results reveal emplacement of new lava flows and the initial vegetation succession, providing insightful information on the evolving of vegetation in such environment. Shadow and snow patch changes are resolved in post-processing by exploiting the available spectral information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stinis, Panos
2016-08-07
This is the final report for the work conducted at the University of Minnesota (during the period 12/01/12-09/18/14) by PI Panos Stinis as part of the "Collaboratory on Mathematics for Mesoscopic Modeling of Materials" (CM4). CM4 is a multi-institution DOE-funded project whose aim is to conduct basic and applied research in the emerging field of mesoscopic modeling of materials.
ERIC Educational Resources Information Center
Ekstam, Ulrika; Linnanmäki, Karin; Aunio, Pirjo
2015-01-01
In 2011, there was a legislative reform regarding educational support in Finland, with a focus on early identification, differentiation and flexible arrangement of support using a multi-professional approach, the three-tier support model. The main aim of this study was to investigate what educational support practices are used with low-performing…
Cytoskeletal dynamics in fission yeast: a review of models for polarization and division
Drake, Tyler; Vavylonis, Dimitrios
2010-01-01
We review modeling studies concerning cytoskeletal activity of fission yeast. Recent models vary in length and time scales, describing a range of phenomena from cellular morphogenesis to polymer assembly. The components of cytoskeleton act in concert to mediate cell-scale events and interactions such as polarization. The mathematical models reduce these events and interactions to their essential ingredients, describing the cytoskeleton by its bulk properties. On a smaller scale, models describe cytoskeletal subcomponents and how bulk properties emerge. PMID:21119765
The use of mathematical models to inform influenza pandemic preparedness and response
Wu, Joseph T; Cowling, Benjamin J
2011-01-01
Influenza pandemics have occurred throughout history and were associated with substantial excess mortality and morbidity. Mathematical models of infectious diseases permit quantitative description of epidemic processes based on the underlying biological mechanisms. Mathematical models have been widely used in the past decade to aid pandemic planning by allowing detailed predictions of the speed of spread of an influenza pandemic and the likely effectiveness of alternative control strategies. During the initial waves of the 2009 influenza pandemic, mathematical models were used to track the spread of the virus, predict the time course of the pandemic and assess the likely impact of large-scale vaccination. While mathematical modeling has made substantial contributions to influenza pandemic preparedness, its use as a real-time tool for pandemic control is currently limited by the lack of essential surveillance information such as serologic data. Mathematical modeling provided a useful framework for analyzing and interpreting surveillance data during the 2009 influenza pandemic, for highlighting limitations in existing pandemic surveillance systems, and for guiding how these systems should be strengthened in order to cope with future epidemics of influenza or other emerging infectious diseases. PMID:21727183
Ward, Stanley H.
1989-01-01
Multiple arrays of electric or magnetic transmitters and receivers are used in a borehole geophysical procedure to obtain a multiplicity of redundant data suitable for processing into a resistivity or induced polarization model of a subsurface region of the earth.
Morphological rational multi-scale algorithm for color contrast enhancement
NASA Astrophysics Data System (ADS)
Peregrina-Barreto, Hayde; Terol-Villalobos, Iván R.
2010-01-01
The main goal of contrast enhancement is to improve the visual appearance of an image, but it is also used to provide a transformed image suitable for segmentation. In mathematical morphology, several works have been derived from the contrast-enhancement framework proposed by Meyer and Serra. However, when working with images with a wide range of scene brightness, for example when strong highlights and deep shadows appear in the same image, the proposed morphological methods do not achieve adequate enhancement. In this work, a rational multi-scale method, which uses a class of morphological connected filters called filters by reconstruction, is proposed. Granulometry is used to find the most relevant scales for the filters and to avoid the use of other, less significant scales. The CIE u'v'Y' space was used to present our results since it takes Weber's law into account and, by avoiding the creation of new colors, permits the luminance values to be modified without affecting the hue. The luminance component (Y') is enhanced separately using the proposed method, and is then used to enhance the chromatic components (u', v') by means of the center of gravity law of color mixing.
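A hedged sketch of the operator class named above: openings by reconstruction at a few scales applied to a grey-level (luminance) image with scikit-image. The granulometry-based scale selection and the rational combination rule of the paper are not reproduced; the radii and the final "enhancement" step below are arbitrary illustrations.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.morphology import disk, erosion, reconstruction

# Opening by reconstruction at several scales on a grey-level image.
# The radii are arbitrary; the paper selects scales via granulometry and
# combines them with a rational rule that is not reproduced here.
image = img_as_float(data.camera())

openings = []
for radius in (3, 9, 21):
    seed = erosion(image, disk(radius))                    # marker <= image
    openings.append(reconstruction(seed, image, method='dilation'))

# Simple illustrative "enhancement": boost the residue of the largest-scale opening.
residue = image - openings[-1]
enhanced = np.clip(openings[-1] + 2.0 * residue, 0.0, 1.0)
print("enhanced image range:", enhanced.min(), enhanced.max())
```

Because filters by reconstruction are connected operators, they flatten detail without creating new contours, which is the property that makes them attractive for contrast enhancement without halo artifacts.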
NASA Astrophysics Data System (ADS)
Chern, J. D.; Tao, W. K.; Lang, S. E.; Matsui, T.; Mohr, K. I.
2014-12-01
Four six-month (March-August 2014) experiments with the Goddard Multi-scale Modeling Framework (MMF) were performed to study the impacts of different Goddard one-moment bulk microphysical schemes and large-scale forcings on the performance of the MMF. Recently, a new Goddard one-moment bulk microphysics scheme with four ice classes (cloud ice, snow, graupel, and frozen drops/hail) was developed based on cloud-resolving model simulations with large-scale forcings from field campaign observations. The new scheme has been successfully implemented in the MMF, and two MMF experiments were carried out with this new scheme and the old three-ice-class (cloud ice, snow, graupel) scheme. The MMF has global coverage and can rigorously evaluate microphysics performance for different cloud regimes. The results show that the MMF with the new scheme outperformed the old one. The MMF simulations are also strongly affected by the interaction between large-scale and cloud-scale processes. Two MMF sensitivity experiments, with and without nudging the large-scale forcings to those of the ERA-Interim reanalysis, were carried out to study the impacts of large-scale forcings. The model-simulated mean and variability of surface precipitation, cloud types, and cloud properties such as cloud amount, hydrometeor vertical profiles, and cloud water content in different geographic locations and climate regimes are evaluated against GPM, TRMM, and CloudSat/CALIPSO satellite observations. The Goddard MMF has also been coupled with the Goddard Satellite Data Simulation Unit (G-SDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators. The statistics of MMF-simulated radiances and backscattering can be directly compared with satellite observations to assess the strengths and/or deficiencies of the MMF simulations and provide guidance on how to improve the MMF and its microphysics.
Quantum algorithm for solving some discrete mathematical problems by probing their energy spectra
NASA Astrophysics Data System (ADS)
Wang, Hefeng; Fan, Heng; Li, Fuli
2014-01-01
When a probe qubit is coupled to a quantum register that represents a physical system, the probe qubit will exhibit a dynamical response only when it is resonant with a transition in the system. Using this principle, we propose a quantum algorithm for solving discrete mathematical problems based on the circuit model. Our algorithm has favorable scaling properties in solving some discrete mathematical problems.
ERIC Educational Resources Information Center
Cornish, Greg; Wines, Robin
The Number Test of the ACER Mathematics Profile Series, contains 30 items, for each of three suggested grade levels: 7-8, 8-9, and 9-10. Raw scores on all tests in the ACER Mathematics Profile Series (Number, Operations, Space and Measurement) are converted to a common scale called MAPS, a major feature of the Series. Based on the Rasch Model,…
Predicting the evolution of large cholera outbreaks: lessons learnt from the Haiti case study
NASA Astrophysics Data System (ADS)
Bertuzzo, Enrico; Mari, Lorenzo; Righetto, Lorenzo; Knox, Allyn; Finger, Flavio; Casagrandi, Renato; Gatto, Marino; Rodriguez-Iturbe, Ignacio; Rinaldo, Andrea
2013-04-01
Mathematical models can provide key insights into the course of an ongoing epidemic, potentially aiding real-time emergency management in allocating health care resources and possibly anticipating the impact of alternative interventions. Spatially explicit models of waterborne disease are made routinely possible by widespread data mapping of hydrology, road network, population distribution, and sanitation. Here, we study the ex-post reliability of predictions of the ongoing Haiti cholera outbreak. Our model consists of a set of dynamical equations (SIR-like, i.e. subdivided into the compartments of Susceptible, Infected and Recovered individuals) describing a connected network of human communities where the infection results from the exposure to excess concentrations of pathogens in the water, which are, in turn, driven by hydrologic transport through waterways and by mobility of susceptible and infected individuals. Following the evidence of a clear correlation between rainfall events and cholera resurgence, we test a new mechanism explicitly accounting for rainfall as a driver of enhanced disease transmission by washout of open-air defecation sites or cesspool overflows. A general model for Haitian epidemic cholera and the related uncertainty is thus proposed and applied to the dataset of reported cases now available. The model allows us to draw predictions on longer-term epidemic cholera in Haiti from multi-season Monte Carlo runs, carried out up to January 2014 by using a multivariate Poisson rainfall generator, with parameters varying in space and time. Lessons learned and open issues are discussed and placed in perspective. We conclude that, despite differences in methods that can be tested through model-guided field validation, mathematical modeling of large-scale outbreaks emerges as an essential component of future cholera epidemic control.
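A minimal, hedged sketch of the model class described above: SIR-like compartments in each community coupled to a local pathogen concentration, with downstream hydrologic transport and a rainfall-driven contamination term. The network, the dose-response form, and every parameter below are invented for illustration and are not the calibrated Haiti model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Three communities on a river line (pathogens advected 0 -> 1 -> 2). Values invented.
pop = np.array([50e3, 80e3, 30e3])
S, I, R = pop.copy(), np.zeros(3), np.zeros(3)
B = np.zeros(3)                         # local pathogen concentration (arbitrary units)
I[0], S[0] = 10.0, S[0] - 10.0          # seed the outbreak upstream

beta, gamma_rec = 0.3, 0.2              # exposure and recovery rates (1/day), assumed
shed, mu_B, K = 1e-4, 0.25, 1.0         # shedding, pathogen decay, half-saturation
transport = 0.1                         # downstream transport rate (1/day), assumed
dt = 0.5
for step in range(int(120 / dt)):       # 120 simulated days
    rain = rng.exponential(1.0, 3)                  # rainfall-driven washout proxy
    new_inf = beta * B / (K + B) * S * dt           # dose-response exposure
    recov = gamma_rec * I * dt
    S, I, R = S - new_inf, I + new_inf - recov, R + recov
    dB = shed * (1.0 + rain) * I - mu_B * B - transport * B
    dB[1:] += transport * B[:-1]                    # inflow from the upstream node
    B = np.maximum(B + dB * dt, 0.0)

print("final attack rates by community:", np.round(R / pop, 3))
```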
An evaluation of the predictive capabilities of CTRW and MRMT
NASA Astrophysics Data System (ADS)
Fiori, Aldo; Zarlenga, Antonio; Gotovac, Hrvoje; Jankovic, Igor; Cvetkovic, Vladimir; Dagan, Gedeon
2016-04-01
The prediction capability of two approximate models of non-Fickian transport in highly heterogeneous aquifers is checked by comparison with accurate numerical simulations, for mean uniform flow of velocity U. The two models considered are the MRMT (Multi Rate Mass Transfer) and CTRW (Continuous Time Random Walk) models. Both circumvent the need to solve the flow and transport equations by using proxy models, which provide the breakthrough curve (BTC) μ(x,t) depending on a vector a of 5 unknown parameters. Although underlain by different conceptualisations, the two models have a similar mathematical structure. The proponents of the models suggest using field transport experiments at a small scale to calibrate a, toward predicting transport at larger scales. The strategy was tested with the aid of accurate numerical simulations in two and three dimensions from the literature. First, the 5 parameter values were calibrated by using the simulated μ at a control plane close to the injection one, and these same parameters were subsequently used for predicting μ at 10 further control planes. It is found that the two methods perform equally well, though the parameter identification is nonunique, with a large set of parameters providing similar fits. Also, errors in the determination of the mean Eulerian velocity may lead to significant shifts of the predicted BTC. It is found that the simulated BTCs satisfy Markovianity: they can be found as n-fold convolutions of a "kernel", in line with the models' main assumption.
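The Markovianity property noted in the last sentence can be checked numerically: if the breakthrough curve (BTC) after one transport "cell" is a kernel f(t), the BTC after n cells is the n-fold convolution of f with itself. The kernel below is an arbitrary pulse, not one fitted to the simulations discussed above.

```python
import numpy as np

# Illustration of the Markovian (n-fold convolution) structure of breakthrough
# curves: the BTC after n identical transport "cells" is the kernel convolved
# with itself n times. The gamma-like kernel below is arbitrary.
dt = 0.05
t = np.arange(0.0, 200.0, dt)
kernel = t * np.exp(-t / 2.0)
kernel /= kernel.sum() * dt                 # normalise to unit mass

btc = kernel.copy()
for _ in range(9):                          # propagate through 9 further cells
    btc = np.convolve(btc, kernel)[: len(t)] * dt

print("mass after 10 cells:", round(btc.sum() * dt, 4))      # ~1, up to truncation
print("mean arrival time:", round((t * btc).sum() * dt, 2))  # ~10x the kernel mean
```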
NASA Astrophysics Data System (ADS)
Asano, Masanari; Basieva, Irina; Khrennikov, Andrei; Ohya, Masanori; Tanaka, Yoshiharu; Yamato, Ichiro
2015-10-01
We discuss foundational issues of quantum information biology (QIB), one of the most successful applications of the quantum formalism outside of physics. QIB provides a multi-scale model of information processing in bio-systems: from proteins and cells to cognitive and social systems. This theory has to be sharply distinguished from "traditional quantum biophysics". The latter is about quantum bio-physical processes, e.g., in cells or brains. QIB models the dynamics of information states of bio-systems. We argue that the information interpretation of quantum mechanics (its various forms were elaborated by Zeilinger and Brukner, Fuchs and Mermin, and D'Ariano) is the most natural interpretation of QIB. Biologically QIB is based on two principles: (a) adaptivity; (b) openness (bio-systems are fundamentally open). These principles are mathematically represented in the framework of a novel formalism, quantum adaptive dynamics, which, in particular, contains the standard theory of open quantum systems.
NASA Astrophysics Data System (ADS)
Rodriguez Lucatero, C.; Schaum, A.; Alarcon Ramos, L.; Bernal-Jaquez, R.
2014-07-01
In this study, the dynamics of decisions in complex networks subject to external fields are studied within a Markov process framework using nonlinear dynamical systems theory. A mathematical discrete-time model is derived using a set of basic assumptions regarding the convincement mechanisms associated with two competing opinions. The model is analyzed with respect to the multiplicity of critical points and the stability of extinction states. Sufficient conditions for extinction are derived in terms of the convincement probabilities and the maximum eigenvalues of the associated connectivity matrices. The influences of exogenous (e.g., mass media-based) effects on decision behavior are analyzed qualitatively. The current analysis predicts: (i) the presence of fixed-point multiplicity (with a maximum number of four different fixed points), multi-stability, and sensitivity with respect to the process parameters; and (ii) the bounded but significant impact of exogenous perturbations on the decision behavior. These predictions were verified using a set of numerical simulations based on a scale-free network topology.
Users matter: multi-agent systems model of high performance computing cluster users.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, M. J.; Hood, C. S.; Decision and Information Sciences
2005-01-01
High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.
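A toy, standard-library-only illustration of the user-level idea (agents with different submission habits feeding a shared FIFO queue, with waiting times emerging from their interaction); it is not the multi-level model of the study, and every rate and job size below is invented.

```python
import heapq
import random

random.seed(0)

# Toy agent-based sketch: each "user" agent submits jobs with its own rate and size
# habits; a crude FIFO scheduler (no backfilling) with a fixed node count serves them.
NODES, HORIZON = 64, 24 * 7                    # node count, simulated hours
users = [dict(rate=r, mean_runtime=m, mean_width=w)
         for r, m, w in [(0.5, 2.0, 8), (0.2, 12.0, 32), (1.0, 0.5, 1)]]

events = []                                    # (submit_time, job) pairs
for u in users:
    t = 0.0
    while t < HORIZON:
        t += random.expovariate(u["rate"])
        events.append((t, dict(
            runtime=random.expovariate(1.0 / u["mean_runtime"]),
            width=min(NODES, max(1, int(random.expovariate(1.0 / u["mean_width"])))))))
events.sort(key=lambda e: e[0])

free, running, waits = NODES, [], []           # running: heap of (finish_time, width)
for t, job in events:
    while running and running[0][0] <= t:      # release jobs finished before t
        _, width = heapq.heappop(running)
        free += width
    start = t
    while free < job["width"] and running:     # wait for enough nodes (FIFO, crude)
        finish, width = heapq.heappop(running)
        free += width
        start = max(start, finish)
    heapq.heappush(running, (start + job["runtime"], job["width"]))
    free -= job["width"]
    waits.append(start - t)

print(f"jobs: {len(waits)}, mean wait: {sum(waits) / len(waits):.2f} h")
```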
Characterising the Perceived Value of Mathematics Educational Apps in Preservice Teachers
ERIC Educational Resources Information Center
Handal, Boris; Campbell, Chris; Cavanagh, Michael; Petocz, Peter
2016-01-01
This study validated the semantic items of three related scales aimed at characterising the perceived worth of mathematics-education-related mobile applications (apps). The technological pedagogical content knowledge (TPACK) model was used as the conceptual framework for the analysis. Three hundred and seventy-three preservice students studying…
Zhang, Guoqing; Zhang, Xianku; Pang, Hongshuai
2015-09-01
This research is concerned with the problem of 4 degrees of freedom (DOF) ship manoeuvring identification modelling with full-scale trial data. To avoid the multi-innovation matrix inversion in the conventional multi-innovation least squares (MILS) algorithm, a new transformed multi-innovation least squares (TMILS) algorithm is first developed by virtue of the coupling identification concept, and much effort is made to guarantee uniform ultimate convergence. Furthermore, an auto-constructed TMILS scheme is derived for ship manoeuvring motion identification by combination with a statistical index. Compared with existing results, the proposed scheme has a significant computational advantage and is able to estimate the model structure. The illustrative examples demonstrate the effectiveness of the proposed algorithm, especially the identification application with full-scale trial data. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Basin-scale hydrogeologic modeling
NASA Astrophysics Data System (ADS)
Person, Mark; Raffensperger, Jeff P.; Ge, Shemin; Garven, Grant
1996-02-01
Mathematical modeling of coupled groundwater flow, heat transfer, and chemical mass transport at the sedimentary basin scale has been increasingly used by Earth scientists studying a wide range of geologic processes including the formation of excess pore pressures, infiltration-driven metamorphism, heat flow anomalies, nuclear waste isolation, hydrothermal ore genesis, sediment diagenesis, basin tectonics, and petroleum generation and migration. These models have provided important insights into the rates and pathways of groundwater migration through basins, the relative importance of different driving mechanisms for fluid flow, and the nature of coupling between the hydraulic, thermal, chemical, and stress regimes. The mathematical descriptions of basin transport processes, the analytical and numerical solution methods employed, and the application of modeling to sedimentary basins around the world are the subject of this review paper. The special considerations made to represent coupled transport processes at the basin scale are emphasized. Future modeling efforts will probably utilize three-dimensional descriptions of transport processes, incorporate greater information regarding natural geological heterogeneity, further explore coupled processes, and involve greater field applications.
NASA Astrophysics Data System (ADS)
Moghaddam, Kamran S.; Usher, John S.
2011-07-01
In this article, a new multi-objective optimization model is developed to determine the optimal preventive maintenance and replacement schedules in a repairable and maintainable multi-component system. In this model, the planning horizon is divided into discrete and equally sized periods in which three possible actions must be planned for each component, namely maintenance, replacement, or do nothing. The objective is to determine a plan of actions for each component in the system while simultaneously minimizing the total cost and maximizing overall system reliability over the planning horizon. Because of the complex, combinatorial and highly nonlinear structure of the mathematical model, two metaheuristic solution methods, a generational genetic algorithm and simulated annealing, are applied to tackle the problem. The Pareto optimal solutions that provide good trade-offs between the total cost and the overall reliability of the system can be obtained by the solution approach. Such a modeling approach should be useful for maintenance planners and engineers tasked with the problem of developing recommended maintenance plans for complex systems of components.
NASA Astrophysics Data System (ADS)
Tabik, S.; Romero, L. F.; Mimica, P.; Plata, O.; Zapata, E. L.
2012-09-01
A broad area in astronomy focuses on simulating extragalactic objects based on Very Long Baseline Interferometry (VLBI) radio-maps. Several algorithms in this scope simulate the radio-maps that would be observed if emitted from a predefined extragalactic object. This work analyzes the performance and scaling of this kind of algorithm on multi-socket, multi-core architectures. In particular, we evaluate a sharing approach, a privatizing approach and a hybrid approach on systems with a complex memory hierarchy that includes a shared Last Level Cache (LLC). In addition, we investigate which manual processes can be systematized and then automated in future work. The experiments show that the data-privatizing model scales efficiently on medium-scale multi-socket, multi-core systems (up to 48 cores), while, regardless of algorithmic and scheduling optimizations, the sharing approach is unable to reach acceptable scalability on more than one socket. However, the hybrid model with a specific level of data sharing provides the best scalability over all of the multi-socket, multi-core systems used.
NASA Astrophysics Data System (ADS)
Eftimie, Raluca
2015-03-01
One of the main unsolved problems of modern physics is finding a "theory of everything" - a theory that can explain, with the help of mathematics, all physical aspects of the universe. While the laws of physics could explain some aspects of the biology of living systems (e.g., the phenomenological interpretation of movement of cells and animals), there are other aspects specific to biology that cannot be captured by physics models. For example, it is generally accepted that the evolution of a cell-based system is influenced by the activation state of cells (e.g., only activated and functional immune cells can fight diseases); on the other hand, the evolution of an animal-based system can be influenced by the psychological state (e.g., distress) of animals. Therefore, the last 10-20 years have seen also a quest for a "theory of everything"-approach extended to biology, with researchers trying to propose mathematical modelling frameworks that can explain various biological phenomena ranging from ecology to developmental biology and medicine [1,2,6]. The basic idea behind this approach can be found in a few reviews on ecology and cell biology [6,7,9-11], where researchers suggested that due to the parallel between the micro-scale dynamics and the emerging macro-scale phenomena in both cell biology and in ecology, many mathematical methods used for ecological processes could be adapted to cancer modelling [7,9] or to modelling in immunology [11]. However, this approach generally involved the use of different models to describe different biological aspects (e.g., models for cell and animal movement, models for competition between cells or animals, etc.).
Macroscopic balance model for wave rotors
NASA Technical Reports Server (NTRS)
Welch, Gerard E.
1996-01-01
A mathematical model for multi-port wave rotors is described. The wave processes that effect energy exchange within the rotor passage are modeled using one-dimensional gas dynamics. Macroscopic mass and energy balances relate volume-averaged thermodynamic properties in the rotor passage control volume to the mass, momentum, and energy fluxes at the ports. Loss models account for entropy production in boundary layers and in separating flows caused by blade-blockage, incidence, and gradual opening and closing of rotor passages. The mathematical model provides a basis for predicting design-point wave rotor performance, port timing, and machine size. Model predictions are evaluated through comparisons with CFD calculations and three-port wave rotor experimental data. A four-port wave rotor design example is provided to demonstrate model applicability. The modeling approach is amenable to wave rotor optimization studies and rapid assessment of the trade-offs associated with integrating wave rotors into gas turbine engine systems.
Slot, Esther M; van Viersen, Sietske; de Bree, Elise H; Kroesbergen, Evelyn H
2016-01-01
High comorbidity rates have been reported between mathematical learning disabilities (MD) and reading and spelling disabilities (RSD). Research has identified skills related to math, such as number sense (NS) and visuospatial working memory (visuospatial WM), as well as to literacy, such as phonological awareness (PA), rapid automatized naming (RAN) and verbal short-term memory (Verbal STM). In order to explain the high comorbidity rates between MD and RSD, 7-11-year-old children were assessed on a range of cognitive abilities related to literacy (PA, RAN, Verbal STM) and mathematical ability (visuospatial WM, NS). The group of children consisted of typically developing (TD) children (n = 32), children with MD (n = 26), children with RSD (n = 29), and combined MD and RSD (n = 43). It was hypothesized that, in line with the multiple deficit view on learning disorders, at least one unique predictor for both MD and RSD and a possible shared cognitive risk factor would be found to account for the comorbidity between the symptom dimensions literacy and math. Secondly, our hypotheses were that (a) a probabilistic multi-factorial risk factor model would provide a better fit to the data than a deterministic single risk factor model and (b) that a shared risk factor model would provide a better fit than the specific multi-factorial model. All our hypotheses were confirmed. NS and visuospatial WM were identified as unique cognitive predictors for MD, whereas PA and RAN were both associated with RSD. Also, a shared risk factor model with PA as a cognitive predictor for both RSD and MD fitted the data best, indicating that MD and RSD might co-occur due to a shared underlying deficit in phonological processing. Possible explanations are discussed in the context of sample selection and composition. This study shows that different cognitive factors play a role in mathematics and literacy, and that a phonological processing deficit might play a role in the occurrence of MD and RSD.
Crops In Silico: Generating Virtual Crops Using an Integrative and Multi-scale Modeling Platform.
Marshall-Colon, Amy; Long, Stephen P; Allen, Douglas K; Allen, Gabrielle; Beard, Daniel A; Benes, Bedrich; von Caemmerer, Susanne; Christensen, A J; Cox, Donna J; Hart, John C; Hirst, Peter M; Kannan, Kavya; Katz, Daniel S; Lynch, Jonathan P; Millar, Andrew J; Panneerselvam, Balaji; Price, Nathan D; Prusinkiewicz, Przemyslaw; Raila, David; Shekar, Rachel G; Shrivastava, Stuti; Shukla, Diwakar; Srinivasan, Venkatraman; Stitt, Mark; Turk, Matthew J; Voit, Eberhard O; Wang, Yu; Yin, Xinyou; Zhu, Xin-Guang
2017-01-01
Multi-scale models can facilitate whole plant simulations by linking gene networks, protein synthesis, metabolic pathways, physiology, and growth. Whole plant models can be further integrated with ecosystem, weather, and climate models to predict how various interactions respond to environmental perturbations. These models have the potential to fill in missing mechanistic details and generate new hypotheses to prioritize directed engineering efforts. Outcomes will potentially accelerate improvement of crop yield, sustainability, and increase future food security. It is time for a paradigm shift in plant modeling, from largely isolated efforts to a connected community that takes advantage of advances in high performance computing and mechanistic understanding of plant processes. Tools for guiding future crop breeding and engineering, understanding the implications of discoveries at the molecular level for whole plant behavior, and improved prediction of plant and ecosystem responses to the environment are urgently needed. The purpose of this perspective is to introduce Crops in silico (cropsinsilico.org), an integrative and multi-scale modeling platform, as one solution that combines isolated modeling efforts toward the generation of virtual crops, which is open and accessible to the entire plant biology community. The major challenges involved both in the development and deployment of a shared, multi-scale modeling platform, which are summarized in this prospectus, were recently identified during the first Crops in silico Symposium and Workshop.
Crops In Silico: Generating Virtual Crops Using an Integrative and Multi-scale Modeling Platform
Marshall-Colon, Amy; Long, Stephen P.; Allen, Douglas K.; Allen, Gabrielle; Beard, Daniel A.; Benes, Bedrich; von Caemmerer, Susanne; Christensen, A. J.; Cox, Donna J.; Hart, John C.; Hirst, Peter M.; Kannan, Kavya; Katz, Daniel S.; Lynch, Jonathan P.; Millar, Andrew J.; Panneerselvam, Balaji; Price, Nathan D.; Prusinkiewicz, Przemyslaw; Raila, David; Shekar, Rachel G.; Shrivastava, Stuti; Shukla, Diwakar; Srinivasan, Venkatraman; Stitt, Mark; Turk, Matthew J.; Voit, Eberhard O.; Wang, Yu; Yin, Xinyou; Zhu, Xin-Guang
2017-01-01
Multi-scale models can facilitate whole plant simulations by linking gene networks, protein synthesis, metabolic pathways, physiology, and growth. Whole plant models can be further integrated with ecosystem, weather, and climate models to predict how various interactions respond to environmental perturbations. These models have the potential to fill in missing mechanistic details and generate new hypotheses to prioritize directed engineering efforts. Outcomes will potentially accelerate improvement of crop yield, sustainability, and increase future food security. It is time for a paradigm shift in plant modeling, from largely isolated efforts to a connected community that takes advantage of advances in high performance computing and mechanistic understanding of plant processes. Tools for guiding future crop breeding and engineering, understanding the implications of discoveries at the molecular level for whole plant behavior, and improved prediction of plant and ecosystem responses to the environment are urgently needed. The purpose of this perspective is to introduce Crops in silico (cropsinsilico.org), an integrative and multi-scale modeling platform, as one solution that combines isolated modeling efforts toward the generation of virtual crops, which is open and accessible to the entire plant biology community. The major challenges involved both in the development and deployment of a shared, multi-scale modeling platform, which are summarized in this prospectus, were recently identified during the first Crops in silico Symposium and Workshop. PMID:28555150
Pore space analysis of NAPL distribution in sand-clay media
Matmon, D.; Hayden, N.J.
2003-01-01
This paper introduces a conceptual model of clays and non-aqueous phase liquids (NAPLs) at the pore scale that has been developed from a mathematical unit cell model, and direct micromodel observation and measurement of clay-containing porous media. The mathematical model uses a unit cell concept with uniform spherical grains for simulating the sand in the sand-clay matrix (???10% clay). Micromodels made with glass slides and including different clay-containing porous media were used to investigate the two clays (kaolinite and montmorillonite) and NAPL distribution within the pore space. The results were used to understand the distribution of NAPL advancing into initially saturated sand and sand-clay media, and provided a detailed analysis of the pore-scale geometry, pore size distribution, NAPL entry pressures, and the effect of clay on this geometry. Interesting NAPL saturation profiles were observed as a result of the complexity of the pore space geometry with the different packing angles and the presence of clays. The unit cell approach has applications for enhancing the mechanistic understanding and conceptualization, both visually and mathematically, of pore-scale processes such as NAPL and clay distribution. © 2003 Elsevier Science Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Holburn, E. R.; Bledsoe, B. P.; Poff, N. L.; Cuhaciyan, C. O.
2005-05-01
Using over 300 R/EMAP sites in OR and WA, we examine the relative explanatory power of watershed, valley, and reach scale descriptors in modeling variation in benthic macroinvertebrate indices. Innovative metrics describing flow regime, geomorphic processes, and hydrologic-distance weighted watershed and valley characteristics are used in multiple regression and regression tree modeling to predict EPT richness, % EPT, EPT/C, and % Plecoptera. A nested design using seven ecoregions is employed to evaluate the influence of geographic scale and environmental heterogeneity on the explanatory power of individual and combined scales. Regression tree models are constructed to explain variability while identifying threshold responses and interactions. Cross-validated models demonstrate differences in the explanatory power associated with single-scale and multi-scale models as environmental heterogeneity is varied. Models explaining the greatest variability in biological indices result from multi-scale combinations of physical descriptors. Results also indicate that substantial variation in benthic macroinvertebrate response can be explained with process-based watershed and valley scale metrics derived exclusively from common geospatial data. This study outlines a general framework for identifying key processes driving macroinvertebrate assemblages across a range of scales and establishing the geographic extent at which various levels of physical description best explain biological variability. Such information can guide process-based stratification to avoid spurious comparison of dissimilar stream types in bioassessments and ensure that key environmental gradients are adequately represented in sampling designs.
Galle, J; Hoffmann, M; Aust, G
2009-01-01
Collective phenomena in multi-cellular assemblies can be approached on different levels of complexity. Here, we discuss a number of mathematical models which consider the dynamics of each individual cell, so-called agent-based or individual-based models (IBMs). As a special feature, these models allow to account for intracellular decision processes which are triggered by biomechanical cell-cell or cell-matrix interactions. We discuss their impact on the growth and homeostasis of multi-cellular systems as simulated by lattice-free models. Our results demonstrate that cell polarisation subsequent to cell-cell contact formation can be a source of stability in epithelial monolayers. Stroma contact-dependent regulation of tumour cell proliferation and migration is shown to result in invasion dynamics in accordance with the migrating cancer stem cell hypothesis. However, we demonstrate that different regulation mechanisms can equally well comply with present experimental results. Thus, we suggest a panel of experimental studies for the in-depth validation of the model assumptions.
Blood Flow: Multi-scale Modeling and Visualization (July 2011)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2011-01-01
Multi-scale modeling of arterial blood flow can shed light on the interaction between events happening at micro- and meso-scales (i.e., adhesion of red blood cells to the arterial wall, clot formation) and at macro-scales (i.e., change in flow patterns due to the clot). Coupled numerical simulations of such multi-scale flow require state-of-the-art computers and algorithms, along with techniques for multi-scale visualizations. This animation presents early results of two studies used in the development of a multi-scale visualization methodology. The first illustrates a flow of healthy (red) and diseased (blue) blood cells with a Dissipative Particle Dynamics (DPD) method. Each blood cell is represented by a mesh, small spheres show a sub-set of particles representing the blood plasma, while instantaneous streamlines and slices represent the ensemble average velocity. In the second we investigate the process of thrombus (blood clot) formation, which may be responsible for the rupture of aneurysms, by concentrating on the platelet blood cells and observing as they aggregate on the wall of an aneurysm. Simulation was performed on Kraken at the National Institute for Computational Sciences. Visualization was produced using resources of the Argonne Leadership Computing Facility at Argonne National Laboratory.
Multiscale modeling of brain dynamics: from single neurons and networks to mathematical tools.
Siettos, Constantinos; Starke, Jens
2016-09-01
The extreme complexity of the brain naturally requires mathematical modeling approaches on a large variety of scales; the spectrum ranges from single neuron dynamics over the behavior of groups of neurons to neuronal network activity. Thus, the connection between the microscopic scale (single neuron activity) to macroscopic behavior (emergent behavior of the collective dynamics) and vice versa is a key to understand the brain in its complexity. In this work, we attempt a review of a wide range of approaches, ranging from the modeling of single neuron dynamics to machine learning. The models include biophysical as well as data-driven phenomenological models. The discussed models include Hodgkin-Huxley, FitzHugh-Nagumo, coupled oscillators (Kuramoto oscillators, Rössler oscillators, and the Hindmarsh-Rose neuron), Integrate and Fire, networks of neurons, and neural field equations. In addition to the mathematical models, important mathematical methods in multiscale modeling and reconstruction of the causal connectivity are sketched. The methods include linear and nonlinear tools from statistics, data analysis, and time series analysis up to differential equations, dynamical systems, and bifurcation theory, including Granger causal connectivity analysis, phase synchronization connectivity analysis, principal component analysis (PCA), independent component analysis (ICA), and manifold learning algorithms such as ISOMAP, and diffusion maps and equation-free techniques. WIREs Syst Biol Med 2016, 8:438-458. doi: 10.1002/wsbm.1348 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.
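As one concrete instance of the single-neuron models named above, a minimal FitzHugh-Nagumo integration is sketched below. The parameter values are common textbook defaults chosen only for illustration, not values taken from the review.

```python
# Minimal sketch: FitzHugh-Nagumo single-neuron dynamics integrated with SciPy.
# Parameter values are standard textbook defaults, used here only for illustration.
import numpy as np
from scipy.integrate import solve_ivp

a, b, tau, I_ext = 0.7, 0.8, 12.5, 0.5  # illustrative constants

def fhn(t, y):
    v, w = y
    dv = v - v**3 / 3 - w + I_ext      # fast membrane-potential-like variable
    dw = (v + a - b * w) / tau         # slow recovery variable
    return [dv, dw]

sol = solve_ivp(fhn, (0.0, 200.0), [0.0, 0.0], max_step=0.1)
print("final state (v, w):", sol.y[:, -1])
```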
Mathematical analysis techniques for modeling the space network activities
NASA Technical Reports Server (NTRS)
Foster, Lisa M.
1992-01-01
The objective of the present work was to explore and identify mathematical analysis techniques, and in particular, the use of linear programming. This topic was then applied to the Tracking and Data Relay Satellite System (TDRSS) in order to understand the space network better. Finally, a small scale version of the system was modeled, variables were identified, data was gathered, and comparisons were made between actual and theoretical data.
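A toy sketch of the kind of linear-programming formulation mentioned above is shown below, using SciPy. The "relay capacity" and "demand" numbers are invented purely to illustrate the mechanics of setting up and solving an LP; they are not drawn from the TDRSS study.

```python
# Toy linear program: schedule contact time for two hypothetical users sharing
# one relay.  All numbers are illustrative assumptions.
from scipy.optimize import linprog

# Maximize total scheduled contact time x1 + x2 (linprog minimizes, so negate).
c = [-1.0, -1.0]
A_ub = [[1.0, 1.0],   # shared relay capacity: x1 + x2 <= 10 hours
        [1.0, 0.0],   # user-1 request cap:    x1 <= 6 hours
        [0.0, 1.0]]   # user-2 request cap:    x2 <= 7 hours
b_ub = [10.0, 6.0, 7.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal schedule (hours):", res.x, "total:", -res.fun)
```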
Ward, S.H.
1989-10-17
Multiple arrays of electric or magnetic transmitters and receivers are used in a borehole geophysical procedure to obtain a multiplicity of redundant data suitable for processing into a resistivity or induced polarization model of a subsurface region of the earth. 30 figs.
The Influence of Multi-Scale Stratal Architecture on Multi-Phase Flow
NASA Astrophysics Data System (ADS)
Soltanian, M.; Gershenzon, N. I.; Ritzi, R. W.; Dominic, D.; Ramanathan, R.
2012-12-01
Geological heterogeneity affects flow and transport in porous media, including the migration and entrapment patterns of oil, and efforts for enhanced oil recovery. Such effects are only understood through their relation to a hierarchy of reservoir heterogeneities over a range of scales. Recent work on modern rivers and ancient sediments has led to a conceptual model of the hierarchy of fluvial forms within channel-belts of gravelly braided rivers, and a quantitative model for the corresponding scales of heterogeneity within the stratal architecture (e.g. [Lunt et al (2004) Sedimentology, 51 (3), 377]). In related work, a three-dimensional digital model was developed which represents these scales of fluvial architecture, the associated spatial distribution of permeability, and the connectivity of high-permeability pathways across the different scales of the stratal hierarchy [Ramanathan et al, (2010) Water Resour. Res., 46, W04515; Guin et al, (2010) Water Resour. Res., 46, W04516]. In the present work we numerically examine three-phase fluid flow (water-oil-gas) incorporating the multi-scale model for reservoir heterogeneity spanning the scales from 10^-1 to 10^3 meters. Comparison with results of flow in a reservoir with homogeneous permeability is made showing essentially different flow dynamics.
Sibole, Scott C.; Erdemir, Ahmet
2012-01-01
Cells of the musculoskeletal system are known to respond to mechanical loading and chondrocytes within the cartilage are not an exception. However, understanding how joint level loads relate to cell level deformations, e.g. in the cartilage, is not a straightforward task. In this study, a multi-scale analysis pipeline was implemented to post-process the results of a macro-scale finite element (FE) tibiofemoral joint model to provide joint mechanics based displacement boundary conditions to micro-scale cellular FE models of the cartilage, for the purpose of characterizing chondrocyte deformations in relation to tibiofemoral joint loading. It was possible to identify the load distribution within the knee among its tissue structures and ultimately within the cartilage among its extracellular matrix, pericellular environment and resident chondrocytes. Various cellular deformation metrics (aspect ratio change, volumetric strain, cellular effective strain and maximum shear strain) were calculated. To illustrate further utility of this multi-scale modeling pipeline, two micro-scale cartilage constructs were considered: an idealized single cell at the centroid of a 100×100×100 μm block commonly used in past research studies, and an anatomically based (11 cell model of the same volume) representation of the middle zone of tibiofemoral cartilage. In both cases, chondrocytes experienced amplified deformations compared to those at the macro-scale, predicted by simulating one body weight compressive loading on the tibiofemoral joint. In the 11 cell case, all cells experienced less deformation than the single cell case, and also exhibited a larger variance in deformation compared to other cells residing in the same block. The coupling method proved to be highly scalable due to micro-scale model independence that allowed for exploitation of distributed memory computing architecture. The method’s generalized nature also allows for substitution of any macro-scale and/or micro-scale model providing application for other multi-scale continuum mechanics problems. PMID:22649535
Cerda, Gamal; Pérez, Carlos; Navarro, José I; Aguilar, Manuel; Casas, José A; Aragón, Estíbaliz
2015-01-01
This study tested a structural model of cognitive-emotional explanatory variables to explain performance in mathematics. The predictor variables assessed were related to students' level of development of early mathematical competencies (EMCs), specifically, relational and numerical competencies, predisposition toward mathematics, and the level of logical intelligence in a population of primary school Chilean students (n = 634). This longitudinal study also included the academic performance of the students during a period of 4 years as a variable. The sampled students were initially assessed by means of an Early Numeracy Test, and, subsequently, they were administered a Likert-type scale to measure their predisposition toward mathematics (EPMAT) and a basic test of logical intelligence. The results of these tests were used to analyse the interaction of all the aforementioned variables by means of a structural equations model. This combined interaction model was able to predict 64.3% of the variability of observed performance. Preschool students' performance in EMCs was a strong predictor for achievement in mathematics for students between 8 and 11 years of age. Therefore, this paper highlights the importance of EMCs and the modulating role of predisposition toward mathematics. Also, this paper discusses the educational role of these findings, as well as possible ways to improve negative predispositions toward mathematical tasks in the school domain.
UTCI-Fiala multi-node model of human heat transfer and temperature regulation
NASA Astrophysics Data System (ADS)
Fiala, Dusan; Havenith, George; Bröde, Peter; Kampmann, Bernhard; Jendritzky, Gerd
2012-05-01
The UTCI-Fiala mathematical model of human temperature regulation forms the basis of the new Universal Thermal Climate Index (UTCI). Following extensive validation tests, adaptations and extensions, such as the inclusion of an adaptive clothing model, the model was used to predict human temperature and regulatory responses for combinations of the prevailing outdoor climate conditions. This paper provides an overview of the underlying algorithms and methods that constitute the multi-node dynamic UTCI-Fiala model of human thermal physiology and comfort. Treated topics include modelling heat and mass transfer within the body, numerical techniques, modelling environmental heat exchanges, thermoregulatory reactions of the central nervous system, and perceptual responses. Other contributions of this special issue describe the validation of the UTCI-Fiala model against measured data and the development of the adaptive clothing model for outdoor climates.
A Scale for Measuring Teachers' Mathematics-Related Beliefs: A Validity and Reliability Study
ERIC Educational Resources Information Center
Purnomo,Yoppy Wahyu
2017-01-01
The purpose of this study was to develop and validate a scale of teacher beliefs related to mathematics, namely, beliefs about the nature of mathematics, mathematics teaching, and assessment in mathematics learning. A scale development study was used to achieve it. The draft scale consisted of 54 items in which 16 items related to beliefs about…
[Multi-mathematical modelings for compatibility optimization of Jiangzhi granules].
Yang, Ming; Zhang, Li; Ge, Yingli; Lu, Yanliu; Ji, Guang
2011-12-01
To investigate the method of "multi-activity-index evaluation and combination optimization of multi-components" for Chinese herbal formulas. Following a scheme of uniform experimental design, efficacy experiments, multi-index evaluation, least absolute shrinkage and selection operator (LASSO) modeling, an evolutionary optimization algorithm, and a validation experiment, we optimized the combination of Jiangzhi granules based on the activity indexes of blood serum ALT, AST, TG, TC, HDL and LDL, the TG level of liver tissues, and the ratio of liver tissue to body. The analytic hierarchy process (AHP) combined with criteria importance through intercriteria correlation (CRITIC) for multi-activity-index evaluation was more reasonable and objective; it reflected both the ordering information of the activity indexes and the objective sample data. LASSO modeling could accurately reflect the relationship between different combinations of Jiangzhi granules and the comprehensive activity indexes. The optimized combination of Jiangzhi granules showed better values of the comprehensive activity indexes than the original formula in the validation experiment. AHP combined with CRITIC can be used for multi-activity-index evaluation, and the LASSO algorithm is suitable for combination optimization of Chinese herbal formulas.
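A minimal sketch of LASSO regression relating component dosages in a formulation to a comprehensive activity index, in the spirit of the study above, is given below. The design matrix, response, and number of components are synthetic placeholders, not the Jiangzhi granule data.

```python
# Minimal sketch: LASSO with cross-validated regularization strength relating
# hypothetical component dosages to a comprehensive activity index.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(30, 6))               # 30 formulations x 6 component dosages
true_w = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 1.0])
y = X @ true_w + rng.normal(scale=0.1, size=30)   # synthetic comprehensive activity index

model = LassoCV(cv=5).fit(X, y)
print("selected component weights:", np.round(model.coef_, 2))
```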
Rejniak, Katarzyna A.; Gerlee, Philip
2013-01-01
Summary In this review we summarize our recent efforts using mathematical modeling and computation to simulate cancer invasion, with a special emphasis on the tumor microenvironment. We consider cancer progression as a complex multiscale process and approach it with three single-cell based mathematical models that examine the interactions between tumor microenvironment and cancer cells at several scales. The models exploit distinct mathematical and computational techniques, yet they share core elements and can be compared and/or related to each other. The overall aim of using mathematical models is to uncover the fundamental mechanisms that lend cancer progression its direction towards invasion and metastasis. The models effectively simulate various modes of cancer cell adaptation to the microenvironment in a growing tumor. All three point to a general mechanism underlying cancer invasion: competition for adaptation between distinct cancer cell phenotypes, driven by a tumor microenvironment with scarce resources. These theoretical predictions pose an intriguing experimental challenge: test the hypothesis that invasion is an emergent property of cancer cell populations adapting to selective microenvironment pressure, rather than culmination of cancer progression producing cells with the “invasive phenotype”. In broader terms, we propose that fundamental insights into cancer can be achieved by experimentation interacting with theoretical frameworks provided by computational and mathematical modeling. PMID:18524624
ERIC Educational Resources Information Center
Clements, Douglas H.; Sarama, Julie; Wolfe, Christopher B.; Spitler, Mary Elaine
2013-01-01
Using a cluster randomized trial design, we evaluated the persistence of effects of a research-based model for scaling up educational interventions. The model was implemented in 42 schools in two city districts serving low-resource communities, randomly assigned to three conditions. In pre-kindergarten, the two experimental interventions were…
Multi-scale Material Appearance
NASA Astrophysics Data System (ADS)
Wu, Hongzhi
Modeling and rendering the appearance of materials is important for a diverse range of applications of computer graphics - from automobile design to movies and cultural heritage. The appearance of materials varies considerably at different scales, posing significant challenges due to the sheer complexity of the data, as well as the need to maintain inter-scale consistency constraints. This thesis presents a series of studies around the modeling, rendering and editing of multi-scale material appearance. To efficiently render material appearance at multiple scales, we develop an object-space precomputed adaptive sampling method, which precomputes a hierarchy of view-independent points that preserve multi-level appearance. To support bi-scale material appearance design, we propose a novel reflectance filtering algorithm, which rapidly computes the large-scale appearance from small-scale details by exploiting the low-rank structures of Bidirectional Visible Normal Distribution Functions and pre-rotated Bidirectional Reflectance Distribution Functions in the matrix formulation of the rendering algorithm. This approach can guide the physical realization of appearance, as well as the modeling of real-world materials using very sparse measurements. Finally, we present a bi-scale-inspired, high-quality general representation for material appearance described by Bidirectional Texture Functions. Our representation is at once compact, easily editable, and amenable to efficient rendering.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lienert, Matthias, E-mail: lienert@math.lmu.de
2015-04-15
The question how to Lorentz transform an N-particle wave function naturally leads to the concept of a so-called multi-time wave function, i.e., a map from (space-time)^N to a spin space. This concept was originally proposed by Dirac as the basis of relativistic quantum mechanics. In such a view, interaction potentials are mathematically inconsistent. This fact motivates the search for new mechanisms for relativistic interactions. In this paper, we explore the idea that relativistic interaction can be described by boundary conditions on the set of coincidence points of two particles in space-time. This extends ideas from zero-range physics to a relativistic setting. We illustrate the idea at the simplest model which still possesses essential physical properties like Lorentz invariance and a positive definite density: two-time equations for massless Dirac particles in 1 + 1 dimensions. In order to deal with a spatio-temporally non-trivial domain, a necessity in the multi-time picture, we develop a new method to prove existence and uniqueness of classical solutions: a generalized version of the method of characteristics. Both mathematical and physical considerations are combined to precisely formulate and answer the questions of probability conservation, Lorentz invariance, interaction, and antisymmetry.
A genetic algorithm for a bi-objective mathematical model for dynamic virtual cell formation problem
NASA Astrophysics Data System (ADS)
Moradgholi, Mostafa; Paydar, Mohammad Mahdi; Mahdavi, Iraj; Jouzdani, Javid
2016-09-01
Nowadays, with the increasing pressure of the competitive business environment and the demand for diverse products, manufacturers are forced to seek solutions that reduce production costs and raise product quality. The cellular manufacturing system (CMS), as a means to this end, has been a point of attraction to both researchers and practitioners. Limitations of the cell formation problem (CFP), one of the important topics in CMS, have led to the introduction of the virtual CMS (VCMS). This research addresses a bi-objective dynamic virtual cell formation problem (DVCFP) with the objective of finding the optimal formation of cells, considering the material handling costs, fixed machine installation costs, and variable production costs of machines and workforce. Furthermore, we consider different skills on different machines in workforce assignment over a multi-period planning horizon. The bi-objective model is transformed into a single-objective fuzzy goal programming model and, to show its performance, numerical examples are solved using the LINGO software. In addition, a genetic algorithm (GA) is customized to tackle large-scale instances of the problem and demonstrate the performance of the solution method.
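Below is a minimal sketch of a genetic algorithm minimizing a scalarized (weighted-sum) version of a bi-objective cost, in the general spirit of the GA customization described above. The two toy objectives, the real-valued encoding, and all GA settings are illustrative assumptions, not the paper's formulation.

```python
# Minimal GA sketch: truncation selection, one-point crossover, Gaussian
# mutation, applied to a weighted sum of two toy objectives.
import numpy as np

rng = np.random.default_rng(2)

def objectives(x):
    f1 = np.sum((x - 1.0) ** 2)   # e.g. a material-handling-like cost
    f2 = np.sum((x + 1.0) ** 2)   # e.g. a workforce-related cost
    return f1, f2

def scalarized(x, w=0.5):
    f1, f2 = objectives(x)
    return w * f1 + (1 - w) * f2

pop = rng.normal(size=(40, 5))                      # 40 candidate solutions
for generation in range(100):
    fitness = np.array([scalarized(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:20]]         # truncation selection
    idx = rng.integers(0, 20, size=(20, 2))         # random parent pairs
    cut = rng.integers(1, 5, size=20)               # one-point crossover positions
    children = np.array([np.concatenate((parents[i][:c], parents[j][c:]))
                         for (i, j), c in zip(idx, cut)])
    children += rng.normal(scale=0.1, size=children.shape)  # mutation
    pop = np.vstack([parents, children])

best = min(pop, key=scalarized)
print("best solution:", np.round(best, 2), "objectives:", objectives(best))
```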
NASA Technical Reports Server (NTRS)
Parker, Peter A. (Inventor)
2003-01-01
A single vector calibration system is provided which facilitates the calibration of multi-axis load cells, including wind tunnel force balances. The single vector system provides the capability to calibrate a multi-axis load cell using a single directional load, for example loading solely in the gravitational direction. The system manipulates the load cell in three-dimensional space, while keeping the uni-directional calibration load aligned. The use of a single vector calibration load reduces the set-up time for the multi-axis load combinations needed to generate a complete calibration mathematical model. The system also reduces load application inaccuracies caused by the conventional requirement to generate multiple force vectors. The simplicity of the system reduces calibration time and cost, while simultaneously increasing calibration accuracy.
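A minimal sketch of how a multi-axis calibration matrix can be recovered by least squares from a series of single-vector (gravity-direction) loadings applied at different orientations is shown below. The 6x6 sensitivity matrix, the load set, and the noise level are invented for illustration and are not the patented system's actual procedure.

```python
# Minimal sketch: estimate a 6x6 load-cell calibration matrix from known
# single-vector loadings applied at many orientations, via least squares.
import numpy as np

rng = np.random.default_rng(3)
true_C = rng.normal(size=(6, 6))          # hypothetical sensitivity matrix

# Each calibration point: a known 6-component load (forces/moments) produced by
# re-orienting the balance under a single gravitational load.
applied_loads = rng.normal(size=(200, 6))
readings = applied_loads @ true_C.T + rng.normal(scale=1e-3, size=(200, 6))

# Fit readings = loads @ C^T  =>  linear least-squares for C^T.
Ct_est, *_ = np.linalg.lstsq(applied_loads, readings, rcond=None)
print("max calibration-matrix error:", np.abs(Ct_est.T - true_C).max())
```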
Test-and-treat approach to HIV/AIDS: a primer for mathematical modeling.
Nah, Kyeongah; Nishiura, Hiroshi; Tsuchiya, Naho; Sun, Xiaodan; Asai, Yusuke; Imamura, Akifumi
2017-09-05
The public health benefit of test-and-treat has created a need to justify its value to the public, and mathematical modeling studies have played a key role in designing and evaluating the test-and-treat strategy for controlling HIV/AIDS. Here we briefly and comprehensively review the essence of the contemporary understanding of the test-and-treat policy through mathematical modeling approaches and identify key pitfalls that have been identified to date. While a decrease in HIV incidence is achieved with certain coverages of diagnosis, care and continued treatment, HIV prevalence is not necessarily decreased, and test-and-treat is sometimes accompanied by an increased long-term cost of antiretroviral therapy (ART). To confront the complexity of assessing this policy, the elimination threshold or the effective reproduction number has been proposed for determining overall success and anticipating eventual elimination. Since the publication of the original model in 2009, key issues in test-and-treat modeling studies have been identified, including theoretical problems surrounding the sexual partnership network, heterogeneities in the transmission dynamics, and realistic issues of achieving and maintaining high treatment coverage in the most hard-to-reach populations. To explicitly design country-specific control policy, quantitative modeling approaches for each setting, with its differing epidemiological context, will require multi-disciplinary collaborations among clinicians, public health practitioners, laboratory technologists, epidemiologists and mathematical modelers.
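Below is a minimal compartmental sketch of the test-and-treat idea: susceptibles S, undiagnosed infected I, and diagnosed-and-treated T, with treatment reducing infectiousness. The structure and all parameter values are illustrative assumptions, not the specific models reviewed above.

```python
# Minimal sketch: three-compartment test-and-treat dynamics integrated with SciPy.
# beta: transmission rate, eps: reduction in infectiousness on ART,
# rho: diagnosis/treatment rate, mu: removal rate.  All values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

beta, eps, rho, mu = 0.5, 0.9, 0.3, 0.02

def model(t, y):
    S, I, T = y
    N = S + I + T
    new_inf = beta * S * (I + (1 - eps) * T) / N
    dS = -new_inf
    dI = new_inf - rho * I - mu * I
    dT = rho * I - mu * T
    return [dS, dI, dT]

sol = solve_ivp(model, (0, 100), [990.0, 10.0, 0.0], max_step=0.5)
print("final prevalence (I + T):", sol.y[1, -1] + sol.y[2, -1])
```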
de la Cruz, Roberto; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás
2017-12-01
The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction-diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction-diffusion systems. The method is developed for a stochastic multi-scale model of tumour growth, i.e. a population-dynamical model which accounts for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age-structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we neglect noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction-diffusion systems, we need to account for the age-structure of the population when attempting to couple both descriptions. We exploit our coarse-grained model so that, within the mean-field region, the age distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently: upon transference of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as the travelling wave velocity. We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate at the leading edge of the front, which cannot be accounted for by the coarse-grained model. Such fluctuations have non-trivial effects on the wave velocity. Beyond the development of a new hybrid method, we thus conclude that birth-rate fluctuations are central to a quantitatively accurate description of invasive phenomena such as tumour growth.
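A sketch of the coupling step described above is given below: when cells cross from the mean-field region into the stochastic (agent-based) region, each is assigned an age drawn from the equilibrium age distribution. Here an exponential distribution with a hypothetical mean age is assumed purely for illustration; the paper derives the explicit equilibrium form from its coarse-grained model.

```python
# Sketch: sample ages for cells transferred from the mean-field to the
# stochastic region from an assumed equilibrium age distribution.
import numpy as np

rng = np.random.default_rng(4)
mean_age = 12.0                      # hypothetical equilibrium mean age (arbitrary units)

def transfer_cells(n_cells):
    """Return ages for n_cells entering the stochastic (agent-based) region."""
    return rng.exponential(scale=mean_age, size=n_cells)

ages = transfer_cells(5)
print("sampled ages of transferred cells:", np.round(ages, 1))
```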
Retardation in Mathematics: A Consideration of Multi-Factorial Determination
ERIC Educational Resources Information Center
Lansdown, Richard
1978-01-01
Discusses mathematical retardation as a construct and examines the possible contributions of emotional factors, socioeconomic factors, poor teaching, cognitive factors, and sex difference to low achievement in mathematics. (JB)
Areepattamannil, Shaljan; Abdelfattah, Faisal; Mahasneh, Randa Ali; Khine, Myint Swe; Welch, Anita G; Melkonian, Michael; Al Nuaimi, Samira Ahmed
2016-01-01
Over half-a-million adolescents take part in each cycle of the Program for International Student Assessment (PISA). Yet often, researchers and policy makers across the globe tend to focus their attention primarily on the academic trajectories of adolescents hailing from highly successful education systems. Hence, a vast majority of the adolescent population who regionally and globally constitute the 'long tail of underachievement' often remain unnoticed and underrepresented in the growing literature on adolescents' academic trajectories. The present study, therefore, explored the relations of dispositions toward mathematics, subjective norms in mathematics, and perceived control of success in mathematics to mathematics work ethic as well as mathematics performance; and the mediational role of mathematics work ethic in the association between dispositional, normative, and control beliefs and mathematics performance among adolescents in one of the lowest performing education systems, Qatar. Structural equation modeling (SEM) analyses revealed that Qatari adolescents' dispositional, normative, and control beliefs about mathematics were significantly associated with their mathematics work ethic and mathematics performance, and mathematics work ethic significantly mediated the relationship between dispositional, normative, and control beliefs about mathematics and mathematics performance. However, multi-group SEM analyses indicated that these relationships were not invariant across the gender and the SES groups. Copyright © 2015 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Non-null annular subaperture stitching interferometry for aspheric test
NASA Astrophysics Data System (ADS)
Zhang, Lei; Liu, Dong; Shi, Tu; Yang, Yongying; Chong, Shiyao; Miao, Liang; Huang, Wei; Shen, Yibing; Bai, Jian
2015-10-01
A non-null annular subaperture stitching interferometry (NASSI) method, combining the subaperture stitching idea and the non-null test method, is proposed for testing steep aspheres. Compared with standard annular subaperture stitching interferometry (ASSI), a partial null lens (PNL) is employed as an alternative to the transmission sphere to generate different aspherical wavefronts as the references. The number of subapertures needed for full coverage is thus greatly reduced, because the aspherical wavefronts better match the local slope of aspheric surfaces. Instead of various mathematical stitching algorithms, a simultaneous reverse optimizing reconstruction (SROR) method based on system modeling and ray tracing is proposed for full-aperture figure error reconstruction. All the subaperture measurements are simulated simultaneously with a multi-configuration model in a ray-tracing program, including modeling of the interferometric system and of the subaperture misalignments. With the multi-configuration model, the full-aperture figure error is extracted in the form of Zernike polynomials from the subaperture wavefront data by the SROR method. This method concurrently accomplishes subaperture retrace error and misalignment correction, requiring neither complex mathematical algorithms nor subaperture overlaps. A numerical simulation compares the performance of NASSI and standard ASSI, demonstrating the high accuracy of NASSI in testing steep aspheres. Experimental results of NASSI are shown to be in good agreement with those of a Zygo® Verifire™ Asphere interferometer.
Pargett, Michael; Rundell, Ann E.; Buzzard, Gregery T.; Umulis, David M.
2014-01-01
Discovery in developmental biology is often driven by intuition that relies on the integration of multiple types of data such as fluorescent images, phenotypes, and the outcomes of biochemical assays. Mathematical modeling helps elucidate the biological mechanisms at play as the networks become increasingly large and complex. However, the available data is frequently under-utilized due to incompatibility with quantitative model tuning techniques. This is the case for stem cell regulation mechanisms explored in the Drosophila germarium through fluorescent immunohistochemistry. To enable better integration of biological data with modeling in this and similar situations, we have developed a general parameter estimation process to quantitatively optimize models with qualitative data. The process employs a modified version of the Optimal Scaling method from social and behavioral sciences, and multi-objective optimization to evaluate the trade-off between fitting different datasets (e.g. wild type vs. mutant). Using only published imaging data in the germarium, we first evaluated support for a published intracellular regulatory network by considering alternative connections of the same regulatory players. Simply screening networks against wild type data identified hundreds of feasible alternatives. Of these, five parsimonious variants were found and compared by multi-objective analysis including mutant data and dynamic constraints. With these data, the current model is supported over the alternatives, but support for a biochemically observed feedback element is weak (i.e. these data do not measure the feedback effect well). When also comparing new hypothetical models, the available data do not discriminate. To begin addressing the limitations in data, we performed a model-based experiment design and provide recommendations for experiments to refine model parameters and discriminate increasingly complex hypotheses. PMID:24626201
Sriyudthsak, Kansuporn; Shiraishi, Fumihide; Hirai, Masami Yokota
2016-01-01
The high-throughput acquisition of metabolome data is greatly anticipated for the complete understanding of cellular metabolism in living organisms. A variety of analytical technologies have been developed to acquire large-scale metabolic profiles under different biological or environmental conditions. Time series data are useful for predicting the most likely metabolic pathways because they provide important information regarding the accumulation of metabolites, which implies causal relationships in the metabolic reaction network. Considerable effort has been undertaken to utilize these data for constructing a mathematical model merging system properties and quantitatively characterizing a whole metabolic system in toto. However, technical difficulties remain between the provision and the utilization of such data. Although hundreds of metabolites can be measured, providing information on the metabolic reaction system, the simultaneous measurement of thousands of metabolites is still challenging. In addition, it is nontrivial to logically predict the dynamic behaviors of unmeasurable metabolite concentrations without sufficient information on the metabolic reaction network. Thus, consolidating the advantages of advances in both metabolomics and mathematical modeling remains to be accomplished. This review outlines the conceptual basis of, and recent advances in, technologies in both research fields. It also highlights the potential for constructing a large-scale mathematical model by estimating model parameters from time series metabolome data in order to comprehensively understand metabolism at the systems level.
Simulating Microbial Community Patterning Using Biocellion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Seung-Hwa; Kahan, Simon H.; Momeni, Babak
2014-04-17
Mathematical modeling and computer simulation are important tools for understanding complex interactions between cells and their biotic and abiotic environment: similarities and differences between modeled and observed behavior provide the basis for hypothesis formation. Momeni et al. [5] investigated pattern formation in communities of yeast strains engaging in different types of ecological interactions, comparing the predictions of mathematical modeling and simulation to actual patterns observed in wet-lab experiments. However, simulations of millions of cells in a three-dimensional community are extremely time-consuming. One simulation run in MATLAB may take a week or longer, inhibiting exploration of the vast space of parameter combinations and assumptions. Improving the speed, scale, and accuracy of such simulations facilitates hypothesis formation and expedites discovery. Biocellion is a high performance software framework for accelerating discrete agent-based simulation of biological systems with millions to trillions of cells. Simulations of comparable scale and accuracy to those taking a week of computer time using MATLAB require just hours using Biocellion on a multicore workstation. Biocellion further accelerates large scale, high resolution simulations using cluster computers by partitioning the work to run on multiple compute nodes. Biocellion targets computational biologists who have mathematical modeling backgrounds and basic C++ programming skills. This chapter describes the necessary steps to adapt the original model of Momeni et al. to the Biocellion framework as a case study.
Robert S. Arkle; David S. Pilliod; Steven E. Hanser; Matthew L. Brooks; Jeanne C. Chambers; James B. Grace; Kevin C. Knutson; David A. Pyke; Justin L. Welty; Troy A. Wirth
2014-01-01
A recurrent challenge in the conservation of wide-ranging, imperiled species is understanding which habitats to protect and whether we are capable of restoring degraded landscapes. For Greater Sage-grouse (Centrocercus urophasianus), a species of conservation concern in the western United States, we approached this problem by developing multi-scale empirical models of...
Zhang, Yanhang; Barocas, Victor H; Berceli, Scott A; Clancy, Colleen E; Eckmann, David M; Garbey, Marc; Kassab, Ghassan S; Lochner, Donna R; McCulloch, Andrew D; Tran-Son-Tay, Roger; Trayanova, Natalia A
2016-09-01
Cardiovascular diseases (CVDs) are the leading cause of death in the western world. With the current development of clinical diagnostics to more accurately measure the extent and specifics of CVDs, a laudable goal is a better understanding of the structure-function relation in the cardiovascular system. Much of this fundamental understanding comes from the development and study of models that integrate biology, medicine, imaging, and biomechanics. Information from these models provides guidance for developing diagnostics, and implementation of these diagnostics to the clinical setting, in turn, provides data for refining the models. In this review, we introduce multi-scale and multi-physical models for understanding disease development, progression, and designing clinical interventions. We begin with multi-scale models of cardiac electrophysiology and mechanics for diagnosis, clinical decision support, personalized and precision medicine in cardiology with examples in arrhythmia and heart failure. We then introduce computational models of vasculature mechanics and associated mechanical forces for understanding vascular disease progression, designing clinical interventions, and elucidating mechanisms that underlie diverse vascular conditions. We conclude with a discussion of barriers that must be overcome to provide enhanced insights, predictions, and decisions in pre-clinical and clinical applications.
NASA Astrophysics Data System (ADS)
Jain, Anuj Kumar; Rastogi, Vikas; Agrawal, Atul Kumar
2018-01-01
The main focus of this paper is to study the effects of asymmetric stiffness on parametric instabilities of a multi-rotor system through an extended Lagrangian formalism, where symmetries are broken in terms of the rotor stiffness. Complete insight into the dynamic behaviour of a multi-rotor system with asymmetries is obtained through the extension of the Lagrangian equation, illustrated with a case study. In this work, a dynamic mathematical model of a multi-rotor system is developed through a novel extension of Lagrangian mechanics, where the system has asymmetries due to varying stiffness. The amplitude and the natural frequency of the rotor are obtained analytically through the proposed methodology. The bond graph modeling technique is used for modeling the asymmetric rotor. Symbol-shakti® software is used for the simulation of the model. The effects of the stiffness of the multi-rotor system on amplitudes and frequencies are studied using numerical simulation. Simulation results show considerable agreement with the theoretical results obtained through the extended Lagrangian formalism. It is further shown that the amplitude of the rotor varies inversely with the rotor stiffness up to a certain limit, which is also affirmed theoretically.
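Parametric instability due to stiffness asymmetry is often illustrated, after reduction of the equations of motion, by a damped Mathieu-type equation of the generic form below. This is a standard illustrative form with placeholder coefficients, not the specific equations derived in the paper.

```latex
% Generic damped Mathieu-type equation illustrating parametric excitation
% through a periodically varying stiffness; all symbols are placeholders.
\ddot{q} + 2\zeta\omega_n\,\dot{q}
  + \omega_n^{2}\bigl(1 + \varepsilon\cos 2\Omega t\bigr)\,q = 0
```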
NASA Astrophysics Data System (ADS)
Widowati; Putro, S. P.; Silfiana
2018-05-01
Integrated Multi-Trophic Aquaculture (IMTA) is a polyculture in which several biota are maintained together to optimize the recycling of waste as a food source. The interaction between phytoplankton and nitrogen waste from fish cultivation, including ammonia, nitrite, and nitrate, is studied in the form of a mathematical model. The model is a non-linear system of differential equations in four variables. Analytical methods were used to study the dynamic behavior of this model. Local stability analysis is performed at the equilibrium point: the model is first linearized using a Taylor series, and the Jacobian matrix is then determined. If all eigenvalues have negative real parts, the equilibrium of the system is locally asymptotically stable. Some numerical simulations are also presented to verify the analytical results.
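A minimal sketch of the local stability check described above is given below: linearize a four-variable ODE system at an equilibrium and inspect the eigenvalues of the Jacobian. The right-hand side is a generic placeholder with a simple interaction structure, not the IMTA phytoplankton-nitrogen model itself.

```python
# Minimal sketch: symbolic Jacobian of a placeholder 4-variable system,
# evaluated at an equilibrium, followed by an eigenvalue stability test.
import numpy as np
import sympy as sp

x1, x2, x3, x4 = sp.symbols("x1 x2 x3 x4")
state = [x1, x2, x3, x4]
rhs = [0.5 * x1 * (1 - x1) - 0.2 * x1 * x2,   # placeholder dynamics
       0.2 * x1 * x2 - 0.1 * x2,
       0.1 * x2 - 0.3 * x3,
       0.3 * x3 - 0.4 * x4]

jac = sp.Matrix(rhs).jacobian(state)
# Approximate non-trivial equilibrium of the placeholder system (rhs = 0).
equilibrium = {x1: 0.5, x2: 1.25, x3: 0.4166667, x4: 0.3125}
eigvals = np.linalg.eigvals(np.array(jac.subs(equilibrium), dtype=float))
print("locally asymptotically stable" if np.all(eigvals.real < 0) else "not stable")
print("eigenvalues:", np.round(eigvals, 3))
```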
Singh, Brajesh K; Srivastava, Vineet K
2015-04-01
The main goal of this paper is to present a new approximate series solution of the multi-dimensional (heat-like) diffusion equation with time-fractional derivative in Caputo form using a semi-analytical approach: fractional-order reduced differential transform method (FRDTM). The efficiency of FRDTM is confirmed by considering four test problems of the multi-dimensional time fractional-order diffusion equation. FRDTM is a very efficient, effective and powerful mathematical tool which provides exact or very close approximate solutions for a wide range of real-world problems arising in engineering and natural sciences, modelled in terms of differential equations.
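In the notation commonly used for this class of problems, the multi-dimensional time-fractional diffusion (heat-like) equation with a Caputo derivative can be written as below. This is the standard generic form, not a reproduction of the paper's specific test problems.

```latex
% Generic multi-dimensional time-fractional diffusion equation (Caputo sense).
{}^{C}D_t^{\alpha}\, u(\mathbf{x},t)
  = \nabla\!\cdot\!\bigl(D\,\nabla u(\mathbf{x},t)\bigr),
\qquad 0 < \alpha \le 1,
\qquad
{}^{C}D_t^{\alpha} u
  = \frac{1}{\Gamma(1-\alpha)} \int_0^{t} (t-s)^{-\alpha}\,
    \frac{\partial u(\mathbf{x},s)}{\partial s}\, ds
  \quad (0 < \alpha < 1).
```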
A Multi-Scale Energy Food Systems Modeling Framework For Climate Adaptation
NASA Astrophysics Data System (ADS)
Siddiqui, S.; Bakker, C.; Zaitchik, B. F.; Hobbs, B. F.; Broaddus, E.; Neff, R.; Haskett, J.; Parker, C.
2016-12-01
Our goal is to understand coupled system dynamics across scales in a manner that allows us to quantify the sensitivity of critical human outcomes (nutritional satisfaction, household economic well-being) to development strategies and to climate or market induced shocks in sub-Saharan Africa. We adopt both bottom-up and top-down multi-scale modeling approaches focusing our efforts on food, energy, water (FEW) dynamics to define, parameterize, and evaluate modeled processes nationally as well as across climate zones and communities. Our framework comprises three complementary modeling techniques spanning local, sub-national and national scales to capture interdependencies between sectors, across time scales, and on multiple levels of geographic aggregation. At the center is a multi-player micro-economic (MME) partial equilibrium model for the production, consumption, storage, and transportation of food, energy, and fuels, which is the focus of this presentation. We show why such models can be very useful for linking and integrating across time and spatial scales, as well as a wide variety of models including an agent-based model applied to rural villages and larger population centers, an optimization-based electricity infrastructure model at a regional scale, and a computable general equilibrium model, which is applied to understand FEW resources and economic patterns at national scale. The MME is based on aggregating individual optimization problems for relevant players in an energy, electricity, or food market and captures important food supply chain components of trade and food distribution accounting for infrastructure and geography. Second, our model considers food access and utilization by modeling food waste and disaggregating consumption by income and age. Third, the model is set up to evaluate the effects of seasonality and system shocks on supply, demand, infrastructure, and transportation in both energy and food.
NASA Astrophysics Data System (ADS)
Rabbani, Masoud; Montazeri, Mona; Farrokhi-Asl, Hamed; Rafiei, Hamed
2016-12-01
Mixed-model assembly lines are increasingly accepted in many industrial environments to meet the growing trend of greater product variability, diversification of customer demands, and shorter life cycles. In this research, a new mathematical model is presented that simultaneously considers balancing a mixed-model U-line and human-related issues. The objective function consists of two separate components. The first part is related to the balancing problem; its objectives are minimizing the cycle time, minimizing the number of workstations, and maximizing line efficiency. The second part is related to human issues and consists of hiring cost, firing cost, training cost, and salary. To solve the presented model, two well-known multi-objective evolutionary algorithms, namely the non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, have been used. A simple solution representation is provided in this paper to encode the solutions. Finally, the computational results are compared and analyzed.
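Both the non-dominated sorting genetic algorithm and multi-objective particle swarm optimization rank candidate solutions by Pareto dominance. A minimal dominance filter is sketched below on toy (cycle time, human-cost) pairs; the candidate values are invented and are not outputs of the paper's model.

```python
# Minimal sketch: Pareto-dominance filter keeping the non-dominated
# (cycle time, human cost) pairs from a toy candidate set.
def dominates(a, b):
    """True if a is at least as good as b in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

candidates = [(12.0, 300.0), (10.0, 380.0), (11.0, 290.0), (13.0, 500.0), (10.5, 310.0)]
pareto_front = [c for c in candidates
                if not any(dominates(other, c) for other in candidates if other != c)]
print("non-dominated solutions:", pareto_front)
```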
Fuzzy Edge Connectivity of Graphical Fuzzy State Space Model in Multi-connected System
NASA Astrophysics Data System (ADS)
Harish, Noor Ainy; Ismail, Razidah; Ahmad, Tahir
2010-11-01
Structured networks of interacting components illustrate complex structure in a direct or intuitive way. Graph theory provides mathematical modeling tools for studying interconnections among elements in natural and man-made systems, and directed graphs are useful for defining and interpreting the interconnection structure underlying the dynamics of interacting subsystems. Fuzzy theory provides important tools for dealing with various aspects of the complexity, imprecision and fuzziness of the network structure of a multi-connected system. The Fuzzy State Space Model (FSSM) and a fuzzy algorithm approach were initially introduced to solve inverse problems in multivariable systems. In this paper, the fuzzy algorithm is adapted to determine the fuzzy edge connectivity between subsystems, in particular for interconnected systems in the graphical representation of FSSM. This new approach simplifies the schematic diagram of the interconnection of subsystems in a multi-connected system.
NASA Astrophysics Data System (ADS)
Yao, Jun; Zhang, Jinqiu; Zhao, Mingmei; Li, Xin
2018-07-01
This study investigated the stability of vibration in a nonlinear suspension system with a slowly varying sprung mass under dual excitation. A mathematical model of the system was first established and then solved using the multi-scale method. Finally, the amplitude-frequency curve of vehicle vibration, the stable region of the solution, and the time-domain curve at the Hopf bifurcation were derived. The results revealed that an increase in the lower excitation reduces the system's stability, while an increase in the upper excitation can make the system more stable. The slowly varying sprung mass changes the system's damping from negative to positive, leading to the appearance of a limit cycle and a Hopf bifurcation. As a result, the vehicle's vibration state is forced to change. The stability of this system is extremely fragile under the effect of dynamic Hopf bifurcation as well as static bifurcation.
NASA Astrophysics Data System (ADS)
Timchenko, Leonid; Yarovyi, Andrii; Kokriatskaya, Nataliya; Nakonechna, Svitlana; Abramenko, Ludmila; Ławicki, Tomasz; Popiel, Piotr; Yesmakhanova, Laura
2016-09-01
The paper presents a method of parallel-hierarchical transformations for rapid recognition of dynamic images using GPU technology. The direct parallel-hierarchical transformations are based on a cluster CPU- and GPU-oriented hardware platform. Mathematical models for training the parallel-hierarchical (PH) network for the transformation are developed, as well as a training method for the PH network for recognition of dynamic images. This research is most relevant to problems of organizing high-performance computation over very large arrays of information, designed to implement multi-stage sensing and processing as well as compaction and recognition of data in informational structures and computer devices. The method's advantages include high performance through the use of recent advances in parallelization, the ability to work with images of very high dimension, ease of scaling when the number of nodes in the cluster changes, and automatic scanning of the local network to detect compute nodes.
Typograph: Multiscale Spatial Exploration of Text Documents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander; Burtner, Edwin R.; Cramer, Nicholas O.
2013-12-01
Visualizing large document collections using a spatial layout of terms can enable quick overviews of information. However, these metaphors (e.g., word clouds, tag clouds, etc.) often lack interactivity to explore the information, and the location and rendering of the terms are often not based on mathematical models that maintain relative distances from other information based on similarity metrics. Further, transitioning between levels of detail (i.e., from terms to full documents) can be challenging. In this paper, we present Typograph, a multi-scale spatial exploration visualization for large document collections. Building on term-based visualization methods, Typograph enables multiple levels of detail (terms, phrases, snippets, and full documents) within a single spatialization. Further, the information is placed based on its relative similarity to other information to create the "near = similar" geography metaphor. This paper discusses the design principles and functionality of Typograph and presents a use case analyzing Wikipedia to demonstrate usage.
Biodegradation of PAHs and PCBs in soils and sludges
Liu, L.; Tindall, J.A.; Friedel, M.J.
2007-01-01
Results from a multi-year, pilot-scale land treatment project for PAHs and PCBs biodegradation were evaluated. A mathematical model, capable of describing sorption, sequestration, and biodegradation in soil/water systems, is applied to interpret the efficacy of a sequential active-passive biotreatment process of organic chemicals on remediation sites. To account for the recalcitrance of PAHs and PCBs in soils and sludges during long-term biotreatment, this model comprises a kinetic equation for the intraparticle sequestration of organic chemicals. Model responses were verified by comparison to measurements of biodegradation of PAHs and PCBs in land treatment units; a favorable match was found between them. Model simulations were performed to predict the ongoing biodegradation behavior of PAHs and PCBs in land treatment units. Simulation results indicate that complete biostabilization will be achieved when the concentration of reversibly sorbed chemical (S_RA) falls to undetectable levels, with a certain amount of irreversibly sequestered residual chemical (S_IA) remaining within the soil particle solid phase. The residual fraction (S_IA) tends to lose its original chemical and biological activity, and hence is much less available, toxic, and mobile than the "free" compounds. Therefore, little or no PAHs and PCBs will leach from the treatment site, posing no threat to human health or the environment. Biotreatment of PAHs and PCBs can be terminated accordingly. Results from the pilot-scale testing data and model calculations also suggest that a significant fraction (10-30%) of high-molecular-weight PAHs and PCBs could be sequestered and become unavailable for biodegradation. Bioavailability (large K_d, i.e., slow desorption rate) is the key factor limiting PAH degradation, whereas both bioavailability and bioactivity (K in Monod kinetics, i.e., number of microbes, nutrients, electron acceptor, etc.) regulate PCB biodegradation. Sequential active-passive biotreatment can be a cost-effective approach for remediation of highly hydrophobic organic contaminants. The mathematical model proposed here would be useful in the design and operation of such organic chemical biodegradation processes at remediation sites. © 2007 Springer Science+Business Media B.V.
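The coupled sorption-sequestration-biodegradation structure described above is commonly written as first-order mass transfer between a reversibly sorbed pool S_RA and an irreversibly sequestered pool S_IA, with Monod kinetics degrading the bioavailable fraction. The generic form below is illustrative, with placeholder rate constants, and is not the paper's calibrated model.

```latex
% Illustrative coupled kinetics: Monod degradation of the bioavailable
% fraction C, reversible sorption (k_f, k_r), slow sequestration (k_{seq}).
\frac{dC}{dt} = -\frac{\mu_{\max}\,X}{Y}\,\frac{C}{K + C}
                - k_f\,C + k_r\,S_{RA},
\qquad
\frac{dS_{RA}}{dt} = k_f\,C - k_r\,S_{RA} - k_{seq}\,S_{RA},
\qquad
\frac{dS_{IA}}{dt} = k_{seq}\,S_{RA}.
```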
Guo, Dongmin; Li, King C; Peters, Timothy R; Snively, Beverly M; Poehling, Katherine A; Zhou, Xiaobo
2015-03-11
Mathematical modeling of influenza epidemics is important for analyzing the main causes of an epidemic and finding effective interventions against it. The epidemic is a dynamic process. In this process, daily infections are caused by people's contacts, and the frequency of contacts is mainly influenced by their cognition of the disease. This cognition is in turn influenced by the daily illness attack rate, climate, and other environmental factors. Few existing methods have considered this dynamic process in their models; therefore, their prediction results can hardly be explained by the mechanisms of epidemic spreading. In this paper, we developed a heterogeneous graph modeling approach (HGM) to describe the dynamic process of influenza virus transmission by taking advantage of our unique clinical data. We built a social network of the studied region and embedded an Agent-Based Model (ABM) in the HGM to describe the dynamic change of an epidemic. Our simulations show good agreement with clinical data. Parameter sensitivity analysis showed that temperature significantly influences the dynamics of the epidemic, and system behavior analysis showed that social network degree is a critical factor determining the size of an epidemic. Finally, multiple scenarios for vaccination and school closure strategies were simulated and their performance was analyzed.
Analysis of the performance of a wireless optical multi-input to multi-output communication system.
Bushuev, Denis; Arnon, Shlomi
2006-07-01
We investigate robust optical wireless communication in a highly scattering propagation medium using multielement optical detector arrays. The communication setup consists of multiple synchronized transmitters that send information to a receiver array, and an atmospheric propagation channel. The mathematical model that best describes this scenario is multi-input to multi-output communication through stochastic, slowly changing channels. In this model, signals from m transmitters are received by n receiver-detectors. The channel transfer function matrix is G, of size n × m, where G(i,j) is the transfer function from transmitter i to detector j, and m ≥ n. We adopt a quasi-stationary approach in which the channel time variation has a negligible effect on communication performance over a burst. The G matrix is calculated on the basis of the optical transfer function of the atmospheric channel (composed of aerosol and turbulence elements) and the receiver's optics. In this work we derive a performance model using environmental data, such as documented turbulence and aerosol models and noise statistics. We also present the results of simulations conducted for the proposed detection algorithm.
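The quasi-stationary multi-input multi-output setting described above is conventionally written, over each burst, as a linear channel with additive noise, as sketched below. This uses standard generic notation and is not reproduced from the paper.

```latex
% Standard linear MIMO channel model over one quasi-stationary burst:
% m transmitted signals, n received signals, channel matrix G of size n x m.
\mathbf{y} = G\,\mathbf{x} + \mathbf{n},
\qquad \mathbf{x}\in\mathbb{R}^{m},\;
\mathbf{y}\in\mathbb{R}^{n},\;
G\in\mathbb{R}^{n\times m},\; m \ge n .
```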
NASA Astrophysics Data System (ADS)
Stumpp, C.; Nützmann, G.; Maciejewski, S.; Maloszewski, P.
2009-09-01
In this paper, five model approaches with different physical and mathematical concepts, varying in their complexity and requirements, were applied to identify the transport processes in the unsaturated zone. The applicability of these model approaches was compared and evaluated by investigating two tracer breakthrough curves (bromide, deuterium) in a cropped, free-draining lysimeter experiment under natural atmospheric boundary conditions. The data set consisted of time series of water balance, depth-resolved water contents, pressure heads and resident concentrations measured during 800 days. The tracer transport parameters were determined using a simple stochastic (stream tube model), three lumped-parameter (constant water content model, multi-flow dispersion model, variable flow dispersion model) and a transient model approach. All of them were able to fit the tracer breakthrough curves. The identified transport parameters of each model approach were compared. Despite the differing physical and mathematical concepts, the resulting parameters (mean water contents, mean water flux, dispersivities) of the five model approaches were all in the same range. The results indicate that the flow processes can also be described assuming steady-state conditions. Homogeneous matrix flow is dominant, and a small pore volume with enhanced flow velocities near saturation was identified with the variably saturated flow and transport approach. The multi-flow dispersion model also identified preferential flow and additionally suggested a third, less mobile flow component. Due to the high fitting accuracy and parameter similarity, all model approaches gave reliable results.
NASA Technical Reports Server (NTRS)
Mohr, Karen Irene; Tao, Wei-Kuo; Chern, Jiun-Dar; Kumar, Sujay V.; Peters-Lidard, Christa D.
2013-01-01
The present generation of general circulation models (GCMs) uses parameterized cumulus schemes and runs at hydrostatic grid resolutions. To improve the representation of cloud-scale moist processes and land-atmosphere interactions, a global Multi-scale Modeling Framework (MMF) coupled to the Land Information System (LIS) has been developed at NASA Goddard Space Flight Center. The MMF-LIS has three components: a finite-volume (fv) GCM (Goddard Earth Observing System Ver. 4, GEOS-4), a 2D cloud-resolving model (Goddard Cumulus Ensemble, GCE), and the LIS, representing the large-scale atmospheric circulation, cloud processes, and land surface processes, respectively. The non-hydrostatic GCE model replaces the single-column cumulus parameterization of the fvGCM. The model grid is composed of an array of fvGCM gridcells, each with a series of embedded GCE models. A horizontal coupling strategy, GCE-fvGCM-Coupler-LIS, offered significant computational efficiency, with the scalability and I/O capabilities of LIS permitting land-atmosphere interactions at the cloud scale. Global simulations of 2007-2008 and comparisons to observations and reanalysis products were conducted. Using two different versions of the same land surface model but the same initial conditions, divergence in regional, synoptic-scale surface pressure patterns emerged within two weeks. The sensitivity of large-scale circulations to land surface model physics revealed significant value in using a scalable, multi-model land surface modeling system in global weather and climate prediction.
On the nullspace of TLS multi-station adjustment
NASA Astrophysics Data System (ADS)
Sterle, Oskar; Kogoj, Dušan; Stopar, Bojan; Kregar, Klemen
2018-07-01
In this article we present an analytic treatment of TLS multi-station least-squares adjustment, with the main focus on the datum problem. In contrast to previously published research, the datum problem is theoretically analyzed and solved, with the solution based on a nullspace derivation of the mathematical model. The importance of solving the datum problem lies in the complete description of TLS multi-station adjustment solutions as the set of all minimally constrained least-squares solutions. On the basis of the known nullspace, the estimable parameters are described and the geometric interpretation of all minimally constrained least-squares solutions is presented. Finally, a simulated example is used to analyze the results of minimally constrained and inner-constrained TLS multi-station least-squares adjustment solutions.
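A minimal numerical illustration of the central idea, deriving the datum defect from the nullspace of a design matrix, is sketched below using an SVD; the toy matrix and tolerance are assumptions and stand in for the actual TLS multi-station functional model.

```python
# Hedged sketch: the datum defect of a least-squares adjustment equals the
# dimension of the design matrix nullspace; here computed via SVD on a toy matrix.
import numpy as np

def nullspace(A, tol=1e-10):
    """Columns spanning the nullspace of A (directions not estimable from the data)."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol * s.max()))
    return Vt[rank:].T

# toy design matrix with a rank defect of 1 (third column = col0 + col1)
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [2.0, 1.0, 3.0]])
N = nullspace(A)
print("datum defect:", N.shape[1], "\nnullspace basis:\n", N)
```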
Multi-thread parallel algorithm for reconstructing 3D large-scale porous structures
NASA Astrophysics Data System (ADS)
Ju, Yang; Huang, Yaohui; Zheng, Jiangtao; Qian, Xu; Xie, Heping; Zhao, Xi
2017-04-01
Geomaterials inherently contain many discontinuous, multi-scale, geometrically irregular pores, forming a complex porous structure that governs their mechanical and transport properties. The development of an efficient reconstruction method for representing porous structures can significantly contribute toward providing a better understanding of the governing effects of porous structures on the properties of porous materials. In order to improve the efficiency of reconstructing large-scale porous structures, a multi-thread parallel scheme was incorporated into the simulated annealing reconstruction method. In the method, four correlation functions, which include the two-point probability function, the linear-path functions for the pore phase and the solid phase, and the fractal system function for the solid phase, were employed for better reproduction of the complex well-connected porous structures. In addition, a random sphere packing method and a self-developed pre-conditioning method were incorporated to cast the initial reconstructed model and select independent interchanging pairs for parallel multi-thread calculation, respectively. The accuracy of the proposed algorithm was evaluated by examining the similarity between the reconstructed structure and a prototype in terms of their geometrical, topological, and mechanical properties. Comparisons of the reconstruction efficiency of porous models with various scales indicated that the parallel multi-thread scheme significantly shortened the execution time for reconstruction of a large-scale well-connected porous model compared to a sequential single-thread procedure.
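As a small illustration of one of the statistical descriptors named above, the sketch below estimates the two-point probability function of a binary pore/solid image along one axis; the random test image and periodic-boundary assumption are illustrative only and do not reproduce the paper's annealing or parallelization scheme.

```python
# Hedged sketch: estimating the two-point probability function S2(r) of a binary
# (pore/solid) image, one of the correlation functions used to steer reconstruction.
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((128, 128)) < 0.3            # True = pore phase, porosity ~0.3

def two_point_probability(phase, max_r):
    """P(two points a distance r apart along x both lie in the phase)."""
    s2 = []
    for r in range(max_r + 1):
        shifted = np.roll(phase, -r, axis=1)  # periodic boundary assumption
        s2.append(np.mean(phase & shifted))
    return np.array(s2)

s2 = two_point_probability(img, 20)
print("S2(0) ~ porosity:", s2[0], " S2(20) ~ porosity^2:", s2[20])
```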
Land-Atmosphere Coupling in the Multi-Scale Modelling Framework
NASA Astrophysics Data System (ADS)
Kraus, P. M.; Denning, S.
2015-12-01
The Multi-Scale Modeling Framework (MMF), in which cloud-resolving models (CRMs) are embedded within general circulation model (GCM) gridcells to serve as the model's cloud parameterization, has offered a number of benefits to GCM simulations. The coupling of these cloud-resolving models directly to land surface model instances, rather than passing averaged atmospheric variables to a single instance of a land surface model, the logical next step in model development, has recently been accomplished. This new configuration offers conspicuous improvements to estimates of precipitation and canopy through-fall, but overall the model exhibits warm surface temperature biases and low productivity.This work presents modifications to a land-surface model that take advantage of the new multi-scale modeling framework, and accommodate the change in spatial scale from a typical GCM range of ~200 km to the CRM grid-scale of 4 km.A parameterization is introduced to apportion modeled surface radiation into direct-beam and diffuse components. The diffuse component is then distributed among the land-surface model instances within each GCM cell domain. This substantially reduces the number excessively low light values provided to the land-surface model when cloudy conditions are modeled in the CRM, associated with its 1-D radiation scheme. The small spatial scale of the CRM, ~4 km, as compared with the typical ~200 km GCM scale, provides much more realistic estimates of precipitation intensity, this permits the elimination of a model parameterization of canopy through-fall. However, runoff at such scales can no longer be considered as an immediate flow to the ocean. Allowing sub-surface water flow between land-surface instances within the GCM domain affords better realism and also reduces temperature and productivity biases.The MMF affords a number of opportunities to land-surface modelers, providing both the advantages of direct simulation at the 4 km scale and a much reduced conceptual gap between model resolution and parameterized processes.
Validation of Storm Water Management Model Storm Control Measures Modules
NASA Astrophysics Data System (ADS)
Simon, M. A.; Platz, M. C.
2017-12-01
EPA's Storm Water Management Model (SWMM) is a computational code heavily relied upon by industry for the simulation of wastewater and stormwater infrastructure performance. Many municipalities are relying on SWMM results to design multi-billion-dollar, multi-decade infrastructure upgrades. Since the 1970s, EPA and others have developed five major releases, the most recent ones containing storm control measures modules for green infrastructure. The main objective of this study was to quantify the accuracy with which SWMM v5.1.10 simulates the hydrologic activity of previously monitored low impact developments. Model performance was evaluated with a mathematical comparison of outflow hydrographs and total outflow volumes, using empirical data and a multi-event, multi-objective calibration method. The calibration methodology utilized PEST++ Version 3, a parameter estimation tool, which aided in the selection of unmeasured hydrologic parameters. From the validation study and sensitivity analysis, several model improvements were identified to advance SWMM LID Module performance for permeable pavements, infiltration units and green roofs, and these were performed and reported herein. Overall, it was determined that SWMM can successfully simulate low impact development controls given accurate model confirmation, parameter measurement, and model calibration.
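One common way to score simulated against observed hydrographs during such a calibration is the Nash-Sutcliffe efficiency, sketched below; the exact objective function used with PEST++ in the study is not specified here, and the series are made-up numbers.

```python
# Hedged sketch: Nash-Sutcliffe efficiency (NSE) for comparing an observed and a
# simulated outflow hydrograph; 1.0 indicates a perfect match.
import numpy as np

def nash_sutcliffe(observed, simulated):
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = [0.0, 0.2, 1.5, 3.1, 2.0, 0.8, 0.3]   # illustrative outflow series
sim = [0.0, 0.3, 1.2, 2.9, 2.2, 0.9, 0.2]
print("NSE =", round(nash_sutcliffe(obs, sim), 3))
```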
Zilinskas, Julius; Lančinskas, Algirdas; Guarracino, Mario Rosario
2014-01-01
In this paper we propose mathematical models to plan a Next Generation Sequencing (NGS) experiment to detect rare mutations in pools of patients. A mathematical optimization problem is formulated for optimal pooling, with respect to minimizing the cost of the experiment. Then, two different strategies to replicate patients in pools are proposed, which have the advantage of decreasing the overall costs. Finally, a multi-objective optimization formulation is proposed, in which the trade-off between the probability of detecting a mutation and the overall cost is taken into account. The proposed solutions are designed to offer the following advantages: (i) mutations are guaranteed to be detectable in the experimental setting, and (ii) the cost of the NGS experiment and of its biological validation using Sanger sequencing is minimized. Simulations show that replicating pools can decrease the overall experimental cost, making pooling an interesting option.
NASA Astrophysics Data System (ADS)
Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.
2017-10-01
Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been performed. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers is presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been carried out for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices that are small in value. This is crucial, since even small indices may need to be estimated accurately in order to obtain a more reliable picture of the input influences and a more trustworthy interpretation of the mathematical model results.
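A minimal sketch of how first-order Sobol indices can be estimated by Monte Carlo with a pick-freeze (Saltelli-type) estimator is given below; for brevity it uses plain pseudo-random sampling and a toy linear model with known indices (0.2 and 0.8), whereas the study above relies on Sobol low-discrepancy sequences and the Unified Danish Eulerian Model.

```python
# Hedged sketch: first-order Sobol indices via a pick-freeze Monte Carlo estimator
# on a toy model with analytically known indices; illustrative sampling only.
import numpy as np

def model(x):                       # toy model: analytic first-order indices are 0.2 and 0.8
    return x[:, 0] + 2.0 * x[:, 1]

rng = np.random.default_rng(3)
N, d = 100_000, 2
A, B = rng.random((N, d)), rng.random((N, d))   # two independent input samples
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy(); ABi[:, i] = B[:, i]         # freeze all inputs except x_i
    S_i = np.mean(fB * (model(ABi) - fA)) / var
    print(f"S_{i + 1} ~ {S_i:.3f}")
```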
A multi-scale approach to designing therapeutics for tuberculosis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linderman, Jennifer J.; Cilfone, Nicholas A.; Pienaar, Elsje
Approximately one third of the world's population is infected with Mycobacterium tuberculosis. Limited information about how the immune system fights M. tuberculosis and what constitutes protection from the bacteria impacts our ability to develop effective therapies for tuberculosis. We present an in vivo systems biology approach that integrates data from multiple model systems and over multiple length and time scales into a comprehensive multi-scale and multi-compartment view of the in vivo immune response to M. tuberculosis. Lastly, we describe computational models that can be used to study (a) immunomodulation with the cytokines tumor necrosis factor and interleukin 10, (b) oral and inhaled antibiotics, and (c) the effect of vaccination.
A multi-scale approach to designing therapeutics for tuberculosis
Linderman, Jennifer J.; Cilfone, Nicholas A.; Pienaar, Elsje; ...
2015-04-20
Approximately one third of the world's population is infected with Mycobacterium tuberculosis. Limited information about how the immune system fights M. tuberculosis and what constitutes protection from the bacteria impacts our ability to develop effective therapies for tuberculosis. We present an in vivo systems biology approach that integrates data from multiple model systems and over multiple length and time scales into a comprehensive multi-scale and multi-compartment view of the in vivo immune response to M. tuberculosis. Lastly, we describe computational models that can be used to study (a) immunomodulation with the cytokines tumor necrosis factor and interleukin 10, (b) oral and inhaled antibiotics, and (c) the effect of vaccination.
NASA Astrophysics Data System (ADS)
Hussein, Rafid M.; Chandrashekhara, K.
2017-11-01
A multi-scale modeling approach is presented to simulate and validate the thermo-oxidation shrinkage and cracking damage of a high-temperature polymer composite. The multi-scale approach investigates coupled transient diffusion-reaction and static structural analyses from the macro- to the micro-scale. The micro-scale shrinkage deformation and cracking damage are simulated and validated using 2D and 3D simulations. Localized shrinkage displacement boundary conditions for the micro-scale simulations are determined from the respective meso- and macro-scale simulations, conducted for a cross-ply laminate. The meso-scale geometrical domain and the micro-scale geometry and mesh are developed using the object-oriented finite element (OOF) software. The macro-scale shrinkage and weight loss are measured using unidirectional coupons and used to build the macro-shrinkage model. The cross-ply coupons are used to validate the macro-shrinkage model against shrinkage profiles acquired from scanning electron images of the cracked surface. The macro-shrinkage model deformation shows a discrepancy when the micro-scale image-based cracking is computed. The local maximum shrinkage strain is assumed to be 13 times the maximum macro-shrinkage strain of 2.5 × 10^-5, upon which the discrepancy is minimized. The microcrack damage of the composite is modeled using a static elastic analysis with the extended finite element method and cohesive surfaces, accounting for the spatial evolution of the modulus. The 3D shrinkage displacements are fed to the model using node-wise boundary/domain conditions of the respective oxidized region. The simulated microcrack length, meander, and opening closely match the crack in the area of interest in the scanning electron images.
Tawhai, Merryn H; Bates, Jason H T
2011-05-01
Multi-scale modeling of biological systems has recently become fashionable due to the growing power of digital computers as well as to the growing realization that integrative systems behavior is as important to life as is the genome. While it is true that the behavior of a living organism must ultimately be traceable to all its components and their myriad interactions, attempting to codify this in its entirety in a model misses the insights gained from understanding how collections of system components at one level of scale conspire to produce qualitatively different behavior at higher levels. The essence of multi-scale modeling thus lies not in the inclusion of every conceivable biological detail, but rather in the judicious selection of emergent phenomena appropriate to the level of scale being modeled. These principles are exemplified in recent computational models of the lung. Airways responsiveness, for example, is an organ-level manifestation of events that begin at the molecular level within airway smooth muscle cells, yet it is not necessary to invoke all these molecular events to accurately describe the contraction dynamics of a cell, nor is it necessary to invoke all phenomena observable at the level of the cell to account for the changes in overall lung function that occur following methacholine challenge. Similarly, the regulation of pulmonary vascular tone has complex origins within the individual smooth muscle cells that line the blood vessels but, again, many of the fine details of cell behavior average out at the level of the organ to produce an effect on pulmonary vascular pressure that can be described in much simpler terms. The art of multi-scale lung modeling thus reduces not to being limitlessly inclusive, but rather to knowing what biological details to leave out.
Students’ Representation in Mathematical Word Problem-Solving: Exploring Students’ Self-efficacy
NASA Astrophysics Data System (ADS)
Sahendra, A.; Budiarto, M. T.; Fuad, Y.
2018-01-01
This descriptive qualitative study investigates students' representations in mathematical word problem-solving in relation to self-efficacy. The research subjects are two eighth-grade female students at a school in Surabaya with equal mathematical ability, one with high and one with low self-efficacy. The subjects were chosen based on the results of a mathematical ability test, the documented results of the mid-semester test in the even semester of the 2016/2017 academic year, and the results of a self-efficacy scale questionnaire on mathematics word problems. The selected students were asked to solve mathematical word problems and were interviewed. The results of this study show that students with high self-efficacy tend to use multiple representations (sketches and mathematical models), whereas students with low self-efficacy tend to use a single representation (sketches or mathematical models only) in mathematical word problem-solving. This study emphasizes that teachers should pay attention to students' representations when designing innovative learning to increase each student's self-efficacy and maximize mathematical achievement, with adjustments for the school's situation and conditions.
NASA Astrophysics Data System (ADS)
Kim, S. C.; Hayter, E. J.; Pruhs, R.; Luong, P.; Lackey, T. C.
2016-12-01
The geophysical-scale circulation of the Mid-Atlantic Bight and hydrologic inputs from adjacent Chesapeake Bay watersheds and tributaries influence the hydrodynamics and transport of the James River estuary. Both barotropic and baroclinic transport govern the hydrodynamics of this partially stratified estuary. Modeling the placement of dredged sediment requires accommodating this wide spectrum of atmospheric and hydrodynamic scales. The Geophysical Scale Multi-Block (GSMB) Transport Modeling System is a collection of multiple well-established and USACE-approved process models. Taking advantage of the parallel computing capability of multi-block modeling, we performed a one-year, three-dimensional simulation of hydrodynamics in support of modeling the transport of dredged sediment placements and the resulting morphology changes. Model forcing includes spatially and temporally varying meteorological conditions and hydrological inputs from the watershed. Surface heat flux estimates were derived from the National Solar Radiation Database (NSRDB). The open-water boundary condition for water level was obtained from an ADCIRC model application for the U.S. East Coast. Temperature-salinity boundary conditions were obtained from the Environmental Protection Agency (EPA) Chesapeake Bay Program (CBP) long-term monitoring stations database. Simulated water levels were calibrated and verified by comparison with measurements at National Oceanic and Atmospheric Administration (NOAA) tide gage locations. A harmonic analysis of the modeled tides was performed and compared with NOAA tide prediction data. In addition, project-specific circulation was verified using US Army Corps of Engineers (USACE) drogue data. Salinity and temperature transport was verified at seven CBP long-term monitoring stations along the navigation channel. Simulation and analysis of model results suggest that GSMB is capable of resolving the long-duration, multi-scale processes inherent in practical engineering problems such as dredged material placement stability.
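The harmonic analysis step mentioned above can be illustrated by least-squares fitting of sinusoids at known tidal frequencies to a water-level series; the sketch below uses only the M2 and S2 constituents and a synthetic record, which are assumptions of the sketch rather than the GSMB workflow.

```python
# Hedged sketch: harmonic analysis of a water-level series by least-squares
# fitting of cosines/sines at assumed tidal frequencies (M2 and S2 only).
import numpy as np

t = np.arange(0, 30 * 24, 1.0)                  # hourly samples over 30 days
omega = 2 * np.pi / np.array([12.42, 12.00])    # M2, S2 periods in hours
rng = np.random.default_rng(5)
eta = (0.8 * np.cos(omega[0] * t - 0.5)         # synthetic "modeled" tide
       + 0.2 * np.cos(omega[1] * t)
       + 0.05 * rng.normal(size=t.size))

# design matrix of cosines and sines; amplitudes follow from the fitted coefficients
A = np.column_stack([np.cos(w * t) for w in omega] + [np.sin(w * t) for w in omega])
coef, *_ = np.linalg.lstsq(A, eta, rcond=None)
amp = np.hypot(coef[:2], coef[2:])
print("recovered amplitudes (M2, S2):", np.round(amp, 3))
```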
Dynamic testing for shuttle design verification
NASA Technical Reports Server (NTRS)
Green, C. E.; Leadbetter, S. A.; Rheinfurth, M. H.
1972-01-01
Space shuttle design verification requires dynamic data from full scale structural component and assembly tests. Wind tunnel and other scaled model tests are also required early in the development program to support the analytical models used in design verification. Presented is a design philosophy based on mathematical modeling of the structural system strongly supported by a comprehensive test program; some of the types of required tests are outlined.
The Effect of Lateral Boundary Values on Atmospheric Mercury Simulations with the CMAQ Model
Simulation results from three global-scale models of atmospheric mercury have been used to define three sets of initial condition and boundary condition (IC/BC) data for regional-scale model simulations over North America using the Community Multi-scale Air Quality (CMAQ) model. ...
A FRAMEWORK FOR FINE-SCALE COMPUTATIONAL FLUID DYNAMICS AIR QUALITY MODELING AND ANALYSIS
This paper discusses a framework for fine-scale CFD modeling that may be developed to complement the present Community Multi-scale Air Quality (CMAQ) modeling system which itself is a computational fluid dynamics model. A goal of this presentation is to stimulate discussions on w...
NASA Astrophysics Data System (ADS)
Huang, Y.; Liu, M.; Wada, Y.; He, X.; Sun, X.
2017-12-01
In recent decades, with rapid economic growth, industrial development and urbanization, pollution by polycyclic aromatic hydrocarbons (PAHs) has become a diversified and complicated phenomenon in China. However, monitoring of PAHs across multiple environmental compartments and of the corresponding multi-interface migration processes is still limited, especially over large geographic areas. In this study, we couple the Multimedia Fate Model (MFM) to the Community Multi-Scale Air Quality (CMAQ) model in order to consider the fugacity and the transient contamination processes. This coupled dynamic contaminant model can evaluate the detailed local variations and mass fluxes of PAHs in different environmental media (e.g., air, surface film, soil, sediment, water and vegetation) across different spatial (county to country) and temporal (days to years) scales. The model has been applied to a large geographical domain of China at a 36 km by 36 km grid resolution and considers the response characteristics of typical environmental media to complex underlying surfaces. Results suggest that direct emission is the main input pathway of PAHs entering the atmosphere, while advection is the main outward flow of pollutants from the environment. In addition, both soil and sediment act as the main sinks of PAHs and have the longest retention times. Importantly, the highest PAH loadings are found in urbanized and densely populated regions of China, such as the Yangtze River Delta and Pearl River Delta. This model can provide a good scientific basis towards a better understanding of the large-scale dynamics of environmental pollutants for land conservation and sustainable development. In a next step, the dynamic contaminant model will be integrated with a continental-scale hydrological and water resources model (the Community Water Model, CWatM) to quantify a more accurate representation of, and the feedbacks between, the hydrological cycle and water quality over even larger geographical domains. Keywords: PAHs; Community multi-scale air quality model; Multimedia fate model; Land use
Using Research-Based Instruction to Improve Math Outcomes with Underprepared Students
ERIC Educational Resources Information Center
Pearce, Lee R.; Pearce, Kristi L.; Siewert, Daluss J.
2017-01-01
The authors used a mixed-methods research design to evaluate a multi-tiered system of supports model to address the disturbing failure rates of underprepared college students placed in developmental mathematics at a small state university. While qualitative data gathered using Participatory Action Research methods directed the two-year…
Rienksma, Rienk A; Suarez-Diez, Maria; Spina, Lucie; Schaap, Peter J; Martins dos Santos, Vitor A P
2014-12-01
Systems-level metabolic network reconstructions and the derived constraint-based (CB) mathematical models are efficient tools to explore bacterial metabolism. Approximately one-fourth of the Mycobacterium tuberculosis (Mtb) genome contains genes that encode proteins directly involved in its metabolism. These represent potential drug targets that can be systematically probed with CB models through the prediction of genes (or combinations of genes) essential for the pathogen to grow. However, gene essentiality depends on the growth conditions and, so far, no in vitro model precisely mimics the host at the different stages of mycobacterial infection, limiting model predictions. These limitations can be circumvented by combining expression data from in vivo samples with a validated CB model, creating an accurate description of pathogen metabolism in the host. To this end, we present here a thoroughly curated and extended genome-scale CB metabolic model of Mtb quantitatively validated using 13C measurements. We describe some of the efforts made in integrating CB models and high-throughput data to generate condition-specific models, and we discuss the challenges ahead. The knowledge and framework presented here will enable the identification of potential new drug targets and will foster the development of optimal therapeutic strategies. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
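The constraint-based formalism referred to above reduces, in its simplest flux-balance form, to a linear program: maximize a biomass flux subject to the steady-state mass balance S v = 0 and flux bounds. The sketch below shows that form on a made-up three-reaction toy network; a genome-scale Mtb model would involve thousands of reactions with curated bounds.

```python
# Hedged sketch: flux balance analysis as a linear program on a toy 3-reaction
# network (uptake -> conversion -> biomass); illustrative of the form only.
import numpy as np
from scipy.optimize import linprog

# stoichiometric matrix S (rows: internal metabolites A, B; columns: reactions)
S = np.array([[1.0, -1.0,  0.0],     # A: made by uptake, used by conversion
              [0.0,  1.0, -1.0]])    # B: made by conversion, used by biomass
c = [0.0, 0.0, -1.0]                 # maximize biomass flux v3 (minimize -v3)
bounds = [(0, 10), (0, 10), (0, None)]   # uptake capped at 10 (assumed units)

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x)      # expected: [10, 10, 10]
```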
Multi-scale Rule-of-Mixtures Model of Carbon Nanotube/Carbon Fiber/Epoxy Lamina
NASA Technical Reports Server (NTRS)
Frankland, Sarah-Jane V.; Roddick, Jaret C.; Gates, Thomas S.
2005-01-01
A unidirectional carbon fiber/epoxy lamina in which the carbon fibers are coated with single-walled carbon nanotubes is modeled with a multi-scale method, the atomistically informed rule-of-mixtures. This multi-scale model is designed to include the effect of the carbon nanotubes on the constitutive properties of the lamina. It incorporates concepts from molecular dynamics/equivalent-continuum methods, micromechanics, and the strength of materials. Within the model, both the nanotube volume fraction and the nanotube distribution were varied. It was found that for a lamina with a 60% carbon fiber volume fraction, the Young's modulus in the fiber direction varied with changes in the nanotube distribution, from 138.8 to 140 GPa, for nanotube volume fractions ranging from 0.0001 to 0.0125. The presence of nanotubes near the surface of the carbon fiber is therefore expected to have a small, but positive, effect on the constitutive properties of the lamina.
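The longitudinal (Voigt) rule of mixtures that such a model builds on is easy to state; the sketch below evaluates it with assumed constituent moduli, which happen to land near the ~139 GPa range quoted above but are not values taken from the paper.

```python
# Hedged sketch: longitudinal rule of mixtures with assumed constituent moduli.
def longitudinal_modulus(E_fiber, E_matrix, v_fiber):
    """Voigt (iso-strain) rule of mixtures for the fiber-direction modulus."""
    return v_fiber * E_fiber + (1.0 - v_fiber) * E_matrix

E_cf, E_epoxy = 230.0, 3.5          # GPa, typical carbon fiber and epoxy (assumed)
print(longitudinal_modulus(E_cf, E_epoxy, 0.60), "GPa")   # ~139 GPa at 60% fiber
```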
The Unit of Analysis in Mathematics Education: Bridging the Political-Technical Divide?
ERIC Educational Resources Information Center
Ernest, Paul
2016-01-01
Mathematics education is a complex, multi-disciplinary field of study which treats a wide range of diverse but interrelated areas. These include the nature of mathematics, the learning of mathematics, its teaching, and the social context surrounding both the discipline and applications of mathematics itself, as well as its teaching and learning.…
The Parallel System for Integrating Impact Models and Sectors (pSIMS)
NASA Technical Reports Server (NTRS)
Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian
2014-01-01
We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.
Cross-validating a bidimensional mathematics anxiety scale.
Haiyan Bai
2011-03-01
The psychometric properties of a 14-item bidimensional Mathematics Anxiety Scale-Revised (MAS-R) were empirically cross-validated with two independent samples consisting of 647 secondary school students. An exploratory factor analysis on the scale yielded strong construct validity with a clear two-factor structure. The results from a confirmatory factor analysis indicated an excellent model-fit (χ(2) = 98.32, df = 62; normed fit index = .92, comparative fit index = .97; root mean square error of approximation = .04). The internal consistency (.85), test-retest reliability (.71), interfactor correlation (.26, p < .001), and positive discrimination power indicated that MAS-R is a psychometrically reliable and valid instrument for measuring mathematics anxiety. Math anxiety, as measured by MAS-R, correlated negatively with student achievement scores (r = -.38), suggesting that MAS-R may be a useful tool for classroom teachers and other educational personnel tasked with identifying students at risk of reduced math achievement because of anxiety.
NASA Astrophysics Data System (ADS)
Granger, Victoria; Fromentin, Jean-Marc; Bez, Nicolas; Relini, Giulio; Meynard, Christine N.; Gaertner, Jean-Claude; Maiorano, Porzia; Garcia Ruiz, Cristina; Follesa, Cristina; Gristina, Michele; Peristeraki, Panagiota; Brind'Amour, Anik; Carbonara, Pierluigi; Charilaou, Charis; Esteban, Antonio; Jadaud, Angélique; Joksimovic, Aleksandar; Kallianiotis, Argyris; Kolitari, Jerina; Manfredi, Chiara; Massuti, Enric; Mifsud, Roberta; Quetglas, Antoni; Refes, Wahid; Sbrana, Mario; Vrgoc, Nedo; Spedicato, Maria Teresa; Mérigot, Bastien
2015-01-01
Increasing human pressures and global environmental change may severely affect the diversity of species assemblages and associated ecosystem services. Despite the recent interest in phylogenetic and functional diversity, our knowledge of large spatio-temporal patterns of demersal fish diversity sampled by trawling remains incomplete, notably in the Mediterranean Sea, one of the most threatened marine regions of the world. We investigated large spatio-temporal diversity patterns by analysing a dataset of 19,886 hauls from 10 to 800 m depth performed annually during the last two decades by standardised scientific bottom trawl field surveys across the Mediterranean Sea, within the MEDITS program. A multi-component (eight diversity indices) and multi-scale (local assemblages, biogeographic regions to basins) approach indicates that the two most traditional components (species richness and evenness) alone were sufficient to reflect patterns in taxonomic, phylogenetic or functional richness and divergence. We also question the use of widely computed indices that allow direct comparison of taxonomic, phylogenetic and functional diversity within a single mathematical framework. In addition, demersal fish assemblages sampled by trawl do not follow continuous decreasing longitudinal/latitudinal diversity gradients (spatial effects explained up to 70.6% of deviance in regression tree and generalised linear models) for any of the indices and spatial scales analysed. Indeed, at both local and regional scales species richness was relatively high in the Iberian region, Malta, the Eastern Ionian and Aegean seas, whereas the Adriatic Sea and Cyprus showed relatively low levels. In contrast, evenness as well as taxonomic, phylogenetic and functional divergences did not show regional hotspots. All studied diversity components remained stable over the last two decades. Overall, our results highlight the need to use complementary diversity indices across different spatial scales when developing conservation strategies and defining delimitations for protected areas.
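As a small illustration of the two "traditional" components singled out above, the sketch below computes species richness and one common evenness index (Pielou's J) from a made-up catch-count vector for a single haul; it is not the MEDITS analysis pipeline, and the choice of evenness index is an assumption of the sketch.

```python
# Hedged sketch: species richness and Pielou's evenness from illustrative counts.
import numpy as np

counts = np.array([50, 30, 10, 5, 3, 1, 1])    # individuals per species (made up)
richness = np.count_nonzero(counts)
p = counts / counts.sum()
shannon = -np.sum(p * np.log(p))
evenness = shannon / np.log(richness)           # Pielou's J, in [0, 1]
print(richness, round(evenness, 3))
```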
Attitudes toward and approaches to learning first-year university mathematics.
Alkhateeb, Haitham M; Hammoudi, Lakhdar
2006-08-01
This study examined, for 180 undergraduate students enrolled in a first-year university calculus course, the relationship between attitudes toward mathematics and approaches to learning mathematics, using the Mathematics Attitude Scale and the Approaches to Learning Mathematics Questionnaire, respectively. Regression analyses indicated that scores on the Mathematics Attitude Scale were negatively related to scores on the Surface Approach, accounting for 10.4% of the variance, and positively related to scores on the Deep Approach to learning mathematics, accounting for 31.7% of the variance.
Analyzing the quality robustness of chemotherapy plans with respect to model uncertainties.
Hoffmann, Anna; Scherrer, Alexander; Küfer, Karl-Heinz
2015-01-01
Mathematical models of chemotherapy planning problems contain various biomedical parameters whose values are difficult to quantify and thus subject to some uncertainty. This uncertainty propagates into the therapy plans computed on these models, which raises the question of how robust the expected therapy quality is. This work introduces a combined approach for analyzing the quality robustness of plans, in terms of dosing levels, with respect to model uncertainties in chemotherapy planning. It uses concepts from multi-criteria decision making for studying parameters related to the balancing of the different therapy goals, and concepts from sensitivity analysis for the examination of parameters describing the underlying biomedical processes and their interplay. This approach allows for a thorough assessment of how stable a therapy plan's quality is with respect to parametric changes in the underlying mathematical model. Copyright © 2014 Elsevier Inc. All rights reserved.
High dimensional model representation method for fuzzy structural dynamics
NASA Astrophysics Data System (ADS)
Adhikari, S.; Chowdhury, R.; Friswell, M. I.
2011-03-01
Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software package (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters is used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
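A minimal sketch of a first-order cut-HDMR approximation, the kind of low-order expansion the abstract describes, is given below on a toy three-parameter function; the reference point and function are assumptions, and the fuzzy α-cut machinery and finite element coupling are not shown.

```python
# Hedged sketch: first-order cut-HDMR around a reference point c,
# f(x) ~ f(c) + sum_i [ f(c with x_i swapped in) - f(c) ], on a toy function.
import numpy as np

def f(x):                                       # toy multi-parameter response
    return x[0] ** 2 + 3.0 * x[1] + 0.1 * x[0] * x[2]

c = np.array([1.0, 1.0, 1.0])                   # reference (cut) point, assumed

def hdmr_first_order(x, c=c):
    base = f(c)
    total = base
    for i in range(len(c)):
        xi = c.copy(); xi[i] = x[i]
        total += f(xi) - base                   # first-order component function
    return total

x = np.array([1.2, 0.7, 1.5])
print(f(x), hdmr_first_order(x))                # close when interactions are weak
```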
Multi-Scale Modeling of Liquid Phase Sintering Affected by Gravity: Preliminary Analysis
NASA Technical Reports Server (NTRS)
Olevsky, Eugene; German, Randall M.
2012-01-01
A multi-scale simulation concept taking into account the impact of gravity on liquid-phase sintering is described. The gravity influence can be included at both the micro- and macro-scales. At the micro-scale, the diffusion mass transport is directionally modified in the framework of kinetic Monte Carlo simulations to include the impact of gravity. The micro-scale simulations can provide the values of the constitutive parameters for macroscopic sintering simulations. At the macro-scale, we are attempting to embed a continuum model of sintering into a finite-element framework that includes the gravity forces and substrate friction. If successful, the finite element analysis will enable predictions relevant to space-based processing, including size, shape, and property predictions. Model experiments are underway to support the models via extraction of viscosity moduli versus composition, particle size, heating rate, temperature and time.
Small-scale multi-axial hybrid simulation of a shear-critical reinforced concrete frame
NASA Astrophysics Data System (ADS)
Sadeghian, Vahid; Kwon, Oh-Sung; Vecchio, Frank
2017-10-01
This study presents a numerical multi-scale simulation framework which is extended to accommodate hybrid simulation (numerical-experimental integration). The framework is enhanced with a standardized data exchange format and connected to a generalized controller interface program which facilitates communication with various types of laboratory equipment and testing configurations. A small-scale experimental program was conducted using six-degree-of-freedom hydraulic testing equipment to verify the proposed framework and provide additional data for small-scale testing of shear-critical reinforced concrete structures. The specimens were tested in a multi-axial hybrid simulation manner under a reversed cyclic loading condition simulating earthquake forces. The physical models were 1/3.23-scale representations of a beam and two columns. A mixed-type modelling technique was employed to analyze the remainder of the structures. The hybrid simulation results were compared against those obtained from a large-scale test and finite element analyses. The study found that if precautions are taken in preparing model materials and if the shear-related mechanisms are accurately considered in the numerical model, small-scale hybrid simulations can adequately simulate the behaviour of shear-critical structures. Although the findings of the study are promising, additional test data are required to draw general conclusions.
Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.
2015-12-07
The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field-based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower-length-scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO2 and comparing the predictions with experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.
The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field-based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower-length-scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO2 and comparing the predictions with experiments.
Cellular automata-based modelling and simulation of biofilm structure on multi-core computers.
Skoneczny, Szymon
2015-01-01
The article presents a mathematical model of biofilm growth for aerobic biodegradation of a toxic carbonaceous substrate. Modelling of biofilm growth has fundamental significance in numerous processes of biotechnology and in the mathematical modelling of bioreactors. A process following double-substrate kinetics with substrate inhibition proceeding in a biofilm had not previously been modelled by means of cellular automata. Each process in the proposed model, i.e. diffusion of substrates, uptake of substrates, growth and decay of microorganisms, and biofilm detachment, is simulated in a discrete manner. It was shown that for a flat biofilm of constant thickness, the results of the presented model agree with those of a continuous model. The primary outcome of the study was to propose a mathematical model of biofilm growth; however, considerable focus was also placed on the development of efficient algorithms for its solution. Two parallel algorithms were created, differing in the way computations are distributed. Computer programs were created using the OpenMP application programming interface for the C++ programming language. Simulations of biofilm growth were performed on three high-performance computers and the speed-up coefficients of the computer programs were compared. Both algorithms enabled a significant reduction of computation time. This is important, inter alia, in the modelling and simulation of bioreactor dynamics.
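The double-substrate kinetics with substrate inhibition that such a model is built on can be sketched as a Haldane term for the toxic carbon source multiplied by a Monod term for oxygen; the functional form and parameter values below are illustrative assumptions, not those of the article.

```python
# Hedged sketch: double-substrate growth kinetics with substrate inhibition
# (Haldane term for the toxic carbon source, Monod term for oxygen).
def specific_growth_rate(S, O, mu_max=0.3, K_S=5.0, K_I=80.0, K_O=0.5):
    haldane = S / (K_S + S + S * S / K_I)    # inhibition at high substrate levels
    monod_oxygen = O / (K_O + O)
    return mu_max * haldane * monod_oxygen   # 1/h (assumed units)

for S in (1.0, 20.0, 200.0):
    print(S, round(specific_growth_rate(S, O=6.0), 4))   # rate drops again at high S
```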
Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements
NASA Astrophysics Data System (ADS)
Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.
2012-12-01
Land surface evapotranspiration plays an important role in the surface energy balance and the water cycle, and there have been significant technical and theoretical advances in our knowledge of evapotranspiration over the past two decades. Acquisition of the temporally and spatially continuous distribution of evapotranspiration using remote sensing technology has attracted widespread attention from researchers and managers. However, remote sensing technology still has many uncertainties, arising from model mechanisms, model inputs, parameterization schemes, and scaling issues in regional estimation. Achieving remotely sensed evapotranspiration (RS_ET) estimates with known certainty is required but difficult. As a result, it is indispensable to develop validation methods to quantitatively assess the accuracy and error sources of regional RS_ET estimations. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including the accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach for evaluating the accuracy and analyzing the spatio-temporal properties of RS_ET at both the basin and local scales, and it is appropriate for validating RS_ET at diverse resolutions and time scales. An independent RS_ET validation using this method over the Hai River Basin, China, for 2002-2009 is presented as a case study. Validation at the basin scale showed good agreement between the 1 km annual RS_ET and validation data such as water-balance evapotranspiration, MODIS evapotranspiration products, precipitation, and land use types. Validation at the local scale also gave good results for monthly and daily RS_ET at 30 m and 1 km resolutions, compared with the multi-scale evapotranspiration measurements from the EC and LAS, respectively, using a footprint model over three typical landscapes. Although some validation experiments demonstrated that the models yield accurate estimates at flux measurement sites, the question remains whether they perform well over the broader landscape. Moreover, a large number of RS_ET products have been released in recent years, so we also pay attention to the cross-validation of RS_ET derived from multi-source models. The "Multi-scale Observation Experiment on Evapotranspiration over Heterogeneous Land Surfaces: Flux Observation Matrix" campaign was carried out in the middle reaches of the Heihe River Basin, China, in 2012. Flux measurements from an observation matrix composed of 22 EC systems and 4 LAS instruments were acquired to investigate the cross-validation of multi-source models over different landscapes. In this case, six remote sensing models, including an empirical statistical model, one-source and two-source models, a Penman-Monteith-based model, a Priestley-Taylor-based model, and a complementary-relationship-based model, are used to perform an intercomparison. All the results from the two cases of RS_ET validation showed that the proposed validation methods are reasonable and feasible.
ERIC Educational Resources Information Center
Laursen, Sandra L.; Hassi, Marja-Liisa; Kogan, Marina; Weston, Timothy J.
2014-01-01
Slow faculty uptake of research-based, student-centered teaching and learning approaches limits the advancement of U.S. undergraduate mathematics education. A study of inquiry-based learning (IBL) as implemented in over 100 course sections at 4 universities provides an example of such multicourse, multi-institution uptake. Despite variation in how…
The Impact of Using Multi-Sensory Approach for Teaching Students with Learning Disabilities
ERIC Educational Resources Information Center
Obaid, Majeda Al Sayyed
2013-01-01
The purpose of this study is to investigate the effect of using the Multi-Sensory Approach for teaching students with learning disabilities on the sixth grade students' achievement in mathematics at Jordanian public schools. To achieve the purpose of the study, a pre/post-test was constructed to measure students' achievement in mathematics. The…
Cerda, Gamal; Pérez, Carlos; Navarro, José I.; Aguilar, Manuel; Casas, José A.; Aragón, Estíbaliz
2015-01-01
This study tested a structural model of cognitive-emotional explanatory variables to explain performance in mathematics. The predictor variables assessed were related to students' level of development of early mathematical competencies (EMCs), specifically relational and numerical competencies, predisposition toward mathematics, and the level of logical intelligence, in a population of primary school Chilean students (n = 634). This longitudinal study also included the academic performance of the students over a period of 4 years as a variable. The sampled students were initially assessed by means of an Early Numeracy Test and subsequently administered a Likert-type scale to measure their predisposition toward mathematics (EPMAT) and a basic test of logical intelligence. The results of these tests were used to analyse the interaction of all the aforementioned variables by means of a structural equation model. This combined interaction model was able to predict 64.3% of the variability of observed performance. Preschool students' performance in EMCs was a strong predictor of achievement in mathematics for students between 8 and 11 years of age. Therefore, this paper highlights the importance of EMCs and the modulating role of predisposition toward mathematics. The paper also discusses the educational implications of these findings, as well as possible ways to improve negative predispositions toward mathematical tasks in the school domain. PMID:26441739
NASA Astrophysics Data System (ADS)
Önal, Orkun; Ozmenci, Cemre; Canadinc, Demircan
2014-09-01
A multi-scale modeling approach was applied to predict the impact response of a strain rate sensitive high-manganese austenitic steel. The roles of texture, geometry and strain rate sensitivity were successfully taken into account all at once by coupling crystal plasticity and finite element (FE) analysis. Specifically, crystal plasticity was utilized to obtain the multi-axial flow rule at different strain rates based on the experimental deformation response under uniaxial tensile loading. The equivalent stress - equivalent strain response was then incorporated into the FE model for the sake of a more representative hardening rule under impact loading. The current results demonstrate that reliable predictions can be obtained by proper coupling of crystal plasticity and FE analysis even if the experimental flow rule of the material is acquired under uniaxial loading and at moderate strain rates that are significantly slower than those attained during impact loading. Furthermore, the current findings also demonstrate the need for an experiment-based multi-scale modeling approach for the sake of reliable predictions of the impact response.
Biological adaptive control model: a mechanical analogue of multi-factorial bone density adaptation.
Davidson, Peter L; Milburn, Peter D; Wilson, Barry D
2004-03-21
The mechanism by which bone adapts to everyday demands needs to be better understood to gain insight into situations in which the musculoskeletal system is perturbed. This paper offers a novel multi-factorial mathematical model of bone density adaptation which combines previous single-factor models into a single adaptation system as a means of gaining this insight. Unique aspects of the model include provision for interaction between factors and an estimation of the relative contribution of each factor. This interacting system is considered analogous to a Newtonian mechanical system, and the governing response equation is derived as a linear version of the adaptation process. The transient solution to sudden environmental change is found to be exponential or oscillatory depending on the balance between cellular activation and deactivation frequencies.
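The mechanical analogue described above behaves like a second-order system whose step response is exponential or oscillatory depending on its damping; the sketch below illustrates that qualitative distinction with assumed coefficients, not the paper's calibrated bone-adaptation parameters.

```python
# Hedged sketch: a second-order "mechanical analogue" step response, exponential
# (overdamped) or oscillatory (underdamped) depending on the damping ratio.
import numpy as np
from scipy.integrate import solve_ivp

def density_response(t, y, zeta, omega, target=1.0):
    rho, drho = y                    # density deviation and its rate of change
    return [drho, -2 * zeta * omega * drho - omega**2 * (rho - target)]

t = np.linspace(0, 20, 200)
for zeta in (0.3, 1.5):              # underdamped (oscillatory) vs. overdamped
    sol = solve_ivp(density_response, (0, 20), [0.0, 0.0], t_eval=t, args=(zeta, 1.0))
    print(f"zeta={zeta}: peak={sol.y[0].max():.2f} (overshoot if > 1)")
```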
ERIC Educational Resources Information Center
Woolcott, Geoff; Yeigh, Tony
2015-01-01
This article reports on initial findings, including the mathematics components, of a multi-institutional Science, Technology, Engineering, and Mathematics (STEM) project, "It's part of my life: Engaging university and community to enhance science and mathematics education." This project is focussed on improving the scientific and…
Multi-Scale Modeling of an Integrated 3D Braided Composite with Applications to Helicopter Arm
NASA Astrophysics Data System (ADS)
Zhang, Diantang; Chen, Li; Sun, Ying; Zhang, Yifan; Qian, Kun
2017-10-01
A study is conducted with the aim of developing a multi-scale analytical method for designing a composite helicopter arm with a three-dimensional (3D) five-directional braided structure. Based on an analysis of the 3D braided microstructure, a multi-scale finite element model is developed. Finite element analysis of the load capacity of the 3D five-directional braided composite helicopter arm is carried out using the software ABAQUS/Standard. The influences of the braiding angle and loading condition on the stress and strain distribution of the helicopter arm are simulated. The results show that the proposed multi-scale method is capable of accurately predicting the mechanical properties of 3D braided composites, validated by comparison with the stress-strain curves of meso-scale RVCs. Furthermore, it is found that the braiding angle is an important factor affecting the mechanical properties of the 3D five-directional braided composite helicopter arm. Based on the optimized structural parameters, a nearly net-shaped composite helicopter arm is fabricated using a novel resin transfer moulding (RTM) process.
Neural network modeling for surgical decisions on traumatic brain injury patients.
Li, Y C; Liu, L; Chiu, W T; Jian, W S
2000-01-01
Computerized medical decision support systems have been a major research topic in recent years. Intelligent computer programs were implemented to aid physicians and other medical professionals in making difficult medical decisions. This report compares three different mathematical models for building a traumatic brain injury (TBI) medical decision support system (MDSS). These models were developed based on a large TBI patient database. The MDSS accepts a set of patient data, such as the type of skull fracture, Glasgow Coma Scale (GCS) score, and episodes of convulsion, and returns the chance that a neurosurgeon would recommend open-skull surgery for the patient. The three mathematical models described in this report are a logistic regression model, a multi-layer perceptron (MLP) neural network, and a radial-basis-function (RBF) neural network. Of the 12,640 patients selected from the database, a randomly drawn 9,480 cases were used as the training group to develop and train the models; the other 3,160 cases formed the validation group used to evaluate model performance. We used sensitivity, specificity, the area under the receiver operating characteristic (ROC) curve, and calibration curves as indicators of how accurately these models predict a neurosurgeon's decision on open-skull surgery. The results showed that, assuming equal importance of sensitivity and specificity, the logistic regression model had a (sensitivity, specificity) of (73%, 68%), compared to (80%, 80%) for the RBF model and (88%, 80%) for the MLP model. The resulting areas under the ROC curve for the logistic regression, RBF and MLP neural networks are 0.761, 0.880 and 0.897, respectively (P < 0.05). Among these models, logistic regression has noticeably poorer calibration. This study demonstrated the feasibility of applying neural networks as the mechanism for TBI decision support systems based on clinical databases. The results also suggest that neural networks may be a better solution for complex, non-linear medical decision support systems than conventional statistical techniques such as logistic regression.
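For the logistic-regression baseline and the ROC-based comparison described above, a minimal sketch is given below on synthetic stand-in data; the features, labels, and split are assumptions and bear no relation to the TBI database.

```python
# Hedged sketch: a logistic-regression classifier scored with ROC AUC on
# synthetic stand-in features (not TBI data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 4))              # stand-ins for clinical features
y = (X @ [1.5, -1.0, 0.8, 0.3] + rng.normal(0, 1, 2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print("validation AUC:", round(auc, 3))
```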
Validation of mathematical model for CZ process using small-scale laboratory crystal growth furnace
NASA Astrophysics Data System (ADS)
Bergfelds, Kristaps; Sabanskis, Andrejs; Virbulis, Janis
2018-05-01
The present work focuses on the modelling of a small-scale laboratory NaCl-RbCl crystal growth furnace. First steps towards fully transient simulations are taken in the form of stationary simulations that deal with the optimization of material properties to match the model to experimental conditions. For this purpose, simulation software primarily used for modelling the industrial-scale silicon crystal growth process was successfully applied. Finally, transient simulations of the crystal growth are presented, showing sufficient agreement with experimental results.
On mathematical modelling of flameless combustion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mancini, Marco; Schwoeppe, Patrick; Weber, Roman
2007-07-15
A further analysis of the IFRF semi-industrial-scale experiments on flameless (mild) combustion of natural gas is carried out. The experimental burner features a strong oxidizer jet and two weak natural gas jets. Numerous publications have shown the inability of various RANS-based mathematical models to predict the structure of the weak jet. We show that the failure lies in erroneous predictions of the entrainment and is therefore not related to any chemistry submodels, as had been postulated.
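Since the reported model failure is traced to entrainment, a back-of-the-envelope estimate of how much ambient fluid a free turbulent jet entrains provides useful context. The sketch below uses the classical Ricou-Spalding correlation with invented jet parameters; it is not the RANS model analysed in the paper.

```python
import numpy as np

# Ricou-Spalding correlation for the mass flow carried by a free turbulent round jet:
#   m(x) / m0 = 0.32 * (x / d0) * sqrt(rho_ambient / rho_jet),  valid for x/d0 >~ 6.
# Nozzle diameter, mass flow, and density ratio below are invented example values.
D0 = 0.025               # nozzle diameter (m)
M0 = 0.01                # nozzle mass flow (kg/s)
RHO_RATIO = 1.2 / 0.7    # ambient density / jet density (warm fuel jet, assumed)

def jet_mass_flow(x):
    return 0.32 * (x / D0) * np.sqrt(RHO_RATIO) * M0

for x in (0.2, 0.5, 1.0):    # axial distance from the nozzle (m)
    m = jet_mass_flow(x)
    print(f"x = {x:.1f} m: jet carries ~{m / M0:.0f}x its nozzle mass flow "
          f"({m - M0:.3f} kg/s entrained)")
```

Even modest relative errors in this entrained mass flow change the local composition of the weak jet substantially, which is consistent with the abstract's conclusion that entrainment, not chemistry, drives the prediction errors.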
Development and Validation of the Mathematical Resilience Scale
ERIC Educational Resources Information Center
Kooken, Janice; Welsh, Megan E.; McCoach, D. Betsy; Johnston-Wilder, Sue; Lee, Clare
2016-01-01
The Mathematical Resilience Scale measures students' attitudes toward studying mathematics, using three correlated factors: Value, Struggle, and Growth. The Mathematical Resilience Scale was developed and validated using exploratory and confirmatory factor analyses across three samples. Results provide a new approach to gauge the likelihood of…
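A minimal factor-analysis sketch, assuming synthetic item responses and three hypothetical factors, shows the kind of loading structure the exploratory step checks for. The data, item count, and loadings are invented and unrelated to the actual Mathematical Resilience Scale items.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic Likert-style responses for 9 items (3 hypothetical items per factor);
# purely illustrative, not the real scale data.
rng = np.random.default_rng(0)
n = 300
latent = rng.normal(size=(n, 3))                  # stand-ins for Value, Struggle, Growth
loadings = np.zeros((3, 9))
for f in range(3):
    loadings[f, f * 3:(f + 1) * 3] = 0.8          # each factor loads on 3 items
items = latent @ loadings + 0.5 * rng.normal(size=(n, 9))

fa = FactorAnalysis(n_components=3, rotation="varimax")
fa.fit(items)
print(np.round(fa.components_, 2))  # estimated loadings: items should cluster by factor
```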
Structural impact and crashworthiness. Volume 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morton, J.
1984-01-01
These papers were presented at a conference on materials testing. The topics covered are mathematical modelling of materials, impact tests on pipes, and drop tests on scale models of lead-shielded containers for radioactive materials.
Evaluation of a Multi-Axial, Temperature, and Time Dependent (MATT) Failure Model
NASA Technical Reports Server (NTRS)
Richardson, D. E.; Anderson, G. L.; Macon, D. J.; Rudolphi, Michael (Technical Monitor)
2002-01-01
To obtain a better understanding of the response of the structural adhesives used in the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle, an extensive effort has been conducted to characterize in detail the failure properties of these adhesives. This effort involved the development of a failure model that includes the effects of multi-axial loading, temperature, and time. An understanding of the effects of these parameters on failure of the adhesive is crucial to understanding and predicting the safety of the RSRM nozzle. This paper documents the use of the newly developed multi-axial, temperature, and time dependent (MATT) failure model for modeling failure of the adhesives TIGA 321, EA913NA, and EA946. The development of the mathematical failure model from constant-load-rate normal and shear test data is presented. The accuracy of the failure model is verified through comparisons between predictions and measured creep and multi-axial failure data. The verification indicates that the failure model performs well over a wide range of conditions (loading, temperature, and time) for the three adhesives. The failure criterion is shown to be accurate through the glass transition for the adhesive EA946. Though this failure model has been developed and evaluated with adhesives, the concepts are applicable to other isotropic materials.
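As a hedged illustration of how a multi-axial, temperature-, and time-dependent criterion can be evaluated in practice, the sketch below combines an assumed effective stress with a power-law strength that decays with temperature-reduced time via an Arrhenius shift factor. The functional forms and constants are placeholders, not the calibrated MATT model from the paper.

```python
import numpy as np

# Illustrative constants only; these are not the calibrated MATT parameters.
SIGMA_REF = 40.0     # strength (MPa) at the reference reduced time
T_REF = 293.15       # reference temperature (K)
T_FAIL_REF = 10.0    # reference time to failure (s)
M = 0.08             # power-law slope of strength versus reduced time
EA_OVER_R = 6000.0   # Arrhenius activation energy over the gas constant (K)

def shift_factor(temperature):
    """Arrhenius time-temperature shift factor a_T (assumed form)."""
    return np.exp(EA_OVER_R * (1.0 / temperature - 1.0 / T_REF))

def strength(time_s, temperature):
    """Time- and temperature-dependent strength evaluated at reduced time t / a_T."""
    reduced_t = time_s / shift_factor(temperature)
    return SIGMA_REF * (reduced_t / T_FAIL_REF) ** (-M)

def effective_stress(sigma_n, tau, alpha=1.7):
    """A simple multi-axial effective stress; alpha weights shear vs. normal stress (assumed)."""
    return np.sqrt(sigma_n**2 + alpha * tau**2)

sig_eff = effective_stress(sigma_n=20.0, tau=16.0)
for temperature in (253.15, 293.15, 333.15):
    s = strength(100.0, temperature)
    verdict = "safe" if sig_eff < s else "failure"
    print(f"T = {temperature - 273.15:5.1f} C: stress {sig_eff:.1f} MPa vs "
          f"strength {s:.1f} MPa -> {verdict}")
```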
The Teachers Academy for Mathematics and Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1992-01-01
In his State of the Union address on January 31, 1990, President Bush set a goal for US students to be number one in the world in mathematics and science achievement by the year 2000. The Teachers Academy for Mathematics and Science in Chicago is an experiment of unprecedented boldness and scale that can provide a means to the President's goal, both for the Chicago area and as a national model. This document covers organization and governance, program activities, future training goals, and evaluation programs.
Evaluating multi-level models to test occupancy state responses of Plethodontid salamanders
Kroll, Andrew J.; Garcia, Tiffany S.; Jones, Jay E.; Dugger, Catherine; Murden, Blake; Johnson, Josh; Peerman, Summer; Brintz, Ben; Rochelle, Michael
2015-01-01
Plethodontid salamanders are diverse and widely distributed taxa and play critical roles in ecosystem processes. Due to salamander use of structurally complex habitats, and because only a portion of a population is available for sampling, evaluation of sampling designs and estimators is critical to provide strong inference about Plethodontid ecology and responses to conservation and management activities. We conducted a simulation study to evaluate the effectiveness of multi-scale and hierarchical single-scale occupancy models in the context of a Before-After Control-Impact (BACI) experimental design with multiple levels of sampling. Also, we fit the hierarchical single-scale model to empirical data collected for Oregon slender and Ensatina salamanders across two years on 66 forest stands in the Cascade Range, Oregon, USA. All models were fit within a Bayesian framework. Estimator precision in both models improved with increasing numbers of primary and secondary sampling units, underscoring the potential gains accrued when adding secondary sampling units. Both models showed evidence of estimator bias at low detection probabilities and low sample sizes; this problem was particularly acute for the multi-scale model. Our results suggested that sufficient sample sizes at both the primary and secondary sampling levels could ameliorate this issue. Empirical data indicated Oregon slender salamander occupancy was associated strongly with the amount of coarse woody debris (posterior mean = 0.74; SD = 0.24); Ensatina occupancy was not associated with amount of coarse woody debris (posterior mean = -0.01; SD = 0.29). Our simulation results indicate that either model is suitable for use in an experimental study of Plethodontid salamanders provided that sample sizes are sufficiently large. However, hierarchical single-scale and multi-scale models describe different processes and estimate different parameters. As a result, we recommend careful consideration of study questions and objectives prior to sampling data and fitting models.
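To make the single-scale occupancy machinery concrete, the sketch below simulates detection histories for 66 sites and recovers occupancy and detection probabilities by maximum likelihood. It is a deliberately simplified, non-Bayesian stand-in for the hierarchical models evaluated in the study, with invented parameter values.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Simulate single-scale occupancy data: S sites (primary units), K repeat surveys
# (secondary units). Parameter values are illustrative, not from the study.
rng = np.random.default_rng(1)
S, K, psi_true, p_true = 66, 4, 0.6, 0.3
z = rng.binomial(1, psi_true, size=S)      # latent occupancy state per site
y = rng.binomial(K, p_true * z)            # detections per site (0 if unoccupied)

def neg_log_lik(params):
    psi, p = expit(params)                             # keep probabilities in (0, 1)
    # Binomial coefficient omitted: it is constant in the parameters.
    lik_occ = psi * (p ** y) * ((1 - p) ** (K - y))    # occupied-site contribution
    lik_unocc = (1 - psi) * (y == 0)                   # unoccupied sites never detect
    return -np.sum(np.log(lik_occ + lik_unocc))

fit = minimize(neg_log_lik, x0=[0.0, 0.0])
psi_hat, p_hat = expit(fit.x)
print(f"true psi={psi_true}, p={p_true}; MLE psi={psi_hat:.2f}, p={p_hat:.2f}")
```

Repeating such simulations across grids of psi, p, and sample sizes is essentially how the bias and precision comparisons described in the abstract are carried out, albeit here without the multi-scale structure or Bayesian estimation.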
Patient-Specific, Multi-Scale Modeling of Neointimal Hyperplasia in Vein Grafts
Donadoni, Francesca; Pichardo-Almarza, Cesar; Bartlett, Matthew; Dardik, Alan; Homer-Vanniasinkam, Shervanthi; Díaz-Zuccarini, Vanessa
2017-01-01
Neointimal hyperplasia is amongst the major causes of failure of bypass grafts. The disease progression varies from patient to patient due to a range of different factors. In this paper, a mathematical model will be used to understand neointimal hyperplasia in individual patients, combining information from biological experiments and patient-specific data to analyze some aspects of the disease, particularly with regard to mechanical stimuli due to shear stresses on the vessel wall. By combining a biochemical model of cell growth and a patient-specific computational fluid dynamics analysis of blood flow in the lumen, remodeling of the blood vessel is studied by means of a novel computational framework. The framework was used to analyze two vein graft bypasses from one patient: a femoro-popliteal and a femoro-distal bypass. The remodeling of the vessel wall and analysis of the flow for each case was then compared to clinical data and discussed as a potential tool for a better understanding of the disease. Simulation results from this first computational approach showed an overall agreement on the locations of hyperplasia in these patients and demonstrated the potential of using new integrative modeling tools to understand disease progression. PMID:28458640
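The coupling idea, shear stress from the flow field feeding a growth law for the wall, can be caricatured with a single ODE. The sketch below grows intimal thickness logistically at a rate that increases as wall shear stress falls below a threshold; the functional form and every constant are invented, and the paper's actual biochemical model is far richer.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy ODE relating low wall shear stress (WSS) to intimal thickening; all parameters
# and the functional form are illustrative, not the paper's model.
TAU_0 = 0.5      # WSS threshold (Pa) below which growth is stimulated (assumed)
K_GROWTH = 0.02  # growth-rate constant (1/day) (assumed)
H_MAX = 0.8      # maximum intimal thickness (mm) (assumed)

def dh_dt(t, h, tau_w):
    stimulus = max(0.0, 1.0 - tau_w / TAU_0)           # stronger at low shear stress
    return [K_GROWTH * stimulus * h[0] * (1.0 - h[0] / H_MAX)]

for tau_w in (0.1, 0.4, 1.0):                          # Pa, e.g. taken from a CFD solution
    sol = solve_ivp(dh_dt, (0.0, 365.0), [0.05], args=(tau_w,))
    print(f"WSS = {tau_w:.1f} Pa -> intimal thickness after 1 year: {sol.y[0, -1]:.2f} mm")
```

In the patient-specific framework, the shear-stress input would come from the CFD solution on the reconstructed graft geometry and would itself be updated as the lumen narrows.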
Simulation of the optical coating deposition
NASA Astrophysics Data System (ADS)
Grigoriev, Fedor; Sulimov, Vladimir; Tikhonravov, Alexander
2018-04-01
A brief review of the mathematical methods of thin-film growth simulation and the results of their application is presented. Both full-atomistic and multi-scale approaches used in studies of thin-film deposition are considered. The simulated structural parameters, including density profiles, roughness, porosity, and point-defect concentration, are discussed. The application of quantum-level methods to the simulation of thin-film electronic and optical properties is considered. Special attention is paid to the simulation of silicon dioxide thin films.
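As a taste of the atomistic end of such growth simulations, the sketch below runs a minimal 1-D ballistic deposition model and reports film thickness and surface roughness. Lattice growth models of this kind are only a cartoon of the molecular-dynamics and multi-scale schemes reviewed in the paper.

```python
import numpy as np

# Minimal 1-D ballistic deposition: each particle falls onto a random column and
# sticks at the first site where it touches the surface (nearest neighbours included).
rng = np.random.default_rng(0)
WIDTH, N_PARTICLES = 200, 50_000
h = np.zeros(WIDTH, dtype=int)          # column heights

for _ in range(N_PARTICLES):
    i = rng.integers(WIDTH)
    left, right = h[(i - 1) % WIDTH], h[(i + 1) % WIDTH]   # periodic boundaries
    h[i] = max(h[i] + 1, left, right)

print(f"mean film thickness: {h.mean():.1f} layers")
print(f"surface roughness (std of heights): {h.std():.2f} layers")
```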
A decision support for an integrated multi-scale analysis of irrigation: DSIRR.
Bazzani, Guido M
2005-12-01
The paper presents a decision support system designed to conduct an economic-environmental assessment of agricultural activity with a focus on irrigation, called 'Decision Support for IRRigated Agriculture' (DSIRR). The program describes the catchment-scale effect of choices taken at the micro scale by independent actors, the farmers, by simulating their decision processes. The decision support (DS) system is conceived as a support tool for participatory water policies, as requested by the Water Framework Directive, and it aims to analyze alternatives in production and technology under different market, policy, and climate conditions. The tool uses data and models, provides a graphical user interface, and can incorporate the decision makers' own insights. Heterogeneity in preferences is accommodated by assuming that irrigators optimize personal multi-attribute utility functions subject to a set of constraints. Consideration of agronomic and engineering aspects allows an accurate description of irrigation. Mathematical programming techniques are applied to find solutions. The program has been applied in the river Po basin (northern Italy) to analyze the impact of a pricing policy in a context of irrigation technology innovation. Water demand functions and elasticity to water price have been estimated. The results demonstrate that different areas and systems react to the same policy in quite different ways. While pricing in the annual cropping system seems effective at saving the resource, at the cost of impeding the Water Agencies' cost recovery, the same policy has the opposite effect in the perennial fruit system, which shows an inelastic response to water price. The multidimensional assessment clarified the trade-offs among conflicting economic, social, and environmental objectives, thus generating valuable information for designing a more tailored mix of measures.
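A stylised version of the farm-level optimization at the core of such tools can be written as a small linear programme. The sketch below allocates land between one irrigated and one rainfed crop under land and water constraints and traces how water demand responds to the water price; every figure is invented, and the real DSIRR model is multi-attribute and far more detailed.

```python
from scipy.optimize import linprog

# Toy farm-level programme in the spirit of DSIRR: allocate 50 ha between an irrigated
# crop and a rainfed crop to maximise gross margin, then trace the water demand curve.
LAND_HA = 50.0
WATER_M3 = 200_000.0
MARGIN_IRR, WATER_IRR = 3000.0, 5000.0   # EUR/ha and m3/ha for the irrigated crop (invented)
MARGIN_DRY = 600.0                       # EUR/ha for the rainfed crop (invented)

def farm_plan(water_price):
    # Decision variables: [ha irrigated, ha rainfed]; linprog minimises, so negate margins.
    c = [-(MARGIN_IRR - water_price * WATER_IRR), -MARGIN_DRY]
    A_ub = [[1.0, 1.0],         # land constraint
            [WATER_IRR, 0.0]]   # water constraint
    res = linprog(c, A_ub=A_ub, b_ub=[LAND_HA, WATER_M3], bounds=[(0, None)] * 2)
    irr_ha, dry_ha = res.x
    return irr_ha, dry_ha, WATER_IRR * irr_ha

for price in (0.10, 0.30, 0.60):         # EUR per m3
    irr, dry, demand = farm_plan(price)
    print(f"price {price:.2f} EUR/m3 -> irrigated {irr:.0f} ha, rainfed {dry:.0f} ha, "
          f"water demand {demand:,.0f} m3")
```

The stepwise demand this toy model produces, unchanged until a threshold price and then collapsing, mirrors the inelastic-then-switch behaviour the abstract reports for the perennial system, whereas a richer crop mix and multi-attribute utility yield smoother, system-specific demand functions.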