ERIC Educational Resources Information Center
Sella, Francesco; Sader, Elie; Lolliot, Simon; Cohen Kadosh, Roi
2016-01-01
Recent studies have highlighted the potential role of basic numerical processing in the acquisition of numerical and mathematical competences. However, it is debated whether high-level numerical skills and mathematics depend specifically on basic numerical representations. In this study, mathematicians and nonmathematicians performed a basic number line task, which required mapping positive and negative numbers on a physical horizontal line, and has been shown to correlate with more advanced numerical abilities and mathematical achievement. We found that mathematicians were more accurate than nonmathematicians when mapping positive, but not negative, numbers, which are considered numerical primitives and cultural artifacts, respectively. Moreover, performance on positive number mapping could predict whether one is a mathematician or not, and was mediated by more advanced mathematical skills. This finding might suggest a link between basic and advanced mathematical skills. However, when we included visuospatial skills, as measured by the block design subtest, the mediation analysis revealed that the relation between performance in the number line task and group membership was explained by non-numerical visuospatial skills. These results demonstrate that the relation between basic, even specific, numerical skills and advanced mathematical achievement can be artifactual and explained by visuospatial processing. PMID:26913930
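The mediation logic in the abstract above (a basic-skill predictor whose apparent effect disappears once visuospatial ability is controlled for) can be illustrated with a toy simulation. All variable names, effect sizes, and data below are invented for illustration; nothing here comes from the study itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical variables (not the study's data): visuospatial skill V
# drives both number-line mapping accuracy A and math expertise Y.
V = rng.normal(size=n)
A = 0.7 * V + rng.normal(scale=0.7, size=n)
Y = 0.6 * V + rng.normal(scale=0.8, size=n)

def ols(y, *xs):
    """Return the OLS slopes (intercept dropped) of y on the given predictors."""
    X = np.column_stack([np.ones_like(y)] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

(b_total,) = ols(Y, A)      # A alone appears to predict Y ...
b_A, b_V = ols(Y, A, V)     # ... but controlling for V absorbs the effect

print(f"A alone: {b_total:.2f}; A given V: {b_A:.2f}; V given A: {b_V:.2f}")
```

With a common cause V of both A and Y, A predicts Y on its own, but its coefficient collapses toward zero once V enters the model: the signature pattern the abstract describes.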
Typical and Atypical Development of Basic Numerical Skills in Elementary School
ERIC Educational Resources Information Center
Landerl, Karin; Kolle, Christina
2009-01-01
Deficits in basic numerical processing have been identified as a central and potentially causal problem in developmental dyscalculia; however, so far not much is known about the typical and atypical development of such skills. This study assessed basic number skills cross-sectionally in 262 typically developing and 51 dyscalculic children in…
Cognitive correlates of performance in advanced mathematics.
Wei, Wei; Yuan, Hongbo; Chen, Chuansheng; Zhou, Xinlin
2012-03-01
Much research has been devoted to understanding cognitive correlates of elementary mathematics performance, but little such research has been done for advanced mathematics (e.g., modern algebra, statistics, and mathematical logic). To promote mathematical knowledge among college students, it is necessary to understand what factors (including cognitive factors) are important for acquiring advanced mathematics. We recruited 80 undergraduates from four universities in Beijing. The current study investigated the associations between students' performance on a test of advanced mathematics and a battery of 17 cognitive tasks on basic numerical processing, complex numerical processing, spatial abilities, language abilities, and general cognitive processing. The results showed that spatial abilities were significantly correlated with performance in advanced mathematics after controlling for other factors. In addition, certain language abilities (i.e., comprehension of words and sentences) also made unique contributions. In contrast, basic numerical processing and computation were generally not correlated with performance in advanced mathematics. Results suggest that spatial abilities and language comprehension, but not basic numerical processing, may play an important role in advanced mathematics. These results are discussed in terms of their theoretical significance and practical implications. ©2011 The British Psychological Society.
ERIC Educational Resources Information Center
Moeller, K.; Pixner, S.; Zuber, J.; Kaufmann, L.; Nuerk, H. C.
2011-01-01
It is assumed that basic numerical competencies are important building blocks for more complex arithmetic skills. The current study aimed at evaluating this interrelation in a longitudinal approach. It was investigated whether first graders' performance in basic numerical tasks in general as well as specific processes involved (e.g., place-value…
Kuhn, Jörg-Tobias; Ise, Elena; Raddatz, Julia; Schwenk, Christin; Dobel, Christian
2016-09-01
Deficits in basic numerical skills, calculation, and working memory have been found in children with developmental dyscalculia (DD) as well as children with attention-deficit/hyperactivity disorder (ADHD). This paper investigates cognitive profiles of children with DD and/or ADHD symptoms (AS) in a double dissociation design to obtain a better understanding of the comorbidity of DD and ADHD. Children with DD-only (N = 33), AS-only (N = 16), comorbid DD+AS (N = 20), and typically developing controls (TD, N = 40) were assessed on measures of basic numerical processing, calculation, working memory, processing speed, and neurocognitive measures of attention. Children with DD (DD, DD+AS) showed deficits in all basic numerical skills, calculation, working memory, and sustained attention. Children with AS (AS, DD+AS) displayed more selective difficulties in dot enumeration, subtraction, verbal working memory, and processing speed. Also, they generally performed more poorly in neurocognitive measures of attention, especially alertness. Children with DD+AS mostly showed an additive combination of the deficits associated with DD-only and AS-only, except for subtraction tasks, in which they were less impaired than expected. DD and AS appear to be related to largely distinct patterns of cognitive deficits, which are present in combination in children with DD+AS.
Developmental dyscalculia and basic numerical capacities: a study of 8-9-year-old students.
Landerl, Karin; Bevan, Anna; Butterworth, Brian
2004-09-01
Thirty-one 8- and 9-year-old children selected for dyscalculia, reading difficulties, or both were compared to controls on a range of basic number processing tasks. Children in the dyscalculia-only group had impaired performance on the tasks despite high-average performance on tests of IQ, vocabulary, and working memory. Children with reading disability were mildly impaired only on tasks that involved articulation, while children with both disorders showed a pattern of numerical disability similar to that of the dyscalculic group, with no special features consequent on their reading or language deficits. We conclude that dyscalculia is the result of specific disabilities in basic numerical processing, rather than the consequence of deficits in other cognitive abilities.
Insights into numerical cognition: considering eye-fixations in number processing and arithmetic.
Mock, J; Huber, S; Klein, E; Moeller, K
2016-05-01
Considering eye-fixation behavior is standard in reading research to investigate underlying cognitive processes. However, in numerical cognition research eye-tracking is used less often and less systematically. Nevertheless, we identified over 40 studies on this topic from the last 40 years, with an increase of eye-tracking studies on numerical cognition during the last decade. Here, we review and discuss these empirical studies to evaluate the added value of eye-tracking for the investigation of number processing. Our literature review revealed that the way eye-fixation behavior is considered in numerical cognition research ranges from investigating basic perceptual aspects of processing non-symbolic and symbolic numbers, through assessing the common representational space of numbers and space, to evaluating the influence of characteristics of the base-10 place-value structure of Arabic numbers and executive control on number processing. Apart from basic results such as reading times of numbers increasing with their magnitude, studies revealed that number processing can influence domain-general processes such as attention shifting, but also the other way round. Domain-general processes such as cognitive control were found to affect number processing. In summary, eye-fixation behavior allows for new insights into both domain-specific and domain-general processes involved in number processing. Based thereon, a processing model of the temporal dynamics of numerical cognition is postulated, which distinguishes an early stage of stimulus-driven bottom-up processing from later, more top-down controlled stages. Furthermore, perspectives for eye-tracking research in numerical cognition are discussed to emphasize the potential of this methodology for advancing our understanding of numerical cognition.
ERIC Educational Resources Information Center
Simon, T. J.; Takarae, Y.; DeBoer, T.; McDonald-McGinn, D. M.; Zackai, E. H.; Ross, J. L.
2008-01-01
Children with one of two genetic disorders (chromosome 22q11.2 deletion syndrome and Turner syndrome), as well as typically developing controls, participated in three cognitive processing experiments. Two experiments were designed to test cognitive processes involved in basic aspects of numerical cognition. The third was a test of simple manual motor…
Ashkenazi, Sarit
2018-02-05
Current theoretical approaches suggest that mathematical anxiety (MA) manifests itself as a weakness in quantity manipulations. This study is the first to examine automatic versus intentional processing of numerical information using the numerical Stroop paradigm in participants with high MA. To manipulate anxiety levels, we combined the numerical Stroop task with an affective priming paradigm. We took a group of college students with high MA and compared their performance to a group of participants with low MA. Under low anxiety conditions (neutral priming), participants with high MA showed relatively intact number processing abilities. However, under high anxiety conditions (mathematical priming), participants with high MA showed (1) higher processing of the non-numerical irrelevant information, which aligns with the theoretical view regarding deficits in selective attention in anxiety and (2) an abnormal numerical distance effect. These results demonstrate that abnormal, basic numerical processing in MA is context related.
Standardized Radiation Shield Design Methods: 2005 HZETRN
NASA Technical Reports Server (NTRS)
Wilson, John W.; Tripathi, Ram K.; Badavi, Francis F.; Cucinotta, Francis A.
2006-01-01
Research conducted by the Langley Research Center through 1995, resulting in the HZETRN code, provides the current basis for shield design methods according to NASA STD-3000 (2005). With this new prominence, the database, basic numerical procedures, and algorithms are being re-examined, with new methods of verification and validation being implemented, to capture a well-defined algorithm for engineering design processes to be used in this early development phase of the Bush initiative. This process provides the methodology to transform the 1995 HZETRN research code into the 2005 HZETRN engineering code to be available for these early design processes. In this paper, we review the basic derivations, including new corrections to the codes to ensure improved numerical stability, and provide benchmarks for code verification.
Basic and Exceptional Calculation Abilities in a Calculating Prodigy: A Case Study.
ERIC Educational Resources Information Center
Pesenti, Mauro; Seron, Xavier; Samson, Dana; Duroux, Bruno
1999-01-01
Describes the basic and exceptional calculation abilities of a calculating prodigy whose performances were investigated in single- and multi-digit number multiplication, numerical comparison, raising of powers, and short-term memory tasks. Shows how his highly efficient long-term memory storage and retrieval processes, knowledge of calculation…
How Math Anxiety Relates to Number-Space Associations.
Georges, Carrie; Hoffmann, Danielle; Schiltz, Christine
2016-01-01
Given the considerable prevalence of math anxiety, it is important to identify the factors contributing to it in order to improve mathematical learning. Research on math anxiety typically focusses on the effects of more complex arithmetic skills. Recent evidence, however, suggests that deficits in basic numerical processing and spatial skills also constitute potential risk factors of math anxiety. Given these observations, we determined whether math anxiety also depends on the quality of spatial-numerical associations. Behavioral evidence for a tight link between numerical and spatial representations is given by the SNARC (spatial-numerical association of response codes) effect, characterized by faster left-/right-sided responses for small/large digits respectively in binary classification tasks. We compared the strength of the SNARC effect between high and low math anxious individuals using the classical parity judgment task in addition to evaluating their spatial skills, arithmetic performance, working memory and inhibitory control. Greater math anxiety was significantly associated with stronger spatio-numerical interactions. This finding adds to the recent evidence supporting a link between math anxiety and basic numerical abilities and strengthens the idea that certain characteristics of low-level number processing such as stronger number-space associations constitute a potential risk factor of math anxiety.
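The SNARC effect described above is conventionally quantified by computing, per digit, the difference between right-hand and left-hand response times (dRT) and regressing dRT on digit magnitude; a negative slope indicates the typical small-left/large-right association. The reaction times below are invented for illustration and do not come from this study.

```python
import numpy as np

# Hypothetical mean RTs (ms) per digit for left- and right-hand responses,
# showing a SNARC pattern: the left hand is faster for small digits,
# the right hand is faster for large digits.
digits   = np.array([1, 2, 3, 4, 6, 7, 8, 9])
rt_left  = np.array([520, 525, 530, 538, 545, 552, 560, 566], dtype=float)
rt_right = np.array([560, 552, 545, 540, 532, 527, 521, 515], dtype=float)

# dRT > 0: left hand faster for that digit; dRT < 0: right hand faster.
drt = rt_right - rt_left

# Least-squares slope of dRT on digit magnitude; np.polyfit returns the
# highest-order coefficient first, so the slope comes before the intercept.
slope, intercept = np.polyfit(digits, drt, 1)
print(f"SNARC slope: {slope:.1f} ms per digit unit")
```

In analyses of this kind, a more negative slope is read as a stronger number-space association, which is the quantity the abstract relates to math anxiety.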
Numerical predictors of arithmetic success in grades 1-6.
Lyons, Ian M; Price, Gavin R; Vaessen, Anniek; Blomert, Leo; Ansari, Daniel
2014-09-01
Math relies on mastery and integration of a wide range of simpler numerical processes and concepts. Recent work has identified several numerical competencies that predict variation in math ability. We examined the unique relations between eight basic numerical skills and early arithmetic ability in a large sample (N = 1391) of children across grades 1-6. In grades 1-2, children's ability to judge the relative magnitude of numerical symbols was most predictive of early arithmetic skills. The unique contribution of children's ability to assess ordinality in numerical symbols steadily increased across grades, overtaking all other predictors by grade 6. We found no evidence that children's ability to judge the relative magnitude of approximate, nonsymbolic numbers was uniquely predictive of arithmetic ability at any grade. Overall, symbolic number processing was more predictive of arithmetic ability than nonsymbolic number processing, though the relative importance of symbolic number ability appears to shift from cardinal to ordinal processing. © 2014 John Wiley & Sons Ltd.
Basic numerical competences in large-scale assessment data: Structure and long-term relevance.
Hirsch, Stefa; Lambert, Katharina; Coppens, Karien; Moeller, Korbinian
2018-03-01
Basic numerical competences are seen as building blocks for later numerical and mathematical achievement. The current study aimed at investigating the structure of early numeracy reflected by different basic numerical competences in kindergarten and its predictive value for mathematical achievement 6 years later using data from large-scale assessment. This allowed analyses based on considerably large sample sizes (N > 1700). A confirmatory factor analysis indicated that a model differentiating five basic numerical competences at the end of kindergarten fitted the data better than a one-factor model of early numeracy representing a comprehensive number sense. In addition, these basic numerical competences were observed to reliably predict performance in a curricular mathematics test in Grade 6 even after controlling for influences of general cognitive ability. Thus, our results indicated a differentiated view on early numeracy considering basic numerical competences in kindergarten reflected in large-scale assessment data. Consideration of different basic numerical competences allows for evaluating their specific predictive value for later mathematical achievement but also mathematical learning difficulties. Copyright © 2017 Elsevier Inc. All rights reserved.
Spatial and numerical processing in children with high and low visuospatial abilities.
Crollen, Virginie; Noël, Marie-Pascale
2015-04-01
In the literature on numerical cognition, a strong association between numbers and space has been repeatedly demonstrated. However, only a few recent studies have been devoted to examine the consequences of low visuospatial abilities on calculation processing. In this study, we wanted to investigate whether visuospatial weakness may affect pure spatial processing as well as basic numerical reasoning. To do so, the performances of children with high and low visuospatial abilities were directly compared on different spatial tasks (the line bisection and Simon tasks) and numerical tasks (the number bisection, number-to-position, and numerical comparison tasks). Children from the low visuospatial group presented the classic Simon and SNARC (spatial numerical association of response codes) effects but showed larger deviation errors as compared with the high visuospatial group. Our results, therefore, demonstrated that low visuospatial abilities did not change the nature of the mental number line but rather led to a decrease in its accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bokiy, IB; Zoteev, OV; Pul, VV; Pul, EK
2018-03-01
The influence of structural features on the strength and elasticity modulus is studied in rock mass in the area of Mirny Mining and Processing Works. The authors make recommendations on the values of physical properties of rocks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peryshkin, A. Yu., E-mail: alexb700@yandex.ru; Makarov, P. V., E-mail: bacardi@ispms.ru; Eremin, M. O., E-mail: bacardi@ispms.ru
An evolutionary approach proposed in [1, 2], combining the achievements of traditional macroscopic solid mechanics and basic ideas of nonlinear dynamics, is applied in a numerical simulation of present-day tectonic plate motion and the seismic process in Central Asia. Relative values of strength parameters of rigid blocks with respect to the soft zones were characterized by the δ parameter, which was varied in the numerical experiments within δ = 1.1–1.8 for different groups of the zonal-block divisibility. In general, the numerical simulations of tectonic block motion and the accompanying seismic process in the model geomedium indicate that the numerical solutions of the solid mechanics equations characterize its deformation as a typical behavior of a nonlinear dynamic system under conditions of self-organized criticality.
Mathematics anxiety affects counting but not subitizing during visual enumeration.
Maloney, Erin A; Risko, Evan F; Ansari, Daniel; Fugelsang, Jonathan
2010-02-01
Individuals with mathematics anxiety have been found to differ from their non-anxious peers on measures of higher-level mathematical processes, but not simple arithmetic. The current paper examines differences between mathematics anxious and non-mathematics anxious individuals in more basic numerical processing using a visual enumeration task. This task allows for the assessment of two systems of basic number processing: subitizing and counting. Mathematics anxious individuals, relative to non-mathematics anxious individuals, showed a deficit in the counting but not in the subitizing range. Furthermore, working memory was found to mediate this group difference. These findings demonstrate that the problems associated with mathematics anxiety exist at a level more basic than would be predicted from the extant literature. Copyright 2009 Elsevier B.V. All rights reserved.
On the status of knowledge for using punishment: implications for treating behavior disorders.
Lerman, Dorothea C; Vorndran, Christina M
2002-01-01
In this paper, we review basic and applied findings on punishment and discuss the importance of conducting further research in this area. The characteristics of responding during punishment and numerous factors that interact with basic processes are delineated in conjunction with implications for the treatment of behavior disorders in clinical populations. We conclude that further understanding of punishment processes is needed to develop a highly systematic, effective technology of behavior change, including strategies for improving the efficacy of less intrusive procedures and for successfully fading treatment. PMID:12555918
The link between mental rotation ability and basic numerical representations
Thompson, Jacqueline M.; Nuerk, Hans-Christoph; Moeller, Korbinian; Cohen Kadosh, Roi
2013-01-01
Mental rotation and number representation have both been studied widely, but although mental rotation has been linked to higher-level mathematical skills, to date it has not been shown whether mental rotation ability is linked to the most basic mental representation and processing of numbers. To investigate the possible connection between mental rotation abilities and numerical representation, 43 participants completed four tasks: 1) a standard pen-and-paper mental rotation task; 2) a multi-digit number magnitude comparison task assessing the compatibility effect, which indicates separate processing of decade and unit digits; 3) a number-line mapping task, which measures precision of number magnitude representation; and 4) a random number generation task, which yields measures both of executive control and of spatial number representations. Results show that mental rotation ability correlated significantly with both size of the compatibility effect and with number mapping accuracy, but not with any measures from the random number generation task. Together, these results suggest that higher mental rotation abilities are linked to more developed number representation, and also provide further evidence for the connection between spatial and numerical abilities. PMID:23933002
Solar and chemical reaction-induced heating in the terrestrial mesosphere and lower thermosphere
NASA Technical Reports Server (NTRS)
Mlynczak, Martin G.
1992-01-01
Airglow and chemical processes in the terrestrial mesosphere and lower thermosphere are reviewed, and initial parameterizations of the processes applicable to multidimensional models are presented. The basic processes by which absorbed solar energy participates in middle atmosphere energetics for absorption events in which photolysis occurs are illustrated. An approach that permits the heating processes to be incorporated in numerical models is presented.
Ansari, Daniel; Dhital, Bibek
2006-11-01
Numerical magnitude processing is an essential everyday skill. Functional brain imaging studies with human adults have repeatedly revealed that bilateral regions of the intraparietal sulcus are correlated with various numerical and mathematical skills. Surprisingly little, however, is known about the development of these brain representations. In the present study, we used functional neuroimaging to compare the neural correlates of nonsymbolic magnitude judgments between children and adults. Although behavioral performance was similar across groups, in comparison to the group of children the adult participants exhibited greater effects of numerical distance on the left intraparietal sulcus. Our findings are the first to reveal that even the most basic aspects of numerical cognition are subject to age-related changes in functional neuroanatomy. We propose that developmental impairments of number may be associated with atypical specialization of cortical regions underlying magnitude processing.
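The numerical distance effect underlying this paradigm (magnitude judgments get faster as the distance between the compared magnitudes grows) can be sketched with simulated data; the numbers and timings below are invented for illustration and are not from this study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical comparison-to-a-reference task: each number is compared
# with the reference 5, and RT shrinks as numerical distance grows.
numbers  = np.array([1, 2, 3, 4, 6, 7, 8, 9])
distance = np.abs(numbers - 5)

# Simulated mean RT per number: a 600 ms baseline minus ~25 ms per unit
# of distance, plus a little trial noise.
rt = 600 - 25 * distance + rng.normal(scale=5, size=numbers.size)

# The distance effect is the (negative) slope of RT on distance.
slope, _ = np.polyfit(distance, rt, 1)
print(f"distance effect: {slope:.1f} ms per unit of distance")
```

Group differences in this slope (here a single simulated participant) are the kind of "effect of numerical distance" that imaging studies such as the one above relate to brain activity.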
Patterns of Learning. New Perspectives on Life-Span Education.
ERIC Educational Resources Information Center
Houle, Cyril O.
Basic methods of learning, most of which have been used through centuries of recorded thought, are discussed, along with learning as a lifelong process, and ways to enhance and diversify modern education. Numerous learning processes are studied by examining the lives of great individuals who have exemplified innovative and multifaceted approaches…
REVIEWS OF TOPICAL PROBLEMS: Physical aspects of cryobiology
NASA Astrophysics Data System (ADS)
Zhmakin, A. I.
2008-03-01
Physical phenomena during biological freezing and thawing processes at the molecular, cellular, tissue, and organ levels are examined. The basics of cryosurgery and cryopreservation of cells and tissues are presented. Existing cryobiological models, including numerical ones, are reviewed.
Parallel processing in finite element structural analysis
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1987-01-01
A brief review is made of the fundamental concepts and basic issues of parallel processing. Discussion focuses on parallel numerical algorithms, performance evaluation of machines and algorithms, and parallelism in finite element computations. A computational strategy is proposed for maximizing the degree of parallelism at different levels of the finite element analysis process including: 1) formulation level (through the use of mixed finite element models); 2) analysis level (through additive decomposition of the different arrays in the governing equations into the contributions to a symmetrized response plus correction terms); 3) numerical algorithm level (through the use of operator splitting techniques and application of iterative processes); and 4) implementation level (through the effective combination of vectorization, multitasking and microtasking, whenever available).
The Influence of Schema and Cultural Difference on L1 and L2 Reading
ERIC Educational Resources Information Center
Yang, Shi-sheng
2010-01-01
Reading in L1 shares numerous basic elements with reading in L2, and the processes also differ greatly. Intriguing questions involve whether there are two parallel cognitive processes at work, or whether there are processing strategies that accommodate both L1 and L2. This paper examines how reading in L1 is different from and similar to reading…
Does attentional training improve numerical processing in developmental dyscalculia?
Ashkenazi, Sarit; Henik, Avishai
2012-01-01
Recently, a deficit in attention was found in those with pure developmental dyscalculia (DD). Accordingly, the present study aimed to examine the influence of attentional training on attention abilities, basic numerical abilities, and arithmetic in participants who were diagnosed as having DD. Nine university students diagnosed as having DD (IQ and reading abilities in the normal range and no indication of attention-deficit hyperactivity disorder) and nine matched controls participated in attentional training (i.e., video game training). First, training modulated the orienting system; after training, the size of the validity effect (i.e., effect of valid vs. invalid) decreased. This effect was comparable in the two groups. Training modulated abnormalities in the attention systems of those with DD, that is, it reduced their enlarged congruity effect (i.e., faster responding when flanking arrows pointed to the same location as a center arrow). Second, in relation to the enumeration task, training reduced the reaction time of the DD group in the subitizing range but did not change their smaller-than-normal subitizing range. Finally, training improved performance in addition problems in both the DD and control groups. These results imply that attentional training does improve most of the attentional deficits of those with DD. In contrast, training did not improve the abnormalities of the DD group in arithmetic or basic numerical processing. Thus, in contrast to the domain-general hypothesis, the deficits in attention among those with DD and the deficits in numerical processing appear to originate from different sources.
Phoneme Similarity and Confusability
ERIC Educational Resources Information Center
Bailey, T.M.; Hahn, U.
2005-01-01
Similarity between component speech sounds influences language processing in numerous ways. Explanation and detailed prediction of linguistic performance consequently requires an understanding of these basic similarities. The research reported in this paper contrasts two broad classes of approach to the issue of phoneme similarity: theoretically…
ERIC Educational Resources Information Center
MOSAIC, 1978
1978-01-01
Describes the basic concept of university-based innovation centers, which are sponsored by the NSF, to involve the students in the process of innovation and entrepreneurship. Gives numerous examples of success where the final outcome has been a new product or even a new company. (GA)
Dynamic Transitions and Baroclinic Instability for 3D Continuously Stratified Boussinesq Flows
NASA Astrophysics Data System (ADS)
Şengül, Taylan; Wang, Shouhong
2018-02-01
The main objective of this article is to study the nonlinear stability and dynamic transitions of the basic (zonal) shear flows for the three-dimensional continuously stratified rotating Boussinesq model. The model equations are fundamental equations in geophysical fluid dynamics, and dynamics associated with their basic zonal shear flows play a crucial role in understanding many important geophysical fluid dynamical processes, such as the meridional overturning oceanic circulation and the geophysical baroclinic instability. In this paper, first we derive a threshold for the energy stability of the basic shear flow, and obtain a criterion for local nonlinear stability in terms of the critical horizontal wavenumbers and the system parameters such as the Froude number, the Rossby number, the Prandtl number and the strength of the shear flow. Next, we demonstrate that the system always undergoes a dynamic transition from the basic shear flow to either a spatiotemporal oscillatory pattern or a circle of steady states, as the shear strength of the basic flow crosses a critical threshold. Also, we show that the dynamic transition can be either continuous or catastrophic, and is dictated by the sign of a transition number, fully characterizing the nonlinear interactions of different modes. Both the critical shear strength and the transition number are functions of the system parameters. A systematic numerical study is carried out to explore transitions in different flow parameter regimes. In particular, our numerical investigations show the existence of a hypersurface which separates the parameter space into regions where the basic shear flow is stable and unstable. Numerical investigations also show that the selection of horizontal wave indices is determined only by the aspect ratio of the box. We find that the system admits only critical eigenmodes with roll patterns aligned with the x-axis.
Furthermore, numerically we encountered continuous transitions to multiple steady states, as well as continuous and catastrophic transitions to spatiotemporal oscillations.
Debecker, Damien P; Gaigneaux, Eric M; Busca, Guido
2009-01-01
Basic catalysis! The basic properties of hydrotalcites make them attractive for numerous catalytic applications. Probing the basicity of the catalysts is crucial to understand the base-catalysed processes and to optimise the catalyst preparation. Various parameters can be employed to tune the basic properties of hydrotalcite-based catalysts towards the basicity demanded by each target chemical reaction. Hydrotalcites offer unique basic properties that make them very attractive for catalytic applications. It is of primary interest to make use of accurate tools for probing the basicity of hydrotalcite-based catalysts for the purpose of 1) fundamental understanding of base-catalysed processes with hydrotalcites and 2) optimisation of the catalytic performance achieved in reactions of industrial interest. Techniques based on probe molecules, titration techniques and test reactions along with physicochemical characterisation are reviewed in the first part of this review. The aim is to provide the tools for understanding how a series of parameters involved in the preparation of hydrotalcite-based catalytic materials can be employed to control and adapt the basic properties of the catalyst towards the basicity demanded by each target chemical reaction. An overview of recent and significant achievements in that perspective is presented in the second part of the paper.
Numerical modeling of overland flow due to rainfall-runoff
USDA-ARS?s Scientific Manuscript database
Runoff is a basic hydrologic process that can be influenced by management activities in agricultural watersheds. Better description of runoff patterns through modeling will help to understand and predict watershed sediment transport and water quality. Normally, runoff is studied with kinematic wave ...
Simon, Tony J
2008-01-01
In this article, I present an updated account that attempts to explain, in cognitive processing and neural terms, the nonverbal intellectual impairments experienced by most children with deletions of chromosome 22q11.2. Specifically, I propose that this genetic syndrome leads to early developmental changes in the structure and function of clearly delineated neural circuits for basic spatiotemporal cognition. This dysfunction then cascades into impairments in basic magnitude and then numerical processes, because of the central role that representations of space and time play in their construction. I propose that this takes the form of "spatiotemporal hypergranularity": the increase in grain size and thus reduced resolution of mental representations of spatial and temporal information. The result is that spatiotemporal processes develop atypically and thereby produce the characteristic impairments in nonverbal cognitive domains that are a hallmark feature of chromosome 22q11.2 deletion syndrome. If this hypothesis-driven account is supported by future research, the results will create a neurocognitive explanation of spatiotemporal and numerical impairments in the syndrome that is specific enough to be directly translated into the development of targeted therapeutic interventions.
Numerical Modeling in Geodynamics: Success, Failure and Perspective
NASA Astrophysics Data System (ADS)
Ismail-Zadeh, A.
2005-12-01
A real success in numerical modeling of dynamics of the Earth can be achieved only by multidisciplinary research teams of experts in geodynamics, applied and pure mathematics, and computer science. The success in numerical modeling is based on the following basic, but simple, rules. (i) People need simplicity most, but they understand intricacies best (B. Pasternak, writer). Start from a simple numerical model, which describes basic physical laws by a set of mathematical equations, and move then to a complex model. Never start from a complex model, because you cannot understand the contribution of each term of the equations to the modeled geophysical phenomenon. (ii) Study the numerical methods behind your computer code. Otherwise it becomes difficult to distinguish true and erroneous solutions to the geodynamic problem, especially when your problem is complex enough. (iii) Test your model against analytical and asymptotic solutions, and against simple 2D and 3D model examples. Develop benchmark analysis of different numerical codes and compare numerical results with laboratory experiments. Remember that the numerical tool you employ is not perfect, and there are small bugs in every computer code. Therefore testing is the most important part of your numerical modeling. (iv) Prove (if possible) or learn relevant statements concerning the existence, uniqueness and stability of the solution to the mathematical and discrete problems. Otherwise you can solve an improperly-posed problem, and the results of the modeling will be far from the true solution of your model problem. (v) Try to analyze numerical models of a geological phenomenon using as few tuning variables as possible. Even two tuning variables provide enough freedom to constrain your model with respect to observations. Data fitting is sometimes quite attractive and can take you far from the principal aim of your numerical modeling: to understand geophysical phenomena.
(vi) If the number of tuning model variables is greater than two, test carefully the effect of each of the variables on the modeled phenomenon. Remember: With four exponents I can fit an elephant (E. Fermi, physicist). (vii) Make your numerical model as accurate as possible, but never make great accuracy the aim in itself: Undue precision of computations is the first symptom of mathematical illiteracy (N. Krylov, mathematician). How complex should a numerical model be? A model which images any detail of the reality is as useful as a map of scale 1:1 (J. Robinson, economist). This message is quite important for geoscientists, who study numerical models of complex geodynamical processes. I believe that geoscientists will never create a model of the real Earth dynamics, but we should try to model the dynamics in such a way as to simulate basic geophysical processes and phenomena. Does a particular model have a predictive power? Each numerical model has a predictive power, otherwise the model is useless. The predictability of the model varies with its complexity. Remember that a solution to the numerical model is an approximate solution to the equations, which have been chosen in the belief that they describe dynamic processes of the Earth. Hence a numerical model predicts the dynamics of the Earth only as well as the mathematical equations describe those dynamics. What methodological advances are still needed for testable geodynamic modeling? Inverse (time-reverse) numerical modeling and data assimilation are new methodologies in geodynamics. Inverse modeling makes it possible to test geodynamic models forward in time using restored (from present-day observations) initial conditions instead of unknown conditions.
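Rule (iii) above can be illustrated with a minimal verification sketch. The equation, grid sizes, and tolerance here are illustrative assumptions, not taken from the abstract: an explicit finite-difference solver for the 1D heat equation u_t = u_xx on [0, 1] is checked against the known analytical solution exp(-pi^2 t) sin(pi x).

```python
import numpy as np

# Verification against an analytical solution (rule iii): solve the 1D heat
# equation u_t = u_xx with u(x, 0) = sin(pi*x) and zero boundary values,
# then compare with the exact solution exp(-pi^2 t) * sin(pi*x).
def solve_heat(nx=51, t_end=0.1, nsteps=2000):
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    dt = t_end / nsteps          # here dt <= 0.5*dx^2, so the scheme is stable
    u = np.sin(np.pi * x)
    for _ in range(nsteps):
        # Forward-time, centered-space update on interior points
        u[1:-1] += dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return x, u

x, u = solve_heat()
exact = np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x)
max_err = np.abs(u - exact).max()   # should be small if the code is correct
```

A large discrepancy here would signal either an unstable time step or a coding bug, which is exactly the kind of error this testing rule is meant to catch before the model grows complex.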
Chemical processing of glasses
NASA Astrophysics Data System (ADS)
Laine, Richard M.
1990-11-01
The development of chemical processing methods for the fabrication of glass and ceramic shapes for photonic applications is frequently Edisonian in nature. In part, this is because the numerous variables that must be optimized to obtain a given material with a specific shape and particular properties cannot be readily defined based on fundamental principles. In part, the problems arise because the basic chemistry of common chemical processing systems has not been fully delineated. The purpose of this paper is to provide an overview of the basic chemical problems associated with chemical processing. The emphasis will be on sol-gel processing, a major subset of chemical processing. Two alternate approaches to chemical processing of glasses are also briefly discussed. One approach concerns the use of bimetallic alkoxide oligomers and polymers as potential precursors to multimetallic glasses. The second approach describes the utility of metal carboxylate precursors to multimetallic glasses.
Conversion of Questionnaire Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
During the survey, respondents are asked to provide qualitative answers (well, adequate, needs improvement) on how well material control and accountability (MC&A) functions are being performed. These responses can be used to develop failure probabilities for basic events performed during routine operation of the MC&A systems. The failure frequencies for individual events may be used to estimate total system effectiveness using a fault tree in a probabilistic risk analysis (PRA). Numeric risk values are required for the PRA fault tree calculations that are performed to evaluate system effectiveness. So, the performance ratings in the questionnaire must be converted to relative risk values for all of the basic MC&A tasks performed in the facility. If a specific material protection, control, and accountability (MPC&A) task is being performed at the 'perfect' level, the task is considered to have a near zero risk of failure. If the task is performed at a less than perfect level, the deficiency in performance represents some risk of failure for the event. As the degree of deficiency in performance increases, the risk of failure increases. If a task that should be performed is not being performed, that task is in a state of failure. The failure probabilities of all basic events contribute to the total system risk. Conversion of questionnaire MPC&A system performance data to numeric values is a separate function from the process of completing the questionnaire. When specific questions in the questionnaire are answered, the focus is on correctly assessing and reporting, in an adjectival manner, the actual performance of the related MC&A function. Prior to conversion, consideration should not be given to the numeric value that will be assigned during the conversion process. In the conversion process, adjectival responses to questions on system performance are quantified based on a log normal scale typically used in human error analysis (see A.D. Swain and H.E. Guttmann, 'Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications,' NUREG/CR-1278). This conversion produces the basic event risk of failure values required for the fault tree calculations. The fault tree is a deductive logic structure that corresponds to the operational nuclear MC&A system at a nuclear facility. The conventional Delphi process is a time-honored approach commonly used in the risk assessment field to extract numerical values for the failure rates of actions or activities when statistically significant data is absent.
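The adjectival-to-numeric conversion step described above can be sketched as a simple lookup on a logarithmic scale. The specific probability values and task names below are illustrative assumptions in the spirit of the human-reliability scales the abstract cites, not the actual NUREG/CR-1278 numbers:

```python
# Hypothetical mapping from adjectival MC&A performance ratings to basic-event
# failure probabilities, spaced logarithmically (illustrative values only).
FAILURE_PROBABILITY = {
    "perfect":           1e-5,  # near-zero risk of failure
    "well":              1e-4,
    "adequate":          1e-3,
    "needs improvement": 1e-2,
    "not performed":     1.0,   # task in a state of failure
}

def convert_responses(responses):
    """Convert a dict of {task: adjectival rating} to failure probabilities
    usable as basic-event inputs to a PRA fault tree."""
    return {task: FAILURE_PROBABILITY[rating.lower()]
            for task, rating in responses.items()}

# Hypothetical survey responses for two MC&A tasks
survey = {"inventory reconciliation": "adequate",
          "tamper-indicating seals": "needs improvement"}
probs = convert_responses(survey)
```

The resulting probabilities would then feed the fault-tree calculation of total system effectiveness; the conversion itself stays separate from filling in the questionnaire, as the text emphasizes.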
Event-related potentials, cognition, and behavior: a biological approach.
Kotchoubey, Boris
2006-01-01
The prevailing cognitive-psychological accounts of event-related brain potentials (ERPs) assume that ERP components manifest information processing operations leading from stimulus to response. Since this view encounters numerous difficulties already analyzed in previous studies, an alternative view is presented here that regards cortical control of behavior as a repetitive sensorimotor cycle consisting of two phases: (i) feedforward anticipation and (ii) feedback cortical performance. This view allows us to interpret in an integrative manner numerous data obtained from very different domains of ERP studies: from biophysics of ERP waves to their relationship to the processing of language, in which verbal behavior is viewed as likewise controlled by the same two basic control processes: feedforward (hypothesis building) and feedback (hypothesis checking). The proposed approach is intentionally simplified, explaining numerous effects on the basis of few assumptions and relating several levels of analysis: neurophysiology, macroelectrical processes (i.e. ERPs), cognition and behavior. It can, therefore, be regarded as a first approximation to a general theory of ERPs.
The next generation of training for Arabidopsis researchers: bioinformatics and quantitative biology
USDA-ARS?s Scientific Manuscript database
It has been more than 50 years since Arabidopsis (Arabidopsis thaliana) was first introduced as a model organism to understand basic processes in plant biology. A well-organized scientific community has used this small reference plant species to make numerous fundamental plant biology discoveries (P...
Environmental Design for a Structured Network Learning Society
ERIC Educational Resources Information Center
Chang, Ben; Cheng, Nien-Heng; Deng, Yi-Chan; Chan, Tak-Wai
2007-01-01
Social interactions profoundly impact the learning processes of learners in traditional societies. The rapid rise of the Internet using population has been the establishment of numerous different styles of network communities. Network societies form when more Internet communities are established, but the basic form of a network society, especially…
Perceptual-Motor and Cognitive Performance Task-Battery for Pilot Selection
1981-01-01
processing. Basic researchers in cognitive psychology have become discouraged with the inability of numerous models to consider and account for individual...attention in the use of cues in verbal problem-solving. Journal of Personality, 197?, 40, 226-241. Mensh, I. N. Pilot selection by psychological methods
Movement and Learning: A Valuable Connection
ERIC Educational Resources Information Center
Stevens-Smith, Deborah
2004-01-01
In this article, the author discusses the relatedness between movement and learning for students. The process of learning involves basic nerve cells that transmit information and create numerous neural connections essential to learning. One way to increase learning is to encourage creation of more synaptic connections in the brain through…
Working memory deficits in developmental dyscalculia: The importance of serial order.
Attout, Lucie; Majerus, Steve
2015-01-01
Although a number of studies suggest a link between working memory (WM) storage capacity and calculation abilities, the nature of verbal WM deficits in children with developmental dyscalculia (DD) remains poorly understood. We explored verbal WM capacity in DD by focusing on the distinction between memory for item information (the items to be retained) and memory for order information (the order of the items within a list). We hypothesized that WM for order could be specifically related to impaired numerical abilities given that recent studies suggest close interactions between the representation of order information in WM and ordinal numerical processing. We investigated item and order WM abilities as well as basic numerical processing abilities in 16 children with DD (age: 8-11 years) and 16 typically developing children matched on age, IQ, and reading abilities. The DD group performed significantly poorer than controls in the order WM condition but not in the item WM condition. In addition, the DD group performed significantly slower than the control group on a numerical order judgment task. The present results show significantly reduced serial order WM abilities in DD coupled with less efficient numerical ordinal processing abilities, reflecting more general difficulties in explicit processing of ordinal information.
3 Lectures: "Lagrangian Models", "Numerical Transport Schemes", and "Chemical and Transport Models"
NASA Technical Reports Server (NTRS)
Douglass, A.
2005-01-01
The topics for the three lectures for the Canadian Summer School are Lagrangian Models, numerical transport schemes, and chemical and transport models. In the first lecture I will explain the basic components of the Lagrangian model (a trajectory code and a photochemical code), the difficulties in using such a model (initialization) and show some applications in interpretation of aircraft and satellite data. If time permits I will show some results concerning inverse modeling which is being used to evaluate sources of tropospheric pollutants. In the second lecture I will discuss one of the core components of any grid point model, the numerical transport scheme. I will explain the basics of shock capturing schemes, and performance criteria. I will include an example of the importance of horizontal resolution to polar processes. We have learned from NASA's global modeling initiative that horizontal resolution matters for predictions of the future evolution of the ozone hole. The numerical scheme will be evaluated using performance metrics based on satellite observations of long-lived tracers. The final lecture will discuss the evolution of chemical transport models over the last decade. Some of the problems with assimilated winds will be demonstrated, using satellite data to evaluate the simulations.
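The "basics of shock capturing schemes" mentioned for the second lecture can be illustrated with the simplest grid-point transport scheme. The scheme below (first-order upwind advection on a periodic 1D grid) is a generic teaching sketch, not the scheme used in any particular model discussed in the lectures:

```python
import numpy as np

# First-order upwind advection of a tracer q by a constant wind u > 0 on a
# periodic 1D grid. The Courant number C = u*dt/dx must satisfy C <= 1 for
# stability; the update is then a convex combination of neighboring values,
# so the scheme is monotone (no new extrema) but diffusive.
def upwind_advect(q, courant, nsteps):
    q = q.copy()
    for _ in range(nsteps):
        q -= courant * (q - np.roll(q, 1))   # backward difference for u > 0
    return q

q0 = np.zeros(100)
q0[40:60] = 1.0                              # square tracer pulse
q1 = upwind_advect(q0, courant=0.5, nsteps=40)
# Total tracer mass is conserved on the periodic domain, even though the
# pulse spreads out numerically.
```

Performance metrics of the kind mentioned in the lecture (conservation, monotonicity, numerical diffusion) can all be read off this toy case, which is why such one-dimensional tests are a standard first step before evaluating a scheme against satellite tracer observations.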
Hypervelocity impact of mm-size plastic projectile on thin aluminum plate
NASA Astrophysics Data System (ADS)
Poniaev, S. A.; Kurakin, R. O.; Sedov, A. I.; Bobashev, S. V.; Zhukov, B. G.; Nechunaev, A. F.
2017-06-01
The experimental studies of the process of hypervelocity (up to 6 km/s) impact of a mm-size projectile on a thin aluminum plate are described. The numerical simulation of this process is presented. The data on the evolution, structure, and composition of the debris cloud formed as a result of the impact are reported. Basic specific features of the debris cloud formation are revealed.
The synaptic maintenance problem: membrane recycling, Ca2+ homeostasis and late onset degeneration
2013-01-01
Most neurons are born with the potential to live for the entire lifespan of the organism. In addition, neurons are highly polarized cells with often long axons, extensively branched dendritic trees and many synaptic contacts. Longevity together with morphological complexity results in a formidable challenge to maintain synapses healthy and functional. This challenge is often evoked to explain adult-onset degeneration in numerous neurodegenerative disorders that result from otherwise divergent causes. However, comparably little is known about the basic cell biological mechanisms that keep normal synapses alive and functional in the first place. How the basic maintenance mechanisms are related to slow adult-onset degeneration in different diseases is largely unclear. In this review we focus on two basic and interconnected cell biological mechanisms that are required for synaptic maintenance: endomembrane recycling and calcium (Ca2+) homeostasis. We propose that subtle defects in these homeostatic processes can lead to late onset synaptic degeneration. Moreover, the same basic mechanisms are hijacked, impaired or overstimulated in numerous neurodegenerative disorders. Understanding the pathogenesis of these disorders requires an understanding of both the initial cause of the disease and the on-going changes in basic maintenance mechanisms. Here we discuss the mechanisms that keep synapses functional over long periods of time with the emphasis on their role in slow adult-onset neurodegeneration. PMID:23829673
Association of Parental Health Literacy with Oral Health of Navajo Nation Preschoolers
ERIC Educational Resources Information Center
Brega, A. G.; Thomas, J. F.; Henderson, W. G.; Batliner, T. S.; Quissell, D. O.; Braun, P. A.; Wilson, A.; Bryant, L. L.; Nadeau, K. J.; Albino, J.
2016-01-01
Health literacy is "the capacity to obtain, process and understand basic health information and services needed to make appropriate health decisions". Although numerous studies show a link between health literacy and clinical outcomes, little research has examined the association of health literacy with oral health. No large-scale…
Mirroring, Mentalizing, and the Social Neuroscience of Listening
ERIC Educational Resources Information Center
Spunt, Robert P.
2013-01-01
Listening to another speak is a basic process in social cognition. In the social neurosciences, there are relatively few studies that directly bear on listening; however, numerous studies have investigated the neural bases of some of the likely constituents of successful listening. In this article, I review some of this work as it relates to…
Demonstration and Research Program for Teaching Young String Players. Final Report.
ERIC Educational Resources Information Center
Yarborough, William
This report explains a system for rapidly training beginning students in the technical aspects of playing a stringed instrument. The program also affords them a well-rounded, basic knowledge of music. A "numerical" method of notation and concentrated muscular exercises greatly speeded the technical learning process. The daily coordination of ear…
Decision Making and Ratio Processing in Patients with Mild Cognitive Impairment.
Pertl, Marie-Theres; Benke, Thomas; Zamarian, Laura; Delazer, Margarete
2015-01-01
Making advantageous decisions is important in everyday life. This study aimed at assessing how patients with mild cognitive impairment (MCI) make decisions under risk. Additionally, it investigated the relationship between decision making, ratio processing, basic numerical abilities, and executive functions. Patients with MCI (n = 22) were compared with healthy controls (n = 29) on a complex task of decision making under risk (Game of Dice Task-Double, GDT-D), on two tasks evaluating basic decision making under risk, on a task of ratio processing, and on several neuropsychological background tests. Patients performed significantly lower than controls on the GDT-D and on ratio processing, whereas groups performed comparably on basic decision tasks. Specifically, in the GDT-D, patients obtained lower net scores and lower mean expected values, which indicate a less advantageous performance relative to that of controls. Performance on the GDT-D correlated significantly with performance in basic decision tasks, ratio processing, and executive-function measures when the analysis was performed on the whole sample. Patients with MCI make sub-optimal decisions in complex risk situations, whereas they perform at the same level as healthy adults in simple decision situations. Ratio processing and executive functions have an impact on the decision-making performance of both patients and healthy older adults. In order to facilitate advantageous decisions in complex everyday situations, information should be presented in an easily comprehensible form and cognitive training programs for patients with MCI should focus--among other abilities--on executive functions and ratio processing.
NASA Astrophysics Data System (ADS)
Katsaounis, T. D.
2005-02-01
The scope of this book is to present well known simple and advanced numerical methods for solving partial differential equations (PDEs) and how to implement these methods using the programming environment of the software package Diffpack. A basic background in PDEs and numerical methods is required by the potential reader. Further, a basic knowledge of the finite element method and its implementation in one and two space dimensions is required. The authors claim that no prior knowledge of the package Diffpack is required, which is true, but the reader should be at least familiar with an object oriented programming language like C++ in order to better comprehend the programming environment of Diffpack. Certainly, a prior knowledge or usage of Diffpack would be a great advantage to the reader. The book consists of 15 chapters, each one written by one or more authors. Each chapter is basically divided into two parts: the first part is about mathematical models described by PDEs and numerical methods to solve these models and the second part describes how to implement the numerical methods using the programming environment of Diffpack. Each chapter closes with a list of references on its subject. The first nine chapters cover well known numerical methods for solving the basic types of PDEs. Further, programming techniques on the serial as well as on the parallel implementation of numerical methods are also included in these chapters. The last five chapters are dedicated to applications, modelled by PDEs, in a variety of fields. The first chapter is an introduction to parallel processing. It covers fundamentals of parallel processing in a simple and concrete way and no prior knowledge of the subject is required. Examples of parallel implementation of basic linear algebra operations are presented using the Message Passing Interface (MPI) programming environment. Here, some knowledge of MPI routines is required by the reader. 
Examples solving in parallel simple PDEs using Diffpack and MPI are also presented. Chapter 2 presents the overlapping domain decomposition method for solving PDEs. It is well known that these methods are suitable for parallel processing. The first part of the chapter covers the mathematical formulation of the method as well as algorithmic and implementational issues. The second part presents a serial and a parallel implementational framework within the programming environment of Diffpack. The chapter closes by showing how to solve two application examples with the overlapping domain decomposition method using Diffpack. Chapter 3 is a tutorial about how to incorporate the multigrid solver in Diffpack. The method is illustrated by examples such as a Poisson solver, a general elliptic problem with various types of boundary conditions and a nonlinear Poisson type problem. In chapter 4 the mixed finite element is introduced. Technical issues concerning the practical implementation of the method are also presented. The main difficulties of the efficient implementation of the method, especially in two and three space dimensions on unstructured grids, are presented and addressed in the framework of Diffpack. The implementational process is illustrated by two examples, namely the system formulation of the Poisson problem and the Stokes problem. Chapter 5 is closely related to chapter 4 and addresses the problem of how to solve efficiently the linear systems arising by the application of the mixed finite element method. The proposed method is block preconditioning. Efficient techniques for implementing the method within Diffpack are presented. Optimal block preconditioners are used to solve the system formulation of the Poisson problem, the Stokes problem and the bidomain model for the electrical activity in the heart. The subject of chapter 6 is systems of PDEs. Linear and nonlinear systems are discussed. Fully implicit and operator splitting methods are presented. 
Special attention is paid to how existing solvers for scalar equations in Diffpack can be used to derive fully implicit solvers for systems. The proposed techniques are illustrated in terms of two applications, namely a system of PDEs modelling pipeflow and a two-phase porous media flow. Stochastic PDEs is the topic of chapter 7. The first part of the chapter is a simple introduction to stochastic PDEs; basic analytical properties are presented for simple models like transport phenomena and viscous drag forces. The second part considers the numerical solution of stochastic PDEs. Two basic techniques are presented, namely Monte Carlo and perturbation methods. The last part explains how to implement and incorporate these solvers into Diffpack. Chapter 8 describes how to operate Diffpack from Python scripts. The main goal here is to provide all the programming and technical details in order to glue the programming environment of Diffpack with visualization packages through Python and in general take advantage of the Python interfaces. Chapter 9 attempts to show how to use numerical experiments to measure the performance of various PDE solvers. The authors gathered a rather impressive list, a total of 14 PDE solvers. Solvers for problems like Poisson, Navier--Stokes, elasticity, two-phase flows and methods such as finite difference, finite element, multigrid, and gradient type methods are presented. The authors provide a series of numerical results combining various solvers with various methods in order to gain insight into their computational performance and efficiency. In Chapter 10 the authors consider a computationally challenging problem, namely the computation of the electrical activity of the human heart. After a brief introduction on the biology of the problem the authors present the mathematical models involved and a numerical method for solving them within the framework of Diffpack. 
Chapters 11 and 12 are closely related; actually they could have been combined in a single chapter. Chapter 11 introduces several mathematical models used in finance, based on the Black--Scholes equation. Chapter 12 considers several numerical methods like Monte Carlo, lattice methods, finite difference and finite element methods. Implementation of these methods within Diffpack is presented in the last part of the chapter. Chapter 13 presents how the finite element method is used for the modelling and analysis of elastic structures. The authors describe the structural elements of Diffpack which include popular elements such as beams and plates and examples are presented on how to use them to simulate elastic structures. Chapter 14 describes an application problem, namely the extrusion of aluminum. This is a rather complicated process which involves non-Newtonian flow, heat transfer and elasticity. The authors describe the systems of PDEs modelling the underlying process and use a finite element method to obtain a numerical solution. The implementation of the numerical method in Diffpack is presented along with some applications. The last chapter, chapter 15, focuses on mathematical and numerical models of systems of PDEs governing geological processes in sedimentary basins. The underlying mathematical model is solved using the finite element method within a fully implicit scheme. The authors discuss the implementational issues involved within Diffpack and they present results from several examples. In summary, the book focuses on the computational and implementational issues involved in solving partial differential equations. The potential reader should have a basic knowledge of PDEs and the finite difference and finite element methods. The examples presented are solved within the programming framework of Diffpack and the reader should have prior experience with the particular software in order to take full advantage of the book.
Overall, the book is well written and the subject of each chapter is well presented; it can serve as a reference for graduate students, researchers, and engineers who are interested in the numerical solution of partial differential equations modelling various applications.
Siemann, Julia; Petermann, Franz
2018-01-01
This review reconciles past findings on numerical processing with key assumptions of the predominant model of arithmetic in the literature, the Triple Code Model (TCM). It does so by reporting diverse findings from the literature, ranging from behavioral studies on basic arithmetic operations, through neuroimaging studies on numerical processing, to developmental studies concerned with arithmetic acquisition, with a special focus on developmental dyscalculia (DD). We evaluate whether these studies corroborate the model and discuss possible reasons for contradictory findings. A separate section is dedicated to the transfer of TCM to arithmetic development and to alternative accounts focusing on developmental questions of numerical processing. We conclude with recommendations for future directions of arithmetic research, raising questions that require answers in models of healthy as well as abnormal mathematical development. This review assesses the leading model in the field of arithmetic processing (the Triple Code Model) by presenting knowledge from interdisciplinary research. It assesses the observed contradictory findings and integrates the resulting opposing viewpoints. The focus is on the development of arithmetic expertise as well as on abnormal mathematical development. The original aspect of this article is that it points to a gap in research on these topics and provides possible solutions for future models. Copyright © 2017 Elsevier Ltd. All rights reserved.
How number-space relationships are assessed before formal schooling: A taxonomy proposal
Patro, Katarzyna; Nuerk, Hans-Christoph; Cress, Ulrike; Haman, Maciej
2014-01-01
Recent years of research on numerical development have provided evidence that spatial-numerical associations (SNAs) can be formed independent of formal school training. However, most of these studies used various experimental paradigms that referred to slightly different aspects of number and space processing. This raises the question of whether all SNAs described in the developmental literature can be interpreted as a unitary construct, or whether they are instead examples of different but related phenomena. Our review aims to provide a starting point for a systematic classification of SNA measures used from infancy to the late preschool years, and of their underlying representations. We propose to distinguish among four basic SNA categories: (i) cross-dimensional magnitude processing, (ii) associations between spatial and numerical intervals, (iii) associations between cardinalities and spatial directions, and (iv) associations between ordinalities and spatial directions. Such systematization allows for identifying similarities and differences between the processes and representations that underlie the described measures, and also for assessing the adequacy of using different SNA tasks at different developmental stages. PMID:24860532
Rotary wave-ejector enhanced pulse detonation engine
NASA Astrophysics Data System (ADS)
Nalim, M. R.; Izzy, Z. A.; Akbari, P.
2012-01-01
The use of a non-steady ejector based on wave rotor technology is modeled for pulse detonation engine performance improvement and for compatibility with turbomachinery components in hybrid propulsion systems. The rotary wave ejector device integrates a pulse detonation process with an efficient momentum transfer process in specially shaped channels of a single wave-rotor component. In this paper, a quasi-one-dimensional numerical model is developed to help design the basic geometry and operating parameters of the device. The unsteady combustion and flow processes are simulated and compared with a baseline PDE without ejector enhancement. A preliminary performance assessment is presented for the wave ejector configuration, considering the effect of key geometric parameters, which are selected for high specific impulse. It is shown that the rotary wave ejector concept has significant potential for thrust augmentation relative to a basic pulse detonation engine.
Fuchs, Lynn S; Geary, David C; Compton, Donald L; Fuchs, Douglas; Hamlett, Carol L; Seethaler, Pamela M; Bryant, Joan D; Schatschneider, Christopher
2010-11-01
The purpose of this study was to examine the interplay between basic numerical cognition and domain-general abilities (such as working memory) in explaining school mathematics learning. First graders (N = 280; mean age = 5.77 years) were assessed on 2 types of basic numerical cognition, 8 domain-general abilities, procedural calculations, and word problems in fall and then reassessed on procedural calculations and word problems in spring. Development was indexed by latent change scores, and the interplay between numerical and domain-general abilities was analyzed by multiple regression. Results suggest that the development of different types of formal school mathematics depends on different constellations of numerical versus general cognitive abilities. When controlling for 8 domain-general abilities, both aspects of basic numerical cognition were uniquely predictive of procedural calculations and word problems development. Yet, for procedural calculations development, the additional amount of variance explained by the set of domain-general abilities was not significant, and only counting span was uniquely predictive. By contrast, for word problems development, the set of domain-general abilities did provide additional explanatory value, accounting for about the same amount of variance as the basic numerical cognition variables. Language, attentive behavior, nonverbal problem solving, and listening span were uniquely predictive.
A Model System for the Study of Gene Expression in the Undergraduate Laboratory
ERIC Educational Resources Information Center
Hargadon, Kristian M.
2016-01-01
The flow of genetic information from DNA to RNA to protein, otherwise known as the "central dogma" of biology, is one of the most basic and overarching concepts in the biological sciences. Nevertheless, numerous studies have reported undergraduate students' misconceptions about this fundamental process of gene expression. This…
ERIC Educational Resources Information Center
Patro, Katarzyna; Fischer, Ursula; Nuerk, Hans-Christoph; Cress, Ulrike
2016-01-01
Spatial processing of numbers has emerged as one of the basic properties of humans' mathematical thinking. However, how and when number-space relations develop is a highly contested issue. One dominant view has been that a link between numbers and left/right spatial directions is constructed based on directional experience associated with reading…
Fine Tuning Cell Migration by a Disintegrin and Metalloproteinases
Theodorou, K.
2017-01-01
Cell migration is an instrumental process in organ development, tissue homeostasis, and various physiological processes, and is also involved in numerous pathologies. Both basic cell migration and migration towards a chemotactic stimulus involve changes in cell polarity and cytoskeletal rearrangement; cell detachment from, invasion through, and reattachment to neighboring cells; and numerous interactions with the extracellular matrix. The different steps of immune cell, tissue cell, or cancer cell migration are tightly coordinated in time and place by growth factors, cytokines/chemokines, adhesion molecules, and receptors for these ligands. This review describes how a disintegrin and metalloproteinases (ADAMs) interfere with several steps of cell migration, either by proteolytic cleavage of such molecules or by functions independent of proteolytic activity. PMID:28260841
Bartelet, Dimona; Vaessen, Anniek; Blomert, Leo; Ansari, Daniel
2014-01-01
Relations between children's mathematics achievement and their basic number processing skills have been reported in both cross-sectional and longitudinal studies. Yet, some key questions are currently unresolved, including which kindergarten skills uniquely predict children's arithmetic fluency during the first year of formal schooling and the degree to which predictors are contingent on children's level of arithmetic proficiency. The current study assessed kindergarteners' non-symbolic and symbolic number processing efficiency. In addition, the contribution of children's underlying magnitude representations to differences in arithmetic achievement was assessed. Subsequently, in January of Grade 1, their arithmetic proficiency was assessed. Hierarchical regression analysis revealed that children's efficiency to compare digits, count, and estimate numerosities uniquely predicted arithmetic differences above and beyond the non-numerical factors included. Moreover, quantile regression analysis indicated that symbolic number processing efficiency was consistently a significant predictor of arithmetic achievement scores regardless of children's level of arithmetic proficiency, whereas their non-symbolic number processing efficiency was not. Finally, none of the task-specific effects indexing children's representational precision was significantly associated with arithmetic fluency. The implications of the results are 2-fold. First, the findings indicate that children's efficiency to process symbols is important for the development of their arithmetic fluency in Grade 1 above and beyond the influence of non-numerical factors. Second, the impact of children's non-symbolic number processing skills does not depend on their arithmetic achievement level given that they are selected from a nonclinical population. Copyright © 2013 Elsevier Inc. All rights reserved.
Ordinary differential equations.
Lebl, Jiří
2013-01-01
In this chapter we provide an overview of the basic theory of ordinary differential equations (ODE). We give the basics of analytical methods for their solutions and also review numerical methods. The chapter should serve as a primer for the basic application of ODEs and systems of ODEs in practice. As an example, we work out the equations arising in Michaelis-Menten kinetics and give a short introduction to using Matlab for their numerical solution.
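The chapter works out the Michaelis-Menten equations in Matlab; as a rough stand-in, here is a stdlib-only Python sketch of the substrate equation dS/dt = -Vmax·S/(Km + S) integrated with the classical fourth-order Runge-Kutta method (all parameter values are invented for illustration):

```python
# Michaelis-Menten substrate decay: dS/dt = -Vmax * S / (Km + S),
# integrated with a fixed-step classical RK4 scheme (no external libraries).

def mm_rate(s, vmax=1.0, km=0.5):
    # right-hand side of the autonomous ODE
    return -vmax * s / (km + s)

def rk4(f, s0, t_end, dt=0.01):
    s, t = s0, 0.0
    while t < t_end - 1e-12:
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
    return s

s_final = rk4(mm_rate, s0=2.0, t_end=1.0)
print(s_final)
```

The result can be checked against the implicit analytical solution Km·ln(S0/S) + (S0 − S) = Vmax·t, which RK4 reproduces to high accuracy at this step size.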
IMAGES: An interactive image processing system
NASA Technical Reports Server (NTRS)
Jensen, J. R.
1981-01-01
The IMAGES interactive image processing system was created specifically for undergraduate remote sensing education in geography. The system is interactive, relatively inexpensive to operate, almost hardware independent, and responsive to numerous users at one time in a time-sharing mode. Most important, it provides a medium whereby theoretical remote sensing principles discussed in lecture may be reinforced in laboratory as students perform computer-assisted image processing. In addition to its use in academic and short course environments, the system has also been used extensively to conduct basic image processing research. The flow of information through the system is discussed including an overview of the programs.
Theoretical Bases of Polymer Photodegradation and Photooxidation,
1987-10-15
…In addition, when such processes are carried out in an atmosphere of air, numerous carbonyl, carboxyl, hydroxyl, etc. groups form along the polymer… photoaging… should be multilateral, with consideration of the following basic processes: 1. Photochemical reactions of the actual…
Molecular Mechanisms of Neuroplasticity: An Expanding Universe.
Gulyaeva, N V
2017-03-01
Biochemical processes in synapses and other neuronal compartments underlie neuroplasticity (functional and structural alterations in the brain enabling adaptation to the environment, learning, memory, as well as rehabilitation after brain injury). This basic molecular level of brain plasticity covers numerous specific proteins (enzymes, receptors, structural proteins, etc.) participating in many coordinated and interacting signal and metabolic processes, their modulation forming a molecular basis for brain plasticity. The articles in this issue are focused on different "hot points" in the research area of biochemical mechanisms supporting neuroplasticity.
The Units Ontology: a tool for integrating units of measurement in science
Gkoutos, Georgios V.; Schofield, Paul N.; Hoehndorf, Robert
2012-01-01
Units are basic scientific tools that render meaning to numerical data. Their standardization and formalization cater for the reporting, exchange, processing, reproducibility, and integration of quantitative measurements. Ontologies facilitate the integration of data and knowledge, allowing interoperability and semantic information processing between diverse biomedical resources and domains. Here, we present the Units Ontology (UO), an ontology currently used in many scientific resources for the standardized description of units of measurement. PMID:23060432
Specialization in the Human Brain: The Case of Numbers
Kadosh, Roi Cohen; Bahrami, Bahador; Walsh, Vincent; Butterworth, Brian; Popescu, Tudor; Price, Cathy J.
2011-01-01
How numerical representation is encoded in the adult human brain is important for a basic understanding of human brain organization, its typical and atypical development, its evolutionary precursors, cognitive architectures, education, and rehabilitation. Previous studies have shown that numerical processing activates the same intraparietal regions irrespective of the presentation format (e.g., symbolic digits or non-symbolic dot arrays). This has led to claims that there is a single format-independent numerical representation. In the current study we used a functional magnetic resonance adaptation paradigm and effective connectivity analysis to re-examine whether numerical processing in the intraparietal sulci is dependent on or independent of the format of the stimuli. We obtained two novel results. First, the whole brain analysis revealed that format change (e.g., from dots to digits), in the absence of a change in magnitude, activated the same intraparietal regions as magnitude change, but to a greater degree. Second, using dynamic causal modeling as a tool to disentangle neuronal specialization across regions that are commonly activated, we found that the connectivity between the left and right intraparietal sulci is format-dependent. Together, this line of results supports the idea that numerical representation is subserved by multiple mechanisms within the same parietal regions. PMID:21808615
CRIB; the mineral resources data bank of the U.S. Geological Survey
Calkins, James Alfred; Kays, Olaf; Keefer, Eleanor K.
1973-01-01
The recently established Computerized Resources Information Bank (CRIB) of the U.S. Geological Survey is expected to play an increasingly important role in the study of United States' mineral resources. CRIB provides a rapid means for organizing and summarizing information on mineral resources and for displaying the results. CRIB consists of a set of variable-length records containing the basic information needed to characterize one or more mineral commodities, a mineral deposit, or several related deposits. The information consists of text, numeric data, and codes. Some topics covered are: name, location, commodity information, geology, production, reserves, potential resources, and references. The data are processed by the GIPSY program, which performs all the processing tasks needed to build, operate, and maintain the CRIB file. The sophisticated retrieval program allows the user to make highly selective searches of the files for words, parts of words, phrases, numeric data, word ranges, numeric ranges, and others, and to interrelate variables by logic statements to any degree of refinement desired. Three print options are available, or the retrieved data can be passed to another program for further processing.
Brand, Matthias; Schiebener, Johannes; Pertl, Marie-Theres; Delazer, Margarete
2014-01-01
Recent models on decision making under risk conditions have suggested that numerical abilities are important ingredients of advantageous decision-making performance, but empirical evidence is still limited. The results of our first study show that logical reasoning and basic mental calculation capacities predict ratio processing and that ratio processing predicts decision making under risk. In the second study, logical reasoning together with executive functions predicted probability processing (numeracy and probability knowledge), and probability processing predicted decision making under risk. These findings suggest that increasing an individual's understanding of ratios and probabilities should lead to more advantageous decisions under risk conditions.
The Effect of Normalization in Violence Video Classification Performance
NASA Astrophysics Data System (ADS)
Ali, Ashikin; Senan, Norhalina
2017-08-01
Data pre-processing is an important part of data mining, and normalization is a pre-processing stage for many problem settings, especially video classification. Video classification is challenging because of heterogeneous content, large variations in video quality, and the complex semantic meanings of the concepts involved. A thorough pre-processing stage that includes normalization therefore helps the robustness of classification performance. Normalization scales all numeric variables into a certain range to make them more meaningful for the later phases of the data mining pipeline. This paper examines the effect of two normalization techniques, Min-Max normalization and Z-score, on violence video classification performance using a Multi-layer Perceptron (MLP) classifier. With Min-Max normalization to the range [0,1], the classification accuracy is almost 98%; with Min-Max normalization to [-1,1] it is 59%, and with Z-score it is 50%.
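The two techniques compared in the paper are standard; a minimal sketch of both (with invented sample data, and using the population standard deviation for the Z-score) is:

```python
# Min-Max scaling into an arbitrary target range [lo, hi], and
# Z-score standardization (zero mean, unit variance), as typically
# applied to numeric features before training a classifier such as an MLP.

def min_max(xs, lo=0.0, hi=1.0):
    x_min, x_max = min(xs), max(xs)
    span = x_max - x_min
    return [lo + (hi - lo) * (x - x_min) / span for x in xs]

def z_score(xs):
    n = len(xs)
    mean = sum(xs) / n
    std = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5  # population std
    return [(x - mean) / std for x in xs]

data = [2.0, 4.0, 6.0, 8.0]
print(min_max(data))          # scaled into [0, 1]
print(min_max(data, -1, 1))   # scaled into [-1, 1]
print(z_score(data))          # zero mean, unit variance
```

Min-Max maps the smallest value to the lower bound and the largest to the upper bound, while Z-score centers the data at zero with unit variance; which one helps depends on the data and the classifier, as the paper's contrasting accuracies show.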
Reigosa-Crespo, Vivian; González-Alemañy, Eduardo; León, Teresa; Torres, Rosario; Mosquera, Raysil; Valdés-Sosa, Mitchell
2013-01-01
The first aim of the present study was to investigate whether numerical effects (the Numerical Distance Effect, Counting Effect, and Subitizing Effect) are domain-specific predictors of mathematics development at the end of elementary school, by exploring whether they explain additional variance in later mathematics fluency after controlling for the effects of general cognitive skills focused on nonnumerical aspects. The second aim was to address the same issues applied to achievement in the mathematics curriculum, which requires more than fluency in calculation. These analyses assess whether the relationships found for fluency generalize to mathematics content beyond fluency in calculation. As a third aim, the domain specificity of the numerical effects was examined by analyzing whether they contribute to the development of reading skills, such as decoding fluency and reading comprehension, after controlling for general cognitive skills and phonological processing. Basic numerical capacities were evaluated in children in 3rd and 4th grade (n=49). Mathematics and reading achievements were assessed in these children one year later. Results showed that the size of the Subitizing Effect was a significant domain-specific predictor of fluency in calculation and also of curricular mathematics achievement, but not of reading skills, assessed at the end of elementary school. Furthermore, the size of the Counting Effect also predicted fluency in calculation, although this association only approached significance. These findings contrast with proposals that the core numerical competencies measured by enumeration will bear little relationship to mathematics achievement. We conclude that basic numerical capacities constitute domain-specific predictors and that they are not exclusively “start-up” tools for the acquisition of mathematics; they continue modulating this learning at the end of elementary school. PMID:24255710
Can responses to basic non-numerical visual features explain neural numerosity responses?
Harvey, Ben M; Dumoulin, Serge O
2017-04-01
Humans and many animals can distinguish between stimuli that differ in numerosity, the number of objects in a set. Human and macaque parietal lobes contain neurons that respond to changes in stimulus numerosity. However, basic non-numerical visual features can affect neural responses to and perception of numerosity, and visual features often co-vary with numerosity. Therefore, it is debated whether numerosity or co-varying low-level visual features underlie neural and behavioral responses to numerosity. To test the hypothesis that non-numerical visual features underlie neural numerosity responses in a human parietal numerosity map, we analyze responses to a group of numerosity stimulus configurations that have the same numerosity progression but vary considerably in their non-numerical visual features. Using ultra-high-field (7T) fMRI, we measure responses to these stimulus configurations in an area of posterior parietal cortex whose responses are believed to reflect numerosity-selective activity. We describe an fMRI analysis method to distinguish between alternative models of neural response functions, following a population receptive field (pRF) modeling approach. For each stimulus configuration, we first quantify the relationships between numerosity and several non-numerical visual features that have been proposed to underlie performance in numerosity discrimination tasks. We then determine how well responses to these non-numerical visual features predict the observed fMRI responses, and compare this to the predictions of responses to numerosity. We demonstrate that a numerosity response model predicts observed responses more accurately than models of responses to simple non-numerical visual features. As such, neural responses in cognitive processing need not reflect simpler properties of early sensory inputs. Copyright © 2017 Elsevier Inc. All rights reserved.
[Symptoms, diagnosis, and treatment of dyscalculia].
Ise, Elena; Schulte-Körne, Gerd
2013-07-01
Children with dyscalculia show deficits in basic numerical processing which cause difficulties in the acquisition of mathematical skills. This article provides an overview of current research findings regarding the symptoms, causes, and prognosis of dyscalculia, and it summarizes recent developments in its diagnosis, early intervention, and treatment. Diagnosis has improved recently because newly developed tests focus not only on the math curriculum but also on the basic skills found to be impaired in dyscalculia. A controversial debate continues with regard to the IQ-achievement discrepancy. International studies have demonstrated the effectiveness of specialized interventions. This article summarizes the research findings from intervention studies, describes different treatment approaches, and discusses implications for clinical practice.
LLL 8080 BASIC-II interpreter user's manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGoldrick, P.R.; Dickinson, J.; Allison, T.G.
1978-04-03
Scientists are finding increased applications for microprocessors as process controllers in their experiments. However, while microprocessors are small and inexpensive, they are difficult to program in machine or assembly language. A high-level language is needed to enable scientists to develop their own microcomputer programs for their experiments on location. Recognizing this need, LLL contracted to have such a language developed. This report describes the resulting LLL BASIC interpreter, which operates with LLL's 8080-based MCS-8 microcomputer system. All numerical operations are done using Advanced Micro Devices' Am9511 arithmetic processor chip, or optionally by using a software simulation of that chip. 1 figure.
NASA Astrophysics Data System (ADS)
Bateev, A. B.; Filippov, V. P.
2017-01-01
The article shows that the computer program Univem MS for Mössbauer spectrum fitting can, in principle, be used as demonstration material when students study disciplines such as atomic and nuclear physics and numerical methods. The program deals with nuclear-physical parameters such as the isomer (or chemical) shift of a nuclear energy level, the interaction of the nuclear quadrupole moment with an electric field, and the interaction of the magnetic moment with the surrounding magnetic field. The basic processing algorithm in such programs is the Least Squares Method. The deviation of the experimental points on a spectrum from the theoretical dependence is determined on concrete examples; in numerical methods this value is characterized as the mean square deviation. The shape of the theoretical lines in the program is defined by Gaussian and Lorentzian distributions. The visualization of material studied in atomic and nuclear physics can be improved by similar programs for Mössbauer spectroscopy, X-ray fluorescence analysis, or X-ray diffraction analysis.
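The least-squares idea behind such spectrum fitting can be sketched as follows. This is an illustrative toy, not the Univem MS implementation: the synthetic spectrum, line parameters, and crude grid search over the line center are all invented for demonstration.

```python
# Least-squares fitting in miniature: choose the model parameter (here the
# line center, playing the role of an isomer shift) that minimizes the sum
# of squared deviations between the data and a Lorentzian absorption line.

def lorentzian(v, center, width, depth):
    # absorption dip of full width `width` at `center`, baseline = 1
    half = width / 2.0
    return 1.0 - depth * half ** 2 / ((v - center) ** 2 + half ** 2)

# synthetic "spectrum" with a known line center of 0.3 mm/s
velocities = [i * 0.05 - 2.0 for i in range(81)]
spectrum = [lorentzian(v, 0.3, 0.25, 0.4) for v in velocities]

def sse(center):
    # sum of squared deviations for a candidate center (width/depth fixed)
    return sum((d - lorentzian(v, center, 0.25, 0.4)) ** 2
               for v, d in zip(velocities, spectrum))

# crude grid search over candidate centers between -1.0 and 1.0 mm/s
best = min((c * 0.01 - 1.0 for c in range(201)), key=sse)
print(best)  # recovers the true center, 0.3
```

Real fitters refine all line parameters simultaneously with an iterative nonlinear least-squares method rather than a grid scan, but the objective being minimized is the same mean square deviation described above.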
Reducing full one-loop amplitudes to scalar integrals at the integrand level
NASA Astrophysics Data System (ADS)
Ossola, Giovanni; Papadopoulos, Costas G.; Pittau, Roberto
2007-02-01
We show how to extract the coefficients of the 4-, 3-, 2- and 1-point one-loop scalar integrals from the full one-loop amplitude of arbitrary scattering processes. In a similar fashion, also the rational terms can be derived. Basically no information on the analytical structure of the amplitude is required, making our method appealing for an efficient numerical implementation.
Material failure modelling in metals at high strain rates
NASA Astrophysics Data System (ADS)
Panov, Vili
2005-07-01
Plate impact tests have been conducted on OFHC Cu using a single-stage gas gun. Using stress gauges supported with PMMA blocks on the back of the target plates, stress-time histories have been recorded. After testing, microstructural observations of the softly recovered spalled OFHC Cu specimens were carried out and the evolution of damage was examined. To account for the physical mechanisms of failure, the concept of thermal activation in material separation during fracture has been adopted as the basic mechanism for the development of this material failure model. With this basic assumption, the proposed model is compatible with the Mechanical Threshold Stress (MTS) model, and in this development it was therefore incorporated into the MTS material model in DYNA3D. In order to analyse the proposed criterion, a series of FE simulations have been performed for OFHC Cu. The numerical results clearly demonstrate the ability of the model to predict the spall process and the experimentally observed tensile damage and failure. It is possible to simulate high strain rate deformation processes and dynamic failure in tension over a wide range of temperatures. The proposed cumulative criterion, introduced in the DYNA3D code, is able to reproduce the ``pull-back'' stresses of the free surface caused by the creation of internal spalling, and enables one to analyse the spalling numerically over a wide range of impact velocities.
Numerical modelling in biosciences using delay differential equations
NASA Astrophysics Data System (ADS)
Bocharov, Gennadii A.; Rihan, Fathalla A.
2000-12-01
Our principal purposes here are (i) to consider, from the perspective of applied mathematics, models of phenomena in the biosciences that are based on delay differential equations and for which numerical approaches are a major tool in understanding their dynamics, (ii) to review the application of numerical techniques to investigate these models. We show that there are prima facie reasons for using such models: (i) they have a richer mathematical framework (compared with ordinary differential equations) for the analysis of biosystem dynamics, (ii) they display better consistency with the nature of certain biological processes and predictive results. We analyze both the qualitative and quantitative role that delays play in basic time-lag models proposed in population dynamics, epidemiology, physiology, immunology, neural networks and cell kinetics. We then indicate suitable computational techniques for the numerical treatment of mathematical problems emerging in the biosciences, comparing them with those implemented by the bio-modellers.
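A minimal example of the time-lag models discussed is Hutchinson's delayed logistic equation dN/dt = r·N(t)(1 − N(t−τ)/K). The sketch below (all parameter values invented for illustration) integrates it with a fixed-step Euler scheme and a history buffer for the lagged term, the simplest numerical treatment of a constant delay:

```python
# Delayed logistic (Hutchinson) equation dN/dt = r*N(t)*(1 - N(t - tau)/K),
# integrated by fixed-step Euler; the delay is handled by keeping the whole
# trajectory and indexing back `tau/dt` steps for the lagged value.

def delayed_logistic(r=0.5, k=1.0, tau=1.0, n0=0.1, t_end=20.0, dt=0.01):
    lag = int(round(tau / dt))           # number of steps in one delay
    history = [n0] * (lag + 1)           # constant initial history on [-tau, 0]
    for _ in range(int(t_end / dt)):
        n_now = history[-1]
        n_lag = history[-1 - lag]        # N(t - tau)
        history.append(n_now + dt * r * n_now * (1.0 - n_lag / k))
    return history

traj = delayed_logistic()
print(traj[-1])  # near the carrying capacity K = 1 for these parameters
```

For r·τ below the Hopf threshold π/2 the equilibrium N = K is stable and the trajectory settles near it, as here; larger delays produce the sustained oscillations that make such models richer than their ODE counterparts.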
Computer Program For Linear Algebra
NASA Technical Reports Server (NTRS)
Krogh, F. T.; Hanson, R. J.
1987-01-01
A collection of routines is provided for basic vector operations. The Basic Linear Algebra Subprograms (BLAS) library is a collection of FORTRAN-callable routines that employ standard techniques to perform the basic operations of numerical linear algebra.
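To show what the most common Level-1 BLAS routines compute, here is a plain-Python sketch of AXPY, DOT, and NRM2 (illustrative only; the actual library is FORTRAN and operates on strided arrays in place):

```python
import math

def daxpy(alpha, x, y):
    """BLAS Level-1 AXPY: alpha*x + y (returned as a new list here)."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def ddot(x, y):
    """BLAS Level-1 DOT: inner product x^T y."""
    return sum(xi * yi for xi, yi in zip(x, y))

def dnrm2(x):
    """BLAS Level-1 NRM2: Euclidean norm of x."""
    return math.sqrt(ddot(x, x))

x, y = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
print(daxpy(2.0, x, y))   # [6.0, 9.0, 12.0]
print(ddot(x, y))         # 32.0
print(dnrm2([3.0, 4.0]))  # 5.0
```

Higher-level algorithms (Gaussian elimination, iterative solvers) are built by composing exactly these primitives, which is why a standardized library pays off.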
ERIC Educational Resources Information Center
Kaufmann, Liane; Handl, Pia; Thony, Brigitte
2003-01-01
In this study, six elementary grade children with developmental dyscalculia were trained individually and in small group settings with a one-semester program stressing basic numerical knowledge and conceptual knowledge. All the children showed considerable and partly significant performance increases on all calculation components. Results suggest…
Instability of a solidifying binary mixture
NASA Technical Reports Server (NTRS)
Antar, B. N.
1982-01-01
An analysis is performed of the stability of a solidifying binary mixture subject to surface tension variation on the free liquid surface. The basic state solution is obtained numerically as a nonstationary function of time. Because of the time dependence of the basic state, the stability analysis is of the global type, utilizing a variational technique. Also, because the basic state is a complex function of both space and time, the stability analysis is performed by numerical means.
Comments on the Development of Computational Mathematics in Czechoslovakia and in the USSR.
1987-03-01
The talk is an invited lecture at the ACM Conference on the History of Scientific and Numeric Computations, May 13-15, 1987, Princeton, New Jersey. It presents some basic subjective observations about the history of numerical methods in Czechoslovakia and in the USSR.
Huang, Jian; Du, Feng-lei; Yao, Yuan; Wan, Qun; Wang, Xiao-song; Chen, Fei-yan
2015-01-01
Distance effect has been regarded as the best established marker of basic numerical magnitude processes and is related to individual mathematical abilities. A larger behavioral distance effect is suggested to be concomitant with lower mathematical achievement in children. However, the relationship between distance effect and superior mathematical abilities is unclear. One could get superior mathematical abilities by acquiring the skill of abacus-based mental calculation (AMC), which can be used to solve calculation problems with exceptional speed and high accuracy. In the current study, we explore the relationship between distance effect and superior mathematical abilities by examining whether and how the AMC training modifies numerical magnitude processing. Thus, mathematical competencies were tested in 18 abacus-trained children (who accepted the AMC training) and 18 non-trained children. Electroencephalography (EEG) waveforms were recorded when these children executed numerical comparison tasks in both Arabic digit and dot array forms. We found that: (a) the abacus-trained group had superior mathematical abilities compared with their peers; (b) distance effects were found both in behavioral results and on EEG waveforms; (c) the distance effect size of the average amplitude on the late negative-going component differed between groups in the digit task, with a larger effect size for abacus-trained children; (d) both the behavioral and EEG distance effects were modulated by the notation. These results revealed that the neural substrates of magnitude processing were modified by AMC training, and suggested that the mechanism of the representation of numerical magnitude for children with superior mathematical abilities differed from that of their peers. In addition, the results provide evidence for a view of non-abstract numerical representation. PMID:26238541
The mathematical modeling of rapid solidification processing. Ph.D. Thesis. Final Report
NASA Technical Reports Server (NTRS)
Gutierrez-Miravete, E.
1986-01-01
The detailed formulation of and the results obtained from a continuum mechanics-based mathematical model of the planar flow melt spinning (PFMS) rapid solidification system are presented and discussed. The numerical algorithm proposed is capable of computing the cooling and freezing rates as well as the fluid flow and capillary phenomena which take place inside the molten puddle formed in the PFMS process. The FORTRAN listings of some of the most useful computer programs and a collection of appendices describing the basic equations used for the modeling are included.
Freddie Fish. A Primary Environmental Study of Basic Numerals, Sets, Ordinals and Shapes.
ERIC Educational Resources Information Center
Kraynak, Ola
This teacher's guide and study guide are an environmental approach to mathematics education in the primary grades. The mathematical studies of the numerals 0-10, ordinals, number sets, and basic shapes - diamond, circle, square, rectangle, and triangle - are developed through the story of Freddie Fish and his search for clean water. The…
Visualization of Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Gerald-Yamasaki, Michael; Hultquist, Jeff; Bryson, Steve; Kenwright, David; Lane, David; Walatka, Pamela; Clucas, Jean; Watson, Velvin; Lasinski, T. A. (Technical Monitor)
1995-01-01
Scientific visualization serves the dual purpose of exploration and exposition of the results of numerical simulations of fluid flow. Along with the basic visualization process which transforms source data into images, there are four additional components to a complete visualization system: Source Data Processing, User Interface and Control, Presentation, and Information Management. The requirements imposed by the desired mode of operation (i.e., real-time, interactive, or batch) and by the source data affect each of these visualization system components. The special requirements imposed by the wide variety and size of the source data provided by numerical simulation of fluid flow present an enormous challenge to the visualization system designer. We describe the visualization system components, including specific visualization techniques, and how the mode of operation and source data requirements affect the construction of computational fluid dynamics visualization systems.
Digital reconstruction of Young's fringes using Fresnel transformation
NASA Astrophysics Data System (ADS)
Kulenovic, Rudi; Song, Yaozu; Renninger, P.; Groll, Manfred
1997-11-01
This paper deals with the digital numerical reconstruction of Young's fringes from laser speckle photography by means of the Fresnel transformation. The physical model of the optical reconstruction of a specklegram is a near-field Fresnel diffraction phenomenon which can be described mathematically by the Fresnel transformation. Therefore, the interference phenomena can be calculated directly by a microcomputer. If, in addition, a CCD camera is used for specklegram recording, the measurement and evaluation process can be carried out entirely digitally. Compared with conventional laser speckle photography, no holographic plates, no wet development process and no optical specklegram reconstruction are needed. These advantages promise wide use in scientific and engineering applications. The basic principle of the numerical reconstruction is described, the effects of experimental parameters on Young's fringes are analyzed, and representative results are presented.
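The single-FFT Fresnel transform underlying such a reconstruction can be sketched as follows (an illustrative implementation assuming numpy, not the authors' code). Two point apertures stand in for the double-exposure speckle pair and produce Young's fringes in the diffraction plane:

```python
import numpy as np

def fresnel_propagate(field, wavelength, dx, z):
    """Single-FFT Fresnel (near-field) propagation of a complex field.

    A standard sketch of the transform: multiply by the quadratic
    phase inside the Fresnel integral, then take a 2-D FFT.
    Constant prefactors are omitted (they do not affect fringe shape).
    """
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    x = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, x)
    q = np.exp(1j * k / (2 * z) * (X**2 + Y**2))
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field * q)))

# Toy "specklegram": two pinholes, 16 pixels apart, interfere to give
# cosine fringes (Young's fringes) whose spacing encodes the separation.
n, dx = 256, 10e-6
field = np.zeros((n, n), dtype=complex)
field[n // 2, n // 2 - 8] = 1.0
field[n // 2, n // 2 + 8] = 1.0
fringes = np.abs(fresnel_propagate(field, 633e-9, dx, 0.1))**2
```

The fringe period in the output plane is inversely proportional to the speckle-pair separation, which is exactly the quantity laser speckle photography extracts.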
Smart algorithms and adaptive methods in computational fluid dynamics
NASA Astrophysics Data System (ADS)
Tinsley Oden, J.
1989-05-01
A review is presented of the use of smart algorithms which employ adaptive methods in processing large amounts of data in computational fluid dynamics (CFD). Smart algorithms use a rationally based set of criteria for automatic decision making in an attempt to produce optimal simulations of complex fluid dynamics problems. The information needed to make these decisions is not known beforehand and evolves in structure and form during the numerical solution of flow problems. Once the code makes a decision based on the available data, the structure of the data may change, and the criteria may be reapplied in order to direct the analysis toward an acceptable end. Intelligent decisions are made by processing vast amounts of data that evolve unpredictably during the calculation. The basic components of adaptive methods and their application to complex problems of fluid dynamics are reviewed: (1) data structures, that is, what approaches are available for modifying the data structures of an approximation so as to reduce errors; (2) error estimation, that is, what techniques exist for estimating error evolution in a CFD calculation; and (3) solvers, that is, what algorithms are available which can function on changing meshes. Numerical examples which demonstrate the viability of these approaches are presented.
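The error-estimation and data-structure components can be illustrated with a deliberately simple 1-D h-refinement sketch (not from the review): an interval is bisected wherever the interpolation error at its midpoint exceeds a tolerance, so the mesh adapts itself to steep features:

```python
import math

def adapt_mesh(f, xs, tol, max_passes=10):
    """1-D h-refinement sketch: bisect any interval whose midpoint value
    deviates from linear interpolation by more than tol."""
    for _ in range(max_passes):
        new_xs, refined = [xs[0]], False
        for a, b in zip(xs, xs[1:]):
            mid = 0.5 * (a + b)
            # Error indicator: interpolation error at the midpoint.
            if abs(f(mid) - 0.5 * (f(a) + f(b))) > tol:
                new_xs.append(mid)
                refined = True
            new_xs.append(b)
        xs = new_xs
        if not refined:       # estimator satisfied everywhere: stop
            break
    return xs

# Refine a uniform 5-point mesh against a steep tanh profile; points
# should cluster around the steep region near x = 0.5.
mesh = adapt_mesh(lambda x: math.tanh(20 * (x - 0.5)),
                  [i / 4 for i in range(5)], tol=1e-2)
```

Real CFD refinement indicators are richer (residuals, adjoint weights), but the decision loop, estimate, refine, re-solve, is the same shape.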
Advanced Transportation Systems, Alternate Propulsion Subsystem Concepts
NASA Technical Reports Server (NTRS)
1997-01-01
An understanding of the basic flow of the subject hybrid model has been gained through this series of testing. Changing injectors (axial vs. radial) and inhibiting the flow between the upstream plenum and the CP section changes the basic flow structure, as evidenced by streamline and velocity contour plots. Numerous shear layer structures were identified in the test configurations; these structures include both standing and traveling vortices which may affect combustion stability. Standing vortices may play a role in the heat addition process as the oxidizer enters the motor, while traveling vortices may be instability mechanisms in themselves. Finally, the flow visualization and LDV measurements give insight into the effects of flow-induced shear layers.
NASA Astrophysics Data System (ADS)
Xue, Xiaochun; Yu, Yonggang
2017-04-01
Numerical analyses have been performed to study the influence of fast depressurization on the wake flow field of the base-bleed unit (BBU) with secondary combustion when the base-bleed projectile is propelled out of the muzzle. Two-dimensional axisymmetric Navier-Stokes equations for a multi-component chemically reactive system are solved by a Fortran program to calculate the coupling of the internal and wake flow fields, taking into account the combustion of the base-bleed propellant and the secondary combustion effect. Based on comparison with experiments, the unsteady variation mechanism and secondary combustion characteristics of the wake flow field during the fast depressurization process are obtained numerically. The results show that in the fast depressurization process, the base pressure of the BBU varies strongly in the first 0.9 ms, then the variation decreases gradually, and after 1.5 ms it remains basically stable. The pressure and temperature of the base-bleed combustion chamber first decrease and then recover. Moreover, after the pressure and temperature drop to their lowest points, external gases flow back into the base-bleed combustion chamber. Also, as the initial pressure decreases, the unsteady process becomes shorter and the temperature gradient in the base-bleed combustion chamber declines, which benefits the combustion of the base-bleed propellant.
Extracting numeric measurements and temporal coordinates from Japanese radiological reports
NASA Astrophysics Data System (ADS)
Imai, Takeshi; Onogi, Yuzo
2004-04-01
Medical records are written mainly in natural language. The focus of this study is narrative radiological reports written in natural Japanese. These reports cannot be used for advanced retrieval, data mining, and so on, unless they are stored in a structured format such as DICOM-SR. The goal is to structure narrative reports progressively, using natural language processing (NLP). Structure has many different levels; for example, DICOM-SR has three established levels: basic text, enhanced, and comprehensive. At the enhanced level, it is necessary to use numerical measurements and spatial and temporal coordinates. In this study, the wording used in the reports was first standardized, dictionaries were organized, and morphological analysis performed. Next, numerical measurements and temporal coordinates were extracted, and the objects to which they referred, analyzed. 10,000 CT and MR reports were separated into 82,122 sentences, and 34,269 of the 36,444 numerical descriptions were tagged. Periods, slashes, hyphens, and parentheses are used ambiguously in the description of enumerated lists, dates, image numbers, and anatomical names, as well as at the end of sentences; to resolve this ambiguity, descriptions were processed in the order date, size, unit, enumerated list, and abbreviation, and the tagged reports were then separated into sentences.
Effective approach to spectroscopy and spectral analysis techniques using Matlab
NASA Astrophysics Data System (ADS)
Li, Xiang; Lv, Yong
2017-08-01
With the development of electronic information, computers and networks, modern educational technology has entered a new era, which has had a great impact on the teaching process. Spectroscopy and spectral analysis is an elective course for Optoelectronic Information Science and Engineering. The teaching objective of this course is to master the basic concepts and principles of spectroscopy and spectral analysis and the basic technical means of spectral testing, and then to let students apply the principles and technology of spectroscopy to study the structure and state of matter and the development of the technology. MATLAB (matrix laboratory) is a multi-paradigm numerical computing environment and fourth-generation programming language developed by MathWorks; it allows matrix manipulation and the plotting of functions and data. Based on teaching practice, this paper summarizes the application of Matlab to the teaching of spectroscopy. This approach is suitable for most current multimedia-assisted teaching in schools.
Numerical model updating technique for structures using firefly algorithm
NASA Astrophysics Data System (ADS)
Sai Kubair, K.; Mohan, S. C.
2018-03-01
Numerical model updating is a technique for updating numerical models of structures in civil, mechanical, automotive, marine, aerospace engineering, etc. The basic concept behind this technique is updating the numerical models to closely match experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In this updating process a response parameter of the structure has to be chosen, which helps to correlate the numerical model with the experimental results obtained. The variables for the updating can be material or geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show a close relationship between the experimental and the numerical models.
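A minimal sketch of the firefly algorithm as an optimizer (illustrative only; the paper's MATLAB implementation and objective are not reproduced here). The toy objective mimics model updating by fitting two hypothetical stiffness/mass parameters to a measured natural frequency:

```python
import math
import random

def firefly_minimize(obj, bounds, n_fireflies=20, n_iter=100,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Minimal firefly algorithm: each firefly moves toward every
    brighter (lower-objective) one, with attractiveness decaying
    with squared distance, plus a small decaying random walk."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_fireflies)]
    for _ in range(n_iter):
        vals = [obj(p) for p in pop]
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if vals[j] < vals[i]:  # firefly j is brighter
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    for d in range(dim):
                        lo, hi = bounds[d]
                        step = beta * (pop[j][d] - pop[i][d]) \
                               + alpha * (rng.random() - 0.5) * (hi - lo)
                        pop[i][d] = min(hi, max(lo, pop[i][d] + step))
        alpha *= 0.97  # cool down the random walk
    best = min(pop, key=obj)
    return best, obj(best)

# Toy "updating" objective: recover stiffness k and mass m so that the
# model frequency sqrt(k/m) matches a measured target (names hypothetical).
target = math.sqrt(3.0 / 1.5)
best, err = firefly_minimize(
    lambda p: (math.sqrt(p[0] / p[1]) - target) ** 2,
    bounds=[(0.1, 10.0), (0.1, 10.0)])
```

In a real updating study the objective would compare measured and computed responses (tip deflection, natural frequencies) from the finite element model rather than a closed-form frequency.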
NASA Technical Reports Server (NTRS)
Wu, S. T.
1987-01-01
Theoretical and numerical modeling of solar activity and its effects on the solar atmosphere within the context of magnetohydrodynamics were examined. Specifically, the scientific objectives were concerned with the physical mechanisms for the flare energy build-up and subsequent release. In addition, transport of this energy to the corona and solar wind was also investigated. Well-posed, physically self-consistent, numerical simulation models that are based upon magnetohydrodynamics were sought. A systematic investigation of the basic processes that determine the macroscopic dynamic behavior of solar and heliospheric phenomena was conducted. A total of twenty-three articles were accepted and published in major journals. The major achievements are summarized.
Modelling and properties of a nonlinear autonomous switching system in fed-batch culture of glycerol
NASA Astrophysics Data System (ADS)
Wang, Juan; Sun, Qingying; Feng, Enmin
2012-11-01
A nonlinear autonomous switching system is proposed to describe coupled fed-batch fermentation with pH as the feedback parameter. We prove the non-Zeno behavior of the switching system and some basic properties of its solution, including existence, uniqueness, boundedness and regularity. Numerical simulation is also carried out, which reveals that the proposed system describes the actual fermentation process properly.
NASA Astrophysics Data System (ADS)
Ludwig, Andreas; Wu, Menghuai; Kharicha, Abdellah
2015-11-01
Macrosegregations, namely compositional inhomogeneities at a scale much larger than the microstructure, are typically classified according to their metallurgical appearance. In ingot castings, they are known as `A' and `V' segregation, negative cone segregation, and positive secondary pipe segregation. There exists `inverse' segregation at casting surfaces and `centerline' segregation in continuously cast slabs and blooms. Macrosegregation forms if relative motion occurs between the solute-enriched or -depleted melt and dendritic solid structures. Four basic mechanisms are known for the occurrence of macrosegregation. In recent years, the numerical description of the combination of these mechanisms has become possible, and so a tool has emerged which can be used effectively to gain a deeper understanding of the process details responsible for the formation of the above-mentioned macrosegregation appearances. Based on the most sophisticated numerical models, we consequently associate the four basic formation mechanisms with the physical phenomena occurring during (i) DC-casting of copper-based alloys, (ii) DC-casting of aluminum-based alloys, (iii) continuous casting of steel, and (iv) ingot casting of steel.
Basic mathematical rules are encoded by primate prefrontal cortex neurons
Bongard, Sylvia; Nieder, Andreas
2010-01-01
Mathematics is based on highly abstract principles, or rules, of how to structure, process, and evaluate numerical information. If and how mathematical rules can be represented by single neurons, however, has remained elusive. We therefore recorded the activity of individual prefrontal cortex (PFC) neurons in rhesus monkeys required to switch flexibly between “greater than” and “less than” rules. The monkeys performed this task with different numerical quantities and generalized to set sizes that had not been presented previously, indicating that they had learned an abstract mathematical principle. The most prevalent activity recorded from randomly selected PFC neurons reflected the mathematical rules; purely sensory- and memory-related activity was almost absent. These data show that single PFC neurons have the capacity to represent flexible operations on most abstract numerical quantities. Our findings support PFC network models implementing specific “rule-coding” units that control the flow of information between segregated input, memory, and output layers. We speculate that these neuronal circuits in the monkey lateral PFC could readily have been adopted in the course of primate evolution for syntactic processing of numbers in formalized mathematical systems. PMID:20133872
Simanowski, Stefanie; Krajewski, Kristin
2017-08-10
This study assessed the extent to which executive functions (EF), according to their factor structure in 5-year-olds (N = 244), influenced early quantity-number competencies, arithmetic fluency, and mathematics school achievement throughout first and second grades. A confirmatory factor analysis resulted in updating as a first, and inhibition and shifting as a combined second factor. In the structural equation model, updating significantly affected knowledge of the number word sequence, suggesting a facilitatory effect on basic encoding processes in numerical materials that can be learnt purely by rote. Shifting and inhibition significantly influenced quantity to number word linkages, indicating that these processes promote developing a profound understanding of numbers. These results show the supportive role of specific EF for specific aspects of a numerical foundation. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.
Analysis and numerical simulation research of the heating process in the oven
NASA Astrophysics Data System (ADS)
Chen, Yawei; Lei, Dingyou
2016-10-01
How to use an oven to bake food well is a central concern of oven designers and users. To this end, this paper analyses the heat distribution in the oven based on its basic operating principles and simulates the temperature distribution on the rack section. A differential equation model of the temperature distribution in the pan during oven operation is constructed from heat radiation and heat conduction, and, following the idea of using cellular automata to simulate the heat transfer process, ANSYS software is used to carry out numerical simulations for rectangular, round-cornered rectangular, elliptical and circular pans, giving the instantaneous temperature distribution for each pan shape. The temperature distributions of the rectangular and circular pans show that the product overcooks easily at the corners and edges of rectangular pans but not in a round pan.
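The cellular-automaton view of heat transfer mentioned above can be sketched with an explicit finite-difference update on a small grid (an illustration, not the paper's ANSYS model); it qualitatively reproduces the corner-overcooking effect:

```python
def step_heat(grid, alpha=0.2, boundary=200.0):
    """One explicit finite-difference / CA update of 2-D heat diffusion
    on a pan cross-section whose edges are held at `boundary`."""
    n, m = len(grid), len(grid[0])
    new = [[boundary] * m for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            # Discrete Laplacian of the four-neighbour stencil.
            lap = (grid[i-1][j] + grid[i+1][j] + grid[i][j-1]
                   + grid[i][j+1] - 4 * grid[i][j])
            new[i][j] = grid[i][j] + alpha * lap
    return new

# Rectangular pan: interior starts cool, edges held hot. A corner cell
# borders two hot edges, so it heats faster than the center.
pan = [[25.0] * 9 for _ in range(9)]
for _ in range(50):
    pan = step_heat(pan)
corner, center = pan[1][1], pan[4][4]
```

With `alpha <= 0.25` the explicit update is stable; the corner-versus-center temperature gap is the discrete analogue of the overcooked corners observed for rectangular pans.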
Recent advances in basic and clinical nanomedicine.
Morrow, K John; Bawa, Raj; Wei, Chiming
2007-09-01
Nanomedicine is a global business enterprise. Industry and governments clearly are beginning to envision nanomedicine's enormous potential. A clear definition of nanotechnology is an issue that requires urgent attention. This problem exists because nanotechnology represents a cluster of technologies, each of which may have different characteristics and applications. Although numerous novel nanomedicine-related applications are under development or nearing commercialization, the process of converting basic research in nanomedicine into commercially viable products will be long and difficult. Although realization of the full potential of nanomedicine may be years or decades away, recent advances in nanotechnology-related drug delivery, diagnosis, and drug development are beginning to change the landscape of medicine. Site-specific targeted drug delivery and personalized medicine are just a few concepts that are on the horizon.
Factors related to the implementation and diffusion of new technologies: a pilot study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1979-06-01
In order to develop an understanding of how government intervention affects the processes of implementation and diffusion of new technologies, case studies of 14 technologies were carried out: automobiles; broadcast radio; frozen foods; black and white TV; color TV; polio vaccine; supersonic transport; fluoridation of water supplies; computer-aided instruction; the basic oxygen process for steel; numerical control in manufacturing; digital computers; lasers; and integrated circuits. The key actors, their motivations for implementing/adopting the technology (or not doing so), the interactions among the key actors, and how these affected implementation/adoption are examined.
A software tool for modeling and simulation of numerical P systems.
Buiu, Catalin; Arsene, Octavian; Cipu, Corina; Patrascu, Monica
2011-03-01
A P system represents a distributed and parallel bio-inspired computing model in which the basic data structures are multisets or strings. Numerical P systems have been recently introduced; they use numerical variables and local programs (or evolution rules), usually in a deterministic way. They may find interesting applications in areas such as computational biology, process control or robotics. The first simulator of numerical P systems (SNUPS) has been designed, implemented and made available to the scientific community by the authors of this paper. SNUPS allows a wide range of applications, from modeling and simulation of ordinary differential equations, to the use of membrane systems as computational blocks of cognitive architectures, and as controllers for autonomous mobile robots. This paper describes the functioning of a numerical P system and presents an overview of SNUPS capabilities together with an illustrative example. SNUPS is freely available to researchers as a standalone application and may be downloaded from a dedicated website, http://snups.ics.pub.ro/, which includes a user manual and sample membrane structures. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
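The functioning of a numerical P system can be sketched as follows (a simplified, deterministic single-membrane step; SNUPS itself is not reproduced, and the toy program is hypothetical). Each program evaluates a production function on its variables, which are consumed (reset to zero), and repartitions the result among target variables:

```python
def np_step(env, programs):
    """One synchronous step of a deterministic numerical P system.

    env: dict mapping variable names to values. Each program is a
    tuple (inputs, f, shares): f maps the input values to a
    production, which is repartitioned among the target variables
    in proportion to the coefficients in `shares`.
    """
    productions, consumed = [], set()
    for inputs, f, shares in programs:
        productions.append((f(*[env[v] for v in inputs]), shares))
        consumed.update(inputs)
    new_env = dict(env)
    for v in consumed:          # consumed variables are reset to zero
        new_env[v] = 0.0
    for prod, shares in productions:
        total = sum(c for _, c in shares)
        for target, c in shares:
            new_env[target] += prod * c / total
    return new_env

# Toy program: produce 2*x1 + y, then split it 1:3 between x1 and y.
env = {"x1": 2.0, "y": 4.0}
env = np_step(env, [(("x1", "y"),
                     lambda x, y: 2 * x + y,
                     [("x1", 1), ("y", 3)])])
print(env)  # {'x1': 2.0, 'y': 6.0}
```

Here the production 2*2 + 4 = 8 is distributed as 8*(1/4) = 2 to x1 and 8*(3/4) = 6 to y, which is the production-repartition cycle that gives numerical P systems their dynamics.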
NASA Astrophysics Data System (ADS)
Wuttke, Manfred W.
2017-04-01
At LIAG, we use numerical models to develop and enhance understanding of coupled transport processes and to predict the dynamics of the system under consideration. Topics include geothermal heat utilization, subrosion processes, and spontaneous underground coal fires. Although the details make it inconvenient, if not impossible, to apply a single code implementation to all systems, their investigations follow similar paths: they all depend on the solution of coupled transport equations. We thus saw a need for a modular code system with open access for the various communities, to maximize the shared synergistic effects. To this purpose we develop the oops! (open object-oriented parallel solutions) toolkit, a C++ class library for the numerical solution of mathematical models of coupled thermal, hydraulic and chemical processes. This is used to develop problem-specific libraries like acme (amendable coal-fire modeling exercise), a class library for the numerical simulation of coal fires, and applications like kobra (Kohlebrand, German for coal fire), a numerical simulation code for standard coal-fire models. The basic principle of the oops! code system is the provision of data types for the description of space- and time-dependent data fields, the description of terms of partial differential equations (PDEs), and their discretisation and solving methods. Coupling of different processes, each described by its particular PDE, is modeled by an automatic timescale-ordered operator-splitting technique. acme is a derived coal-fire-specific application library depending on oops!. If specific functionalities of general interest are implemented and tested, they will be assimilated into the main oops! library. Interfaces to external pre- and post-processing tools are easily implemented. Thus a construction kit is formed which can be arbitrarily amended.
With the kobra application constructed with acme, we study the processes and propagation of shallow coal-seam fires, in particular in Xinjiang, China, and analyze and interpret results from lab experiments.
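The operator-splitting idea used to couple processes can be illustrated with classical Strang splitting (a generic sketch; the toolkit described above uses its own timescale-ordered variant in C++). Two exactly solvable decay processes stand in for coupled transport terms:

```python
import math

def strang_split_step(state, step_A, step_B, dt):
    """Strang operator splitting: advance coupled processes A and B
    over one time step by composing half-step A, full-step B,
    half-step A (second-order accurate in dt)."""
    state = step_A(state, dt / 2)
    state = step_B(state, dt)
    return step_A(state, dt / 2)

# Toy coupling: du/dt = -a*u (process A, e.g. cooling) plus
# du/dt = -b*u (process B, e.g. chemical consumption), each solved
# exactly over its substep.
def decay(rate):
    return lambda u, h: u * math.exp(-rate * h)

a, b, dt = 1.0, 0.5, 0.01
u = 1.0
for _ in range(100):
    u = strang_split_step(u, decay(a), decay(b), dt)
# For this linear, commuting pair the split solution is exact:
# u(1) = exp(-(a + b)) = exp(-1.5)
```

For non-commuting operators (e.g. advection coupled with reaction) the splitting introduces an O(dt^2) error per unit time, which is why the ordering of substeps by timescale matters.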
McCaffery, Kirsten J; Dixon, Ann; Hayen, Andrew; Jansen, Jesse; Smith, Sian; Simpson, Judy M
2012-01-01
To test optimal graphic risk communication formats for presenting small probabilities using graphics with a denominator of 1000 to adults with lower education and literacy. A randomized experimental study, which took place in adult basic education classes in Sydney, Australia. The participants were 120 adults with lower education and literacy. An experimental computer-based manipulation compared 1) pictographs in 2 forms, shaded "blocks" and unshaded "dots"; and 2) bar charts across different orientations (horizontal/vertical) and numerator size (small <100, medium 100-499, large 500-999). Accuracy (size of error) and ease of processing (reaction time) were assessed on a gist task (estimating the larger chance of survival) and a verbatim task (estimating the size of difference). Preferences for different graph types were also assessed. Accuracy on the gist task was very high across all conditions (>95%) and not tested further. For the verbatim task, optimal graph type depended on the numerator size. For small numerators, pictographs resulted in fewer errors than bar charts (blocks: odds ratio [OR] = 0.047, 95% confidence interval [CI] = 0.023-0.098; dots: OR = 0.049, 95% CI = 0.024-0.099). For medium and large numerators, bar charts were more accurate (e.g., medium dots: OR = 4.29, 95% CI = 2.9-6.35). Pictographs were generally processed faster for small numerators (e.g., blocks: 14.9 seconds v. bars: 16.2 seconds) and bar charts for medium or large numerators (e.g., large blocks: 41.6 seconds v. 26.7 seconds). Vertical formats were processed slightly faster than horizontal graphs with no difference in accuracy. Most participants preferred bar charts (64%); however, there was no relationship with performance. For adults with low education and literacy, pictographs are likely to be the best format to use when displaying small numerators (<100/1000) and bar charts for larger numerators (>100/1000).
48 CFR 204.7003 - Basic PII number.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Basic PII number. 204.7003... OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Uniform Procurement Instrument Identification Numbers 204.7003 Basic PII number. (a) Elements of a number. The number consists of 13 alpha-numeric characters...
48 CFR 204.7003 - Basic PII number.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Basic PII number. 204.7003... OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Uniform Procurement Instrument Identification Numbers 204.7003 Basic PII number. (a) Elements of a number. The number consists of 13 alpha-numeric characters...
The Emergence of Contextual Social Psychology.
Pettigrew, Thomas F
2018-07-01
Social psychology experiences recurring so-called "crises." This article maintains that these episodes actually mark advances in the discipline; these "crises" have enhanced relevance and led to greater methodological and statistical sophistication. New statistical tools have allowed social psychologists to begin to achieve a major goal: placing psychological phenomena in their larger social contexts. This growing trend is illustrated with numerous recent studies; they demonstrate how cultures and social norms moderate basic psychological processes. Contextual social psychology is finally emerging.
Muley, Pranjali D; Boldor, Dorin
2012-01-01
The use of advanced microwave technology for biodiesel production from vegetable oil is relatively new. Microwave dielectric heating increases process efficiency and reduces reaction time. Microwave heating depends on various factors such as material properties (dielectric and thermo-physical), frequency of operation and system design. Although lab-scale results are promising, it is important to study these parameters and optimize the process before scaling up. A numerical modeling approach can be applied to predict heating and temperature profiles, including at larger scales, so the process can be studied for optimization without actually performing the experiments, reducing the amount of experimental work required. A basic numerical model of continuous electromagnetic heating of biodiesel precursors was developed. A finite element model was built in COMSOL Multiphysics 4.2 by coupling the electromagnetic problem with the fluid flow and heat transfer problems; the chemical reaction was not taken into account. Material dielectric properties were obtained experimentally, while the thermal properties were obtained from the literature (all properties were temperature dependent). The model was tested at two power levels, 4000 W and 4700 W, at a constant flow rate of 840 ml/min. The electric field, electromagnetic power density flow and temperature profiles were studied. The resulting temperature profiles were validated by comparison with temperatures measured at specific locations in the experiment, and the results were in good agreement with the experimental data.
Numerical study on non-locally reacting behavior of nacelle liners incorporating drainage slots
NASA Astrophysics Data System (ADS)
Chen, Chao; Li, Xiaodong; Thiele, Frank
2018-06-01
For acoustic liners used in current commercial nacelles, drainage slots are incorporated in the partition walls between closely packed cavities in order to prevent liquid from accumulating in the resonators. Recently, an experimental study by Busse-Gerstengarbe et al. showed that the cell interaction introduced by drainage slots causes an additional dissipation peak that grows with the size of the slot. However, how drainage slots alter the damping process is still not fully understood. Therefore, a numerical study based on computational aeroacoustic methods is carried out to investigate the mechanism behind the changed attenuation characteristics due to drainage slots in the presence of grazing incident sound waves of low or high intensity. Different slot configurations are designed based on the generic non-locally reacting liner model adopted in the experimental investigation. Both 2-D and 3-D numerical simulations of slit resonators alone are carried out. Numerical results indicate that the extra peak is the result of a resonance excited in the second cavity at a specific frequency. Under high sound pressure level incoming waves, the basic characteristics of the acoustic performance remain unchanged; however, vortex shedding occurs at the resonances around both the slits and the drainage slot. Vorticity contours show that the connection of the two coupled cavities decreases the strength of vortex shedding around the basic Helmholtz resonance due to higher energy reflection, while the cell interaction significantly increases the vorticity magnitude near the extra resonant frequency. Finally, a semi-empirical model is derived to predict the frequency of the extra attenuation peak.
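The basic Helmholtz resonance mentioned above follows the classical single-resonator formula f0 = (c/2π)·sqrt(A/(V·l_eff)). The sketch below evaluates it for made-up cell dimensions; the geometry and the end-correction factor are illustrative assumptions, not the liner configuration used in the study.

```python
import math

# Helmholtz resonance frequency of a single cavity-plus-orifice,
# the basic building block of the liner cells discussed above.
# Dimensions are illustrative, not the geometry used in the paper.
def helmholtz_frequency(c, orifice_area, cavity_volume, neck_length, orifice_radius):
    # end correction for an unflanged circular orifice (~0.85 r per side)
    l_eff = neck_length + 1.7 * orifice_radius
    return (c / (2.0 * math.pi)) * math.sqrt(orifice_area / (cavity_volume * l_eff))

f0 = helmholtz_frequency(c=343.0,                        # speed of sound [m/s]
                         orifice_area=math.pi * 0.0005**2,
                         cavity_volume=8.0e-6,           # ~2 cm cube
                         neck_length=0.001,
                         orifice_radius=0.0005)
print(round(f0))
```

A semi-empirical model for the extra peak would modify the effective volume and neck terms to account for the slot coupling between adjacent cells.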
NASA Astrophysics Data System (ADS)
Kagami, Hiroyuki
2007-01-01
We have proposed, and subsequently refined, a dynamical model of the drying process of a polymer solution coated on a flat substrate for flat polymer film fabrication, and have presented the results at several meetings. Although the basic equations of the dynamical model are characteristically nonlinear, the character of this nonlinearity has not yet been studied in depth. In this paper we first derive nonlinear equations from the dynamical model of the drying process of a polymer solution. We then present results of numerical simulations of these nonlinear equations and consider the roles of the various parameters, some of which indirectly govern the strength of the departure from equilibrium. Through this study we approach the essential character of the nonlinearity in the non-equilibrium drying process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dang, Liem X.; Schenter, Gregory K.
To enhance our understanding of the solvent exchange mechanism in liquid methanol, we report a systematic study of this process using molecular dynamics simulations. We use transition state theory, the Impey-Madden-McDonald method, the reactive flux method, and Grote-Hynes theory to compute the rate constants for this process. Solvent coupling was found to dominate, resulting in a significantly small transmission coefficient. We predict a positive activation volume for the methanol exchange process. The essential features of the dynamics of the system, as well as the pressure dependence, are recovered from a Generalized Langevin Equation description of the dynamics. We find that the dynamics and response to anharmonicity can be decomposed into two time regimes, one corresponding to short-time response (< 0.1 ps) and one to long-time response (> 5 ps). An effective characterization of the process results from launching dynamics from the planar hypersurface corresponding to Grote-Hynes theory, which improves the numerical convergence of correlation functions. This work was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. The calculations were carried out using computer resources provided by the Office of Basic Energy Sciences.
Division of energy biosciences: Annual report and summaries of FY 1995 activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-04-01
The mission of the Division of Energy Biosciences is to support research that advances the fundamental knowledge necessary for the future development of biotechnologies related to the Department of Energy's mission. The departmental civilian objectives include effective and efficient energy production, energy conservation, environmental restoration, and waste management. The Energy Biosciences program emphasizes research in the microbiological and plant sciences, as these understudied areas offer numerous scientific opportunities to dramatically influence environmentally sensible energy production and conservation. The research supported is focused on the basic mechanisms affecting plant productivity, the conversion of biomass and other organic materials into fuels and chemicals by microbial systems, and the ability of biological systems to replace energy-intensive or pollutant-producing processes. The Division also addresses the increasing number of new opportunities arising at the interface of biology with other basic energy-related sciences, such as the biosynthesis of novel materials and the influence of soil organisms on geological processes.
Positioning your business in the marketplace.
Lachman, V D
1996-01-01
Marketing the quality, cost-effective service delivered by advanced practice nurses (APNs) requires savvy in marketing principles. The basic principles covered are market segmentation, target (niche) marketing, and the four Ps of the marketing mix: product, price, promotion, and place. The marketing process is presented along with examples. APNs' ability to successfully market their skills requires that they "position" themselves in the prospective buyer's mind. After a brief description of the customer's mind-set, the focus shifts specifically to promotion--marketing in action. Numerous no-cost/low-cost ideas are included.
2012-01-01
[Fragmentary OCR excerpt] ... numerical oil spill model validation showing the need for improved model parameterizations of basic oil spill processes (Cheng et al., 2010). ... Modelling the bidirectional reflectance distribution function (BRDF) of seawater polluted by an oil film. Optics Express, 12, 1671-1676.
1984-04-01
[Fragmentary OCR excerpt] ... compounds time, understanding, and coordination problems. Just too many people in the process. In fact, there are numerous versions of a task with the ... sometimes this caused interruptions. This was further compounded by the fact that the analysis was started and then stopped, when the first analyst ... productive. Discrepancies - The major discrepancy was the use of Anti-Seize Compound. It is applied to components as a light, thin coat to prevent ...
ICASE semiannual report, April 1 - September 30, 1989
NASA Technical Reports Server (NTRS)
1990-01-01
The Institute conducts unclassified basic research in applied mathematics, numerical analysis, and computer science in order to extend and improve problem-solving capabilities in science and engineering, particularly in aeronautics and space. The major categories of the current Institute for Computer Applications in Science and Engineering (ICASE) research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification problems, with emphasis on effective numerical methods; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers. ICASE reports are considered to be primarily preprints of manuscripts that have been submitted to appropriate research journals or that are to appear in conference proceedings.
A Survey of Terrestrial Approaches to the Challenge of Lunar Dust Containment
NASA Technical Reports Server (NTRS)
Aguilera, Tatiana; Perry, Jay L.
2009-01-01
Numerous technical challenges exist to successfully extend lunar surface exploration beyond the tantalizing first steps of Apollo. Among these is the challenge of lunar dust intrusion into the cabin environment. Addressing this challenge includes the design of barriers to intrusion as well as techniques for removing the dust from the cabin atmosphere. Opportunities exist for adapting approaches employed in dusty industrial operations and pristine manufacturing environments to cabin environmental quality maintenance applications. A survey of process technologies employed by the semiconductor, pharmaceutical, food processing, and mining industries offers insight into basic approaches that may be suitable for adaptation to lunar surface exploration applications.
Study of connectivity in student teams by observation of their learning processes
NASA Astrophysics Data System (ADS)
Pacheco, Patricio H.; Correa, Rafael D.
2016-05-01
A registration procedure based on tracking data from the classroom activities of students formed into teams and immersed in basic learning processes, particularly in the physical sciences, is presented. For the analysis of the data, various mathematical tools are applied to deliver results in the form of numerical indicators linking the students' learning, performance, and quality of relational bonds to the transformation of their emotions. The range of variables under observation and further study, which is influenced by the evolution of the emotions of the different teams of students, also covers the traditional approaches of information delivery from outside (teaching by lecture) or from inside each team (pupils' abilities), through instructional materials that enhance learning by inquiry and persuasion.
Classical nucleation theory in the phase-field crystal model
NASA Astrophysics Data System (ADS)
Jreidini, Paul; Kocher, Gabriel; Provatas, Nikolas
2018-04-01
A full understanding of polycrystalline materials requires studying the process of nucleation, a thermally activated phase transition that typically occurs at atomistic scales. The numerical modeling of this process is problematic for traditional numerical techniques: commonly used phase-field methods' resolution does not extend to the atomic scales at which nucleation takes place, while atomistic methods such as molecular dynamics are incapable of scaling to the mesoscale regime where late-stage growth and structure formation take place following earlier nucleation. Consequently, it is of interest to examine nucleation in the more recently proposed phase-field crystal (PFC) model, which attempts to bridge the atomic and mesoscale regimes in microstructure simulations. In this work, we numerically calculate homogeneous liquid-to-solid nucleation rates and incubation times in the simplest version of the PFC model, for various parameter choices. We show that the model naturally exhibits qualitative agreement with the predictions of classical nucleation theory (CNT) despite a lack of some explicit atomistic features presumed in CNT. We also examine the early appearance of lattice structure in nucleating grains, finding disagreement with some basic assumptions of CNT. We then argue that a quantitatively correct nucleation theory for the PFC model would require extending CNT to a multivariable theory.
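For reference, the CNT quantities the paper compares against can be written down directly: a spherical nucleus has critical radius r* = 2γ/Δg_v and barrier ΔG* = 16πγ³/(3Δg_v²), entering the nucleation rate as J = J0·exp(−ΔG*/kT). The values below are illustrative toy numbers, not parameters of the PFC model:

```python
import math

# Classical nucleation theory (CNT) in its textbook form; the numbers
# below are illustrative, not fitted to the PFC model in the paper.
def cnt_barrier(gamma, dg_v):
    """Critical radius and nucleation barrier for a spherical nucleus.

    gamma : solid-liquid interfacial energy [J/m^2]
    dg_v  : bulk free-energy gain per unit volume (positive) [J/m^3]
    """
    r_star = 2.0 * gamma / dg_v                        # critical radius
    dG_star = 16.0 * math.pi * gamma**3 / (3.0 * dg_v**2)
    return r_star, dG_star

kB = 1.380649e-23                    # Boltzmann constant [J/K]
gamma, dg_v, T = 0.1, 1.0e8, 900.0   # assumed toy values
r_star, dG_star = cnt_barrier(gamma, dg_v)
rate_factor = math.exp(-dG_star / (kB * T))  # exponential factor in J = J0*exp(-dG*/kT)
print(r_star, dG_star, rate_factor)
```

The exponential sensitivity of the rate to γ and Δg_v is exactly what makes quantitative comparison with simulation delicate.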
Geary, David C.; Hoard, Mary K.; Nugent, Lara; Rouder, Jeffrey N.
2015-01-01
The relation between performance on measures of algebraic cognition and both acuity of the approximate number system (ANS) and memory for addition facts was assessed for 171 (92 girls) 9th graders, controlling for parental education, sex, reading achievement, speed of numeral processing, fluency of symbolic number processing, intelligence, and the central executive component of working memory. The algebraic tasks assessed accuracy in placing x,y pairs in the coordinate plane, speed and accuracy of expression evaluation, and schema memory for algebra equations. ANS acuity was related to accuracy of placements in the coordinate plane and expression evaluation, but not schema memory. Frequency of fact-retrieval errors was related to schema memory but not to coordinate plane or expression evaluation accuracy. The results suggest the ANS may contribute to, or be influenced by, spatial-numerical and numerical-only quantity judgments in algebraic contexts, whereas difficulties in committing addition facts to long-term memory may presage slow formation of memories for the basic structure of algebra equations. More generally, the results suggest that different brain and cognitive systems are engaged during the learning of different components of algebraic competence, controlling for demographic and domain-general abilities. PMID:26255604
Modeling of single film bubble and numerical study of the plateau structure in foam system
NASA Astrophysics Data System (ADS)
Sun, Zhong-guo; Ni, Ni; Sun, Yi-jie; Xi, Guang
2018-02-01
The single-film bubble has a special geometry with a certain amount of gas shrouded by a thin layer of liquid film under the surface tension force both on the inside and outside surfaces of the bubble. Based on the mesh-less moving particle semi-implicit (MPS) method, a single-film double-gas-liquid-interface surface tension (SDST) model is established for the single-film bubble, which characteristically has totally two gas-liquid interfaces on both sides of the film. Within this framework, the conventional surface free energy surface tension model is improved by using a higher order potential energy equation between particles, and the modification results in higher accuracy and better symmetry properties. The complex interface movement in the oscillation process of the single-film bubble is numerically captured, as well as typical flow phenomena and deformation characteristics of the liquid film. In addition, the basic behaviors of the coalescence and connection process between two and even three single-film bubbles are studied, and the cases with bubbles of different sizes are also included. Furthermore, the classic plateau structure in the foam system is reproduced and numerically proved to be in the steady state for multi-bubble connections.
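A quick consistency check available for such a film: because it has two gas-liquid interfaces, the Young-Laplace overpressure inside a single-film bubble is 4σ/R rather than the 2σ/R of a bubble with a single interface. A minimal sketch with water-like values (assumed for illustration, not taken from the simulations):

```python
# Pressure excess inside a single-film ("soap") bubble: the film has two
# gas-liquid interfaces, so the Young-Laplace jump is 4*sigma/R rather
# than the 2*sigma/R of a bubble with a single interface.
def film_bubble_overpressure(sigma, radius):
    """sigma: surface tension [N/m]; radius: bubble radius [m]."""
    return 4.0 * sigma / radius

# water-like surface tension, 1 cm bubble (illustrative values)
dp = film_bubble_overpressure(sigma=0.0728, radius=0.01)
print(dp)   # ~29.1 Pa
```

A surface tension model such as the SDST model described above should reproduce this static pressure jump for an equilibrium bubble before being trusted for oscillation dynamics.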
Spline approximation, Part 1: Basic methodology
NASA Astrophysics Data System (ADS)
Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar
2018-04-01
In engineering geodesy point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.
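The truncated-polynomial construction described in Part 1 can be sketched as an ordinary least-squares problem: a linear spline is a straight line plus one truncated term (x − κ)₊ per knot. The example below, with a single assumed knot, recovers the exact coefficients of a piecewise-linear curve:

```python
import numpy as np

# Least-squares fit of a linear spline in the truncated power basis
#   f(x) = b0 + b1*x + sum_k c_k * max(x - knot_k, 0)
# (the "truncated polynomials" of Part 1; knot locations are assumed fixed)
def truncated_linear_spline_fit(x, y, knots):
    X = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(x - k, 0.0) for k in knots])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, X

x = np.linspace(0.0, 4.0, 41)
y = np.abs(x - 2.0)                  # piecewise-linear "point cloud"
coef, X = truncated_linear_spline_fit(x, y, knots=[2.0])
print(np.round(coef, 6))             # recovers [2, -1, 2] exactly here
```

Higher-degree splines add x², x³, … and (x − κ)₊³ terms in the same way; the numerical-stability issues this basis raises are the subject of Part 3.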
Hydroforming Of Patchwork Blanks — Numerical Modeling And Experimental Validation
NASA Astrophysics Data System (ADS)
Lamprecht, Klaus; Merklein, Marion; Geiger, Manfred
2005-08-01
In comparison to the commonly applied technology of tailored blanks the concept of patchwork blanks offers a number of additional advantages. Potential application areas for patchwork blanks in automotive industry are e.g. local reinforcements of automotive closures, structural reinforcements of rails and pillars as well as shock towers. But even if there is a significant application potential for patchwork blanks in automobile production, industrial realization of this innovative technique is decelerated due to a lack of knowledge regarding the forming behavior and the numerical modeling of patchwork blanks. Especially for the numerical simulation of hydroforming processes, where one part of the forming tool is replaced by a fluid under pressure, advanced modeling techniques are required to ensure an accurate prediction of the blanks' forming behavior. The objective of this contribution is to provide an appropriate model for the numerical simulation of patchwork blanks' forming processes. Therefore, different finite element modeling techniques for patchwork blanks are presented. In addition to basic shell element models a combined finite element model consisting of shell and solid elements is defined. Special emphasis is placed on the modeling of the weld seam. For this purpose the local mechanical properties of the weld metal, which have been determined by means of Martens-hardness measurements and uniaxial tensile tests, are integrated in the finite element models. The results obtained from the numerical simulations are compared to experimental data from a hydraulic bulge test. In this context the focus is laid on laser- and spot-welded patchwork blanks.
The modeling of MMI structures for signal processing applications
NASA Astrophysics Data System (ADS)
Le, Thanh Trung; Cahill, Laurence W.
2008-02-01
Microring resonators are promising candidates for photonic signal processing applications. However, almost all resonators reported so far use directional couplers or 2×2 multimode interference (MMI) couplers as the coupling element between the ring and the bus waveguides. In this paper, instead of 2×2 couplers, novel structures for microring resonators based on 3×3 MMI couplers are proposed. The characteristics of the device are derived using the modal propagation method, and the device parameters are optimized numerically. Optical switches and filters on silicon-on-insulator (SOI) have then been designed and analyzed. This device can become a new basic component for further applications in optical signal processing. The paper concludes with some further examples of photonic signal processing circuits based on MMI couplers.
Summary of research in applied mathematics, numerical analysis, and computer sciences
NASA Technical Reports Server (NTRS)
1986-01-01
The major categories of current ICASE research programs addressed include: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effective numerical methods; computational problems in engineering and physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and computer systems and software, especially vector and parallel computers.
Numerical Characterization of Piezoceramics Using Resonance Curves
Pérez, Nicolás; Buiochi, Flávio; Brizzotti Andrade, Marco Aurélio; Adamowski, Julio Cezar
2016-01-01
Piezoelectric materials characterization is a challenging problem involving physical concepts, electrical and mechanical measurements and numerical optimization techniques. Piezoelectric ceramics such as Lead Zirconate Titanate (PZT) belong to the 6 mm symmetry class, which requires five elastic, three piezoelectric and two dielectric constants to fully represent the material properties. If losses are considered, the material properties can be represented by complex numbers. In this case, 20 independent material constants are required to obtain the full model. Several numerical methods have been used to adjust the theoretical models to the experimental results. The continuous improvement of the computer processing ability has allowed the use of a specific numerical method, the Finite Element Method (FEM), to iteratively solve the problem of finding the piezoelectric constants. This review presents the recent advances in the numerical characterization of 6 mm piezoelectric materials from experimental electrical impedance curves. The basic strategy consists in measuring the electrical impedance curve of a piezoelectric disk, and then combining the Finite Element Method with an iterative algorithm to find a set of material properties that minimizes the difference between the numerical impedance curve and the experimental one. Different methods to validate the results are also discussed. Examples of characterization of some common piezoelectric ceramics are presented to show the practical application of the described methods. PMID:28787875
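The iterative strategy can be miniaturized as follows: replace the FEM solver with a cheap analytic impedance model and the optimizer with a one-parameter grid search. The series-RLC stand-in and all numbers below are illustrative, not the actual FEM-based procedure of the review:

```python
import math

# Toy stand-in for the FEM-based fitting loop: a series-RLC impedance
# model replaces the finite element solver, and a 1-D grid search
# replaces the optimizer. All values are illustrative.
def z_abs(f, R, L, C):
    w = 2.0 * math.pi * f
    return math.hypot(R, w * L - 1.0 / (w * C))

freqs = [f * 1000.0 for f in range(80, 121)]     # 80-120 kHz sweep
R_true, L_true, C_true = 50.0, 10e-3, 0.25e-9
measured = [z_abs(f, R_true, L_true, C_true) for f in freqs]

def misfit(L):
    # sum of squared differences between model and "measured" curves
    return sum((z_abs(f, R_true, L, C_true) - m) ** 2
               for f, m in zip(freqs, measured))

# crude grid search over the one free parameter
L_best = min((8e-3 + i * 1e-5 for i in range(400)), key=misfit)
print(L_best)
```

In the real procedure the model curve comes from a FEM simulation with up to 20 complex material constants, so a proper iterative optimizer replaces the grid search, but the misfit-minimization structure is the same.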
Py4CAtS - Python tools for line-by-line modelling of infrared atmospheric radiative transfer
NASA Astrophysics Data System (ADS)
Schreier, Franz; García, Sebastián Gimeno
2013-05-01
Py4CAtS — Python scripts for Computational ATmospheric Spectroscopy is a Python re-implementation of the Fortran infrared radiative transfer code GARLIC, where compute-intensive code sections utilize the Numeric/Scientific Python modules for highly optimized array-processing. The individual steps of an infrared or microwave radiative transfer computation are implemented in separate scripts to extract lines of relevant molecules in the spectral range of interest, to compute line-by-line cross sections for given pressure(s) and temperature(s), to combine cross sections to absorption coefficients and optical depths, and to integrate along the line-of-sight to transmission and radiance/intensity. The basic design of the package, numerical and computational aspects relevant for optimization, and a sketch of the typical workflow are presented.
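The final steps of that workflow, combining cross sections into an optical depth and a transmission, amount to the Beer-Lambert law. The sketch below is schematic and uses invented cross sections and densities; it does not reproduce the Py4CAtS API:

```python
import math

# Schematic version of the pipeline's last steps: absorption
# coefficient -> optical depth -> transmission (Beer-Lambert law).
# Cross sections and densities are made-up numbers, not line-list data.
def transmission(cross_sections_cm2, number_densities_cm3, path_cm):
    # absorption coefficient [1/cm] = sum over molecules of n * sigma
    k = sum(n * s for s, n in zip(cross_sections_cm2, number_densities_cm3))
    tau = k * path_cm        # optical depth along the line of sight
    return math.exp(-tau)    # monochromatic transmission

t = transmission([1.0e-22, 5.0e-23], [2.5e19, 1.0e19], path_cm=1.0e3)
print(t)
```

A real line-by-line code evaluates this per spectral grid point and integrates over an inhomogeneous (pressure- and temperature-stratified) path rather than a uniform one.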
Hiniker, Alexis
2016-01-01
Despite reports of mathematical talent in autism spectrum disorders (ASD), little is known about basic number processing abilities in affected children. We investigated number sense, the ability to rapidly assess quantity information, in 36 children with ASD and 61 typically developing controls. Numerical acuity was assessed using symbolic (Arabic numerals) as well as non-symbolic (dot array) formats. We found significant impairments in non-symbolic acuity in children with ASD, but symbolic acuity was intact. Symbolic acuity mediated the relationship between non-symbolic acuity and mathematical abilities only in children with ASD, indicating a distinctive role for symbolic number sense in the acquisition of mathematical proficiency in this group. Our findings suggest that symbolic systems may help children with ASD organize imprecise information. PMID:26659551
Modelling and simulation of cure in pultrusion processes
NASA Astrophysics Data System (ADS)
Tucci, F.; Rubino, F.; Paradiso, V.; Carlone, P.; Valente, R.
2017-10-01
A trial-and-error approach is not suitable for optimizing the pultrusion process, because of the long start-up times and the wide range of possible matrix-reinforcement combinations. Numerical approaches, on the other hand, offer a suitable way to test different parameter configurations. One of the main tasks in pultrusion is to obtain complete and homogeneous resin polymerization. The formation of cross-links between polymer chains is thermally induced, but it produces strong exothermic heat generation, so the thermal and chemical phenomena are mutually coupled and the two problems must be modelled together. The mathematical model used in this work treats the composite as a lumped material whose thermal and mechanical properties are evaluated as functions of the resin and fiber properties. The numerical scheme is based on a quasi-static approach in a three-dimensional Eulerian domain, describing both the thermal and the chemical phenomena. The resulting data are used in a simplified C.H.I.L.E. (Cure Hardening Instantaneous Linear Elastic) model to compute the mechanical properties of the resin fraction in the pultruded profile. The two combined approaches yield a numerical model that accounts for the normal (no-penetration) and tangential (viscous/frictional) interactions between die and profile, the pulling force, and the hydrostatic pressure of the liquid resin, in order to evaluate the stress and strain fields induced by the process within the pultruded profile. The numerical models were implemented in the ABAQUS finite element suite by means of several user subroutines (in Fortran) that extend the basic capabilities of the software.
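The thermally induced cross-linking described above is commonly represented by an autocatalytic cure-kinetics model, dα/dt = A·exp(−E/RT)·αᵐ·(1−α)ⁿ. The sketch below integrates such a model with explicit Euler at constant temperature; the kinetic constants are assumed for illustration, not the resin data of the study:

```python
import math

# Minimal cure-kinetics sketch: an autocatalytic model integrated with
# explicit Euler at constant temperature. Kinetic constants are assumed,
# not the resin characterization used in the paper.
def cure_profile(T_kelvin, t_end_s, dt=0.1,
                 A=5.0e6, E=60.0e3, m=0.5, n=1.5, R=8.314):
    k = A * math.exp(-E / (R * T_kelvin))   # Arrhenius rate constant
    alpha, t, history = 1.0e-3, 0.0, []     # small seed avoids alpha=0 stall
    while t < t_end_s:
        alpha += dt * k * (alpha ** m) * ((1.0 - alpha) ** n)
        alpha = min(alpha, 1.0)             # degree of cure cannot exceed 1
        t += dt
        history.append((t, alpha))
    return history

profile = cure_profile(T_kelvin=430.0, t_end_s=300.0)
print(profile[-1])   # degree of cure approaching 1 near the die exit
```

In the coupled problem the exothermic term of this equation feeds back into the heat equation, which is why the thermal and chemical fields must be solved together.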
NASA Technical Reports Server (NTRS)
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. It is supplied in a portable FORTRAN version and in Assembler code versions for IBM 370, UNIVAC 1100, and CDC 6000 series computers.
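To illustrate the kind of primitives the library standardizes, here are pure-Python stand-ins for two BLAS Level 1 operations (the real routines are optimized FORTRAN/Assembler, known by names such as SAXPY/DAXPY and SDOT/DDOT):

```python
# Pure-Python stand-ins for two BLAS Level 1 routines, to illustrate the
# kind of primitive the library standardizes (the real routines are
# optimized FORTRAN/Assembler).
def axpy(a, x, y):
    """y := a*x + y, the BLAS 'axpy' operation."""
    return [a * xi + yi for xi, yi in zip(x, y)]

def dot(x, y):
    """Inner product x . y, the BLAS 'dot' operation."""
    return sum(xi * yi for xi, yi in zip(x, y))

x, y = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
print(axpy(2.0, x, y))   # [6.0, 9.0, 12.0]
print(dot(x, y))         # 32.0
```

Building higher-level algorithms on such standardized primitives is what lets a program gain vendor-tuned performance without any source changes.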
Network-level reproduction number and extinction threshold for vector-borne diseases.
Xue, Ling; Scoglio, Caterina
2015-06-01
The basic reproduction number of deterministic models is an essential quantity to predict whether an epidemic will spread or not. Thresholds for disease extinction contribute crucial knowledge of disease control, elimination, and mitigation of infectious diseases. Relationships between basic reproduction numbers of two deterministic network-based ordinary differential equation vector-host models, and extinction thresholds of corresponding stochastic continuous-time Markov chain models are derived under some assumptions. Numerical simulation results for malaria and Rift Valley fever transmission on heterogeneous networks are in agreement with analytical results without any assumptions, reinforcing that the relationships may always exist and proposing a mathematical problem for proving existence of the relationships in general. Moreover, numerical simulations show that the basic reproduction number does not monotonically increase or decrease with the extinction threshold. Consistent trends of extinction probability observed through numerical simulations provide novel insights into mitigation strategies to increase the disease extinction probability. Research findings may improve understandings of thresholds for disease persistence in order to control vector-borne diseases.
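For the special case of a vector-host model with no direct host-to-host or vector-to-vector transmission, the next-generation matrix has zero diagonal and the basic reproduction number reduces to a geometric mean of the two cross-transmission terms. The entries below are illustrative, not calibrated to malaria or Rift Valley fever:

```python
import math

# Basic reproduction number for a single-patch vector-host model:
# with no host-to-host or vector-to-vector transmission the
# next-generation matrix is K = [[0, k_hv], [k_vh, 0]], whose spectral
# radius is sqrt(k_hv * k_vh). Entry values are illustrative only.
def r0_vector_host(k_hv, k_vh):
    """k_hv: host infections caused by one infectious vector;
    k_vh: vector infections caused by one infectious host."""
    return math.sqrt(k_hv * k_vh)

r0 = r0_vector_host(k_hv=4.0, k_vh=0.64)
print(r0)   # 1.6 -> the epidemic can spread (R0 > 1)
```

On a heterogeneous network, as in the paper, K becomes a block matrix over patches and R0 must be computed as its spectral radius numerically.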
Interpolation on the manifold of K component GMMs.
Kim, Hyunwoo J; Adluru, Nagesh; Banerjee, Monami; Vemuri, Baba C; Singh, Vikas
2015-12-01
Probability density functions (PDFs) are fundamental objects in mathematics with numerous applications in computer vision, machine learning and medical imaging. The feasibility of basic operations, such as computing the distance between two PDFs and estimating the mean of a set of PDFs, is a direct function of the representation we choose to work with. In this paper, we study the Gaussian mixture model (GMM) representation of PDFs, motivated by its numerous attractive features: (1) GMMs are arguably more interpretable than, say, square-root parameterizations; (2) the model complexity can be explicitly controlled by the number of components; and (3) they are already widely used in many applications. The main contributions of this paper are numerical algorithms that enable basic operations on such objects while strictly respecting their underlying geometry. For instance, when operating with a set of K-component GMMs, a first-order expectation is that the result of simple operations like interpolation and averaging should itself be a K-component GMM. The literature provides very little guidance on enforcing such requirements systematically. It turns out that these tasks are important internal modules for the analysis and processing of a field of ensemble average propagators (EAPs), common in diffusion-weighted magnetic resonance imaging. We provide proof-of-principle experiments showing how the proposed interpolation algorithms can facilitate the statistical analysis of such data, essential to many neuroimaging studies. Separately, we also derive interesting connections of our algorithm with functional spaces of Gaussians, which may be of independent interest.
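The difficulty the paper addresses can be seen in the naive baseline: component-wise linear interpolation of GMM parameters (assuming a fixed component correspondence) does keep K components, but ignores the geometry of the underlying density space that the paper's algorithms respect. A minimal 1-D sketch of that baseline:

```python
# Naive component-wise interpolation between two K-component GMMs
# (1-D, with a fixed component correspondence). This is the simplistic
# baseline the paper improves on: it keeps K components, but ignores
# the geometry of the underlying space of densities.
def interpolate_gmm(gmm_a, gmm_b, t):
    """Each GMM is a list of (weight, mean, variance) triples; 0<=t<=1."""
    out = []
    for (wa, ma, va), (wb, mb, vb) in zip(gmm_a, gmm_b):
        out.append(((1 - t) * wa + t * wb,
                    (1 - t) * ma + t * mb,
                    (1 - t) * va + t * vb))
    return out

a = [(0.5, 0.0, 1.0), (0.5, 4.0, 1.0)]
b = [(0.3, 1.0, 2.0), (0.7, 5.0, 0.5)]
print(interpolate_gmm(a, b, 0.5))
```

Note that even this baseline silently assumes the components of the two mixtures are matched; a geometry-respecting scheme must handle correspondence and the manifold structure of the parameters jointly.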
RCHILD - an R-package for flexible use of the landscape evolution model CHILD
NASA Astrophysics Data System (ADS)
Dietze, Michael
2014-05-01
Landscape evolution models provide powerful approaches to numerically assess earth surface processes, quantify rates of landscape change, infer sediment transfer rates, estimate sediment budgets, investigate the consequences of changes in external drivers on a geomorphic system, provide spatio-temporal interpolations between known landscape states, or test conceptual hypotheses. CHILD (Channel-Hillslope Integrated Landscape Development Model) is one of the most widely used models of landscape change, in particular for interacting tectonic and geomorphologic processes. Running CHILD from the command line and working with the model output can be a rather awkward task (static model control via a text input file, only numeric output in text files). The package RCHILD is a collection of functions for the free statistical software R that helps to use CHILD in a flexible, dynamic and user-friendly way. The included functions allow creating maps, real-time scenes, animations and further thematic plots from model output. The model input files can be modified dynamically and, hence, (feedback-related) changes in external factors can be implemented iteratively. Output files can be written to common formats that can be readily imported into standard GIS software. This contribution presents the basic functionality of the model CHILD as visualised and modified by the package. A rough overview of the available functions is given. Application examples help to illustrate the great potential of numerical modelling of geomorphologic processes.
Association between basic numerical abilities and mathematics achievement.
Sasanguie, Delphine; De Smedt, Bert; Defever, Emmy; Reynvoet, Bert
2012-06-01
Various measures have been used to investigate number processing in children, including a number comparison or a number line estimation task. The present study aimed to examine whether and to which extent these different measures of number representation are related to performance on a curriculum-based standardized mathematics achievement test in kindergarteners, first, second, and sixth graders. Children completed a number comparison task and a number line estimation task with a balanced set of symbolic (Arabic digits) and non-symbolic (dot patterns) stimuli. Associations with mathematics achievement were observed for the symbolic measures. Although the association with number line estimation was consistent over grades, the association with number comparison was much stronger in kindergarten compared to the other grades. The current data indicate that a good knowledge of the numerical meaning of Arabic digits is important for children's mathematical development and that particularly the access to the numerical meaning of symbolic digits rather than the representation of number per se is important. © 2011 The British Psychological Society.
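Accuracy in number line estimation tasks like those above is commonly scored as percent absolute error (PAE): the distance between estimate and target relative to the line's range. The trial data in the sketch below are hypothetical:

```python
def percent_absolute_error(target, estimate, line_max):
    """Standard accuracy score for number line estimation:
    |estimate - target| as a percentage of the line's range."""
    return abs(estimate - target) / line_max * 100.0

# Hypothetical (target, estimate) responses on a 0-100 number line
trials = [(18, 25), (50, 47), (81, 70)]
pae = [percent_absolute_error(t, e, 100) for t, e in trials]
print(sum(pae) / len(pae))   # mean PAE across trials
```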
Basic mechanisms in the laser control of non-Markovian dynamics
NASA Astrophysics Data System (ADS)
Puthumpally-Joseph, R.; Mangaud, E.; Chevet, V.; Desouter-Lecomte, M.; Sugny, D.; Atabek, O.
2018-03-01
Referring to a qualitative analogy with a Fano-type model, we develop a comprehensive basic mechanism for the laser control of the non-Markovian bath response and fully implement it in a realistic control scheme for strongly coupled open quantum systems. Converged hierarchical equations of motion are worked out to numerically solve the master equation of a spin-boson Hamiltonian to reach the reduced electronic density matrix of a heterojunction in the presence of strong terahertz laser pulses. Robust and efficient control is achieved, increasing by a factor of 2 the non-Markovianity measured by the time evolution of the volume of accessible states. The consequences of such fields on the central system's populations and coherence are examined, with emphasis on the relation between the increase of non-Markovianity and the slowing down of decoherence processes.
Mutaf Yıldız, Belde; Sasanguie, Delphine; De Smedt, Bert; Reynvoet, Bert
2018-01-01
Home numeracy has been shown to play an important role in children’s mathematical performance. However, findings are inconsistent as to which home numeracy activities are related to which mathematical skills. The present study disentangled between various mathematical abilities that were previously masked by the use of composite scores of mathematical achievement. Our aim was to shed light on the specific associations between home numeracy and various mathematical abilities. The relationships between kindergartners’ home numeracy activities, their basic number processing and calculation skills were investigated. Participants were 128 kindergartners (Mage = 5.43 years, SD = 0.29, range: 4.88–6.02 years) and their parents. The children completed non-symbolic and symbolic comparison tasks, non-symbolic and symbolic number line estimation tasks, mapping tasks (enumeration and connecting), and two calculation tasks. Their parents completed a home numeracy questionnaire. Results indicated small but significant associations between formal home numeracy activities that involved more explicit teaching efforts (i.e., identifying numerals, counting) and children’s enumeration skills. There was no correlation between formal home numeracy activities and non-symbolic number processing. Informal home numeracy activities that involved more implicit teaching attempts, such as “playing games” and “using numbers in daily life,” were (weakly) correlated with calculation and symbolic number line estimation, respectively. The present findings suggest that disentangling between various basic number processing and calculation skills in children might unravel specific relations with both formal and informal home numeracy activities. This might explain earlier reported contradictory findings on the association between home numeracy and mathematical abilities. PMID:29623055
Numerical Relativity, Black Hole Mergers, and Gravitational Waves: Part I
NASA Technical Reports Server (NTRS)
Centrella, Joan
2012-01-01
This series of 3 lectures will present recent developments in numerical relativity, and their applications to simulating black hole mergers and computing the resulting gravitational waveforms. In this first lecture, we introduce the basic ideas of numerical relativity, highlighting the challenges that arise in simulating gravitational wave sources on a computer.
The functional architectures of addition and subtraction: Network discovery using fMRI and DCM.
Yang, Yang; Zhong, Ning; Friston, Karl; Imamura, Kazuyuki; Lu, Shengfu; Li, Mi; Zhou, Haiyan; Wang, Haiyuan; Li, Kuncheng; Hu, Bin
2017-06-01
The neuronal mechanisms underlying arithmetic calculations are not well understood but the differences between mental addition and subtraction could be particularly revealing. Using fMRI and dynamic causal modeling (DCM), this study aimed to identify the distinct neuronal architectures engaged by the cognitive processes of simple addition and subtraction. Our results revealed significantly greater activation during subtraction in regions along the dorsal pathway, including the left inferior frontal gyrus (IFG), middle portion of dorsolateral prefrontal cortex (mDLPFC), and supplementary motor area (SMA), compared with addition. Subsequent analysis of the underlying changes in connectivity, with DCM, revealed a common circuit processing basic (numeric) attributes and the retrieval of arithmetic facts. However, DCM showed that addition was more likely to engage (numeric) retrieval-based circuits in the left hemisphere, while subtraction tended to draw on (magnitude) processing in bilateral parietal cortex, especially the right intraparietal sulcus (IPS). Our findings endorse previous hypotheses about the differences in strategic implementation, dominant hemisphere, and the neuronal circuits underlying addition and subtraction. Moreover, for simple arithmetic, our connectivity results suggest that subtraction calls on more complex processing than addition: auxiliary phonological, visual, and motor processes, for representing numbers, were engaged by subtraction, relative to addition. Hum Brain Mapp 38:3210-3225, 2017. © 2017 Wiley Periodicals, Inc.
Implementation of a modular software system for multiphysical processes in porous media
NASA Astrophysics Data System (ADS)
Naumov, Dmitri; Watanabe, Norihiro; Bilke, Lars; Fischer, Thomas; Lehmann, Christoph; Rink, Karsten; Walther, Marc; Wang, Wenqing; Kolditz, Olaf
2016-04-01
Subsurface georeservoirs are a candidate technology for the large-scale energy storage required as part of the transition to renewable energy sources. The increased use of the subsurface results in competing interests and possible impacts on protected entities. To optimize and plan the use of the subsurface in large-scale scenario analyses, powerful numerical frameworks are required that aid process understanding and can capture the coupled thermal (T), hydraulic (H), mechanical (M), and chemical (C) processes with high computational efficiency. Due to the multitude of different couplings between the basic T, H, M, and C processes and the necessity to implement new numerical schemes, the development focus has moved to the software's modularity. The decreased coupling between the components results in two major advantages: easier addition of specialized processes and improvement of the code's testability and therefore its quality. The idea of modularization is implemented on several levels, in addition to library-based separation of the previous code version, by using generalized algorithms available in the Standard Template Library and the Boost library, relying on efficient implementations of linear algebra solvers, using concepts when designing new types, and localizing frequently accessed data structures. This procedure shows certain benefits for a flexible high-performance framework applied to the analysis of multipurpose georeservoirs.
Unsteady numerical simulations of the stability and dynamics of flames
NASA Technical Reports Server (NTRS)
Kailasanath, K.; Patnaik, G.; Oran, E. S.
1995-01-01
In this report we describe the research performed at the Naval Research Laboratory in support of the NASA Microgravity Science and Applications Program over the past three years (from Feb. 1992) with emphasis on the work performed since the last microgravity combustion workshop. The primary objective of our research is to develop an understanding of the differences in the structure, stability, dynamics and extinction of flames in earth gravity and in microgravity environments. Numerical simulations, in which the various physical and chemical processes can be independently controlled, can significantly advance our understanding of these differences. Therefore, our approach is to use detailed time-dependent, multi-dimensional, multispecies numerical models to perform carefully designed computational experiments. The basic issues we have addressed, a general description of the numerical approach, and a summary of the results are described in this report. More detailed discussions are available in the papers published which are referenced herein. Some of the basic issues we have addressed recently are (1) the relative importance of wall losses and gravity on the extinguishment of downward-propagating flames; (2) the role of hydrodynamic instabilities in the formation of cellular flames; (3) effects of gravity on burner-stabilized flames, and (4) effects of radiative losses and chemical-kinetics on flames near flammability limits. We have also expanded our efforts to include hydrocarbon flames in addition to hydrogen flames and to perform simulations in support of other on-going efforts in the microgravity combustion sciences program. Modeling hydrocarbon flames typically involves a larger number of species and a much larger number of reactions when compared to hydrogen. In addition, more complex radiation models may also be needed. 
In order to efficiently compute such complex flames recent developments in parallel computing have been utilized to develop a state-of-the-art parallel flame code. This is discussed below in some detail after a brief discussion of the numerical models.
NASA Technical Reports Server (NTRS)
Baldwin, B. S.; Maccormack, R. W.; Deiwert, G. S.
1975-01-01
The time-splitting explicit numerical method of MacCormack is applied to separated turbulent boundary layer flow problems. Modifications of this basic method are developed to counter difficulties associated with complicated geometry and severe numerical resolution requirements of turbulence model equations. The accuracy of solutions is investigated by comparison with exact solutions for several simple cases. Procedures are developed for modifying the basic method to improve the accuracy. Numerical solutions of high-Reynolds-number separated flows over an airfoil and shock-separated flows over a flat plate are obtained. A simple mixing length model of turbulence is used for the transonic flow past an airfoil. A nonorthogonal mesh of arbitrary configuration facilitates the description of the flow field. For the simpler geometry associated with the flat plate, a rectangular mesh is used, and solutions are obtained based on a two-equation differential model of turbulence.
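The predictor-corrector structure of MacCormack's method can be sketched on the linear advection equation, a far simpler setting than the report's turbulent boundary layer flows; grid and pulse parameters below are illustrative:

```python
import numpy as np

def maccormack_advection(u, c, dx, dt, steps):
    """MacCormack predictor-corrector for u_t + c u_x = 0 (periodic BCs).

    Forward space difference in the predictor, backward in the corrector;
    a minimal sketch of the scheme, not the report's full solver.
    """
    lam = c * dt / dx
    for _ in range(steps):
        # predictor: forward difference on the current field
        up = u - lam * (np.roll(u, -1) - u)
        # corrector: backward difference on the predicted field
        u = 0.5 * (u + up - lam * (up - np.roll(up, 1)))
    return u

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.exp(-200 * (x - 0.3) ** 2)
u1 = maccormack_advection(u0.copy(), c=1.0, dx=x[1] - x[0], dt=0.005, steps=100)
print(x[np.argmax(u1)])   # Gaussian pulse advected from x = 0.3
```

At CFL number 0.5, as here, the scheme is stable and second-order accurate; after time 0.5 the pulse peak should sit near x = 0.8.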
The VIDA Framework as an Education Tool: Leveraging Volcanology Data for Educational Purposes
NASA Astrophysics Data System (ADS)
Faied, D.; Sanchez, A. (on behalf of SSP08 VAPOR Project Team)
2009-04-01
While numerous global initiatives exist to address the potential hazards posed by volcanic eruption events and assess impacts from a civil security viewpoint, there does not yet exist a single, unified, international system of early warning and hazard tracking for eruptions. Numerous gaps exist in the risk reduction cycle, from data collection, to data processing, and finally dissemination of salient information to relevant parties. As part of the 2008 International Space University's Space Studies Program, a detailed gap analysis of the state of volcano disaster risk reduction was undertaken, and this paper presents the principal results. This gap analysis considered current sensor technologies, data processing algorithms, and utilization of data products by various international organizations. Recommendations for strategies to minimize or eliminate certain gaps are also provided. In the effort to address the gaps, a framework evolved at system level. This framework, known as VIDA, is a tool to develop user requirements for civil security in hazardous contexts, and a candidate system concept for a detailed design phase. While the basic intention of VIDA is to support disaster risk reduction efforts, there are several methods of leveraging raw science data to support education across a wide demographic. Basic geophysical data could be used to educate school children about the characteristics of volcanoes, satellite mappings could support informed growth and development of societies in at-risk areas, and raw sensor data could contribute to a wide range of university-level research projects. Satellite maps, basic geophysical data, and raw sensor data are combined and accessible in a way that allows the relationships between these data types to be explored and used in a training environment.
Such a resource naturally lends itself to research efforts in the subject but also research in operational tools, system architecture, and human/machine interaction in civil protection or emergency scenarios.
The minimal residual QR-factorization algorithm for reliably solving subset regression problems
NASA Technical Reports Server (NTRS)
Verhaegen, M. H.
1987-01-01
A new algorithm to solve subset regression problems is described, called the minimal residual QR factorization algorithm (MRQR). This scheme performs a QR factorization with a new column pivoting strategy based on the change in the residual of the least squares problem. Furthermore, it is demonstrated that this basic scheme can be extended in a numerically efficient way to combine the advantages of existing numerical procedures, such as the singular value decomposition, with those of more classical statistical procedures, such as stepwise regression. This extension is presented as an advisory expert system that guides the user in solving the subset regression problem. The advantages of the new procedure are highlighted by a numerical example.
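The residual-driven idea behind such a pivoting strategy can be approximated by greedy forward selection, where each step adds the column that most reduces the least squares residual. This is a simplified stand-in, not the MRQR algorithm itself, and the data are synthetic:

```python
import numpy as np

def greedy_subset(X, y, k):
    """Greedy forward selection: at each step add the column that most
    reduces the least squares residual. A simplified stand-in for a
    residual-driven pivoting strategy, not the MRQR algorithm."""
    n, p = X.shape
    selected = []
    best_res = np.inf
    for _ in range(k):
        best, best_res = None, np.inf
        for j in range(p):
            if j in selected:
                continue
            cols = X[:, selected + [j]]
            coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
            r = np.linalg.norm(y - cols @ coef)
            if r < best_res:
                best, best_res = j, r
        selected.append(best)
    return selected, best_res

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 6))
y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + 0.01 * rng.standard_normal(40)
subset, resid = greedy_subset(X, y, 2)
print(sorted(subset), resid)   # the two truly active columns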
Photonic single nonlinear-delay dynamical node for information processing
NASA Astrophysics Data System (ADS)
Ortín, Silvia; San-Martín, Daniel; Pesquera, Luis; Gutiérrez, José Manuel
2012-06-01
An electro-optical system with a delay loop based on semiconductor lasers is investigated for information processing by performing numerical simulations. This system can replace a complex network of many nonlinear elements for the implementation of Reservoir Computing. We show that a single nonlinear-delay dynamical system has the basic properties to perform as reservoir: short-term memory and separation property. The computing performance of this system is evaluated for two prediction tasks: Lorenz chaotic time series and nonlinear auto-regressive moving average (NARMA) model. We sweep the parameters of the system to find the best performance. The results achieved for the Lorenz and the NARMA-10 tasks are comparable to those obtained by other machine learning methods.
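One of the benchmarks mentioned, the NARMA-10 task, is defined by a standard tenth-order recurrence driven by uniform random input. A generator sketch, using the coefficients commonly quoted for this benchmark, is:

```python
import numpy as np

def narma10(steps, seed=0):
    """Generate the NARMA-10 benchmark series used to evaluate reservoirs.

    Standard recurrence: y(t+1) = 0.3 y(t) + 0.05 y(t) sum_{i=0..9} y(t-i)
                                  + 1.5 u(t-9) u(t) + 0.1, with u ~ U[0, 0.5].
    """
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, steps)
    y = np.zeros(steps)
    for t in range(9, steps - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * y[t - 9:t + 1].sum()
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y

u, y = narma10(200)
print(y[-5:])   # a reservoir is trained to predict y from u
```

A reservoir's prediction error on this series (typically a normalized RMSE) is what the abstract's parameter sweep evaluates.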
Numerical grid generation in computational field simulations. Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soni, B.K.; Thompson, J.F.; Haeuser, J.
1996-12-31
To enhance CFS technology to its next level of applicability (i.e., to create acceptance of CFS in integrated product and process development involving multidisciplinary optimization), the basic requirements are: rapid turn-around time, reliable and accurate simulation, affordability, and appropriate linkage to other engineering disciplines. In response to this demand, there has been considerable growth in grid generation related research activities involving automation, parallel processing, linkage with CAD-CAM systems, CFS with dynamic motion and moving boundaries, and strategies and algorithms associated with multi-block structured, unstructured, hybrid, hexahedral, and Cartesian grids, along with applicability to various disciplines including biomedical, semiconductor, geophysical, ocean modeling, and multidisciplinary optimization.
CFD Investigation of Pollutant Emission in Can-Type Combustor Firing Natural Gas, LNG and Syngas
NASA Astrophysics Data System (ADS)
Hasini, H.; Fadhil, SSA; Mat Zian, N.; Om, NI
2016-03-01
A CFD investigation of the flow, combustion process, and pollutant emission of natural gas, liquefied natural gas (LNG), and syngas of different compositions is carried out. The combustor is a can-type combustor commonly used in thermal power plant gas turbines. The investigation emphasizes the comparison of pollutant emissions, in particular CO2 and NOx, between the different fuels. The numerical calculation of the basic flow and combustion process is done within the framework of ANSYS Fluent with appropriate model assumptions. Prediction of pollutant species concentrations at the combustor exit shows a significant reduction of CO2 and NOx for syngas combustion compared to conventional natural gas and LNG combustion.
GPU-accelerated computation of electron transfer.
Höfinger, Siegfried; Acocella, Angela; Pop, Sergiu C; Narumi, Tetsu; Yasuoka, Kenji; Beu, Titus; Zerbetto, Francesco
2012-11-05
Electron transfer is a fundamental process that can be studied with the help of computer simulation. The underlying quantum mechanical description renders the problem a computationally intensive application. In this study, we probe the graphics processing unit (GPU) for suitability to this type of problem. Time-critical components are identified via profiling of an existing implementation and several different variants are tested involving the GPU at increasing levels of abstraction. A publicly available library supporting basic linear algebra operations on the GPU turns out to accelerate the computation approximately 50-fold with minor dependence on actual problem size. The performance gain does not compromise numerical accuracy and is of significant value for practical purposes. Copyright © 2012 Wiley Periodicals, Inc.
The physics of proton therapy.
Newhauser, Wayne D; Zhang, Rui
2015-04-21
The physics of proton therapy has advanced considerably since it was proposed in 1946. Today analytical equations and numerical simulation methods are available to predict and characterize many aspects of proton therapy. This article reviews the basic aspects of the physics of proton therapy, including proton interaction mechanisms, proton transport calculations, the determination of dose from therapeutic and stray radiations, and shielding design. The article discusses underlying processes as well as selected practical experimental and theoretical methods. We conclude by briefly speculating on possible future areas of research of relevance to the physics of proton therapy.
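One basic quantity in proton transport calculations, the range in water, is often approximated by the Bragg-Kleeman rule R = alpha * E^p. The fit constants below are commonly quoted values from the literature, not taken from this review, and give approximate ranges at therapeutic energies:

```python
def proton_range_cm(energy_mev, alpha=0.0022, p=1.77):
    """Bragg-Kleeman rule R = alpha * E**p for proton range in water.

    alpha (cm/MeV^p) and p are commonly quoted fit values; they are
    approximations, not constants from the reviewed article.
    """
    return alpha * energy_mev ** p

for e in (100, 150, 200):
    print(f"{e} MeV -> ~{proton_range_cm(e):.1f} cm in water")
```

With these constants, 150 MeV protons come out near 16 cm, in line with the commonly tabulated range in water.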
NASA Astrophysics Data System (ADS)
Andrei, Armas; Robert, Beilicci; Erika, Beilicci
2017-10-01
MIKE 11 is an advanced hydroinformatic tool, a professional engineering software package for simulation of one-dimensional flows in estuaries, rivers, irrigation systems, channels and other water bodies. MIKE 11 is a 1-dimensional river model. It was developed by DHI Water · Environment · Health, Denmark. The basic computational procedure of HEC-RAS for steady flow is based on the solution of the one-dimensional energy equation. Energy losses are evaluated by friction and contraction / expansion. The momentum equation may be used in situations where the water surface profile is rapidly varied. These situations include hydraulic jumps, hydraulics of bridges, and evaluating profiles at river confluences. For unsteady flow, HEC-RAS solves the full, dynamic, 1-D Saint Venant Equation using an implicit, finite difference method. The unsteady flow equation solver was adapted from Dr. Robert L. Barkau’s UNET package. Fluid motion is controlled by the basic principles of conservation of mass, energy and momentum, which form the basis of fluid mechanics and hydraulic engineering. Complex flow situations must be solved using empirical approximations and numerical models, which are based on derivations of the basic principles (backwater equation, Navier-Stokes equation etc.). All numerical models are required to make some form of approximation to solve these principles, and consequently all have their limitations. The study of hydraulics and fluid mechanics is founded on the three basic principles of conservation of mass, energy and momentum. Real-life situations are frequently too complex to solve without the aid of numerical models. There is a tendency among some engineers to discard the basic principles taught at university and blindly assume that the results produced by the model are correct. Regardless of the complexity of models and despite the claims of their developers, all numerical models are required to make approximations. 
These may be related to geometric limitations, numerical simplification, or the use of empirical correlations. Some are obvious: one-dimensional models must average properties over the two remaining directions. It is the less obvious and poorly advertised approximations that pose the greatest threat to the novice user. Some of these, such as the inability of one-dimensional unsteady models to simulate supercritical flow can cause significant inaccuracy in the model predictions.
NASA Astrophysics Data System (ADS)
Naumov, D.; Fischer, T.; Böttcher, N.; Watanabe, N.; Walther, M.; Rink, K.; Bilke, L.; Shao, H.; Kolditz, O.
2014-12-01
OpenGeoSys (OGS) is a scientific open-source code for numerical simulation of thermo-hydro-mechanical-chemical processes in porous and fractured media. Its basic concept is to provide a flexible numerical framework for solving multi-field problems in geoscience and hydrology, e.g. for CO2 storage applications, geothermal power plant forecast simulation, salt water intrusion, and water resources management. Advances in computational mathematics have revolutionized the variety and nature of the problems that environmental scientists and engineers can address today, and intensive code development in recent years now enables the solution of much larger numerical problems and applications. However, solving environmental processes along the water cycle at large scales, as for complete catchments or reservoirs, remains a computationally challenging task. Therefore, we started a new OGS code development with a focus on execution speed and parallelization. In the new version, a local data structure concept improves the instruction and data cache performance by tightly bundling data with an element-wise numerical integration loop. Dedicated analysis methods enable the investigation of memory-access patterns in the local and global assembler routines, which leads to further data structure optimization for an additional performance gain. The concept is presented together with a technical code analysis of the recent development and a large case study including transient flow simulation in the unsaturated / saturated zone of the Thuringian Syncline, Germany. The analysis is performed on a high-resolution mesh (up to 50M elements) with embedded fault structures.
Extending BPM Environments of Your Choice with Performance Related Decision Support
NASA Astrophysics Data System (ADS)
Fritzsche, Mathias; Picht, Michael; Gilani, Wasif; Spence, Ivor; Brown, John; Kilpatrick, Peter
What-if simulations have been identified as one solution for business performance related decision support. Such support is especially useful in cases where it can be automatically generated out of Business Process Management (BPM) Environments from the existing business process models and performance parameters monitored from the executed business process instances. Currently, some of the available BPM Environments offer basic-level performance prediction capabilities. However, these functionalities are normally too limited to be generally useful for performance related decision support at business process level. In this paper, an approach is presented which allows the non-intrusive integration of sophisticated tooling for what-if simulations, analytic performance prediction tools, process optimizations, or a combination of such solutions into already existing BPM environments. The approach abstracts from process modelling techniques, which enables automatic decision support spanning processes across numerous BPM Environments. For instance, this enables end-to-end decision support for composite processes modelled with the Business Process Modelling Notation (BPMN) on top of existing Enterprise Resource Planning (ERP) processes modelled with proprietary languages.
Spatial and Numerical Abilities without a Complete Natural Language
ERIC Educational Resources Information Center
Hyde, Daniel C.; Winkler-Rhoades, Nathan; Lee, Sang-Ah; Izard, Veronique; Shapiro, Kevin A.; Spelke, Elizabeth S.
2011-01-01
We studied the cognitive abilities of a 13-year-old deaf child, deprived of most linguistic input from late infancy, in a battery of tests designed to reveal the nature of numerical and geometrical abilities in the absence of a full linguistic system. Tests revealed widespread proficiency in basic symbolic and non-symbolic numerical computations…
Głuszcz, Paweł; Petera, Jerzy; Ledakowicz, Stanisław
2011-03-01
The mathematical model of the integrated process of mercury contaminated wastewater bioremediation in a fixed-bed industrial bioreactor is presented. An activated carbon packing in the bioreactor plays the role of an adsorbent for ionic mercury and at the same time of a carrier material for immobilization of mercury-reducing bacteria. The model includes three basic stages of the bioremediation process: mass transfer in the liquid phase, adsorption of mercury onto activated carbon and ionic mercury bioreduction to Hg(0) by immobilized microorganisms. Model calculations were verified using experimental data obtained during the process of industrial wastewater bioremediation in the bioreactor of 1 m³ volume. It was found that the presented model reflects the properties of the real system quite well. Numerical simulation of the bioremediation process confirmed the experimentally observed positive effect of the integration of ionic mercury adsorption and bioreduction in one apparatus.
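The three-stage structure described above (liquid-phase mass transfer, adsorption, bioreduction) can be caricatured by a two-compartment ODE system. The sketch below lumps mass transfer and adsorption into a single first-order step, and both rate constants are hypothetical, not fitted values from the paper:

```python
def bioreactor(c0, k_ads, k_bio, q0=0.0, dt=0.01, t_max=50.0):
    """Toy two-step model: ionic Hg adsorbs onto activated carbon
    (first order in the liquid concentration) and adsorbed Hg is
    bioreduced to Hg(0). A hypothetical simplification of the paper's
    three-stage model, integrated with the explicit Euler method."""
    c, q = c0, q0            # liquid-phase and adsorbed concentrations
    for _ in range(int(t_max / dt)):
        ads = k_ads * c      # adsorption flux out of the liquid
        bio = k_bio * q      # bioreduction of adsorbed mercury
        c += -ads * dt
        q += (ads - bio) * dt
    return c, q

c, q = bioreactor(c0=10.0, k_ads=0.5, k_bio=0.2)
print(f"residual ionic Hg: {c:.4f}, adsorbed Hg: {q:.4f}")
```

With these rates, essentially all ionic mercury has been adsorbed and bioreduced by the end of the run, mirroring the integration benefit observed experimentally.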
Sasanguie, Delphine; Göbel, Silke M; Moll, Kristina; Smets, Karolien; Reynvoet, Bert
2013-03-01
In this study, the performance of typically developing 6- to 8-year-old children on an approximate number discrimination task, a symbolic comparison task, and a symbolic and nonsymbolic number line estimation task was examined. For the first time, children's performances on these basic cognitive number processing tasks were explicitly contrasted to investigate which of them is the best predictor of their future mathematical abilities. Math achievement was measured with a timed arithmetic test and with a general curriculum-based math test to address the additional question of whether the predictive association between the basic numerical abilities and mathematics achievement depends on which math test is used. Results revealed that performance on both mathematics achievement tests was best predicted by how well children compared digits. In addition, an association was found between performance on the symbolic number line estimation task and math achievement scores on the general curriculum-based math test, which measures a broader spectrum of skills. Together, these results emphasize the importance of learning experiences with symbols for later math abilities. Copyright © 2012 Elsevier Inc. All rights reserved.
Workplace Math I: Easing into Math.
ERIC Educational Resources Information Center
Wilson, Nancy; Goschen, Claire
This basic skills learning module includes instruction in performing basic computations, using general numerical concepts such as whole numbers, fractions, decimals, averages, ratios, proportions, percentages, and equivalents in practical situations. The problems are relevant to all aspects of the printing and manufacturing industry, with emphasis…
NASA Astrophysics Data System (ADS)
von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo
2014-06-01
Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.
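The parameter-estimation chapters (Parts III and IV) lend themselves to a minimal worked example. The sketch below is ours, not the book's, and uses the simplest conjugate case, Beta-Binomial updating, to show Bayesian point estimation in a few lines:

```python
# Minimal sketch of Bayesian parameter estimation (cf. Part III):
# conjugate Beta-Binomial updating for a coin's bias theta.
# This example is illustrative and not taken from the book itself.

def beta_binomial_posterior(alpha, beta, heads, tails):
    """Return posterior (alpha, beta) of a Beta prior after
    observing `heads` successes and `tails` failures."""
    return alpha + heads, beta + tails

def posterior_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Flat prior Beta(1, 1); observe 7 heads in 10 tosses.
a, b = beta_binomial_posterior(1, 1, 7, 3)
print(a, b)                  # posterior is Beta(8, 4)
print(posterior_mean(a, b))  # 8/12, about 0.667
```

The same conjugate structure underlies sequential updating: feeding the posterior back in as the next prior gives the same answer as processing all data at once.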
Towards effective interactive three-dimensional colour postprocessing
NASA Technical Reports Server (NTRS)
Bailey, B. C.; Hajjar, J. F.; Abel, J. F.
1986-01-01
Recommendations for the development of effective three-dimensional, graphical color postprocessing are made. First, the evaluation of large, complex numerical models demands that a postprocessor be highly interactive. A menu of available functions should be provided and these operations should be performed quickly so that a sense of continuity and spontaneity exists during the postprocessing session. Second, an agenda for three-dimensional color postprocessing is proposed. A postprocessor must be versatile with respect to application and basic algorithms must be designed so that they are flexible. A complete selection of tools is necessary to allow arbitrary specification of views, extraction of qualitative information, and access to detailed quantitative and problem information. Finally, full use of advanced display hardware is necessary if interactivity is to be maximized and effective postprocessing of today's numerical simulations is to be achieved.
Geary, David C; Hoard, Mary K; Nugent, Lara; Rouder, Jeffrey N
2015-12-01
The relation between performance on measures of algebraic cognition and acuity of the approximate number system (ANS) and memory for addition facts was assessed for 171 ninth graders (92 girls) while controlling for parental education, sex, reading achievement, speed of numeral processing, fluency of symbolic number processing, intelligence, and the central executive component of working memory. The algebraic tasks assessed accuracy in placing x,y pairs in the coordinate plane, speed and accuracy of expression evaluation, and schema memory for algebra equations. ANS acuity was related to accuracy of placements in the coordinate plane and expression evaluation but not to schema memory. Frequency of fact retrieval errors was related to schema memory but not to coordinate plane or expression evaluation accuracy. The results suggest that the ANS may contribute to or be influenced by spatial-numerical and numerical-only quantity judgments in algebraic contexts, whereas difficulties in committing addition facts to long-term memory may presage slow formation of memories for the basic structure of algebra equations. More generally, the results suggest that different brain and cognitive systems are engaged during the learning of different components of algebraic competence while controlling for demographic and domain general abilities. Copyright © 2015 Elsevier Inc. All rights reserved.
Static Load Test on Instrumented Pile - Field Data and Numerical Simulations
NASA Astrophysics Data System (ADS)
Krasiński, Adam; Wiszniewski, Mateusz
2017-09-01
Static load tests on foundation piles are generally carried out in order to determine the load-displacement characteristic of the pile head. For standard (basic) engineering practice this type of test usually provides enough information. However, knowledge of the force distribution along the pile core and its division into the friction along the shaft and the resistance under the base can be very useful. Such information can be obtained by strain gage pile instrumentation [1]. Significant investigations have been completed on this technology, proving its utility and correctness [8], [10], [12]. The results of static tests on instrumented piles are not easy to interpret. There are many factors and processes affecting the final outcome. In order to better understand the whole testing process and the soil-structure behavior, some investigations and numerical analyses were carried out. In the paper, real data from a field load test on instrumented piles are discussed and compared with a numerical simulation of such a test in similar conditions. Differences and difficulties in the interpretation of the results, with their possible reasons, are discussed. Moreover, the authors used their own analytical solution for a more reliable determination of the force distribution along the pile. The work was presented at the XVII French-Polish Colloquium of Soil and Rock Mechanics, Łódź, 28-30 November 2016.
ERIC Educational Resources Information Center
Georgia Governor's Education Review Commission, Atlanta.
This report defines what is meant by quality basic education in Georgia and makes numerous recommendations for achieving it for all Georgians. The recommendations are that: (1) basic skills and general job skills be emphasized in vocational education; (2) the salary base for teachers be increased; (3) a five plateau teacher career ladder be…
Sparking young minds with Moon rocks and meteorites
NASA Technical Reports Server (NTRS)
Taylor, G. Jeffrey; Lindstrom, Marilyn M.
1993-01-01
What could be more exciting than seeing pieces of other worlds? The Apollo program left a legacy of astounding accomplishments and precious samples. Part of the thrill of those lunar missions is brought to schools by the lunar sample educational disks, which contain artifacts of six piloted trips to the Moon. Johnson Space Center (JSC) is preparing 100 new educational disks containing pieces of meteorites collected in Antarctica. These represent chunks of several different asteroids that were collected in one of the most remote, forbidding environments on Earth. These pieces of the Moon and asteroids represent the products of basic planetary processes (solar nebular processes, initial differentiation, volcanism, and impact), and, in turn, these processes are controlled by basic physical and chemical processes (energy, energy transfer, melting, buoyancy, etc.). Thus, the lunar and meteorite sample disks have enormous educational potential. New educational materials are being developed to accompany the disks. Present materials are not as effective as they could be, especially in relating samples to processes and to other types of data such as spectral studies and photogeology. Furthermore, the materials are out of date. New background materials will be produced for teachers, slide sets with extensive captions will be assembled, and numerous hands-on classroom activities will be devised for use while the disks are at a school as well as before and after they arrive. The classroom activities will be developed by teams of experienced teachers working with lunar and meteorite experts.
Global simulation of the Czochralski silicon crystal growth in ANSYS FLUENT
NASA Astrophysics Data System (ADS)
Kirpo, Maksims
2013-05-01
Silicon crystals for high-efficiency solar cells are produced mainly by the Czochralski (CZ) crystal growth method. Computer simulations of the CZ process have established themselves as a basic tool for optimization of the growth process, allowing production costs to be reduced while the high quality of the crystalline material is maintained. The author shows the application of the general Computational Fluid Dynamics (CFD) code ANSYS FLUENT to the solution of a static two-dimensional (2D) axisymmetric global model of a small industrial furnace for growing silicon crystals with a diameter of 100 mm. The presented numerical model is self-sufficient and incorporates the most important physical phenomena of the CZ growth process, including latent heat generation during crystallization, crystal-melt interface deflection, turbulent heat and mass transport, oxygen transport, etc. The demonstrated approach makes it possible to find the heater power for a specified pulling rate of the crystal, but the obtained power values are smaller than those found in the literature for the studied furnace. However, the described approach is successfully verified with respect to the heater power by its application to numerical simulations of real CZ pullers by "Bosch Solar Energy AG".
Reinventing Biostatistics Education for Basic Scientists
Weissgerber, Tracey L.; Garovic, Vesna D.; Milin-Lazovic, Jelena S.; Winham, Stacey J.; Obradovic, Zoran; Trzeciakowski, Jerome P.; Milic, Natasa M.
2016-01-01
Numerous studies demonstrating that statistical errors are common in basic science publications have led to calls to improve statistical training for basic scientists. In this article, we sought to evaluate statistical requirements for PhD training and to identify opportunities for improving biostatistics education in the basic sciences. We provide recommendations for improving statistics training for basic biomedical scientists, including: 1. Encouraging departments to require statistics training, 2. Tailoring coursework to the students’ fields of research, and 3. Developing tools and strategies to promote education and dissemination of statistical knowledge. We also provide a list of statistical considerations that should be addressed in statistics education for basic scientists. PMID:27058055
Bemis, Douglas K.; Pylkkänen, Liina
2013-01-01
Debates surrounding the evolution of language often hinge upon its relationship to cognition more generally, and many investigations have attempted to demarcate the boundary between the two. Though results from these studies suggest that language may recruit domain-general mechanisms during certain types of complex processing, the domain-generality of the basic combinatorial mechanisms that lie at the core of linguistic processing is still unknown. Our previous work (Bemis and Pylkkänen, 2011, 2012) used magnetoencephalography to isolate neural activity associated with the simple composition of an adjective and a noun (“red boat”) and found increased activity during this processing localized to the left anterior temporal lobe (lATL), ventro-medial prefrontal cortex (vmPFC), and left angular gyrus (lAG). The present study explores the domain-generality of these effects and their associated combinatorial mechanisms through two parallel non-linguistic combinatorial tasks designed to be as minimal and natural as the linguistic paradigm. In the first task, we used pictures of colored shapes to elicit combinatorial conceptual processing similar to that evoked by the linguistic expressions, and again found increased activity localized to the vmPFC during combinatorial processing. This result suggests that a domain-general semantic combinatorial mechanism operates during basic linguistic composition and that activity generated by its processing localizes to the vmPFC. In the second task, we recorded neural activity as subjects performed simple addition between two small numerals. Consistent with a wide array of recent results, we find no effects related to basic addition that coincide with our linguistic effects, and instead find increased activity localized to the intraparietal sulcus.
This result suggests that the scope of the previously identified linguistic effects is restricted to compositional operations and does not extend generally to all tasks that are merely similar in form. PMID:23293621
Riding the Waves: How Our Cells Send Signals | Center for Cancer Research
The ability of cells to perceive and respond to their environment is critical in order to maintain basic cellular functions such as development, tissue repair, and response to stress. This process happens through a complex system of communication, called cell signaling, which governs basic cellular activities and coordinates cell actions. Errors in cell signaling have been linked to numerous diseases, including cancer. NF-κB is a protein complex that plays a critical role in many cell signaling pathways by controlling gene activation. It is widely used by cells to regulate cell growth and survival and helps to protect the cell from conditions that would otherwise cause it to die. Many tumor cells have mutations in genes that cause NF-κB to become overactive. Blocking NF-κB could cause tumor cells to stop growing, die, or become more sensitive to therapeutics.
NASA Technical Reports Server (NTRS)
Harten, A.; Tal-Ezer, H.
1981-01-01
An implicit finite difference method of fourth order accuracy in space and time is introduced for the numerical solution of one-dimensional systems of hyperbolic conservation laws. The basic form of the method is a two-level scheme which is unconditionally stable and nondissipative. The scheme uses only three mesh points at level t and three mesh points at level t + delta t. The dissipative version of the basic method given is conditionally stable under the CFL (Courant-Friedrichs-Lewy) condition. This version is particularly useful for the numerical solution of problems with strong but nonstiff dynamic features, where the CFL restriction is reasonable on accuracy grounds. Numerical results are provided to illustrate properties of the proposed method.
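The two-level implicit idea can be sketched in code. The block below is not the authors' fourth-order scheme but a second-order Crank-Nicolson analogue for the linear advection equation with periodic boundaries; it shares the properties claimed above — two time levels, unconditional stability, and no numerical dissipation (the update is a Cayley transform of a skew-symmetric operator, so the discrete L2 norm is preserved exactly). Grid sizes and the CFL number are illustrative choices.

```python
import numpy as np

# Sketch of a two-level implicit, nondissipative scheme for
# u_t + a u_x = 0 with periodic boundaries (Crank-Nicolson in time,
# centered differences in space). Illustrative, not the paper's scheme.

def crank_nicolson_advection(u0, a, dx, dt, steps):
    n = len(u0)
    lam = a * dt / dx
    # Periodic centered-difference operator D (skew-symmetric).
    D = np.zeros((n, n))
    for j in range(n):
        D[j, (j + 1) % n] = 1.0
        D[j, (j - 1) % n] = -1.0
    A = np.eye(n) + (lam / 4.0) * D  # implicit (level t + dt)
    B = np.eye(n) - (lam / 4.0) * D  # explicit (level t)
    u = u0.copy()
    for _ in range(steps):
        u = np.linalg.solve(A, B @ u)
    return u

n = 64
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.sin(2.0 * np.pi * x)
# Courant number 2 > 1: an explicit scheme would blow up; this one is stable.
u = crank_nicolson_advection(u0, a=1.0, dx=1.0 / n, dt=2.0 / n, steps=32)
print(np.linalg.norm(u) - np.linalg.norm(u0))  # ~0: nondissipative
```

Adding a small dissipative term, as in the paper's dissipative variant, would reintroduce a CFL-type accuracy restriction while damping unresolved modes.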
Number games, magnitude representation, and basic number skills in preschoolers.
Whyte, Jemma Catherine; Bull, Rebecca
2008-03-01
The effect of 3 intervention board games (linear number, linear color, and nonlinear number) on young children's (mean age = 3.8 years) counting abilities, number naming, magnitude comprehension, accuracy in number-to-position estimation tasks, and best-fit numerical magnitude representations was examined. Pre- and posttest performance was compared following four 25-min intervention sessions. The linear number board game significantly improved children's performance in all posttest measures and facilitated a shift from a logarithmic to a linear representation of numerical magnitude, emphasizing the importance of spatial cues in estimation. Exposure to the number card games involving nonsymbolic magnitude judgments and association of symbolic and nonsymbolic quantities, but without any linear spatial cues, improved some aspects of children's basic number skills but not numerical estimation precision.
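The logarithmic-to-linear shift described above is typically diagnosed by comparing the fit of a linear and a logarithmic function to each child's number-to-position estimates. A minimal sketch of that comparison, with made-up estimates that compress large numbers in a log-like way:

```python
import math

# Sketch: is a child's number-to-position pattern better described by a
# linear or a logarithmic function? Data below are invented for illustration.

def fit_least_squares(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b, sse)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

numbers = [1, 2, 4, 6, 10, 18, 25, 42, 67, 100]
# Hypothetical placements on a 0-100 line, compressed at the high end.
estimates = [4.0, 18.0, 33.0, 42.0, 53.0, 66.0, 74.0, 85.0, 95.0, 100.0]

_, _, sse_lin = fit_least_squares(numbers, estimates)
_, _, sse_log = fit_least_squares([math.log(x) for x in numbers], estimates)
print(sse_log < sse_lin)  # True: the log model fits this pattern better
```

A child whose estimates instead fall close to the diagonal would show the opposite ordering, i.e. the linear model winning.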
Moll, Kristina; Göbel, Silke M; Snowling, Margaret J
2015-01-01
As well as being the hallmark of mathematics disorders, deficits in number processing have also been reported for individuals with reading disorders. The aim of the present study was to investigate separately the components of numerical processing affected in reading and mathematical disorders within the framework of the Triple Code Model. Children with reading disorders (RD), mathematics disorders (MD), comorbid deficits (RD + MD), and typically developing children (TD) were tested on verbal, visual-verbal, and nonverbal number tasks. As expected, children with MD were impaired across a broad range of numerical tasks. In contrast, children with RD were impaired in (visual-)verbal number tasks but showed age-appropriate performance in nonverbal number skills, suggesting their impairments were domain specific and related to their reading difficulties. The comorbid group showed an additive profile of the impairments of the two single-deficit groups. Performance in speeded verbal number tasks was related to rapid automatized naming, a measure of visual-verbal access in the RD but not in the MD group. The results indicate that deficits in number skills are due to different underlying cognitive deficits in children with RD compared to children with MD: a phonological deficit in RD and a deficit in processing numerosities in MD.
34 CFR 668.142 - Special definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
.... General learned abilities: Cognitive operations, such as deductive reasoning, reading comprehension, or translation from graphic to numerical representation, that may be learned in both school and non-school...,” “curricula,” or “basic verbal and quantitative skills,” the basic knowledge or skills generally learned in...
34 CFR 668.142 - Special definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
.... General learned abilities: Cognitive operations, such as deductive reasoning, reading comprehension, or translation from graphic to numerical representation, that may be learned in both school and non-school...,” “curricula,” or “basic verbal and quantitative skills,” the basic knowledge or skills generally learned in...
34 CFR 668.142 - Special definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
.... General learned abilities: Cognitive operations, such as deductive reasoning, reading comprehension, or translation from graphic to numerical representation, that may be learned in both school and non-school...,” “curricula,” or “basic verbal and quantitative skills,” the basic knowledge or skills generally learned in...
34 CFR 668.142 - Special definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
.... General learned abilities: Cognitive operations, such as deductive reasoning, reading comprehension, or translation from graphic to numerical representation, that may be learned in both school and non-school...,” “curricula,” or “basic verbal and quantitative skills,” the basic knowledge or skills generally learned in...
NASA Astrophysics Data System (ADS)
Schomer, Laura; Liewald, Mathias; Riedmüller, Kim Rouven
2018-05-01
Metal-ceramic Interpenetrating Phase Composites (IPC) belong to a special subcategory of composite materials and reveal enhanced properties compared to conventional composite materials. Currently, IPC are produced by infiltration of a ceramic open-pore body with liquid metal, applying high pressure and/or high temperature to avoid residual porosity. However, these IPC are not able to reach their complete potential because of structural damage and interface reactions occurring during the manufacturing process. Compared to this, the manufacturing of IPC using semi-solid forming technology offers great perspectives due to relatively low processing temperatures and reduced mechanical pressure. In this context, this paper focuses on numerical investigations conducted with the FLOW-3D software to gain a deeper understanding of the infiltration of open-pore bodies with semi-solid materials. For the flow simulation analysis, a geometric model and different porous-media drag models have been used; they have been adjusted and compared to obtain a precise description of the infiltration process. Based on these fundamental numerical investigations, this paper also presents numerical investigations that were used for the basic design of a semi-solid forming tool. Thereby, the development of the flow front and of the pressure during the infiltration represents the basis of the evaluation. The use of an open and a closed tool cavity, combined with various geometries of the upper die, shows different results with respect to these evaluation criteria. Furthermore, different overflows were designed and their effects on the pressure at the end of the infiltration process were investigated. Thus, this paper provides a general guideline for the design of tools for the manufacturing of metal-ceramic IPC using semi-solid forming.
Reliability and Validity of Nonsymbolic and Symbolic Comparison Tasks in School-Aged Children.
Castro, Danilka; Estévez, Nancy; Gómez, David; Dartnell, Pablo Ricardo
2017-12-04
Basic numerical processing has been regularly assessed using numerical nonsymbolic and symbolic comparison tasks. It has been assumed that these tasks index similar underlying processes. However, the evidence concerning the reliability and convergent validity across different versions of these tasks is inconclusive. We explored the reliability and convergent validity between two numerical comparison tasks (nonsymbolic vs. symbolic) in school-aged children. The relations between performance in both tasks and mental arithmetic were described, and a developmental trajectories analysis was also conducted. The influence of verbal and visuospatial working memory processes and age was controlled for in the analyses. Results show significant reliability (p < .001) between Blocks 1 and 2 for the nonsymbolic task (global adjusted RT (adjRT): r = .78, global efficiency measures (EMs): r = .74) and for the symbolic task (adjRT: r = .86, EMs: r = .86). In addition, significant convergent validity between tasks (p < .001) for both adjRT (r = .71) and EMs (r = .70) was found after controlling for working memory and age. Finally, it was found that the relationship between nonsymbolic and symbolic efficiencies varies across the sample's age range. Overall, these findings suggest that both tasks index the same underlying cognitive architecture and are appropriate to explore the characteristics of the Approximate Number System (ANS). The evidence supports the central role of the ANS in arithmetic efficiency and suggests that there are differences across the age range assessed in the extent to which efficiency in nonsymbolic and symbolic tasks reflects ANS acuity.
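The Block 1 vs. Block 2 reliability figures quoted above are Pearson correlations between the two halves of each task. A minimal sketch of that computation, with invented response times standing in for the children's adjusted RTs:

```python
import math

# Sketch of the split-block reliability computation: Pearson correlation
# between performance in Block 1 and Block 2 of a comparison task.
# The response times below are invented for illustration.

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Adjusted RTs (ms) for ten hypothetical children, Block 1 vs Block 2.
block1 = [820, 760, 905, 670, 1010, 730, 850, 690, 940, 780]
block2 = [810, 790, 930, 700, 980, 745, 870, 660, 955, 800]

r = pearson_r(block1, block2)
print(round(r, 2))  # close to 1: consistent performance across blocks
```

Convergent validity between the nonsymbolic and symbolic tasks is the same statistic computed across tasks rather than across blocks, after partialling out working memory and age.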
The effects on the ionosphere of inertia in the high latitude neutral thermosphere
NASA Technical Reports Server (NTRS)
Burns, Alan; Killeen, Timothy
1993-01-01
High-latitude ionospheric currents, plasma temperatures, densities, and composition are all affected by the time-dependent response of the neutral thermosphere to ion drag and Joule heating through a variety of complex feedback processes. These processes can best be studied numerically using appropriate nonlinear numerical modeling techniques in conjunction with experimental case studies. In particular, the basic physics of these processes can be understood using a model, and these concepts can then be applied to more complex realistic situations by developing appropriate simulations of real events. Finally, these model results can be compared with satellite-derived data from the thermosphere. We used numerical simulations from the National Center for Atmospheric Research Thermosphere/Ionosphere General Circulation Model (NCAR TIGCM) and data from the Dynamics Explorer 2 (DE 2) satellite to study the time-dependent effects of the inertia of the neutral thermosphere on ionospheric currents, plasma temperatures, densities, and composition. One particular case of these inertial effects is the so-called 'flywheel effect'. This effect occurs when the neutral gas, which has been spun up by the large ionospheric winds associated with a geomagnetic storm, moves faster than the ions in the period after the end of the main phase of the storm. In these circumstances, the neutral gas can drag the ions along with it. It is this last effect, described in the next section, that we have studied under this grant.
Impact of basic angle variations on the parallax zero point for a scanning astrometric satellite
NASA Astrophysics Data System (ADS)
Butkevich, Alexey G.; Klioner, Sergei A.; Lindegren, Lennart; Hobbs, David; van Leeuwen, Floor
2017-07-01
Context. Determination of absolute parallaxes by means of a scanning astrometric satellite such as Hipparcos or Gaia relies on the short-term stability of the so-called basic angle between the two viewing directions. Uncalibrated variations of the basic angle may produce systematic errors in the computed parallaxes. Aims: We examine the coupling between a global parallax shift and specific variations of the basic angle, namely those related to the satellite attitude with respect to the Sun. Methods: The changes in observables produced by small perturbations of the basic angle, attitude, and parallaxes were calculated analytically. We then looked for a combination of perturbations that had no net effect on the observables. Results: In the approximation of infinitely small fields of view, it is shown that certain perturbations of the basic angle are observationally indistinguishable from a global shift of the parallaxes. If these kinds of perturbations exist, they cannot be calibrated from the astrometric observations but will produce a global parallax bias. Numerical simulations of the astrometric solution, using both direct and iterative methods, confirm this theoretical result. For a given amplitude of the basic angle perturbation, the parallax bias is smaller for a larger basic angle and a larger solar aspect angle. In both these respects Gaia has a more favourable geometry than Hipparcos. In the case of Gaia, internal metrology is used to monitor basic angle variations. Additionally, Gaia has the advantage of detecting numerous quasars, which can be used to verify the parallax zero point.
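The degeneracy can be stated compactly. The following is our schematic reconstruction of the geometry, not a formula quoted from the paper: write a for the amplitude of the basic angle variation, Γ for the basic angle, ξ for the solar aspect angle, and Ω(t) for the spin phase of the satellite relative to the Sun.

```latex
% A basic-angle variation locked to the solar spin phase,
\Delta\Gamma(t) = a\,\cos\Omega(t),
% is observationally indistinguishable from a global parallax shift of order
\delta\varpi \sim \frac{a}{2\,\sin(\Gamma/2)\,\sin\xi}.
```

If this reconstruction is right, it reproduces the qualitative statement above: for a fixed amplitude a, the bias shrinks as the basic angle Γ and the solar aspect angle ξ grow, which is why Gaia's geometry is more favourable than that of Hipparcos.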
Modelling of hydrogen permeability of membranes for high-purity hydrogen production
NASA Astrophysics Data System (ADS)
Zaika, Yury V.; Rodchenkova, Natalia I.
2017-11-01
High-purity hydrogen is required for clean energy and a variety of chemical technology processes. Different alloys, which may be well-suited for use in gas-separation plants, were investigated by measuring specific hydrogen permeability. The parameters of diffusion and sorption had to be estimated in order to numerically model the different scenarios and experimental conditions of the material usage (including extreme ones) and to identify the limiting factors. This paper presents a nonlinear mathematical model taking into account the dynamics of sorption-desorption processes and the reversible capture of diffusing hydrogen by inhomogeneities of the material's structure, as well as a modification of the model for the case when the transport rate is high. The results of numerical modelling provide information about the sensitivity of the output data with respect to variations of the material's hydrogen permeability parameters. Furthermore, it is possible to analyze the dynamics of concentrations and fluxes that cannot be measured directly. Experimental data for Ta77Nb23 and V85Ni15 alloys were used to test the model. This work is supported by the Russian Foundation for Basic Research (Project No. 15-01-00744).
Lewis, F.M.; Voss, C.I.; Rubin, Jacob
1986-01-01
A model was developed that can simulate the effect of certain chemical and sorption reactions occurring simultaneously among solutes involved in advective-dispersive transport through porous media. The model is based on a methodology that utilizes physical-chemical relationships in the development of the basic solute mass-balance equations; however, the form of these equations allows their solution to be obtained by methods that do not depend on the chemical processes. The chemical environment is governed by the condition of local chemical equilibrium and may be defined either by the linear sorption of a single species and two soluble complexation reactions that also involve that species, or by binary ion exchange and one complexation reaction involving a common ion. Partial differential equations that describe solute mass balance entirely in the liquid phase are developed for each tenad (a chemical entity whose total mass is independent of the reaction process) in terms of their total dissolved concentration. These equations are solved numerically in two dimensions through the modification of an existing groundwater flow/transport computer code. (Author's abstract)
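The governing balance can be sketched in its simplest special case — linear equilibrium sorption only, which collapses the chemistry into a single retardation factor R rather than the full tenad formulation of the paper. The 1D explicit scheme and all parameter values below are hypothetical, chosen only to show that a sorbing solute front lags a conservative one:

```python
# Sketch of advective-dispersive transport with linear equilibrium sorption:
#   R * dc/dt = D * d2c/dx2 - v * dc/dx,
# where R = 1 + (rho_b / theta) * Kd is the retardation factor.
# Explicit upwind/central scheme; parameters are hypothetical.

def transport_step(c, v, D, R, dx, dt):
    """Advance concentrations one time step; c[0] is a fixed inlet."""
    new = c[:]
    for j in range(1, len(c) - 1):
        adv = -v * (c[j] - c[j - 1]) / dx                    # upwind advection
        dis = D * (c[j + 1] - 2 * c[j] + c[j - 1]) / dx ** 2  # dispersion
        new[j] = c[j] + dt * (adv + dis) / R
    return new

def front_position(c, threshold=0.5):
    """Index of the last cell whose concentration exceeds the threshold."""
    return max(j for j, cj in enumerate(c) if cj >= threshold)

n, dx, dt = 100, 0.01, 0.001
v, D = 1.0, 1e-4
conservative = [1.0] * 5 + [0.0] * (n - 5)  # step input, R = 1
sorbing = conservative[:]                   # same input, R = 2
for _ in range(300):
    conservative = transport_step(conservative, v, D, 1.0, dx, dt)
    sorbing = transport_step(sorbing, v, D, 2.0, dx, dt)
print(front_position(conservative) > front_position(sorbing))  # True
```

With R = 2 the front travels at half the pore-water velocity, which is the hallmark of linear sorption; the coupled-reaction cases the paper handles (complexation, ion exchange) make R effectively concentration-dependent and require the tenad-based equations.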
Coz, Alberto; Llano, Tamara; Cifrián, Eva; Viguri, Javier; Maican, Edmond; Sixta, Herbert
2016-01-01
The complete bioconversion of the carbohydrate fraction is of great importance for a lignocellulosic-based biorefinery. However, owing to the structure of lignocellulosic materials, and depending mainly on the parameters of the pretreatment steps, numerous byproducts are generated that act as inhibitors in the fermentation operations. In this sense, the impact of inhibitory compounds derived from lignocellulosic materials is one of the major challenges for a sustainable biomass-to-biofuel and -bioproduct industry. In order to minimise the negative effects of these compounds, numerous methodologies have been tested, including physical, chemical, and biological processes. The main physical and chemical treatments have been studied in this work in relation to the lignocellulosic material and the inhibitor, in order to point out the best mechanisms for fermenting purposes. In addition, special attention has been paid to lignocellulosic hydrolysates obtained by chemical processes with SO2, due to the complex matrix of these materials and the growing role of these methodologies in future biorefinery markets. Recommendations for different detoxification methods are given. PMID:28773700
Effects of food processing on food allergens.
Sathe, Shridhar K; Sharma, Girdhari M
2009-08-01
Food allergies are on the rise in Western countries. With the food allergen labeling requirements in the US and EU, there is an interest in learning how food processing affects food allergens. Numerous foods are processed in different ways at home, in institutional settings, and in industry. Depending on the processing method and the food, partial or complete removal of the offending allergen may be possible as illustrated by reduction of peanut allergen in vitro IgE immunoreactivity upon soaking and blanching treatments. When the allergen is discretely located in a food, one may physically separate and remove it from the food. For example, lye peeling has been reported to produce hypoallergenic peach nectar. Protein denaturation and/or hydrolysis during food processing can be used to produce hypoallergenic products. This paper provides a short overview of basic principles of food processing followed by examples of their effects on food allergen stability. Reviewed literature suggests assessment of processing effects on clinically relevant reactivity of food allergens is warranted.
Spray drying formulation of amorphous solid dispersions.
Singh, Abhishek; Van den Mooter, Guy
2016-05-01
Spray drying is a well-established manufacturing technique that can be used to formulate amorphous solid dispersions (ASDs), an effective strategy for delivering poorly water-soluble drugs (PWSDs). However, the inherently complex nature of the spray drying process, coupled with the specific characteristics of ASDs, makes it an interesting area to explore. Numerous diverse factors interact in an interdependent manner to determine the final product properties. This review discusses the basic background of ASDs, the various formulation and process variables influencing the critical quality attributes (CQAs) of ASDs, and aspects of downstream processing. Various aspects of spray drying, such as instrumentation, thermodynamics, drying kinetics, the particle formation process and scale-up challenges, are also included. Recent advances in spray-based drying techniques are mentioned, along with some future avenues where a major research thrust is needed. Copyright © 2015 Elsevier B.V. All rights reserved.
Bioreactor concepts for cell culture-based viral vaccine production.
Gallo-Ramírez, Lilí Esmeralda; Nikolay, Alexander; Genzel, Yvonne; Reichl, Udo
2015-01-01
Vaccine manufacturing processes are designed to meet present and upcoming challenges associated with a growing vaccine market and to include multi-use facilities offering a broad portfolio and faster reaction times in case of pandemics and emerging diseases. The final products, from whole viruses to recombinant viral proteins, are very diverse, making standard process strategies hardly universally applicable. Numerous factors such as cell substrate, virus strain or expression system, medium, cultivation system, cultivation method, and scale need consideration. Reviewing options for efficient and economical production of human vaccines, this paper discusses basic factors relevant for viral antigen production in mammalian cells, avian cells and insect cells. In addition, bioreactor concepts, including static systems, single-use systems, stirred tanks and packed-beds are addressed. On this basis, methods towards process intensification, in particular operational strategies, the use of perfusion systems for high product yields, and steps to establish continuous processes are introduced.
Some Basic Techniques in Bioimpedance Research
NASA Astrophysics Data System (ADS)
Martinsen, Ørjan G.
2004-09-01
Any physiological or anatomical change in a biological material will also change its electrical properties. Hence, bioimpedance measurements can be used for the diagnosis or classification of tissue. Applications are numerous within medicine, biology, cosmetics, the food industry, sports, etc., and different basic approaches to the development of bioimpedance techniques are discussed in this paper.
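The abstract does not name a specific model, but a common starting point in bioimpedance work is the single-dispersion Cole model, Z(ω) = R∞ + (R0 − R∞) / (1 + (jωτ)^α). A minimal sketch follows; the parameter values are illustrative and not taken from the paper:

```python
import math

def cole_impedance(freq_hz, r0, r_inf, tau, alpha):
    """Complex impedance of a single-dispersion Cole model:
    Z = R_inf + (R0 - R_inf) / (1 + (j*w*tau)**alpha)."""
    jw = 1j * 2.0 * math.pi * freq_hz
    return r_inf + (r0 - r_inf) / (1.0 + (jw * tau) ** alpha)

# Magnitude falls from about R0 at low frequency toward R_inf at high frequency.
z_low = cole_impedance(1.0, r0=1000.0, r_inf=100.0, tau=1e-4, alpha=0.8)
z_high = cole_impedance(1e7, r0=1000.0, r_inf=100.0, tau=1e-4, alpha=0.8)
```

Sweeping frequency and fitting such a curve to measured impedance spectra is one of the basic classification approaches the paper alludes to.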
Inventory of Data Sources in Science and Technology. A Preliminary Survey.
ERIC Educational Resources Information Center
International Council of Scientific Unions, Paris (France).
Provided in this inventory are sources of numerical or factual data in selected fields of basic science and applied science/technology. The objective of the inventory is to provide organizations and individuals (scientists, engineers, and information specialists), particularly those in developing countries, with basic data sources relevant to…
Noël, Marie-Pascale; Rousselle, Laurence
2011-01-01
Studies on developmental dyscalculia (DD) have tried to identify a basic numerical deficit that could account for this specific learning disability. The first proposition was that the number magnitude representation of these children was impaired. However, Rousselle and Noël (2007) brought data showing that this was not the case but rather that these children were impaired when processing the magnitude of symbolic numbers only. Since then, incongruent results have been published. In this paper, we will propose a developmental perspective on this issue. We will argue that the first deficit shown in DD regards the building of an exact representation of numerical value, thanks to the learning of symbolic numbers, and that the reduced acuity of the approximate number magnitude system appears only later and is secondary to the first deficit. PMID:22203797
Numerical simulation of filling a magnetic flux tube with a cold plasma: Anomalous plasma effects
NASA Technical Reports Server (NTRS)
Singh, Nagendra; Leung, W. C.
1995-01-01
Large-scale models of plasmaspheric refilling have revealed that during the early stage of the refilling counterstreaming ion beams are a common feature. However, the instability of such ion beams and its effect on refilling remain unexplored. In order to learn the basic effects of ion beam instabilities on refilling, we have performed numerical simulations of the refilling of an artificial magnetic flux tube. (The shape and size of the tube are assumed so that the essential features of the refilling problem are kept in the simulation and at the same time the small scale processes driven by the ion beams are sufficiently resolved.) We have also studied the effect of commonly found equatorially trapped warm and/or hot plasma on the filling of a flux tube with a cold plasma. Three types of simulation runs have been performed.
Education and research in fluid dynamics
NASA Astrophysics Data System (ADS)
López González-Nieto, P.; Redondo, J. M.; Cano, J. L.
2009-04-01
Fluid dynamics is an essential subject across engineering, from aeronautical engineers (airship flights in the PBL, flight processes), industrial engineers (fluid transportation) and naval engineers (ship/vessel building) to agricultural engineers (influence of weather conditions on crops/farming). All the above-mentioned examples have a high social and economic impact on mankind. Therefore, the fluid dynamics education of engineers is very important and, at the same time, this subject gives us an interesting methodology based on a cyclic relation among theory, experiments and numerical simulation. The study of turbulent plumes, a very important convective flow, is a good example because their theoretical governing equations are simple; it is possible to make experimental plumes in an easy way and to carry out the corresponding numerical simulations to verify experimental and theoretical results. Moreover, it is possible to achieve all these aims in the educational system (engineering schools or institutions) using a basic laboratory and the "Modellus" software.
Riemann Solvers in Relativistic Hydrodynamics: Basics and Astrophysical Applications
NASA Astrophysics Data System (ADS)
Ibanez, Jose M.
2001-12-01
My contribution to these proceedings summarizes a general overview of High Resolution Shock Capturing (HRSC) methods in the field of relativistic hydrodynamics, with special emphasis on Riemann solvers. HRSC techniques achieve highly accurate numerical approximations (formally second order or better) in smooth regions of the flow, and capture the motion of unresolved steep gradients without creating spurious oscillations. In the first part I will show how these techniques have been extended to relativistic hydrodynamics, making it possible to explore some challenging astrophysical scenarios. I will review recent literature concerning the main properties of different special relativistic Riemann solvers, and discuss several 1D and 2D test problems which are commonly used to evaluate the performance of numerical methods in relativistic hydrodynamics. In the second part I will illustrate the use of HRSC methods in several astrophysical applications where special and general relativistic hydrodynamical processes play a crucial role.
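As a concrete (non-relativistic, scalar) illustration of the approximate Riemann solvers discussed here, the sketch below implements the classic HLL flux for the inviscid Burgers equation. The wave-speed estimates are the simplest possible choice, and the example is mine, not from the paper:

```python
def hll_flux(uL, uR):
    """HLL approximate Riemann flux for the inviscid Burgers equation
    u_t + (u^2/2)_x = 0, evaluated at a cell interface with states uL, uR."""
    fL, fR = 0.5 * uL * uL, 0.5 * uR * uR
    sL = min(uL, uR)   # simple wave-speed estimates, since f'(u) = u for Burgers
    sR = max(uL, uR)
    if sL >= 0.0:      # all waves move right: pure upwinding from the left
        return fL
    if sR <= 0.0:      # all waves move left: pure upwinding from the right
        return fR
    # subsonic case: the standard HLL average of the two fluxes
    return (sR * fL - sL * fR + sL * sR * (uR - uL)) / (sR - sL)
```

In a finite-volume update, this interface flux replaces the exact Riemann solution; relativistic solvers follow the same pattern with characteristic speeds from the relativistic equations of state.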
Permanent-magnet linear alternators. I - Fundamental equations. II - Design guidelines
NASA Astrophysics Data System (ADS)
Boldea, I.; Nasar, S. A.
1987-01-01
The general equations of permanent-magnet heteropolar three-phase and single-phase linear alternators, powered by free-piston Stirling engines, are presented, with application to space power stations and domestic applications including solar power plants. The equations are applied to no-load and short-circuit conditions, illustrating the end-effect caused by the speed-reversal process. In the second part, basic design guidelines for a three-phase tubular linear alternator are given, and the procedure is demonstrated with the numerical example of the design of a 25-kVA, 14.4-m/s, 120/220-V, 60-Hz alternator.
2012-09-14
Nitrogen impregnation is a complex process, as retaining the nitrogen, which can exist in numerous forms, many of which are not basic (e.g., pyrrole), is difficult ... XPS peak assignments (binding energy, fraction, identity): 398.1 eV, 35.4%, pyridinic; 400.7 eV, 57.3%, pyrrolic; 403.1 eV, 7.3%, pyridine-N-oxide ... ACFC. The coal-derived BPL™ has been measured to contain a significant amount of ... functionalities as pyrrolic functionalities (Boudou 2003). Others have shown similar results for ammonia-treated carbons (Stohr et al. 1991; Mangun
Exact solution of some linear matrix equations using algebraic methods
NASA Technical Reports Server (NTRS)
Djaferis, T. E.; Mitter, S. K.
1979-01-01
Algebraic methods are used to construct the exact solution P of the linear matrix equation PA + BP = -C, where A, B, and C are matrices with real entries. The emphasis is on the use of finite algebraic procedures which are easily implemented on a digital computer and which lead to an explicit solution of the problem. The paper is divided into six sections, which include the proof of the basic lemma, the Liapunov equation, and the computer implementation of the rational, integer and modular algorithms. Two numerical examples are given and the entire calculation process is depicted.
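The paper's finite algebraic procedures are not reproduced in the abstract, but the equation itself is easy to illustrate numerically. A minimal sketch (matrices chosen arbitrarily for illustration) solves P A + B P = -C by vectorization, using the column-stacking identities vec(PA) = (Aᵀ ⊗ I) vec(P) and vec(BP) = (I ⊗ B) vec(P):

```python
import numpy as np

def solve_pa_plus_bp(A, B, C):
    """Solve P A + B P = -C via the Kronecker-product identity
    (A.T kron I + I kron B) vec(P) = -vec(C), with column-major vec.
    A unique solution exists when A and -B share no eigenvalues."""
    n = A.shape[0]
    I = np.eye(n)
    K = np.kron(A.T, I) + np.kron(I, B)
    p = np.linalg.solve(K, -C.flatten(order="F"))
    return p.reshape((n, n), order="F")

A = np.array([[-2.0, 0.0], [1.0, -3.0]])
B = np.array([[-1.0, 1.0], [0.0, -4.0]])
C = np.array([[1.0, 2.0], [3.0, 4.0]])
P = solve_pa_plus_bp(A, B, C)
# The residual P @ A + B @ P + C should vanish (to rounding error).
```

This dense-linear-algebra route is a generic numerical check, not the exact rational/integer/modular machinery the paper develops.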
Dexter: Data Extractor for scanned graphs
NASA Astrophysics Data System (ADS)
Demleitner, Markus
2011-12-01
The NASA Astrophysics Data System (ADS) now holds 1.3 million scanned pages, containing numerous plots and figures for which the original data sets are lost or inaccessible. The availability of scans of the figures can significantly ease the regeneration of the data sets. For this purpose, the ADS has developed Dexter, a Java applet that supports the user in this process. Dexter's basic functionality is to let the user manually digitize a plot by marking points and defining the coordinate transformation from the logical to the physical coordinate system. Advanced features include automatic identification of axes, tracing lines and finding points matching a template.
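The core of such a digitizing tool is the user-defined mapping from physical (pixel) coordinates to logical (data) coordinates. A minimal per-axis calibration from two reference points might look as follows; the function and parameter names are mine for illustration, not Dexter's actual API:

```python
import math

def make_axis_map(px0, val0, px1, val1, log=False):
    """Return a function mapping a pixel coordinate on one axis to a data
    value, calibrated from two reference pixels with known data values.
    With log=True the axis is treated as logarithmic (base 10)."""
    if log:
        val0, val1 = math.log10(val0), math.log10(val1)
    slope = (val1 - val0) / (px1 - px0)
    def to_data(px):
        v = val0 + slope * (px - px0)
        return 10.0 ** v if log else v
    return to_data

x_map = make_axis_map(px0=100, val0=0.0, px1=500, val1=10.0)              # linear x axis
y_map = make_axis_map(px0=400, val0=1.0, px1=50, val1=1000.0, log=True)   # log y axis
```

Marking a data point then reduces to reading its pixel position and applying the two axis maps.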
Planetary spacecraft cost modeling utilizing labor estimating relationships
NASA Technical Reports Server (NTRS)
Williams, Raymond
1990-01-01
A basic computerized technology is presented for estimating labor hours and cost of unmanned planetary and lunar programs. The user friendly methodology designated Labor Estimating Relationship/Cost Estimating Relationship (LERCER) organizes the forecasting process according to vehicle subsystem levels. The level of input variables required by the model in predicting cost is consistent with pre-Phase A type mission analysis. Twenty one program categories were used in the modeling. To develop the model, numerous LER and CER studies were surveyed and modified when required. The result of the research along with components of the LERCER program are reported.
Stochastic differential equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sobczyk, K.
1990-01-01
This book provides a unified treatment of both regular (or random) and Ito stochastic differential equations. It focuses on solution methods, including some developed only recently. Applications are discussed; in particular, an insight is given into both the mathematical structure and the most efficient solution methods (analytical as well as numerical). Starting from basic notions and results of the theory of stochastic processes and stochastic calculus (including Ito's stochastic integral), many principal mathematical problems and results related to stochastic differential equations are expounded here for the first time. Applications treated include those relating to road vehicles, earthquake excitations and offshore structures.
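As a small illustration of the numerical solution methods such a treatment covers, the Euler-Maruyama scheme is the basic workhorse for Ito SDEs of the form dX = a(X) dt + b(X) dW. A sketch, with an Ornstein-Uhlenbeck process as an illustrative (not book-specific) example:

```python
import math
import random

def euler_maruyama(drift, diffusion, x0, t_end, n_steps, seed=0):
    """Simulate one path of dX = drift(X) dt + diffusion(X) dW
    with the Euler-Maruyama scheme; dW ~ N(0, dt) per step."""
    rng = random.Random(seed)
    dt = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x = x + drift(x) * dt + diffusion(x) * dw
    return x

# Ornstein-Uhlenbeck example: dX = -theta X dt + sigma dW
path_end = euler_maruyama(lambda x: -2.0 * x, lambda x: 0.5,
                          x0=1.0, t_end=1.0, n_steps=1000)
```

With the diffusion term set to zero the scheme reduces to the explicit Euler method for the underlying ODE, which is a convenient correctness check.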
Numerical analysis of single and multiple particles of Belchatow lignite dried in superheated steam
NASA Astrophysics Data System (ADS)
Zakrzewski, Marcin; Sciazko, Anna; Komatsu, Yosuke; Akiyama, Taro; Hashimoto, Akira; Kaneko, Shozo; Kimijima, Shinji; Szmyd, Janusz S.; Kobayashi, Yoshinori
2018-03-01
Low production costs have contributed to the important role of lignite in the energy mixes of numerous countries worldwide. High moisture content, though, diminishes the applicability of lignite in power generation. Superheated steam drying is a prospective method of raising the calorific value of this fuel. This study describes a numerical model of the superheated steam drying of lignite from the Belchatow mine in Poland in two respects: single-particle and multi-particle. The experimental investigation preceded the numerical analysis and provided the necessary data for the preparation and verification of the model. Spheres of 2.5 to 30 mm in diameter were exposed to the drying medium at temperatures ranging from 110 to 170 °C. The drying kinetics were described in the form of moisture content, drying rate and temperature profile curves against time. Basic coal properties, such as density or specific heat, as well as the mechanisms of heat and mass transfer in the particular stages of the process, laid the foundations for the model construction. The model illustrated the drying behavior of a single particle in the entire range of steam temperatures and sample diameters. Furthermore, numerical analyses of coal batches containing particles of various sizes were conducted to reflect the operating conditions of the dryer. These were followed by deliberation on the calorific value improvement achieved by drying, in terms of coal ingredients, power plant efficiency and dryer input composition. The initial period of drying was found to be crucial for upgrading the quality of coal. The accuracy of the model can be further improved with regard to the process parameters.
NASA Astrophysics Data System (ADS)
Goyal, M.; Chakravarty, A.; Atrey, M. D.
2017-02-01
Performance of modern helium refrigeration/ liquefaction systems depends significantly on the effectiveness of heat exchangers. Generally, compact plate fin heat exchangers (PFHE) having very high effectiveness (>0.95) are used in such systems. Apart from basic fluid film resistances, various secondary parameters influence the sizing/ rating of these heat exchangers. In the present paper, sizing calculations are performed, using in-house developed numerical models/ codes, for a set of high effectiveness PFHE for a modified Claude cycle based helium liquefier/ refrigerator operating in the refrigeration mode without liquid nitrogen (LN2) pre-cooling. The combined effects of secondary parameters like axial heat conduction through the heat exchanger metal matrix, parasitic heat in-leak from surroundings and variation in the fluid/ metal properties are taken care of in the sizing calculation. Numerical studies are carried out to predict the off-design performance of the PFHEs in the refrigeration mode with LN2 pre-cooling. Iterative process cycle calculations are also carried out to obtain the inlet/ exit state points of the heat exchangers.
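The sizing calculations themselves are not given in the abstract, but the ideal counterflow effectiveness-NTU relation illustrates why effectiveness above 0.95 demands a large NTU, and hence why the secondary parameters (axial conduction, parasitic heat in-leak, property variation) become significant. A sketch of the ideal relation only, ignoring all such secondary effects:

```python
import math

def counterflow_effectiveness(ntu, cr):
    """Effectiveness of an ideal counterflow heat exchanger (epsilon-NTU
    method), where cr = Cmin/Cmax is the heat capacity rate ratio."""
    if abs(cr - 1.0) < 1e-12:          # balanced-flow limit
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

# Even at NTU = 20 a nearly balanced exchanger (cr = 0.95) only reaches ~0.97.
eff = counterflow_effectiveness(20.0, 0.95)
```

In practice, axial conduction and heat in-leak degrade this ideal figure further, which is why the paper treats them in the sizing model.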
The calculating brain: an fMRI study.
Rickard, T C; Romero, S G; Basso, G; Wharton, C; Flitman, S; Grafman, J
2000-01-01
To explore brain areas involved in basic numerical computation, functional magnetic resonance imaging (fMRI) scanning was performed on college students during performance of three tasks: simple arithmetic, numerical magnitude judgment, and a perceptual-motor control task. For the arithmetic task relative to the other tasks, results for all eight subjects revealed bilateral activation in Brodmann's area 44, in dorsolateral prefrontal cortex (areas 9 and 10), in inferior and superior parietal areas, and in the lingual and fusiform gyri. Activation was stronger on the left for all subjects, but only at Brodmann's area 44 and the parietal cortices. No activation was observed in the arithmetic task in several other areas previously implicated in arithmetic, including the angular and supramarginal gyri and the basal ganglia. In fact, the angular and supramarginal gyri were significantly deactivated by the verification task relative to both the magnitude judgment and control tasks for every subject. Areas activated by the magnitude task relative to the control were more variable, but in five subjects included bilateral inferior parietal cortex. These results confirm some existing hypotheses regarding the neural basis of numerical processes, invite revision of others, and suggest productive lines for future investigation.
Large calculation of the flow over a hypersonic vehicle using a GPU
NASA Astrophysics Data System (ADS)
Elsen, Erich; LeGresley, Patrick; Darve, Eric
2008-12-01
Graphics processing units are capable of impressive computing performance, up to a peak of 518 Gflops. Various groups have been using these processors for general-purpose computing; most efforts have focused on demonstrating relatively basic calculations, e.g. numerical linear algebra, or physical simulations for visualization purposes with limited accuracy. This paper describes the simulation of a hypersonic vehicle configuration with detailed geometry and accurate boundary conditions using the compressible Euler equations. To the authors' knowledge, this is the most sophisticated calculation of this kind in terms of the complexity of the geometry, the physical model, the numerical methods employed, and the accuracy of the solution. The Navier-Stokes Stanford University Solver (NSSUS) was used for this purpose. NSSUS is a multi-block structured code with a provably stable and accurate numerical discretization which uses a vertex-based finite-difference method. A multi-grid scheme is used to accelerate the solution of the system. Based on a comparison of the Intel Core 2 Duo and the NVIDIA 8800GTX, speed-ups of over 40× were demonstrated for simple test geometries and 20× for complex geometries.
Basic numerical capacities and prevalence of developmental dyscalculia: the Havana Survey.
Reigosa-Crespo, Vivian; Valdés-Sosa, Mitchell; Butterworth, Brian; Estévez, Nancy; Rodríguez, Marisol; Santos, Elsa; Torres, Paul; Suárez, Ramón; Lage, Agustín
2012-01-01
The association of enumeration and number comparison capacities with arithmetical competence was examined in a large sample of children from 2nd to 9th grades. It was found that efficiency on numerical capacities predicted separately more than 25% of the variance in the individual differences on a timed arithmetical test, and this occurred for both younger and older learners. These capacities were also significant predictors of individual variations in an untimed curriculum-based math achievement test and on the teacher scores of math performance over developmental time. Based on these findings, these numerical capacities were used for estimating the prevalence and gender ratio of basic numerical deficits and developmental dyscalculia (DD) over the grade range defined above (N = 11,652 children). The extent to which DD affects the population with poor ability on calculation was also examined. For this purpose, the prevalence and gender ratio of arithmetical dysfluency (AD) were estimated in the same cohort. The estimated prevalence of DD was 3.4%, and the male:female ratio was 4:1. However, the prevalence of AD was almost 3 times as high (9.35%), and no gender differences were found (male:female ratio = 1.07:1). Basic numerical deficits affect 4.54% of school-age population and affect more boys than girls (2.4:1). The differences between the corresponding estimates were highly significant (α < .01). Based on these contrastive findings, it is concluded that DD, defined as a defective sense of numerosity, could be a distinctive disorder that affects only a portion of children with AD.
Arkansas' Curriculum Guide. Competency Based Typewriting.
ERIC Educational Resources Information Center
Arkansas State Dept. of Education, Little Rock. Div. of Vocational, Technical and Adult Education.
This guide contains the essential parts of a total curriculum for a one-year typewriting course at the secondary school level. Addressed in the individual units of the guide are the following topics: alphabetic keyboarding, numeric keyboarding, basic symbol keyboarding, skill development, problem typewriting, ten-key numeric pads, production…
Research in applied mathematics, numerical analysis, and computer science
NASA Technical Reports Server (NTRS)
1984-01-01
Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tai, Lin-Ru; Chou, Chang-Wei; Lee, I-Fang
In this study, we used a multiple-copy (EGFP)₃ reporter system to establish a numeric nuclear index system to assess the degree of nuclear import. The system was first validated by a FRAP assay, and then was applied to evaluate the essential and multifaceted nature of basic amino acid clusters during the nuclear import of ribosomal protein L7. The results indicate that the sequence context of the basic cluster determines the degree of nuclear import, and that the number of basic residues in the cluster is irrelevant; rather, the position of the pertinent basic residues is crucial. Moreover, it was also found that the type of carrier protein used by a basic cluster has a great impact on the degree of nuclear import. In the case of L7, importin β2 or importin β3 are preferentially used by clusters with a high import efficiency, notwithstanding that other importins are also used by clusters with a weaker level of nuclear import. Such a preferential usage of multiple basic clusters and importins to gain nuclear entry would seem to be a common practice among ribosomal proteins in order to ensure their full participation in high-rate ribosome synthesis. - Highlights: ► We introduce a numeric index system that represents the degree of nuclear import. ► The rate of nuclear import is dictated by the sequence context of the basic cluster. ► Importin β2 and β3 were mainly responsible for the N4-mediated nuclear import.
Research in progress in applied mathematics, numerical analysis, and computer science
NASA Technical Reports Server (NTRS)
1990-01-01
Research conducted at the Institute in Science and Engineering in applied mathematics, numerical analysis, and computer science is summarized. The Institute conducts unclassified basic research in applied mathematics in order to extend and improve problem solving capabilities in science and engineering, particularly in aeronautics and space.
Teaching BASIC. A Step by Step Guide.
ERIC Educational Resources Information Center
Allen, M. F.
This three-chapter guide provides simple explanations about BASIC programming for a teacher to use in a classroom situation, and suggests procedures for a "hands-on" course. Numerous examples are presented of the questions, problems, and level of understanding to expect from first-time, adult users (ages 13 and up). The course materials…
ERIC Educational Resources Information Center
Deshler, Donald D.; Tollefson, Julie M.
2006-01-01
Despite numerous successes achieved by American schools in recent years, one of the remaining challenges is the large number of adolescents who lack basic literacy skills. Nearly 25 percent of 8th and 12th graders score below the basic level in reading on the National Assessment of Educational Progress and only 70 percent of all high school…
Motivational Factors for Participating in Basic Instruction Programs
ERIC Educational Resources Information Center
Hardin, Robin; Andrew, Damon P. S.; Koo, Gi-Yong; Bemiller, Jim
2009-01-01
Enrollment trends in Basic Instruction Programs (BIPs) have shown a gradual decrease during the past four decades. This trend is significant because of the numerous studies that have declared Americans as unfit, inactive and leading unhealthy lifestyles. College and university BIPs are a means in which adults can be introduced to healthy…
Comments of statistical issue in numerical modeling for underground nuclear test monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicholson, W.L.; Anderson, K.K.
1993-03-01
The Symposium concluded with prepared summaries by four experts in the involved disciplines. These experts made no mention of statistics and/or the statistical content of issues. The first author contributed an extemporaneous statement at the Symposium because there are important issues associated with conducting and evaluating numerical modeling that are familiar to statisticians and often treated successfully by them. This note expands upon these extemporaneous remarks. Statistical ideas may be helpful in resolving some numerical modeling issues. Specifically, we comment first on the role of statistical design/analysis in the quantification process to answer the question "what do we know about the numerical modeling of underground nuclear tests?" and second on the peculiar nature of uncertainty analysis for situations involving numerical modeling. The simulations described in the workshop, though associated with topic areas, were basically sets of examples. Each simulation was tuned towards agreeing with either empirical evidence or an expert's opinion of what empirical evidence would be. While the discussions were reasonable, whether the embellishments were correct or a forced fitting of reality is unclear, which illustrates that "simulation is easy." We also suggest that these examples of simulation are typical and that the questions concerning the legitimacy and the role of knowing the reality are fair, in general, with respect to simulation. The answers will help us understand why "prediction is difficult."
Numerical analysis of tailored sheets to improve the quality of components made by SPIF
NASA Astrophysics Data System (ADS)
Gagliardi, Francesco; Ambrogio, Giuseppina; Cozza, Anna; Pulice, Diego; Filice, Luigino
2018-05-01
In this paper, the authors present a study on the profitable combination of forming techniques. More specifically, attention has been focused on combining single point incremental forming (SPIF) with, generally speaking, an additional process that can produce a material thickening on the initial blank to compensate for the local thinning which the sheets undergo. Focusing the research on the excessive thinning of parts made by SPIF, a hybrid approach can be regarded as a viable solution to reduce the inhomogeneous thickness distribution of the sheet. In fact, the basic idea is to work on a blank previously modified by a deformation step performed, for instance, by forming, additive or subtractive processes. To evaluate the effectiveness of this hybrid solution, an FE numerical model has been defined to analyze the thickness variation on tailored sheets incrementally formed, optimizing the material distribution according to the shape to be manufactured. Simulations based on the explicit formulation have been set up for the model implementation. The mechanical properties of the sheet material have been taken from the literature, and a frustum of a cone has been considered as the benchmark profile for the analysis. The outcomes of the numerical model have been evaluated in terms of both maximum thinning and final thickness distribution. The feasibility of the proposed approach is detailed in depth in the paper.
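The wall thinning that such tailored blanks compensate for is commonly estimated with the sine law of incremental forming. The sketch below is that standard first-order estimate, not the paper's FE model:

```python
import math

def sine_law_thickness(t0_mm, wall_angle_deg):
    """Predicted wall thickness after SPIF by the sine law,
    t = t0 * sin(90 deg - alpha), with alpha the wall angle
    measured from the horizontal."""
    return t0_mm * math.sin(math.radians(90.0 - wall_angle_deg))

# A 60-degree wall halves the thickness of a 1 mm blank;
# steeper walls thin even more, motivating locally thickened blanks.
t_wall = sine_law_thickness(1.0, 60.0)
```

Comparing this estimate against the FE-predicted thickness distribution is a typical sanity check when setting up such simulations.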
Massa, P T; Szuchet, S; Mugnaini, E
1984-12-01
Oligodendrocytes were isolated from lamb brain. Freshly isolated cells and cultured cells, either 1- to 4-day-old unattached or 1- to 5-week-old attached, were examined by thin section and freeze-fracture electron microscopy. Freeze-fracture of freshly isolated oligodendrocytes showed globular and elongated intramembrane particles similar to those previously described in oligodendrocytes in situ. Enrichment of these particles was seen at sites of inter-oligodendrocyte contact. Numerous gap junctions and scattered linear tight junctional arrays were apparent. Gap junctions were connected to blebs of astrocytic plasma membrane sheared off during isolation, whereas tight junctions were facing extracellular space or blebs of oligodendrocytic plasma membrane. Thin sections of cultured, unattached oligodendrocytes showed rounded cell bodies touching one another at points without forming specialized cell junctions. Cells plated on polylysine-coated aclar dishes attached, emanated numerous, pleomorphic processes, and expressed galactocerebroside and myelin basic protein, characteristic markers for oligodendrocytes. Thin sections showed typical oligodendrocyte ultrastructure but also intermediate filaments not present in unattached cultures. Freeze-fracture showed intramembrane particles similar to, but more numerous than, those seen in freshly isolated or in situ oligodendrocytes, and with a different distribution between fracture faces. Gap junctions were small and rare. Apposed oligodendrocyte plasma membranes formed linear tight junctions which became more numerous with time in culture. Thus, cultured oligodendrocytes isolated from ovine brains develop and maintain features characteristic of mature oligodendrocytes in situ and can be used to explore formation and maintenance of tight junctions and possibly other classes of cell-cell interactions important in the process of myelination.
Kashani, Alireza G; Olsen, Michael J; Parrish, Christopher E; Wilson, Nicholas
2015-11-06
In addition to precise 3D coordinates, most light detection and ranging (LIDAR) systems also record "intensity", loosely defined as the strength of the backscattered echo for each measured point. To date, LIDAR intensity data have proven beneficial in a wide range of applications because they are related to surface parameters, such as reflectance. While numerous procedures have been introduced in the scientific literature, and even commercial software, to enhance the utility of intensity data through a variety of "normalization", "correction", or "calibration" techniques, the current situation is complicated by a lack of standardization, as well as confusing, inconsistent use of terminology. In this paper, we first provide an overview of basic principles of LIDAR intensity measurements and applications utilizing intensity information from terrestrial, airborne topographic, and airborne bathymetric LIDAR. Next, we review effective parameters on intensity measurements, basic theory, and current intensity processing methods. We define terminology adopted from the most commonly-used conventions based on a review of current literature. Finally, we identify topics in need of further research. Ultimately, the presented information helps lay the foundation for future standards and specifications for LIDAR radiometric calibration.
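One of the simplest of the "normalization" techniques such reviews survey is range normalization derived from the LIDAR range equation, in which received power from an extended target falls off roughly as 1/R². A sketch; the exponent and reference range are user choices for illustration, not values mandated by any standard:

```python
def range_normalize_intensity(i_raw, r, r_ref, exponent=2.0):
    """Normalize a raw LIDAR intensity value to a reference range,
    assuming received power falls off as 1/R**exponent (exponent 2
    is the usual choice for extended Lambertian targets)."""
    return i_raw * (r / r_ref) ** exponent

# A return from twice the reference range is boosted by a factor of 4.
i_norm = range_normalize_intensity(100.0, r=200.0, r_ref=100.0)
```

Fuller corrections also account for incidence angle, atmospheric attenuation and receiver response, which is where the terminology the paper standardizes ("normalization" vs. "correction" vs. "calibration") becomes important.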
Teaching Mathematical Modelling for Earth Sciences via Case Studies
NASA Astrophysics Data System (ADS)
Yang, Xin-She
2010-05-01
Mathematical modelling is becoming crucially important for earth sciences because the modelling of complex systems such as geological, geophysical and environmental processes requires mathematical analysis, numerical methods and computer programming. However, a substantial fraction of earth science undergraduates and graduates may not have sufficient skills in mathematical modelling, due either to limited mathematical training or to the lack of appropriate mathematical textbooks for self-study. In this paper, we describe a detailed case-study-based approach for teaching mathematical modelling. We illustrate how essential mathematical skills can be developed for students with limited training in secondary mathematics so that they are confident in dealing with real-world mathematical modelling at university level. We have chosen various topics such as Airy isostasy, the greenhouse effect, sedimentation and Stokes' flow, free-air and Bouguer gravity, Brownian motion, rain-drop dynamics, impact cratering, heat conduction and cooling of the lithosphere as case studies; and we use these step-by-step case studies to teach exponentials, logarithms, spherical geometry, basic calculus, complex numbers, Fourier transforms, ordinary differential equations, vectors and matrix algebra, partial differential equations, geostatistics and basic numerical methods. Implications of teaching university mathematics to earth scientists for tomorrow's classroom will also be discussed. References: 1) D. L. Turcotte and G. Schubert, Geodynamics, 2nd Edition, Cambridge University Press, (2002). 2) X. S. Yang, Introductory Mathematics for Earth Scientists, Dunedin Academic Press, (2009).
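To give a flavor of the case-study approach, here is a minimal sketch of the first topic listed, Airy isostasy. The density values are illustrative textbook numbers, not taken from the paper:

```python
# One of the case studies listed above (Airy isostasy), as a minimal sketch.
# A topographic load of height h is compensated by a crustal root of depth
# r = h * rho_c / (rho_m - rho_c). Densities (kg/m^3) are textbook values.

def airy_root_depth(h_km, rho_crust=2800.0, rho_mantle=3300.0):
    """Depth (km) of the compensating root under Airy isostasy."""
    return h_km * rho_crust / (rho_mantle - rho_crust)

# A 5 km high mountain range needs a root of 5 * 2800 / 500 = 28 km.
root = airy_root_depth(5.0)
```

Worked through step by step, an example like this exercises only basic algebra, yet introduces a genuinely geophysical balance law.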
A Comparison of Five Numerical Weather Prediction Analysis Climatologies in Southern High Latitudes.
NASA Astrophysics Data System (ADS)
Connolley, William M.; Harangozo, Stephen A.
2001-01-01
In this paper, numerical weather prediction analyses from four major centers are compared: the Australian Bureau of Meteorology (ABM), the European Centre for Medium-Range Weather Forecasts (ECMWF), the U.S. National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR), and The Met. Office (UKMO). Two of the series, the ECMWF reanalysis (ERA) and the NCEP-NCAR reanalysis (NNR), are `reanalyses'; that is, the data have recently been processed through a consistent, modern analysis system. The other three (ABM, ECMWF operational (EOP), and UKMO) are archived from operational analyses. The primary focus in this paper is on the period of 1979-93, the period used for the reanalyses, and on climatology. However, ABM and NNR are also compared for the period before 1979, for which the evidence tends to favor NNR. The authors are concerned with basic variables (mean sea level pressure, height of the 500-hPa surface, and near-surface temperature) that are available from the basic analysis step, rather than more derived quantities (such as precipitation), which are available only from the forecast step. Direct comparisons against station observations, intercomparisons of the spatial pattern of the analyses, and intercomparisons of the temporal variation indicate that ERA, EOP, and UKMO are best for sea level pressure; that UKMO and EOP are best for 500-hPa height; and that none of the analyses perform well for near-surface temperature.
Numerical investigation of the flow inside the combustion chamber of a plant oil stove
NASA Astrophysics Data System (ADS)
Pritz, B.; Werler, M.; Wirbser, H.; Gabi, M.
2013-10-01
Recently a low-cost cooking device for developing and emerging countries was developed at KIT in cooperation with the company Bosch und Siemens Hausgeräte GmbH. After an innovative basic design had been constructed, further development was required. Numerical investigations were conducted to study the flow inside the combustion chamber of the stove under variation of different geometrical parameters. Beyond improving performance, a further aim of the investigations was to assess the effects of manufacturing tolerances. In this paper the numerical investigation of a plant oil stove by means of RANS simulation is presented. In order to reduce the computational costs, several model reduction steps were necessary. The simulation results of the basic configuration compare very well with experimental measurements, and problematic behaviors of the actual stove design could be explained by the investigation.
Multi-level adaptive finite element methods. 1: Variational problems
NASA Technical Reports Server (NTRS)
Brandt, A.
1979-01-01
A general numerical strategy for solving partial differential equations and other functional problems by cycling between coarser and finer levels of discretization is described. Optimal discretization schemes are provided together with very fast general solvers. The strategy is described in terms of finite element discretizations of general nonlinear minimization problems. The basic processes (relaxation sweeps, fine-grid-to-coarse-grid transfers of residuals, coarse-to-fine interpolations of corrections) are directly and naturally determined by the objective functional and the sequence of approximation spaces. The natural processes, however, are not always optimal. Concrete examples are given and some new techniques are reviewed, including local truncation extrapolation and a multilevel procedure for inexpensively solving chains of many boundary value problems, such as those arising in the solution of time-dependent problems.
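The cycle of basic processes named above can be sketched on the simplest model problem. The code below is an illustrative two-grid correction scheme for the 1D Poisson equation, with finite differences standing in for finite elements; the numerical choices (weighted Jacobi smoothing, full-weighting restriction, linear interpolation) are standard textbook ones, not specifics of this report.

```python
import numpy as np

# Illustrative two-grid cycle for -u'' = f on (0,1), u(0)=u(1)=0, using the
# basic processes named above: relaxation sweeps, fine-to-coarse residual
# transfer, a coarse-grid solve, and coarse-to-fine interpolated corrections.

def relax(u, f, h, sweeps=3):
    """Weighted Jacobi relaxation sweeps."""
    w = 2.0 / 3.0
    for _ in range(sweeps):
        v = u.copy()
        v[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        u = v
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def restrict(r):
    """Full-weighting transfer of the residual to the coarse grid."""
    return np.concatenate(
        ([0.0], 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2], [0.0]))

def interpolate(e):
    """Linear interpolation of the coarse-grid correction to the fine grid."""
    u = np.zeros(2 * (len(e) - 1) + 1)
    u[::2] = e
    u[1::2] = 0.5 * (e[:-1] + e[1:])
    return u

def two_grid(u, f, h):
    u = relax(u, f, h)                       # pre-smoothing
    rc = restrict(residual(u, f, h))         # fine-to-coarse residual transfer
    nc, hc = len(rc) - 1, 2 * h
    A = (np.diag(2.0 * np.ones(nc - 1)) - np.diag(np.ones(nc - 2), 1)
         - np.diag(np.ones(nc - 2), -1)) / hc**2
    ec = np.zeros(nc + 1)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])  # exact coarse-grid solve
    u = u + interpolate(ec)                  # coarse-to-fine correction
    return relax(u, f, h)                    # post-smoothing

# Usage: f chosen so the exact solution is sin(pi x).
n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(10):
    u = two_grid(u, f, h)
```

A full multilevel method applies the same correction idea recursively instead of solving the coarse problem directly.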
NASA Astrophysics Data System (ADS)
Maksimov, Vyacheslav I.; Nagornova, Tatiana A.; Glazyrin, Viktor P.; Shestakov, Igor A.
2016-02-01
The process of convective heat transfer in reservoirs of liquefied natural gas (LNG) is investigated numerically. Regimes of natural convection in a closed rectangular region with different intensities of heat exchange at the external borders are considered. The time-dependent system of energy and Navier-Stokes equations is solved in the dimensionless variables "vorticity - stream function". Distributions of the hydrodynamic parameters and temperatures that characterize the basic regularities of the processes are obtained. The special features of the formation of circulation flows are isolated, and the temperature distribution in the solution region is analyzed. The influence of the geometric characteristics and of the intensity of heat exchange at the outer boundaries of the reservoir on the temperature field in the LNG storage is shown.
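For reference, one common dimensionless vorticity-stream function formulation of such natural-convection problems is sketched below. The paper's exact nondimensionalization is not reproduced in the abstract, so the Ra-Pr scaling shown here is a standard textbook assumption, not the authors' stated form:

```latex
% 2D Boussinesq natural convection in vorticity--stream function variables
\frac{\partial\omega}{\partial\tau}
  + U\frac{\partial\omega}{\partial X}
  + V\frac{\partial\omega}{\partial Y}
  = \sqrt{\frac{\mathrm{Pr}}{\mathrm{Ra}}}\,\nabla^{2}\omega
  + \frac{\partial\Theta}{\partial X},
\qquad
\nabla^{2}\Psi = -\omega,
\qquad
\frac{\partial\Theta}{\partial\tau}
  + U\frac{\partial\Theta}{\partial X}
  + V\frac{\partial\Theta}{\partial Y}
  = \frac{1}{\sqrt{\mathrm{Ra}\,\mathrm{Pr}}}\,\nabla^{2}\Theta,
\qquad
U = \frac{\partial\Psi}{\partial Y},\quad
V = -\frac{\partial\Psi}{\partial X}.
```

Solving for the stream function eliminates the pressure, which is why this formulation is popular for enclosed natural-convection problems.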
Identification of dynamic systems, theory and formulation
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1985-01-01
The problem of estimating parameters of dynamic systems is addressed in order to present the theoretical basis of system identification and parameter estimation in a manner that is complete and rigorous, yet understandable with minimal prerequisites. Maximum likelihood and related estimators are highlighted. The approach used requires familiarity with calculus, linear algebra, and probability, but does not require knowledge of stochastic processes or functional analysis. The treatment emphasizes unification of the various areas: estimation in dynamic systems is treated as a direct outgrowth of static system theory. Topics covered include basic concepts and definitions; numerical optimization methods; probability; statistical estimators; estimation in static systems; stochastic processes; state estimation in dynamic systems; output error, filter error, and equation error methods of parameter estimation in dynamic systems; and the accuracy of the estimates.
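A minimal sketch of the output-error idea central to this material: for Gaussian measurement noise, the maximum likelihood estimate minimizes the sum of squared output errors between measured and simulated responses. The scalar system, noise level, and brute-force grid search below are illustrative stand-ins for the Newton-type optimization developed in the text.

```python
import numpy as np

# Output-error / maximum-likelihood sketch: estimate the rate a of the scalar
# system x' = a*x, x(0) = 1, from noisy samples. For Gaussian measurement
# noise, minimizing the sum of squared output errors gives the ML estimate.

def simulate(a, t):
    return np.exp(a * t)            # model output for parameter a

def output_error_cost(a, t, y):
    return np.sum((y - simulate(a, t)) ** 2)

def estimate(t, y):
    grid = np.linspace(-3.0, 0.0, 3001)       # candidate parameters
    costs = [output_error_cost(a, t, y) for a in grid]
    return grid[int(np.argmin(costs))]

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 50)
y = simulate(-1.2, t) + 0.01 * rng.standard_normal(t.size)
a_hat = estimate(t, y)              # recovers a value close to -1.2
```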
Semiannual report, 1 April - 30 September 1991
NASA Technical Reports Server (NTRS)
1991-01-01
The major categories of the current Institute for Computer Applications in Science and Engineering (ICASE) research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification problems, with emphasis on effective numerical methods; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software for parallel computers. Research in these areas is discussed.
Iuculano, Teresa; Cohen Kadosh, Roi
2014-01-01
Nearly 7% of the population exhibit difficulties in dealing with numbers and performing arithmetic, a condition named Developmental Dyscalculia (DD), which significantly affects the educational and professional outcomes of these individuals, as it often persists into adulthood. Research has mainly focused on behavioral rehabilitation, while little is known about performance changes and neuroplasticity induced by the concurrent application of brain-behavioral approaches. It has been shown that numerical proficiency can be enhanced by applying a small-yet constant-current through the brain, a non-invasive technique named transcranial electrical stimulation (tES). Here we combined a numerical learning paradigm with transcranial direct current stimulation (tDCS) in two adults with DD to assess the potential benefits of this methodology to remediate their numerical difficulties. Subjects learned to associate artificial symbols to numerical quantities within the context of a trial and error paradigm, while tDCS was applied to the posterior parietal cortex (PPC). The first subject (DD1) received anodal stimulation to the right PPC and cathodal stimulation to the left PPC, which has been associated with numerical performance's improvements in healthy subjects. The second subject (DD2) received anodal stimulation to the left PPC and cathodal stimulation to the right PPC, which has been shown to impair numerical performance in healthy subjects. We examined two indices of numerical proficiency: (i) automaticity of number processing; and (ii) mapping of numbers onto space. Our results are opposite to previous findings with non-dyscalculic subjects. Only anodal stimulation to the left PPC improved both indices of numerical proficiency. These initial results represent an important step to inform the rehabilitation of developmental learning disabilities, and have relevant applications for basic and applied research in cognitive neuroscience, rehabilitation, and education.
Numerical Weather Prediction Models on Linux Boxes as tools in meteorological education in Hungary
NASA Astrophysics Data System (ADS)
Gyongyosi, A. Z.; Andre, K.; Salavec, P.; Horanyi, A.; Szepszo, G.; Mille, M.; Tasnadi, P.; Weidiger, T.
2012-04-01
Education of Meteorologists in Hungary - according to the Bologna Process - has three stages: BSc, MSc and PhD, and students graduating at each stage receive the respective degree. The three-year BSc course in Meteorology can be chosen by undergraduate students in the fields of Geosciences, Environmental Sciences and Physics. Fundamentals in Mathematics (Calculus), (General and Theoretical) Physics and Informatics are emphasized during their elementary education. The two-year MSc course - in which about 15 to 25 students are admitted each year - can be studied only at the Eötvös Loránd University in our country. Our aim is to give a basic education in all fields of Meteorology: Climatology, Atmospheric Physics, Atmospheric Chemistry, Dynamic and Synoptic Meteorology, Numerical Weather Prediction, Modeling of Surface-atmosphere Interactions and Climate Change. Education is performed in two branches: Climate Researcher and Forecaster.
Numerical modeling has become a common tool in the daily practice of weather forecasters due to i) increasing user demand for weather data, ii) the growth in computer resources, iii) numerical weather prediction systems available for integration on affordable, off-the-shelf computers, and iv) available input data (from ECMWF or NCEP) for model integrations. Besides learning the theoretical basis, students in their MSc or BSc thesis research or in students' research projects have the opportunity to run numerical models and to analyze the outputs for different purposes, including wind energy estimation, simulation of the dynamics of a polar low and of subtropical cyclones, analysis of the isentropic potential vorticity field, examination of coupled atmospheric dispersion models, etc. A special course on the application of numerical modeling is being announced for the upcoming semester for our students in order to improve their skills in this field. Several numerical model systems (NRIPR, ETA and WRF) have been adapted at the University and tested for the geographical region of the Carpathian Basin. Recently ALADIN/CHAPEAU, the academic version of the ARPEGE-ALADIN cy33t1 meso-scale numerical weather prediction model system, has been installed at our Institute; ALADIN is the operational forecasting model of the Hungarian Meteorological Service, developed in the framework of the international ALADIN co-operation. Our main objectives are i) the analysis of different typical weather situations, ii) fine tuning of parameterization schemes, and iii) comparison of the ALADIN/CHAPEAU and WRF model outputs based on case studies. The necessary hardware and software innovations have been made.
In the presentation the computer resources needed for the integration of both the WRF and ALADIN/CHAPEAU models will be briefly described. The software developments performed for the evaluation and comparison of the different modeling systems will be demonstrated. The main objectives of the education program on practical numerical weather modeling will be introduced, together with its detailed syllabus and the structure of the lab sessions.
Reengineering a database for clinical trials management: lessons for system architects.
Brandt, C A; Nadkarni, P; Marenco, L; Karras, B T; Lu, C; Schacter, L; Fisk, J M; Miller, P L
2000-10-01
This paper describes the process of enhancing Trial/DB, a database system for clinical studies management. The system's enhancements have been driven by the need to maximize the effectiveness of developer personnel in supporting numerous and diverse users, of study designers in setting up new studies, and of administrators in managing ongoing studies. Trial/DB was originally designed to work over a local area network within a single institution, and basic architectural changes were necessary to make it work over the Internet efficiently as well as securely. Further, as its use spread to diverse communities of users, changes were made to let the processes of study design and project management adapt to the working styles of the principal investigators and administrators for each study. The lessons learned in the process should prove instructive for system architects as well as managers of electronic patient record systems.
NASA Astrophysics Data System (ADS)
Roslyakov, P. V.; Proskurin, Y. V.; Khokhlov, D. A.; Zaichenko, M. N.
2018-03-01
The aim of this work is to study the operation of a modern combined low-emission swirl burner with a capacity of 2.2 MW for the fire-tube boiler type KV-GM-2.0, in order to ensure the effective burning of natural gas, crude oil and diesel fuel. For this purpose, a computer model of the burner and furnace chamber has been developed. The paper presents the results of numerical investigations of the burner operation, using the example of natural gas, in a working load range from 40 to 100%. The basic features of fuel combustion in the confined conditions of the flame tube have been identified and shown to differ fundamentally from similar processes in the furnaces of steam boilers. The influence of the design of the burners and of their operating modes on incomplete combustion of fuel and the formation of nitrogen oxides has been determined.
Kuprijanov, A; Gnoth, S; Simutis, R; Lübbert, A
2009-02-01
Design and experimental validation of advanced pO(2) controllers for fermentation processes operated in the fed-batch mode are described. In most situations, the presented controllers are able to keep the pO(2) in fermentations for recombinant protein production exactly at the desired value. The controllers are based on the gain-scheduling approach to parameter-adaptive proportional-integral controllers. To cope with the most frequently occurring disturbances, the basic gain-scheduling feedback controller was complemented with a feedforward control component. This feedforward/feedback controller significantly improved pO(2) control. By means of numerical simulations, the controller behavior was tested and its parameters were determined. Validation runs were performed with three Escherichia coli strains producing different recombinant proteins. It is finally shown that the new controller leads to significant improvements in the signal-to-noise ratio of other key process variables and, thus, to a higher process quality.
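A hedged sketch of the control structure described: a PI controller whose gains are scheduled on the growing oxygen demand, plus a feedforward term that can compensate known feed-rate changes directly. All gains, the scheduling law, and the toy oxygen balance are illustrative assumptions, not the paper's model or tuning.

```python
# Gain-scheduled PI with feedforward (illustrative sketch, not the paper's
# controller): gains scale with the current oxygen demand (the scheduling
# variable); a feedforward term acts on known feed-rate steps directly.

class GainScheduledPI:
    def __init__(self, kp0, ki0, dt):
        self.kp0, self.ki0, self.dt = kp0, ki0, dt
        self.integral = 0.0

    def step(self, setpoint, measurement, demand, feed_step=0.0, kff=0.0):
        error = setpoint - measurement
        self.integral += error * self.dt
        feedback = self.kp0 * demand * error + self.ki0 * demand * self.integral
        feedforward = kff * feed_step      # compensate known disturbances
        return feedback + feedforward

# Toy closed loop: pO2 balance with exponentially growing oxygen demand.
dt = 1.0
ctrl = GainScheduledPI(kp0=0.5, ki0=0.05, dt=dt)
po2, demand = 60.0, 1.0
for _ in range(200):
    demand *= 1.005                        # growing culture consumes more O2
    u = ctrl.step(30.0, po2, demand)       # aeration/stirring action
    po2 += dt * (u - 0.5 * demand)         # toy oxygen balance
```

Scheduling the gains on demand keeps the loop responsiveness roughly constant even as the culture's oxygen uptake grows exponentially.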
NASA Astrophysics Data System (ADS)
Weidinger, Peter; Günther, Kay; Fitzel, Martin; Logvinov, Ruslan; Ilin, Alexander; Ploshikhin, Vasily; Hugger, Florian; Mann, Vincent; Roth, Stephan; Schmidt, Michael
The necessity of weight reduction in motor vehicles in order to reduce fuel consumption pushes automotive suppliers to use materials of higher strength. Due to their excellent crash behavior, high-strength steels are increasingly applied in various structures. In this paper some predevelopment steps for a material change from a micro-alloyed steel to dual-phase and complex-phase steels in a T-joint assembly are presented. Initially the general weldability of the materials regarding pore formation, hardening in the heat-affected zone and hot-cracking susceptibility is discussed. After this basic investigation, the computer-aided design optimization of a clamping device is shown, in which the influences of the clamping jaw, the welding position and the clamping forces upon weld quality are presented. Finally, experimental results of the welding process are presented, which validate the numerical simulation.
Hong, T; Manqi, T
1980-04-01
Proton transport across biological membranes, accompanied by energy transformation, is closely related to many basic processes involved in the maintenance of life. Active research is carried out in this field, but so far no complete calculation has been available. This paper presents a model of a photon-controlled ion pore that opens and closes, with a quantitative analysis of the irreversible process of proton transport across the purple membrane. Upon photon absorption by the purple membrane, deprotonation of the Schiff base causes the ion pore to open; it closes when the membrane returns to bR570. A set of nonlinear differential equations describing this model is given, and the stability of the equations is discussed. The results of the numerical calculation for the steady state are found to be in good agreement with the experimental data of Bakker.
Neuroart: picturing the neuroscience of intentional actions in art and science.
Siler, Todd
2015-01-01
Intentional actions cover a broad spectrum of human behaviors involving consciousness, creativity, innovative thinking, problem-solving, critical thinking, and other related cognitive processes self-evident in the arts and sciences. The author discusses the brain activity associated with action intentions, connecting this activity with the creative process. Focusing on one seminal artwork created and exhibited over a period of three decades, Thought Assemblies (1979-82, 2014), he describes how this symbolic art interprets the neuropsychological processes of intuition and analytical reasoning. It explores numerous basic questions concerning observed interactions between artistic and scientific inquiries, conceptions, perceptions, and representations connecting mind and nature. Pointing to some key neural mechanisms responsible for forming and implementing intentions, he considers why and how we create, discover, invent, and innovate. He suggests ways of metaphorical thinking and symbolic modeling that can help integrate the neuroscience of intentional actions with the neuroscience of creativity, art and neuroaesthetics.
On the validation of a code and a turbulence model appropriate to circulation control airfoils
NASA Technical Reports Server (NTRS)
Viegas, J. R.; Rubesin, M. W.; Maccormack, R. W.
1988-01-01
A computer code for calculating flow about a circulation control airfoil within a wind tunnel test section has been developed. This code is being validated for eventual use as an aid to design such airfoils. The concept of code validation being used is explained. The initial stages of the process have been accomplished. The present code has been applied to a low-subsonic, 2-D flow about a circulation control airfoil for which extensive data exist. Two basic turbulence models and variants thereof have been successfully introduced into the algorithm: the Baldwin-Lomax algebraic and the Jones-Launder two-equation models of turbulence. The variants include adding a history of the jet development for the algebraic model and adding streamwise curvature effects for both models. Numerical difficulties and difficulties in the validation process are discussed. Turbulence model and code improvements needed to proceed with the validation process are also discussed.
Sex Differences in the Spatial Representation of Number
ERIC Educational Resources Information Center
Bull, Rebecca; Cleland, Alexandra A.; Mitchell, Thomas
2013-01-01
There is a large body of accumulated evidence from behavioral and neuroimaging studies regarding how and where in the brain we represent basic numerical information. A number of these studies have considered how numerical representations may differ between individuals according to their age or level of mathematical ability, but one issue rarely…
Numerical Ordering Ability Mediates the Relation between Number-Sense and Arithmetic Competence
ERIC Educational Resources Information Center
Lyons, Ian M.; Beilock, Sian L.
2011-01-01
What predicts human mathematical competence? While detailed models of number representation in the brain have been developed, it remains to be seen exactly how basic number representations link to higher math abilities. We propose that representation of ordinal associations between numerical symbols is one important factor that underpins this…
48 CFR 204.7004 - Supplementary PII numbers.
Code of Federal Regulations, 2014 CFR
2014-10-01
... agreements using a six position alpha-numeric added to the basic PII number. (2) Position 1. Identify the...) Positions 2 through 3. These are the first two digits in a serial number. They may be either alpha or... orders issued by the office issuing the contract or agreement. Use a four position alpha-numeric call or...
48 CFR 204.7004 - Supplementary PII numbers.
Code of Federal Regulations, 2012 CFR
2012-10-01
... agreements using a six position alpha-numeric added to the basic PII number. (2) Position 1. Identify the...) Positions 2 through 3. These are the first two digits in a serial number. They may be either alpha or... orders issued by the office issuing the contract or agreement. Use a four position alpha-numeric call or...
48 CFR 204.7004 - Supplementary PII numbers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... agreements using a six position alpha-numeric added to the basic PII number. (2) Position 1. Identify the...) Positions 2 through 3. These are the first two digits in a serial number. They may be either alpha or... orders issued by the office issuing the contract or agreement. Use a four position alpha-numeric call or...
Inclusive Classrooms: A Basic Qualitative Study of K-8 Urban Charter School Teachers
ERIC Educational Resources Information Center
Williams, Regina N.
2017-01-01
The rapid growth of charter schools has been accompanied with numerous questions related to special education such as whether or not charter schools and their unique missions can actually meet the needs of students with disabilities (Karp, 2012). This basic qualitative study explores the practices and procedures used by primary school teachers to…
Using basic statistics on the individual patient's own numeric data.
Hart, John
2012-12-01
This theoretical report gives an example of how the coefficient of variation (CV) and quartile analysis (QA) for assessing outliers might be used to analyze numeric data in practice for an individual patient. A patient was examined over 8 visits using infrared instrumentation to measure mastoid fossa temperature differential (MFTD) readings. The CV and QA were applied to the readings. The participant also completed the Short Form-12 health perception survey on each visit, and these findings were correlated with the CV to determine whether the CV had outcomes support (clinical significance). An outlier MFTD reading was observed on the eighth visit according to QA, coinciding with the largest CV value for the MFTDs. Correlations between the Short Form-12 and the CV were low to negligible, positive, and statistically nonsignificant. This case provides an example of how basic statistical analyses could be applied to numerical data in chiropractic practice for an individual patient. This might add objectivity to analyzing an individual patient's data in practice, particularly if the clinical significance of a numerical finding is unknown.
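A minimal sketch of the two statistics the report applies to a per-visit series; the eight readings below are fabricated for illustration, not the paper's actual MFTD values:

```python
import statistics

# CV and quartile-fence outlier sketch for a series of per-visit readings.
# The eight values below are fabricated; visit 8 carries an extreme reading.

def coefficient_of_variation(xs):
    return statistics.stdev(xs) / statistics.mean(xs)

def iqr_outliers(xs):
    q1, _, q3 = statistics.quantiles(xs, n=4)   # quartiles (exclusive method)
    fence = 1.5 * (q3 - q1)
    return [x for x in xs if x < q1 - fence or x > q3 + fence]

readings = [0.3, 0.4, 0.35, 0.32, 0.38, 0.36, 0.33, 1.1]
outliers = iqr_outliers(readings)       # flags the extreme eighth reading
cv = coefficient_of_variation(readings)
```

Note that the single extreme value both triggers the quartile fence and inflates the CV, mirroring the coincidence the report describes.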
Spatial and numerical abilities without a complete natural language.
Hyde, Daniel C; Winkler-Rhoades, Nathan; Lee, Sang-Ah; Izard, Veronique; Shapiro, Kevin A; Spelke, Elizabeth S
2011-04-01
We studied the cognitive abilities of a 13-year-old deaf child, deprived of most linguistic input from late infancy, in a battery of tests designed to reveal the nature of numerical and geometrical abilities in the absence of a full linguistic system. Tests revealed widespread proficiency in basic symbolic and non-symbolic numerical computations involving the use of both exact and approximate numbers. Tests of spatial and geometrical abilities revealed an interesting patchwork of age-typical strengths and localized deficits. In particular, the child performed extremely well on navigation tasks involving geometrical or landmark information presented in isolation, but very poorly on otherwise similar tasks that required the combination of the two types of spatial information. Tests of number- and space-specific language revealed proficiency in the use of number words and deficits in the use of spatial terms. This case suggests that a full linguistic system is not necessary to reap the benefits of linguistic vocabulary on basic numerical tasks. Furthermore, it suggests that language plays an important role in the combination of mental representations of space. Copyright © 2010 Elsevier Ltd. All rights reserved.
Multistep integration formulas for the numerical integration of the satellite problem
NASA Technical Reports Server (NTRS)
Lundberg, J. B.; Tapley, B. D.
1981-01-01
The use of two Class 2 (fixed-mesh, fixed-order, multistep) integration packages of the PECE type for the numerical integration of the second-order, nonlinear, ordinary differential equation of the satellite orbit problem is described. These two methods are referred to as the general and the second sum formulations. The derivation of the basic equations which characterize each formulation and the role of the basic equations in the PECE algorithm are discussed. Possible starting procedures are examined which may be used to supply the initial set of values required by the fixed-mesh multistep integrators. The results of the general and second sum integrators are compared to the results of various fixed-step and variable-step integrators.
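A fixed-mesh PECE cycle of the kind referenced above can be sketched on a toy problem. The oscillator y'' = -y stands in for the satellite equation, and the Heun starting step, step size, and AB2/trapezoid pair are illustrative choices, not the report's Class 2 general or second sum formulations:

```python
import math

def f(t, y):
    # first-order system for the test equation y'' = -y; y = (pos, vel)
    return (y[1], -y[0])

def pece(f, t0, y0, h, steps):
    """Fixed-mesh PECE: AB2 predictor, trapezoidal corrector."""
    fy0 = f(t0, y0)
    # starting procedure: one Heun step supplies the back value AB2 needs
    ye = tuple(yi + h * fi for yi, fi in zip(y0, fy0))
    y1 = tuple(yi + 0.5 * h * (a + b)
               for yi, a, b in zip(y0, fy0, f(t0 + h, ye)))
    hist = [(t0, y0, fy0), (t0 + h, y1, f(t0 + h, y1))]
    for _ in range(steps - 1):
        (_, _, fm1), (tn, yn, fn) = hist[-2], hist[-1]
        # Predict (AB2) - Evaluate - Correct (trapezoid) - Evaluate
        yp = tuple(yi + h * (1.5 * a - 0.5 * b)
                   for yi, a, b in zip(yn, fn, fm1))
        fp = f(tn + h, yp)
        yc = tuple(yi + 0.5 * h * (a + b) for yi, a, b in zip(yn, fn, fp))
        hist.append((tn + h, yc, f(tn + h, yc)))
    return hist[-1][0], hist[-1][1]

t_end, y_end = pece(f, 0.0, (1.0, 0.0), 0.01, 100)
print(y_end[0], math.cos(t_end))  # numerical vs. exact position
```

The starting procedure matters because the two-step predictor needs one more back value than the initial condition provides, which is exactly the issue the report examines for its integrators.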
Grid adaption based on modified anisotropic diffusion equations formulated in the parametric domain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagmeijer, R.
1994-11-01
A new grid-adaption algorithm for problems in computational fluid dynamics is presented. The basic equations are derived from a variational problem formulated in the parametric domain of the mapping that defines the existing grid. Modification of the basic equations provides desirable properties in boundary layers. The resulting modified anisotropic diffusion equations are solved for the computational coordinates as functions of the parametric coordinates and these functions are numerically inverted. Numerical examples show that the algorithm is robust, that shocks and boundary layers are well-resolved on the adapted grid, and that the flow solution becomes a globally smooth function of the computational coordinates.
Guidelines for developing vectorizable computer programs
NASA Technical Reports Server (NTRS)
Miner, E. W.
1982-01-01
Some fundamental principles for developing computer programs which are compatible with array-oriented computers are presented. The emphasis is on basic techniques for structuring computer codes which are applicable in FORTRAN and do not require a special programming language or exact a significant penalty on a scalar computer. Researchers who are using numerical techniques to solve problems in engineering can apply these basic principles and thus develop transportable computer programs (in FORTRAN) which contain much vectorizable code. The vector architecture of the ASC is discussed so that the requirements of array processing can be better appreciated. The "vectorization" of a finite-difference viscous shock-layer code is used as an example to illustrate the benefits and some of the difficulties involved. Increases in computing speed with vectorization are illustrated with results from the viscous shock-layer code and from a finite-element shock tube code. The applicability of these principles was substantiated through running programs on other computers with array-associated computing characteristics, such as the Hewlett-Packard (H-P) 1000-F.
A modeling analysis program for the JPL Table Mountain Io sodium cloud data
NASA Technical Reports Server (NTRS)
Smyth, William H.; Goldberg, Bruce A.
1988-01-01
Research in the third and final year of this project is divided into three main areas: (1) completion of data processing and calibration for 34 of the 1981 Region B/C images, selected from the massive JPL sodium cloud data set; (2) identification and examination of the basic features and observed changes in the morphological characteristics of the sodium cloud images; and (3) successful physical interpretation of these basic features and observed changes using the highly developed numerical sodium cloud model at AER. The modeling analysis has led to a number of definite conclusions regarding the local structure of Io's atmosphere, the gas escape mechanism at Io, and the presence of an east-west electric field and a System III longitudinal asymmetry in the plasma torus. Large scale stability, as well as some smaller scale time variability for both the sodium cloud and the structure of the plasma torus over a several year time period are also discussed.
BOREAS AFM-04 Twin Otter Aircraft Flux Data
NASA Technical Reports Server (NTRS)
MacPherson, J. Ian; Hall, Forrest G. (Editor); Knapp, David E. (Editor); Desjardins, Raymond L.; Smith, David E. (Technical Monitor)
2000-01-01
The BOREAS AFM-05 team collected and processed data from the numerous radiosonde flights during the project. The goals of the AFM-05 team were to provide large-scale definition of the atmosphere by supplementing the existing AES aerological network, both temporally and spatially. This data set includes basic upper-air parameters collected from the network of upper-air stations during the 1993, 1994, and 1996 field campaigns over the entire study region. The data are contained in tabular ASCII files. The data files are available on a CD-ROM (see document number 20010000884) or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).
GEH-4-42, 47; Hot pressed, I and E cooled fuel element irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neidner, R.
1959-11-02
In our continual effort to improve the present fuel elements which are irradiated in the numerous Hanford reactors, we have made what we believe to be a significant improvement in the hot pressing process for jacketing uranium fuel slugs. We are proposing a large scale evaluation testing program in the Hanford reactors but need the vital and basic information on the operating characteristics of this type slug under known and controlled operating conditions. We, therefore, have prepared two typical fuel slugs and will want them irradiated to about 1000 MWD/T exposure (this will require about four to five total cycles).
[The clinical history in surgical processes. Bioethical aspects and basic professional ethics].
Collazo Chao, Eliseo
2008-11-01
Surgeons are increasingly facing multiple civil liability claims from their patients. Against this background, and taking any eventual liability claims into account, surgeons must be increasingly aware of the importance of maintaining patient medical histories, which raises numerous questions about the length of time and the form in which to keep them. Ethical and legal obligations need to be taken into account in order to identify the controversial aspects related to patients and their environment, as well as to shed light on the most appropriate behaviour in each case. We must never forget that the case history is a clinical document, subject to the medical art and the medical ethics which regulate it.
Iontophoretic transdermal drug delivery: a multi-layered approach.
Pontrelli, Giuseppe; Lauricella, Marco; Ferreira, José A; Pena, Gonçalo
2017-12-11
We present a multi-layer mathematical model to describe transdermal drug release from an iontophoretic system. The Nernst-Planck equation describes the basic convection-diffusion process, with the electric potential obtained by solving Laplace's equation. These equations are complemented with suitable interface and boundary conditions in a multi-domain setting. The stability of the mathematical problem is discussed in different scenarios, and a finite-difference method is used to solve the coupled system. Numerical experiments are included to illustrate the drug dynamics under different conditions.
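The flavour of the finite-difference approach can be sketched for a single layer with pure diffusion (the paper couples Nernst-Planck convection-diffusion with Laplace's equation across several layers; this toy drops the electric term, and the diffusivity, thickness, and grid are illustrative numbers, not the paper's parameters):

```python
# explicit FTCS finite differences; stable for D*dt/dx**2 <= 0.5
D, L, N = 1e-3, 1.0, 51        # diffusivity, layer thickness, grid (hypothetical)
dx = L / (N - 1)
dt = 0.4 * dx * dx / D         # respects the stability bound

def step(c):
    new = c[:]
    for i in range(1, N - 1):
        new[i] = c[i] + D * dt / dx**2 * (c[i - 1] - 2 * c[i] + c[i + 1])
    new[0], new[-1] = 1.0, 0.0  # Dirichlet data: donor face and perfect sink
    return new

c = [0.0] * N
c[0] = 1.0                     # drug held at unit concentration on the donor face
for _ in range(2000):
    c = step(c)
print(c[N // 2])               # mid-layer concentration after 2000 steps
```

Because the explicit scheme is positive when the stability bound holds, the computed profile decays monotonically from donor to sink, as the physics requires.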
NASA Astrophysics Data System (ADS)
Jia, Xiaofei
2018-06-01
Starting from the basic equations describing the evolution of the carriers and photons inside a semiconductor optical amplifier (SOA), the equation governing pulse propagation in the SOA is derived. By employing homotopy analysis method (HAM), a series solution for the output pulse by the SOA is obtained, which can effectively characterize the temporal features of the nonlinear process during the pulse propagation inside the SOA. Moreover, the analytical solution is compared with numerical simulations with a good agreement. The theoretical results will benefit the future analysis of other problems related to the pulse propagation in the SOA.
Recent Observational Progress on Accretion Disks Around Compact Objects
NASA Astrophysics Data System (ADS)
Miller, Jon M.
2016-04-01
Studies of accretion disks around black holes and neutron stars over the last ten years have made remarkable progress. Our understanding of disk evolution as a function of mass accretion rate is pushing toward a consensus on thin/thick disk transitions; an apparent switching between disk-driven outflow modes has emerged; and monitoring observations have revealed complex spectral energy distributions wherein disk reprocessing must be important. Detailed studies of disk winds, in particular, have the potential to reveal the basic physical processes that mediate disk accretion, and to connect with numerical simulations. This talk will review these developments and look ahead to the potential of Astro-H.
A deterministic global optimization using smooth diagonal auxiliary functions
NASA Astrophysics Data System (ADS)
Sergeyev, Yaroslav D.; Kvasov, Dmitri E.
2015-04-01
In many practical decision-making problems it happens that the functions involved in the optimization process are black-box, with unknown analytical representations, and hard to evaluate. In this paper, a global optimization problem is considered where both the goal function f(x) and its gradient f′(x) are black-box functions. It is supposed that f′(x) satisfies the Lipschitz condition over the search hyperinterval with an unknown Lipschitz constant K. A new deterministic 'Divide-the-Best' algorithm based on efficient diagonal partitions and smooth auxiliary functions is proposed in its basic version, its convergence conditions are studied, and numerical experiments executed on eight hundred test functions are presented.
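The underlying idea of Lipschitz-based global search can be sketched with the classical one-dimensional Piyavskii sawtooth method (the paper's 'Divide-the-Best' algorithm works on diagonal partitions in higher dimensions with smooth auxiliary functions built from the gradient's Lipschitz constant; the simpler sketch below only assumes a known bound K on |f′|):

```python
import math

def piyavskii(f, a, b, K, iters=80):
    """Sawtooth lower-bound refinement; K must bound |f'| on [a, b]."""
    xs, fs = [a, b], [f(a), f(b)]
    for _ in range(iters):
        best = None
        for i in range(len(xs) - 1):
            x0, x1, f0, f1 = xs[i], xs[i + 1], fs[i], fs[i + 1]
            # lowest point of the two Lipschitz cones over [x0, x1]
            xm = 0.5 * (x0 + x1) + (f0 - f1) / (2 * K)
            lb = 0.5 * (f0 + f1) - 0.5 * K * (x1 - x0)
            if best is None or lb < best[0]:
                best = (lb, xm, i)
        _, xm, i = best
        xs.insert(i + 1, xm)   # refine where the lower bound is smallest
        fs.insert(i + 1, f(xm))
    j = min(range(len(xs)), key=lambda k: fs[k])
    return xs[j], fs[j]

# classical multiextremal test function; |f'| <= 1 + 10/3, so K = 6 is valid
g = lambda x: math.sin(x) + math.sin(10 * x / 3)
x_best, f_best = piyavskii(g, 2.7, 7.5, K=6.0)
print(x_best, f_best)
```

Evaluations cluster where the sawtooth underestimate is lowest, so the global minimizer near x ≈ 5.15 is located despite the competing local minima.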
Methyl jasmonate as a vital substance in plants.
Cheong, Jong-Joo; Choi, Yang Do
2003-07-01
The plant floral scent methyl jasmonate (MeJA) has been identified as a vital cellular regulator that mediates diverse developmental processes and defense responses against biotic and abiotic stresses. The pleiotropic effects of MeJA have raised numerous questions about its regulation for biogenesis and mode of action. Characterization of the gene encoding jasmonic acid carboxyl methyltransferase has provided basic information on the role(s) of this phytohormone in gene-activation control and systemic long-distance signaling. Recent approaches using functional genomics and bioinformatics have identified a whole set of MeJA-responsive genes, and provide insights into how plants use volatile signals to withstand diverse and variable environments.
Ultrafast dynamics of photoexcited charge and spin currents in semiconductor nanostructures
NASA Astrophysics Data System (ADS)
Meier, Torsten; Pasenow, Bernhard; Duc, Huynh Thanh; Vu, Quang Tuyen; Haug, Hartmut; Koch, Stephan W.
2007-02-01
Employing the quantum interference among one- and two-photon excitations induced by ultrashort two-color laser pulses it is possible to generate charge and spin currents in semiconductors and semiconductor nanostructures on femtosecond time scales. Here, it is reviewed how the excitation process and the dynamics of such photocurrents can be described on the basis of a microscopic many-body theory. Numerical solutions of the semiconductor Bloch equations (SBE) provide a detailed description of the time-dependent material excitations. Applied to the case of photocurrents, numerical solutions of the SBE for a two-band model including many-body correlations on the second-Born Markov level predict an enhanced damping of the spin current relative to that of the charge current. Interesting effects are obtained when the scattering processes are computed beyond the Markovian limit. Whereas the overall decay of the currents is basically correctly described already within the Markov approximation, quantum-kinetic calculations show that memory effects may lead to additional oscillatory signatures in the current transients. When transitions to coupled heavy- and light-hole valence bands are incorporated into the SBE, additional charge and spin currents, which are not described by the two-band model, appear.
NASA Astrophysics Data System (ADS)
Srivastava, Y.; Srivastava, S.; Boriwal, L.
2016-09-01
Mechanical alloying is a novel solid-state process that has received considerable attention due to its many advantages over conventional processes. In the present work, Co2FeAl Heusler alloy powder was prepared successfully from a premix of basic powders of cobalt (Co), iron (Fe), and aluminum (Al) in the stoichiometric ratio 60Co-26Fe-14Al (weight %) by a novel mechano-chemical route. Magnetic properties of the mechanically alloyed powders were characterized by vibrating sample magnetometry (VSM). A two-factor, five-level design matrix was applied to the experimental process, and the experimental results were used for response surface methodology. The interaction between the input process parameters and the response was established with the help of regression analysis. Analysis of variance was then applied to check the adequacy of the developed model and the significance of the process parameters. A test case study was performed with parameters that were not selected for the main experimentation but lay within the same range. Using response surface methodology, the process parameters were optimized to obtain improved magnetic properties, and the optimum process parameters were identified using numerical and graphical optimization techniques.
NASA Technical Reports Server (NTRS)
Fowlis, W. W. (Editor); Davis, M. H. (Editor)
1981-01-01
The atmospheric general circulation experiment (AGCE) numerical design for Spacelab flights was studied. A spherical baroclinic flow experiment which models the large-scale circulations of the Earth's atmosphere was proposed. Gravity is simulated by a radial dielectric body force. The major objective of the AGCE is to study nonlinear baroclinic wave flows in spherical geometry. Numerical models must be developed which accurately predict the basic axisymmetric states and the stability of nonlinear baroclinic wave flows. A three-dimensional, fully nonlinear numerical model of the AGCE based on the complete set of equations is required. Progress in the AGCE numerical design studies program is reported.
Maio, Nunziata; Rouault, Tracey A.
2014-01-01
Iron-sulfur (Fe-S) clusters are ancient, ubiquitous cofactors composed of iron and inorganic sulfur. The combination of the chemical reactivity of iron and sulfur, together with many variations of cluster composition, oxidation states and protein environments, enables Fe-S clusters to participate in numerous biological processes. Fe-S clusters are essential to redox catalysis in nitrogen fixation, mitochondrial respiration and photosynthesis, to regulatory sensing in key metabolic pathways (i.e. cellular iron homeostasis and oxidative stress response), and to the replication and maintenance of the nuclear genome. Fe-S cluster biogenesis is a multistep process that involves a complex sequence of catalyzed protein-protein interactions and coupled conformational changes between the components of several dedicated multimeric complexes. Intensive studies of the assembly process have clarified key points in the biogenesis of Fe-S proteins. However, several critical questions still remain, such as: what is the role of frataxin? Why do some defects of Fe-S cluster biogenesis cause mitochondrial iron overload? How are specific Fe-S recipient proteins recognized in the process of Fe-S transfer? This review focuses on the basic steps of Fe-S cluster biogenesis, drawing attention to recent advances achieved on the identification of molecular features that guide selection of specific subsets of nascent Fe-S recipients by the cochaperone HSC20. Additionally, it outlines the distinctive phenotypes of human diseases due to mutations in the components of the basic pathway. PMID:25245479
Reis, Steven E.; Berglund, Lars; Bernard, Gordon R.; Califf, Robert M.; FitzGerald, Garret A.; Johnson, Peter C.
2009-01-01
Advances in human health require the efficient and rapid translation of scientific discoveries into effective clinical treatments; this process in turn depends upon observational data gathered from patients, communities, and public-health research that can be used to guide basic scientific investigation. Such bidirectional translational science, however, faces unprecedented challenges due to the rapid pace of scientific and technological development, as well as the difficulties of negotiating increasingly complex regulatory and commercial environments that overlap the research domain. Further, numerous barriers to translational science have emerged among the nation’s academic research centers, including basic structural and cultural impediments to innovation and collaboration, shortages of trained investigators, and inadequate funding. To address these serious and systemic problems, in 2006, the National Institutes of Health created the Clinical and Translational Science Awards (CTSA) program, which aims to catalyze the transformation of biomedical research at a national level, speeding the discovery and development of therapies, fostering collaboration, engaging communities, and training succeeding generations of clinical and translational researchers. The authors report in detail on the planning process, begun in 2008, that was used to engage stakeholders and to identify, refine, and ultimately implement the CTSA program’s overarching strategic goals. They also discuss the implications and likely impact of this strategic planning process as it is applied among the nation’s academic health centers. PMID:20182119
Reis, Steven E; Berglund, Lars; Bernard, Gordon R; Califf, Robert M; Fitzgerald, Garret A; Johnson, Peter C
2010-03-01
Advances in human health require the efficient and rapid translation of scientific discoveries into effective clinical treatments; this process, in turn, depends on observational data gathered from patients, communities, and public health research that can be used to guide basic scientific investigation. Such bidirectional translational science, however, faces unprecedented challenges due to the rapid pace of scientific and technological development, as well as the difficulties of negotiating increasingly complex regulatory and commercial environments that overlap the research domain. Further, numerous barriers to translational science have emerged among the nation's academic research centers, including basic structural and cultural impediments to innovation and collaboration, shortages of trained investigators, and inadequate funding.To address these serious and systemic problems, in 2006 the National Institutes of Health created the Clinical and Translational Science Awards (CTSA) program, which aims to catalyze the transformation of biomedical research at a national level, speeding the discovery and development of therapies, fostering collaboration, engaging communities, and training succeeding generations of clinical and translational researchers. The authors report in detail on the planning process, begun in 2008, that was used to engage stakeholders and to identify, refine, and ultimately implement the CTSA program's overarching strategic goals. They also discuss the implications and likely impact of this strategic planning process as it is applied among the nation's academic health centers.
Construction Upgrade. A Pack To Improve Communication, Numerical and IT Skills for NVQ.
ERIC Educational Resources Information Center
Rylands, Judy
This pack of materials is designed to help students working to improve their basic skills as part of their carpentry and joinery course. An introduction lists relevant core skills units and basic skills standards. The six individual sections of the pack are divided into task sheets and fact sheets. The fact sheets give information and teaching…
Planning & Priority Setting for Basic Research
2010-05-05
Integrated into numerous commercial codes in aerospace, automotive, semiconductor, and chemical industries: Fast Multipole Methods (ONR 31). Applications... Use knowledge (even failures) to reduce risk in acquisition; provide the basis for future Navy and Marine Corps systems; ensure research relevancy to Naval S&T strategy; transition promising Basic Research to applications; maintain
ERIC Educational Resources Information Center
Anoka-Hennepin Technical Coll., Minneapolis, MN.
This workbook is intended for students taking a course in basic computer numerical control (CNC) operation that was developed during a project to retrain defense industry workers at risk of job loss or dislocation because of conversion of the defense industry. The workbook contains daily training guides for each of the course's 13 sessions. Among…
Basic Numerical Capacities and Prevalence of Developmental Dyscalculia: The Havana Survey
ERIC Educational Resources Information Center
Reigosa-Crespo, Vivian; Valdes-Sosa, Mitchell; Butterworth, Brian; Estevez, Nancy; Rodriguez, Marisol; Santos, Elsa; Torres, Paul; Suarez, Ramon; Lage, Agustin
2012-01-01
The association of enumeration and number comparison capacities with arithmetical competence was examined in a large sample of children from 2nd to 9th grades. It was found that efficiency on numerical capacities predicted separately more than 25% of the variance in the individual differences on a timed arithmetical test, and this occurred for…
Being Numerate: What Counts? A Fresh Look at the Basics.
ERIC Educational Resources Information Center
Willis, Sue, Ed.
To be numerate is to be able to function mathematically in one's daily life. The kinds of mathematics skills and understandings necessary to function effectively in daily life are changing. Despite an awareness in Australia of new skills necessary for the information age and calls that the schools should be instrumental in preparing students with…
Simple Numerical Simulation of Strain Measurement
NASA Technical Reports Server (NTRS)
Tai, H.
2002-01-01
By adopting the basic principle of the reflection (and transmission) of a plane polarized electromagnetic wave incident normal to a stack of films of alternating refractive index, a simple numerical code was written to simulate the maximum reflectivity (transmittivity) of a fiber optic Bragg grating corresponding to various non-uniform strain conditions including photo-elastic effect in certain cases.
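The plane-wave reflection principle named above can be sketched with the standard characteristic (transfer) matrix of a layered stack at normal incidence. The indices, layer count, and wavelength below are illustrative values, not parameters from the NASA code, and strain or photo-elastic effects would enter as perturbations of n and d:

```python
import cmath, math

def stack_reflectivity(wavelength, layers, n_in=1.0, n_sub=1.5):
    """Normal-incidence reflectance of a dielectric stack.

    layers: list of (refractive index, thickness) from the incidence side.
    Each layer contributes the standard 2x2 characteristic matrix.
    """
    B, C = 1.0 + 0j, n_sub + 0j           # field amplitudes at the substrate
    for n, d in reversed(layers):
        delta = 2 * math.pi * n * d / wavelength
        B, C = (cmath.cos(delta) * B + 1j * cmath.sin(delta) / n * C,
                1j * n * cmath.sin(delta) * B + cmath.cos(delta) * C)
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

lam = 1.55e-6                              # design wavelength in metres
nH, nL, pairs = 2.3, 1.45, 8               # illustrative indices, 8 periods
layers = [(nH, lam / (4 * nH)), (nL, lam / (4 * nL))] * pairs
R = stack_reflectivity(lam, layers)
print(R)                                   # near-unity at the design wavelength
```

A quarter-wave stack of alternating indices reflects almost perfectly at its design wavelength, which is the mechanism a Bragg grating simulation exploits when scanning reflectivity against strain-shifted layer parameters.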
The Evolution of Hyperedge Cardinalities and Bose-Einstein Condensation in Hypernetworks.
Guo, Jin-Li; Suo, Qi; Shen, Ai-Zhong; Forrest, Jeffrey
2016-09-27
To depict the complex relationship among nodes and the evolving process of a complex system, a Bose-Einstein hypernetwork is proposed in this paper. Based on two basic evolutionary mechanisms, growth and preference jumping, the distribution of hyperedge cardinalities is studied. The Poisson process theory is used to describe the arrival process of new node batches. By using the Poisson process theory and a continuity technique, the hypernetwork is analyzed and the characteristic equation of hyperedge cardinalities is obtained. Additionally, an analytical expression for the stationary average hyperedge cardinality distribution is derived by employing the characteristic equation, from which Bose-Einstein condensation in the hypernetwork is obtained. The theoretical analyses in this paper agree with the conducted numerical simulations. This is the first study on the hyperedge cardinality in hypernetworks, where Bose-Einstein condensation can be regarded as a special case of hypernetworks. Moreover, a condensation degree is also discussed, with which Bose-Einstein condensation can be classified.
Neural correlates of math anxiety - an overview and implications.
Artemenko, Christina; Daroczy, Gabriella; Nuerk, Hans-Christoph
2015-01-01
Math anxiety is a common phenomenon which can have a negative impact on numerical and arithmetic performance. However, so far little is known about the underlying neurocognitive mechanisms. This mini review provides an overview of studies investigating the neural correlates of math anxiety which provide several hints regarding its influence on math performance: while behavioral studies mostly observe an influence of math anxiety on difficult math tasks, neurophysiological studies show that processing efficiency is already affected in basic number processing. Overall, the neurocognitive literature suggests that (i) math anxiety elicits emotion- and pain-related activation during and before math activities, (ii) that the negative emotional response to math anxiety impairs processing efficiency, and (iii) that math deficits triggered by math anxiety may be compensated for by modulating the cognitive control or emotional regulation network. However, activation differs strongly between studies, depending on tasks, paradigms, and samples. We conclude that neural correlates can help to understand and explore the processes underlying math anxiety, but the data are not very consistent yet.
Mathematical and Numerical Techniques in Energy and Environmental Modeling
NASA Astrophysics Data System (ADS)
Chen, Z.; Ewing, R. E.
Mathematical models have been widely used to predict, understand, and optimize many complex physical processes, from semiconductor or pharmaceutical design to large-scale applications such as global weather models to astrophysics. In particular, simulation of environmental effects of air pollution is extensive. Here we address the need for using similar models to understand the fate and transport of groundwater contaminants and to design in situ remediation strategies. Three basic problem areas need to be addressed in the modeling and simulation of the flow of groundwater contamination. First, one obtains an effective model to describe the complex fluid/fluid and fluid/rock interactions that control the transport of contaminants in groundwater. This includes the problem of obtaining accurate reservoir descriptions at various length scales and modeling the effects of this heterogeneity in the reservoir simulators. Next, one develops accurate discretization techniques that retain the important physical properties of the continuous models. Finally, one develops efficient numerical solution algorithms that utilize the potential of the emerging computing architectures. We will discuss recent advances and describe the contribution of each of the papers in this book in these three areas. Keywords: reservoir simulation, mathematical models, partial differential equations, numerical algorithms
Distribution of diameters for Erdős-Rényi random graphs.
Hartmann, A K; Mézard, M
2018-03-01
We study the distribution of diameters d of Erdős-Rényi random graphs with average connectivity c. The diameter d is the maximum among all the shortest distances between pairs of nodes in a graph and an important quantity for all dynamic processes taking place on graphs. Here we study the distribution P(d) numerically for various values of c, in the nonpercolating and percolating regimes. Using large-deviation techniques, we are able to reach small probabilities like 10^{-100} which allow us to obtain the distribution over basically the full range of the support, for graphs up to N=1000 nodes. For values c<1, our results are in good agreement with analytical results, proving the reliability of our numerical approach. For c>1 the distribution is more complex and no complete analytical results are available. For this parameter range, P(d) exhibits an inflection point, which we found to be related to a structural change of the graphs. For all values of c, we determined the finite-size rate function Φ(d/N) and were able to extrapolate numerically to N→∞, indicating that the large-deviation principle holds.
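The simple-sampling part of such a study can be sketched directly; the sketch below draws G(N, p) graphs with p = c/(N-1) and measures the diameter by breadth-first search (reaching probabilities like 10^-100 in the tails requires the large-deviation machinery the paper describes, not plain sampling; N, c, and the sample size here are illustrative):

```python
import random
from collections import deque

def er_graph(n, c, rng):
    """G(N, p) adjacency lists with p = c/(N-1), i.e. mean degree c."""
    p = c / (n - 1)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def diameter(adj):
    """Largest shortest-path distance over connected pairs (BFS per node)."""
    best = 0
    for s in range(len(adj)):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

rng = random.Random(0)
ds = [diameter(er_graph(200, 3.0, rng)) for _ in range(20)]
print(min(ds), max(ds))  # spread of sampled diameters at c = 3
```

Restricting the maximum to connected pairs matches the convention needed once the graph may split into several components, which is exactly the regime (c below and above 1) the paper contrasts.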
An emergentist perspective on the origin of number sense
2018-01-01
The finding that human infants and many other animal species are sensitive to numerical quantity has been widely interpreted as evidence for evolved, biologically determined numerical capacities across unrelated species, thereby supporting a ‘nativist’ stance on the origin of number sense. Here, we tackle this issue within the ‘emergentist’ perspective provided by artificial neural network models, and we build on computer simulations to discuss two different approaches to think about the innateness of number sense. The first, illustrated by artificial life simulations, shows that numerical abilities can be supported by domain-specific representations emerging from evolutionary pressure. The second assumes that numerical representations need not be genetically pre-determined but can emerge from the interplay between innate architectural constraints and domain-general learning mechanisms, instantiated in deep learning simulations. We show that deep neural networks endowed with basic visuospatial processing exhibit a remarkable performance in numerosity discrimination before any experience-dependent learning, whereas unsupervised sensory experience with visual sets leads to subsequent improvement of number acuity and reduces the influence of continuous visual cues. The emergent neuronal code for numbers in the model includes both numerosity-sensitive (summation coding) and numerosity-selective response profiles, closely mirroring those found in monkey intraparietal neurons. We conclude that a form of innatism based on architectural and learning biases is a fruitful approach to understanding the origin and development of number sense. This article is part of a discussion meeting issue ‘The origins of numerical abilities'. PMID:29292348
Nosworthy, Nadia; Bugden, Stephanie; Archibald, Lisa; Evans, Barrie; Ansari, Daniel
2013-01-01
Recently, there has been a growing emphasis on basic number processing competencies (such as the ability to judge which of two numbers is larger) and their role in predicting individual differences in school-relevant math achievement. Children's ability to compare both symbolic (e.g. Arabic numerals) and nonsymbolic (e.g. dot arrays) magnitudes has been found to correlate with their math achievement. The available evidence, however, has focused on computerized paradigms, which may not always be suitable for universal, quick application in the classroom. Furthermore, it is currently unclear whether both symbolic and nonsymbolic magnitude comparison are related to children's performance on tests of arithmetic competence and whether either of these factors relates to arithmetic achievement over and above other factors such as working memory and reading ability. In order to address these outstanding issues, we designed a quick (2-minute) paper-and-pencil tool to assess children's ability to compare symbolic and nonsymbolic numerical magnitudes and assessed the degree to which performance on this measure explains individual differences in achievement. Children were required to cross out the larger of two single-digit numerical magnitudes under time constraints. Results from a group of 160 children from grades 1–3 revealed that both symbolic and nonsymbolic number comparison accuracy were related to individual differences in arithmetic achievement. However, only symbolic number comparison performance accounted for unique variance in arithmetic achievement. The theoretical and practical implications of these findings are discussed, including the use of this measure as a possible tool for identifying students at risk for future difficulties in mathematics. PMID:23844126
NASA Astrophysics Data System (ADS)
Vereshchagin, Gregory V.; Aksenov, Alexey G.
2017-02-01
Preface; Acknowledgements; Acronyms and definitions; Introduction; Part I. Theoretical Foundations: 1. Basic concepts; 2. Kinetic equation; 3. Averaging; 4. Conservation laws and equilibrium; 5. Relativistic BBGKY hierarchy; 6. Basic parameters in gases and plasmas; Part II. Numerical Methods: 7. The basics of computational physics; 8. Direct integration of Boltzmann equations; 9. Multidimensional hydrodynamics; Part III. Applications: 10. Wave dispersion in relativistic plasma; 11. Thermalization in relativistic plasma; 12. Kinetics of particles in strong fields; 13. Compton scattering in astrophysics and cosmology; 14. Self-gravitating systems; 15. Neutrinos, gravitational collapse and supernovae; Appendices; Bibliography; Index.
A Mathematical Model Of Dengue-Chikungunya Co-Infection In A Closed Population
NASA Astrophysics Data System (ADS)
Aldila, Dipo; Ria Agustin, Maya
2018-03-01
Dengue disease has been a major health problem in many tropical and sub-tropical countries since the early 1900s. Chikungunya, according to a 2017 WHO fact sheet, was first detected in a 1952 outbreak in Tanzania and has continued to spread in many tropical and sub-tropical countries. Both diseases are vector-borne and are spread by the same mosquito, the female Aedes aegypti. According to the WHO report, there is a great possibility that humans and mosquitoes might be infected by dengue and chikungunya at the same time. Here, a mathematical modeling approach is used to understand the spread of dengue and chikungunya in a closed population. The model is developed as a nine-dimensional system of deterministic ordinary differential equations. Equilibrium points and their local stability are analyzed analytically and numerically. We find that the basic reproduction number, the endemic indicator, is given by the maximum of three basic reproduction numbers of the complete system: those for dengue, for chikungunya, and for co-infection of dengue and chikungunya. We find that the basic reproduction number for the co-infection sub-system dominates the others whenever it is larger than one. Some numerical simulations are provided to confirm these analytical results.
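The endemic indicator described above can be sketched numerically. The parameter names, values, and the geometric-mean form of each sub-system R0 below are illustrative assumptions for a generic vector-borne model, not the paper's actual nine-dimensional system.

```python
import math

def r0_single(beta_hv, beta_vh, gamma, mu_v):
    """Illustrative vector-borne R0: geometric mean of the two
    transmission-to-removal ratios (host->vector and vector->host)."""
    return math.sqrt((beta_hv / mu_v) * (beta_vh / gamma))

# Hypothetical parameter values for the three sub-systems.
r0_dengue = r0_single(beta_hv=0.30, beta_vh=0.25, gamma=0.10, mu_v=0.07)
r0_chik = r0_single(beta_hv=0.20, beta_vh=0.20, gamma=0.12, mu_v=0.07)
r0_coinf = r0_single(beta_hv=0.35, beta_vh=0.30, gamma=0.08, mu_v=0.07)

# The endemic indicator of the full system is the maximum of the three.
r0 = max(r0_dengue, r0_chik, r0_coinf)
print(f"R0 = {r0:.2f}; endemic: {r0 > 1}")
```

With these made-up numbers the co-infection sub-system dominates, matching the qualitative finding above.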
Iterative discrete ordinates solution of the equation for surface-reflected radiance
NASA Astrophysics Data System (ADS)
Radkevich, Alexander
2017-11-01
This paper presents a new method of numerical solution of the integral equation for the radiance reflected from an anisotropic surface. The equation relates the radiance at the surface level with the BRDF and solutions of the standard radiative transfer problems for a slab with no reflection on its surfaces. It is also shown that the kernel of the equation satisfies the condition for the existence of a unique solution and for the convergence of the successive approximations to that solution. The developed method features two basic steps: discretization on a 2D quadrature, and solving the resulting system of algebraic equations with the successive over-relaxation method based on the Gauss-Seidel iterative process. The numerical examples presented show good coincidence between the surface-reflected radiance obtained with DISORT and the proposed method. An analysis of the contributions of the direct and diffuse (but not yet reflected) parts of the downward radiance to the total solution is performed. Together, they represent a very good initial guess for the iterative process, which ensures fast convergence. Numerical evidence is given that the fastest convergence occurs with a relaxation parameter of 1 (no relaxation). An integral equation for the BRDF is derived as the inversion of the original equation, and its potential for BRDF retrievals is analyzed. The approach is found not viable, as the BRDF equation appears to be an ill-posed problem and requires knowledge of the surface-reflected radiance on the entire domain of both Sun and viewing zenith angles.
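The solver stage described above, a Gauss-Seidel sweep with successive over-relaxation (SOR), can be sketched for a generic linear system; with relaxation parameter ω = 1 it reduces to plain Gauss-Seidel, the setting reported above as fastest. The small diagonally dominant system below is a made-up example, not the discretized radiance equation.

```python
def sor_solve(A, b, omega=1.0, tol=1e-12, max_iter=10_000):
    """Solve A x = b by Gauss-Seidel iteration with relaxation factor omega."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        max_delta = 0.0
        for i in range(n):
            # Gauss-Seidel update: use the newest available values of x.
            sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
            gs = (b[i] - sigma) / A[i][i]
            new = (1.0 - omega) * x[i] + omega * gs   # relaxation step
            max_delta = max(max_delta, abs(new - x[i]))
            x[i] = new
        if max_delta < tol:
            break
    return x

# Test system 4x + y = 1, x + 3y = 2, whose solution is x = 1/11, y = 7/11.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = sor_solve(A, b, omega=1.0)
print(x)
```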
Critical transport issues for improving the performance of aqueous redox flow batteries
NASA Astrophysics Data System (ADS)
Zhou, X. L.; Zhao, T. S.; An, L.; Zeng, Y. K.; Wei, L.
2017-01-01
As the fraction of electricity generated from intermittent renewable sources (such as solar and wind) grows, developing reliable technologies to store electrical energy at large scale is of increasing importance. Redox flow batteries are now enjoying a renaissance and are regarded as a leading technology for providing a well-balanced solution to current daunting challenges. In this article, state-of-the-art studies of the complex multicomponent transport phenomena in aqueous redox flow batteries, with a special emphasis on all-vanadium redox flow batteries, are reviewed and summarized. Rather than elaborating on the details of previous experimental and numerical investigations, this article highlights: i) the key transport issues in each battery component that need to be tackled so that the rate capability and cycling stability of flow batteries can be significantly improved, ii) the basic mechanisms that control the active species/ion/electron transport behaviors in each battery component, and iii) the key experimental and numerical findings regarding the correlations between the multicomponent transport processes and battery performance.
Mammalian Krüppel-Like Factors in Health and Diseases
McConnell, Beth B.; Yang, Vincent W.
2010-01-01
The Krüppel-like factor (KLF) family of transcription factors regulates diverse biological processes that include proliferation, differentiation, growth, development, survival, and responses to external stress. Seventeen mammalian KLFs have been identified, and numerous studies have been published that describe their basic biology and contribution to human diseases. KLF proteins have received much attention because of their involvement in the development and homeostasis of numerous organ systems. KLFs are critical regulators of physiological systems that include the cardiovascular, digestive, respiratory, hematological, and immune systems and are involved in disorders such as obesity, cardiovascular disease, cancer, and inflammatory conditions. Furthermore, KLFs play an important role in reprogramming somatic cells into induced pluripotent stem (iPS) cells and maintaining the pluripotent state of embryonic stem cells. As research on KLF proteins progresses, additional KLF functions and associations with disease are likely to be discovered. Here, we review the current knowledge of KLF proteins and describe common attributes of their biochemical and physiological functions and their pathophysiological roles. PMID:20959618
Efficient solution of parabolic equations by Krylov approximation methods
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Y.
1990-01-01
Numerical techniques for solving parabolic equations by the method of lines are addressed. The main motivation for the proposed approach is the possibility of exploiting a high degree of parallelism in a simple manner. The basic idea of the method is to approximate the action of the evolution operator on a given state vector by means of a projection process onto a Krylov subspace. Thus, the resulting approximation consists of applying an evolution operator of a very small dimension to a known vector, which is, in turn, computed accurately by exploiting well-known rational approximations to the exponential. Because the rational approximation is only applied to a small matrix, the only operations required with the original large matrix are matrix-by-vector multiplications, and as a result the algorithm can easily be parallelized and vectorized. Some relevant approximation and stability issues are discussed. We present some numerical experiments with the method and compare its performance with a few explicit and implicit algorithms.
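The projection idea can be sketched in pure Python: a few Arnoldi steps build an orthonormal basis V_m and a small Hessenberg matrix H_m, and exp(A)v is approximated by β·V_m·exp(H_m)·e1. A plain Taylor series stands in here for the rational approximations of the exponential mentioned above; the matrix and subspace size are illustrative assumptions.

```python
import math

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def expm_taylor(H, terms=40):
    """exp of a small dense matrix via Taylor series (fine for small norms)."""
    m = len(H)
    result = [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[sum(term[i][l] * H[l][j] for l in range(m)) / k
                 for j in range(m)] for i in range(m)]
        result = [[result[i][j] + term[i][j] for j in range(m)]
                  for i in range(m)]
    return result

def krylov_expv(A, v, m):
    """Approximate exp(A) @ v from an m-dimensional Krylov subspace."""
    beta = math.sqrt(sum(x * x for x in v))
    V = [[x / beta for x in v]]
    H = [[0.0] * m for _ in range(m)]
    for j in range(m):
        w = matvec(A, V[j])
        for i in range(j + 1):                      # Arnoldi orthogonalization
            H[i][j] = sum(a * b for a, b in zip(w, V[i]))
            w = [a - H[i][j] * b for a, b in zip(w, V[i])]
        norm_w = math.sqrt(sum(x * x for x in w))
        if j + 1 < m:
            if norm_w < 1e-12:                      # happy breakdown
                break
            H[j + 1][j] = norm_w
            V.append([x / norm_w for x in w])
    E = expm_taylor(H)
    # exp(A) v ~= beta * V_m @ exp(H_m) @ e1
    return [beta * sum(V[j][i] * E[j][0] for j in range(len(V)))
            for i in range(len(v))]

# With m equal to the full dimension the projection is exact:
A = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 3.0]]
approx = krylov_expv(A, [1.0, 1.0, 1.0], m=3)
print(approx)  # close to [e^1, e^2, e^3]
```

Only matrix-by-vector products with the large A are needed, which is the property the abstract exploits for parallelism.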
Spectral transfers and zonal flow dynamics in the generalized Charney-Hasegawa-Mima model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lashmore-Davies, C.N.; Thyagaraja, A.; McCarthy, D.R.
2005-12-15
The mechanism of four nonlinearly interacting drift or Rossby waves is used as the basic process underlying the turbulent evolution of both the Charney-Hasegawa-Mima equation (CHME) and its generalized modification (GCHME). Hasegawa and Kodama's concept of equivalent action (or quanta) is applied to the four-wave system and shown to control the distribution of energy and enstrophy between the modes. A numerical study of the GCHME is described in which the initial state contains a single finite-amplitude drift wave (the pump wave), and all the modulationally unstable modes are present at the same low level (10^-6 times the pump amplitude). The simulation shows that at first the fastest-growing modulationally unstable modes dominate but reveals that at a later time, before pump depletion occurs, long- and short-wavelength modes, driven by pairs of fast-growing modes, grow at 2γ_max. The numerical simulation illustrates the development of a spectrum of turbulent modes from a finite-amplitude pump wave.
Basic theory for polarized, astrophysical maser radiation in a magnetic field
NASA Technical Reports Server (NTRS)
Watson, William D.
1994-01-01
Fundamental alterations in the theory and resulting behavior of polarized, astrophysical maser radiation in the presence of a magnetic field have been asserted based on a calculation of instabilities in the radiative transfer. I reconsider the radiative transfer and find that the relevant instabilities do not occur. Calculational errors in the previous investigation are identified. In addition, such instabilities would have appeared -- but did not -- in the numerous numerical solutions to the same radiative transfer equations that have been presented in the literature. As a result, all modifications that have been presented in a recent series of papers (Elitzur 1991, 1993) to the theory for polarized maser radiation in the presence of a magnetic field are invalid. The basic theory is thus clarified.
Photodynamic therapy with 5-aminolevulinic acid: basic principles and applications
NASA Astrophysics Data System (ADS)
Pottier, Roy H.; Kennedy, James C.
1996-01-01
Numerous photosensitizing pigments that absorb visible light and are selectively retained in neoplastic tissue are being investigated as potential photochemotherapeutic agents. While much emphasis is being placed on the synthesis of new, far-red absorbing photosensitizers, an alternative approach has been to stimulate the human body to produce its own natural photosensitizer, namely protoporphyrin IX (PpIX). Exogenous 5-aminolevulinic acid (ALA) is rapidly bioconverted into PpIX by mitochondria, the process being particularly efficient in tumor cells. Since PpIX has a natural and rapid clearing mechanism (via the capture of iron in the process of being converted into heme), ALA-PDT does not suffer from lingering skin phototoxicity. ALA may be introduced orally, intravenously, or topically, and ALA-PDT has been shown to be effective in the treatment of both malignant and non-malignant lesions.
Commensal or pathogen – a challenge to fulfil Koch’s Postulates
Hess, M.
2017-01-01
ABSTRACT 1. Infectious diseases have a large impact on poultry health and economics. Elucidating the pathogenesis of a certain disease is crucial to implement control strategies. 2. Multiplication of a pathogen and its characterisation in vitro are basic requirements to perform experimental studies. However, passaging of the pathogen in vitro can influence the pathogenicity, a process targeted for live vaccine development, but limits the reproduction of clinical signs. 3. Numerous factors can influence the outcome of experimental infections with some importance on the pathogen, application route and host as exemplarily outlined for Histomonas meleagridis, Gallibacterium anatis and fowl aviadenoviruses (FAdVs). 4. In future, more comprehensive and detailed settings are needed to obtain as much information as possible from animal experiments. Processing of samples with modern diagnostic tools provides the option to closely monitor the host–pathogen interaction. PMID:27724044
25 CFR 15.11 - What are the basic steps of the probate process?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false What are the basic steps of the probate process? 15.11 Section 15.11 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR PROBATE PROBATE OF INDIAN... are the basic steps of the probate process? The basic steps of the probate process are: (a) We learn...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orús, Román, E-mail: roman.orus@uni-mainz.de
This is a partly non-technical introduction to selected topics on tensor network methods, based on several lectures and introductory seminars given on the subject. It should be a good place for newcomers to get familiarized with some of the key ideas in the field, especially regarding the numerics. After a very general introduction we motivate the concept of tensor network and provide several examples. We then move on to explain some basics about Matrix Product States (MPS) and Projected Entangled Pair States (PEPS). Selected details on some of the associated numerical methods for 1d and 2d quantum lattice systems are also discussed. - Highlights: • A practical introduction to selected aspects of tensor network methods is presented. • We provide analytical examples of MPS and 2d PEPS. • We provide basic aspects on several numerical methods for MPS and 2d PEPS. • We discuss a number of applications of tensor network methods from a broad perspective.
Identification of cancer-related miRNA-lncRNA biomarkers using a basic miRNA-lncRNA network.
Zhang, Guangle; Pian, Cong; Chen, Zhi; Zhang, Jin; Xu, Mingmin; Zhang, Liangyun; Chen, Yuanyuan
2018-01-01
LncRNAs are regulatory noncoding RNAs that play crucial roles in many biological processes. The dysregulation of lncRNA is thought to be involved in many complex diseases; lncRNAs are often the targets of miRNAs in the indirect regulation of gene expression. Numerous studies have indicated that miRNA-lncRNA interactions are closely related to the occurrence and development of cancers. Thus, it is important to develop an effective method for the identification of cancer-related miRNA-lncRNA interactions. In this study, we compiled 155,653 experimentally validated and predicted miRNA-lncRNA associations, which we defined as basic interactions. We next constructed an individual-specific miRNA-lncRNA network (ISMLN) for each cancer sample and a basic miRNA-lncRNA network (BMLN) for each type of cancer by examining the expression profiles of miRNAs and lncRNAs in the TCGA (The Cancer Genome Atlas) database. We then selected potential miRNA-lncRNA biomarkers based on the BMLN. Using this method, we identified cancer-related miRNA-lncRNA biomarkers and modules specific to a certain cancer. This method of profiling will contribute to the diagnosis and treatment of cancers at the level of gene regulatory networks.
ERIC Educational Resources Information Center
Spüler, Martin; Walter, Carina; Rosenstiel, Wolfgang; Gerjets, Peter; Moeller, Korbinian; Klein, Elise
2016-01-01
Numeracy is a key competency for living in our modern knowledge society. Therefore, it is essential to support numerical learning from basic to more advanced competency levels. From educational psychology it is known that learning is most effective when the respective content is neither too easy nor too demanding in relation to learners'…
On the theory of behavioral mechanics.
Dzendolet, E
1999-12-01
The Theory of Behavioral Mechanics is the behavioral analogue of Newton's laws of motion, with the rate of responding in operant conditioning corresponding to physical velocity. In an earlier work, the basic relation between rate of responding and sessions under two FI schedules and over a range of commonly used session values had been shown to be a power function. Using that basic relation, functions for behavioral acceleration, mass, and momentum are derived here. Data from other laboratories also support the applicability of a power function to VI schedules. A particular numerical value is introduced here to be the standard reference value for the behavioral force under the VI-60-s schedule. This reference allows numerical values to be calculated for the behavioral mass and momentum of individual animals. A comparison of the numerical values of the momenta of two animals can be used to evaluate their relative resistances to change, e.g., to extinction, which is itself viewed as a continuously changing behavioral force being imposed on the animal. This overall numerical approach allows behavioral force-values to be assigned to various experimental conditions such as the evaluation of the behavioral force of a medication dosage.
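The mechanical analogy above can be illustrated with a toy calculation: take a power function for the response rate (the behavioral velocity), differentiate it for the acceleration, and derive mass and momentum Newton-style from a reference force. The constants and the reference-force value below are invented for demonstration, not the paper's fitted values or its actual VI-60-s standard.

```python
# Toy behavioral-mechanics quantities from an assumed power function
# v(s) = K * s**P (response rate vs. sessions); constants are illustrative.
K, P = 20.0, 0.5        # hypothetical fit parameters
F_REF = 1.0             # stand-in for a standard reference behavioral force

def velocity(s):
    return K * s ** P

def acceleration(s):
    # derivative of the power function: dv/ds = K * P * s**(P - 1)
    return K * P * s ** (P - 1)

def mass(s):
    # Newtonian analogue m = F / a, using the reference behavioral force
    return F_REF / acceleration(s)

def momentum(s):
    # momentum = mass * velocity; larger momentum = more resistance to change
    return mass(s) * velocity(s)

s = 16.0
print(velocity(s), acceleration(s), momentum(s))
```

Comparing momentum values for two animals (two fitted K, P pairs) is how relative resistance to change, e.g. to extinction, would be evaluated under this scheme.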
Moura, Ricardo; Wood, Guilherme; Pinheiro-Chagas, Pedro; Lonnemann, Jan; Krinzinger, Helga; Willmes, Klaus; Haase, Vitor Geraldi
2013-11-01
Transcoding between numerical systems is one of the most basic abilities acquired by children during their early school years. One important topic that requires further exploration is how mathematics proficiency can affect number transcoding. The aim of the current study was to investigate transcoding abilities (i.e., reading Arabic numerals and writing dictation) in Brazilian children with and without mathematics difficulties, focusing on different school grades. We observed that children with learning difficulties in mathematics demonstrated lower achievement in number transcoding in both early and middle elementary school. In early elementary school, difficulties were observed in both the basic numerical lexicon and the management of numerical syntax. In middle elementary school, difficulties appeared mainly in the transcoding of more complex numbers. An error analysis revealed that the children with mathematics difficulties struggled mainly with the acquisition of transcoding rules. Although we confirmed the previous evidence on the impact of working memory capacity on number transcoding, we found that it did not fully account for the observed group differences. The results are discussed in the context of a maturational lag in number transcoding ability in children with mathematics difficulties. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Abbas, M. M.; Craven, P. D.; Spann, J. F.; Tankosic, D.; LeClair, A.; Gallagher, D. L.; West, E. A.; Weingartner, J. C.; Witherow, W. K.; Tielens, A. G. G. M.
2004-01-01
The processes and mechanisms involved in the rotation and alignment of interstellar dust grains have been of great interest in astrophysics ever since the surprising discovery of the polarization of starlight more than half a century ago. Numerous theories, detailed mathematical models, and numerical studies of grain rotation and alignment with respect to the Galactic magnetic field have been presented in the literature. In particular, the subject of grain rotation and alignment by radiative torques has been shown to be of particular interest in recent years. However, despite many investigations, a satisfactory theoretical understanding of the processes involved has remained elusive. To illuminate this subject, we have carried out some unique experiments on the rotation of dust grains in the interstellar medium. In this paper we present the results of some preliminary laboratory experiments on the rotation of individual micron/submicron-sized, nonspherical dust grains levitated in an electrodynamic balance evacuated to pressures of approximately 10^-3 to 10^-5 torr. The particles are illuminated by laser light at 5320 Å, and the grain rotation rates are obtained by analyzing the low-frequency (approximately 0-100 kHz) signal of the scattered light detected by a photodiode detector. The rotation rates are compared with simple theoretical models to retrieve some basic rotational parameters. The results are examined in light of the current theories of alignment.
McMullan, Miriam; Jones, Ray; Lea, Susan
2010-04-01
This paper is a report of a correlational study of the relations of age, status, experience and drug calculation ability to numerical ability of nursing students and Registered Nurses. Competent numerical and drug calculation skills are essential for nurses as mistakes can put patients' lives at risk. A cross-sectional study was carried out in 2006 in one United Kingdom university. Validated numerical and drug calculation tests were given to 229 second year nursing students and 44 Registered Nurses attending a non-medical prescribing programme. The numeracy test was failed by 55% of students and 45% of Registered Nurses, while 92% of students and 89% of nurses failed the drug calculation test. Independent of status or experience, older participants (> or = 35 years) were statistically significantly more able to perform numerical calculations. There was no statistically significant difference between nursing students and Registered Nurses in their overall drug calculation ability, but nurses were statistically significantly more able than students to perform basic numerical calculations and calculations for solids, oral liquids and injections. Both nursing students and Registered Nurses were statistically significantly more able to perform calculations for solids, oral liquids and injections than calculations for drug percentages, drip and infusion rates. To prevent deskilling, Registered Nurses should continue to practise and refresh all the different types of drug calculations as often as possible with regular (self)-testing of their ability. Time should be set aside in curricula for nursing students to learn how to perform basic numerical and drug calculations. This learning should be reinforced through regular practice and assessment.
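The kinds of calculations tested above can be illustrated with the standard "desired dose over stock strength, times stock volume" formula and a gravity drip-rate formula; the numbers below are invented teaching examples, not items from the study's tests.

```python
def volume_to_give(desired_dose, stock_strength, stock_volume):
    """Classic nursing formula: (want / have) * stock volume."""
    return desired_dose / stock_strength * stock_volume

def drip_rate(volume_ml, hours, drop_factor):
    """Drops per minute for a gravity infusion with a given giving set."""
    return volume_ml * drop_factor / (hours * 60)

# 250 mg prescribed, ampoule holds 500 mg in 5 mL -> give 2.5 mL.
print(volume_to_give(250, 500, 5))       # → 2.5

# 1000 mL over 8 h with a 20 drops/mL giving set.
print(round(drip_rate(1000, 8, 20), 1))  # → 41.7
```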
Yuen, Yeo Tze; Sharratt, Paul N; Jie, Bu
2016-11-01
Numerous carbon dioxide mineralization (CM) processes have been proposed to overcome the slow rate of natural weathering of silicate minerals. Ten of these proposals are reviewed in this article and described in terms of the four major areas of CM process design: pre-treatment, purification, carbonation, and reagent recycling operations. Known specifics based on probable or representative operating and reaction conditions are listed, and a basic analysis of the strengths and shortcomings of the individual process designs is given. The processes typically employ physical or chemical pseudo-catalytic methods to enhance the rate of carbon dioxide mineralization; however, each method has its own associated advantages and problems. To examine the feasibility of a CM process, three key aspects should be included in the evaluation criteria: energy use, operational considerations, and product value and economics. Recommendations regarding the optimal level of emphasis and implementation of measures to control these aspects are given; these depend very much on the desired process objectives. Ultimately, a mix-and-match approach to process design might be required to provide viable and economic proposals for CM processes.
Global stability for epidemic models on multiplex networks.
Huang, Yu-Jhe; Juang, Jonq; Liang, Yu-Hao; Wang, Hsin-Yu
2018-05-01
In this work, we consider an epidemic model in a two-layer network in which the dynamics of susceptible-infected-susceptible process in the physical layer coexists with that of a cyclic process of unaware-aware-unaware in the virtual layer. For such multiplex network, we shall define the basic reproduction number [Formula: see text] in the virtual layer, which is similar to the basic reproduction number [Formula: see text] defined in the physical layer. We show analytically that if [Formula: see text] and [Formula: see text], then the disease and information free equilibrium is globally stable and if [Formula: see text] and [Formula: see text], then the disease free and information saturated equilibrium is globally stable for all initial conditions except at the origin. In the case of [Formula: see text], whether the disease dies out or not depends on the competition between how well the information is transmitted in the virtual layer and how contagious the disease is in the physical layer. In particular, it is numerically demonstrated that if the difference in [Formula: see text] and [Formula: see text] is greater than the product of [Formula: see text], the deviation of [Formula: see text] from 1 and the relative infection rate for an aware susceptible individual, then the disease dies out. Otherwise, the disease breaks out.
Numerical analysis of effects of ion-neutral collision processes on RF ICP discharge
NASA Astrophysics Data System (ADS)
Nishida, K.; Mattei, S.; Lettry, J.; Hatayama, A.
2018-01-01
The discharge process of a radiofrequency (RF) inductively coupled plasma (ICP) has been modeled by an ElectroMagnetic Particle-in-Cell Monte Carlo Collision method (EM PIC-MCC). Although our previous model was used to investigate the discharge mode transition of the RF ICP from a kinetic point of view, it neglected the collision processes of ions (H+ and H2+) with neutral particles. In this study, the RF ICP discharge process has been investigated with the latest version of the model, which takes the ion-neutral collision processes into account. The basic characteristics of the discharge mode transition obtained with the previous model are verified by comparing the previous and present results. In the H-mode discharge regime, on the other hand, the ion-neutral collisions play an important role in evaluating the growth of the plasma. The effect of the ion-neutral collisions on the kinetic features of the plasma has also been investigated, highlighting the importance of a kinetic perspective for modeling the RF ICP discharge.
NASA Technical Reports Server (NTRS)
Waldman, H.
1971-01-01
The long-term variations in the daytime exchange flux are estimated with the use of model hydrogen concentrations based on the inverse relationship between the abundance of neutral hydrogen and the neutral temperature in the thermosphere. The results are found to be compatible with the observed long-term behavior of the ionospheric electron content at a midlatitude location, as revealed by Faraday observations using geostationary satellites. The basic processes occurring in the ionosphere are reviewed, with emphasis on the concepts of limiting velocity and limiting flux. An approach to the problem of numerical simulation of the ionosphere is also presented and discussed.
Close to real life. [solving for transonic flow about lifting airfoils using supercomputers
NASA Technical Reports Server (NTRS)
Peterson, Victor L.; Bailey, F. Ron
1988-01-01
NASA's Numerical Aerodynamic Simulation (NAS) facility for CFD modeling of highly complex aerodynamic flows employs as its basic hardware two Cray-2s, an ETA-10 Model Q, an Amdahl 5880 mainframe computer that furnishes both support processing and access to 300 Gbytes of disk storage, several minicomputers and superminicomputers, and a Thinking Machines 16,000-device 'connection machine' processor. NAS, which was the first supercomputer facility to standardize operating-system and communication software on all processors, has performed important Space Shuttle aerodynamics simulations and will be critical to the configurational refinement of the National Aerospace Plane and its integrated powerplant, which will involve complex, high-temperature reactive gasdynamic computations.
Ito, Tetsufumi; Oliver, Douglas L.
2012-01-01
The inferior colliculus (IC) in the midbrain of the auditory system uses a unique basic circuit to organize the inputs from virtually all of the lower auditory brainstem and transmit this information to the medial geniculate body (MGB) in the thalamus. Here, we review the basic circuit of the IC, the neuronal types, the organization of their inputs and outputs. We specifically discuss the large GABAergic (LG) neurons and how they differ from the small GABAergic (SG) and the more numerous glutamatergic neurons. The somata and dendrites of LG neurons are identified by axosomatic glutamatergic synapses that are lacking in the other cell types and exclusively contain the glutamate transporter VGLUT2. Although LG neurons are most numerous in the central nucleus of the IC (ICC), an analysis of their distribution suggests that they are not specifically associated with one set of ascending inputs. The inputs to ICC may be organized into functional zones with different subsets of brainstem inputs, but each zone may contain the same three neuron types. However, the sources of VGLUT2 axosomatic terminals on the LG neuron are not known. Neurons in the dorsal cochlear nucleus, superior olivary complex, intermediate nucleus of the lateral lemniscus, and IC itself that express the gene for VGLUT2 only are the likely origin of the dense VGLUT2 axosomatic terminals on LG tectothalamic neurons. The IC is unique since LG neurons are GABAergic tectothalamic neurons in addition to the numerous glutamatergic tectothalamic neurons. SG neurons evidently target other auditory structures. The basic circuit of the IC and the LG neurons in particular, has implications for the transmission of information about sound through the midbrain to the MGB. PMID:22855671
On the numerical treatment of nonlinear source terms in reaction-convection equations
NASA Technical Reports Server (NTRS)
Lafon, A.; Yee, H. C.
1992-01-01
The objectives of this paper are to investigate how various numerical treatments of the nonlinear source term in a model reaction-convection equation can affect the stability of steady-state numerical solutions and to show under what conditions the conventional linearized analysis breaks down. The underlying goal is to provide part of the basic building blocks toward the ultimate goal of constructing suitable numerical schemes for hypersonic reacting flows, combustion, and certain turbulence models in compressible Navier-Stokes computations. It can be shown that nonlinear analysis uncovers much of the nonlinear phenomena which linearized analysis is not capable of predicting in a model reaction-convection equation.
Convection Induced by Traveling Magnetic Fields in Semiconductor Melts
NASA Technical Reports Server (NTRS)
Mazuruk, Konstantin
2000-01-01
Axisymmetric traveling magnetic fields (TMF) can be beneficial for crystal growth applications, such as the vertical Bridgman, float zone, or traveling heater methods. TMF induces a basic flow in the form of a single roll. This type of flow can enhance mass and heat transfer to the growing crystal. More importantly, the TMF Lorentz body force induced in the system can counterbalance the buoyancy forces, so the resulting convection can be much weaker and its direction can even be reversed. In this presentation, we display the basic features of this novel technique. In particular, numerical calculations of the Lorentz force for arbitrary frequencies will be presented along with induced steady-state fluid flow profiles. Also, numerical modeling of the TMF counterbalancing natural convection in vertical Bridgman systems will be demonstrated.
NASA Astrophysics Data System (ADS)
Agarwal, P.; El-Sayed, A. A.
2018-06-01
In this paper, a new numerical technique for solving the fractional order diffusion equation is introduced. This technique basically depends on the non-standard finite difference (NSFD) method and the Chebyshev collocation method, where the fractional derivatives are described in the Caputo sense. The Chebyshev collocation method with the NSFD method is used to convert the problem into a system of algebraic equations. These equations are then solved numerically using Newton's iteration method. The applicability, reliability, and efficiency of the presented technique are demonstrated through some given numerical examples.
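The final step, Newton's iteration on the resulting algebraic system, can be sketched generically; the 2x2 toy system below is purely illustrative and is not the paper's actual NSFD/Chebyshev discretization:

```python
import numpy as np

def newton_solve(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's iteration for a nonlinear algebraic system F(x) = 0:
    solve J(x_k) dx = -F(x_k), then update x_{k+1} = x_k + dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy 2x2 system: x^2 + y^2 = 4 and x*y = 1 (purely illustrative).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]])
root = newton_solve(F, J, [2.0, 0.5])
```

In a collocation setting, F would assemble the discretized equations at the Chebyshev nodes and J their Jacobian; the iteration itself is unchanged.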
Arginine mimetic structures in biologically active antagonists and inhibitors.
Masic, Lucija Peterlin
2006-01-01
Peptidomimetics have found wide application as bioavailable, biostable, and potent mimetics of naturally occurring biologically active peptides. L-Arginine is a guanidino group-containing basic amino acid, which is positively charged at neutral pH and is involved in many important physiological and pathophysiological processes. Many enzymes display a preference for the arginine residue that is found in many natural substrates and in synthetic inhibitors of many trypsin-like serine proteases, e.g. thrombin, factor Xa, factor VIIa, trypsin, and in integrin receptor antagonists, used to treat many blood-coagulation disorders. Nitric oxide (NO), which is produced by oxidation of L-arginine in an NADPH- and O(2)-dependent process catalyzed by isoforms of nitric oxide synthase (NOS), exhibits diverse roles in both normal and pathological physiologies and has been postulated to be a contributor to the etiology of various diseases. Development of NOS inhibitors, as well as analogs and mimetics of the natural substrate L-arginine, is desirable for potential therapeutic use and for a better understanding of their conformation when bound in the arginine binding site. The guanidino residue of arginine in many substrates, inhibitors, and antagonists forms strong ionic interactions with the carboxylate of an aspartic acid moiety, which provides specificity for the basic amino acid residue in the active site. However, a highly basic guanidino moiety incorporated in enzyme inhibitors or receptor antagonists is often associated with low selectivity and poor bioavailability after peroral application. Thus, significant effort is focused on the design and preparation of arginine mimetics that can confer selective inhibition for specific trypsin-like serine proteases and NOS inhibitors as well as integrin receptor antagonists and possess reduced basicity for enhanced oral bioavailability.
This review will describe the survey of arginine mimetics designed to mimic the function of the arginine moiety in numerous peptidomimetic compounds (thrombin inhibitors, factor Xa inhibitors, factor VIIa inhibitors, integrin receptor antagonists, nitric oxide synthase inhibitors), with the aim of obtaining better activity, selectivity and oral bioavailability.
Collapse of a Liquid Column: Numerical Simulation and Experimental Validation
NASA Astrophysics Data System (ADS)
Cruchaga, Marcela A.; Celentano, Diego J.; Tezduyar, Tayfun E.
2007-03-01
This paper is focused on the numerical and experimental analyses of the collapse of a liquid column. The measurements of the interface position in a set of experiments carried out with shampoo and water for two different initial column aspect ratios are presented together with the corresponding numerical predictions. The experimental procedure was found to provide acceptable repeatability in the observation of the interface evolution. Basic models describing some of the relevant physical aspects, e.g. wall friction and turbulence, are included in the simulations. Numerical experiments are conducted to evaluate the influence of the parameters involved in the modeling by comparing the results with the data from the measurements. The numerical predictions reasonably describe the physical trends.
Workbook, Basic Mathematics and Wastewater Processing Calculations.
ERIC Educational Resources Information Center
New York State Dept. of Environmental Conservation, Albany.
This workbook serves as a self-learning guide to basic mathematics and treatment plant calculations and also as a reference and source book for the mathematics of sewage treatment and processing. In addition to basic mathematics, the workbook discusses processing and process control, laboratory calculations and efficiency calculations necessary in…
Finite element modelling of chain-die forming for ultra-high strength steel
NASA Astrophysics Data System (ADS)
Majji, Raju; Xiang, Yang; Ding, Scott; Yang, Chunhui
2017-10-01
There has been a high demand for weight reduction in automotive vehicles while maintaining passenger safety. A potential steel material to achieve this is Ultra High Strength Steel (UHSS). As a high strength material, it is difficult to form into desired profiles using traditional sheet metal forming processes such as cold roll forming. A potential alternative solution to overcome this problem is Chain-die Forming (CDF), developed recently. The basic principle of the CDF is to fully combine roll forming and bending processes. The main advantage of this process is the elongated deformation length, which significantly increases the effective roll radius. This study focuses on identifying issues with the CDF by using CAD modelling, motion analysis and finite element analysis (FEA) to devise solutions and construct a more reliable process in an optimal design sense. Previous attempts at finite element modelling and simulation of the CDF in the literature used relatively simple models and were not sufficient for optimal design of a typical CDF for UHSS. Therefore, two numerical models of the Chain-die Forming process are developed in this study: a) one having a set of rolls similar to roll forming but with a large radius, i.e., 20 meters; and b) one with die and punch segments similar to a typical CDF machine. As a case study, forming a 60° channel in a single pass was conducted with these two devised models for comparison. The obtained numerical results clearly show that the CDF generates lower residual stress, lower strain, and smaller springback in a single pass for the 60° UHSS channel. The design analysis procedure proposed in this study could greatly help mechanical designers devise a cost-effective and reliable CDF process for forming UHSS.
Transportation systems evaluation methodology development and applications, phase 3
NASA Technical Reports Server (NTRS)
Kuhlthau, A. R.; Jacobson, I. D.; Richards, L. C.
1981-01-01
Transportation systems or proposed changes in current systems are evaluated. Four principal evaluation criteria are incorporated in the process: operating performance characteristics as viewed by potential users; decisions based on the perceived impacts of the system; estimating what is required to reduce the system to practice; and predicting the ability of the concept to attract financial support. A series of matrix multiplications, in which the various matrices represent evaluations in a logical sequence of the discrete steps in a management decision process, is used. One or more alternatives are compared with the current situation, and the result provides a numerical rating which determines the desirability of each alternative relative to the norm and to each other. The steps in the decision process are isolated so that the contributions of each to the final result are readily analyzed. Further advantages are the ability to protect against bias on the part of the evaluators and the ease with which system parameters that are basically qualitative in nature can be included.
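A minimal sketch of such a matrix-multiplication rating chain, with all weights and scores invented for illustration (the actual matrices and criteria are those of the report, not these):

```python
import numpy as np

# Rows score two hypothetical alternatives plus the current system
# (the "norm") on four evaluation criteria; all values are invented.
alternatives = np.array([
    [0.8, 0.6, 0.5, 0.7],   # alternative A
    [0.6, 0.7, 0.8, 0.4],   # alternative B
    [0.5, 0.5, 0.5, 0.5],   # current system (norm)
])
# How much each criterion feeds each of three decision steps, and the
# relative weight of each step in the management decision process.
criteria_to_steps = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
    [0.2, 0.1, 0.7],
    [0.1, 0.1, 0.8],
])
step_weights = np.array([0.5, 0.3, 0.2])

# Chain of matrix multiplications: one numerical rating per alternative.
ratings = alternatives @ criteria_to_steps @ step_weights
norm = ratings[-1]
print(ratings / norm)  # desirability relative to the current system
```

Because each step is a separate matrix, the contribution of any one step to the final rating can be inspected in isolation, which mirrors the report's stated advantage.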
Li, Q.; Kang, Q. J.; Francois, M. M.; ...
2015-03-03
A hybrid thermal lattice Boltzmann (LB) model is presented to simulate thermal multiphase flows with phase change based on an improved pseudopotential LB approach (Li et al., 2013). The present model does not suffer from the spurious term caused by the forcing-term effect, which was encountered in some previous thermal LB models for liquid–vapor phase change. Using the model, the liquid–vapor boiling process is simulated. The boiling curve together with the three boiling stages (nucleate boiling, transition boiling, and film boiling) is numerically reproduced in the LB community for the first time. The numerical results show that the basic features and the fundamental characteristics of boiling heat transfer are well captured, such as the severe fluctuation of transient heat flux in the transition boiling and the feature that the maximum heat transfer coefficient lies at a lower wall superheat than that of the maximum heat flux. Moreover, the effects of the heating surface wettability on boiling heat transfer are investigated. It is found that an increase in contact angle promotes the onset of boiling but reduces the critical heat flux, and makes the boiling process enter into the film boiling regime at a lower wall superheat, which is consistent with the findings from experimental studies.
NOAA Atmospheric Sciences Modeling Division support to the US Environmental Protection Agency
NASA Astrophysics Data System (ADS)
Poole-Kober, Evelyn M.; Viebrock, Herbert J.
1991-07-01
During FY-1990, the Atmospheric Sciences Modeling Division provided meteorological research and operational support to the U.S. Environmental Protection Agency. Basic meteorological operational support consisted of applying dispersion models and conducting dispersion studies and model evaluations. The primary research effort was the development and evaluation of air quality simulation models using numerical and physical techniques supported by field studies. Modeling emphasis was on the dispersion of photochemical oxidants and particulate matter on urban and regional scales, dispersion in complex terrain, and the transport, transformation, and deposition of acidic materials. Highlights included expansion of the Regional Acid Deposition Model/Engineering Model family to consist of the Tagged Species Engineering Model, the Non-Depleting Model, and the Sulfate Tracking Model; completion of the Acid-MODES field study; completion of the RADM2.1 evaluation; completion of the atmospheric processes section of the National Acid Precipitation Assessment Program 1990 Integrated Assessment; conduct of the first field study to examine the transport and entrainment processes of convective clouds; development of a Regional Oxidant Model-Urban Airshed Model interface program; conduct of an international sodar intercomparison experiment; incorporation of building wake dispersion in numerical models; conduct of wind-tunnel simulations of stack-tip downwash; and initiation of the publication of SCRAM NEWS.
Severe Storms Branch research report (April 1984 - April 1985)
NASA Technical Reports Server (NTRS)
Dubach, L. (Editor)
1985-01-01
The Mesoscale Atmospheric Processes Research Program is a program of integrated studies intended to achieve an improved understanding of the basic behavior of the atmosphere through the use of remotely sensed data and space technology. The program consists of four elements: (1) special observations and analysis of mesoscale systems; (2) the development of quantitative algorithms to use remotely sensed observations; (3) the development of new observing systems; and (4) numerical modeling. The Severe Storms Branch objectives are the improvement of the understanding, diagnosis, and prediction of a wide range of atmospheric storms, including severe thunderstorms, tornadoes, flash floods, tropical cyclones, and winter snowstorms. The research often sheds light upon various aspects of local weather, such as fog, sea breezes, air pollution, showers, and other products of nonsevere cumulus cloud clusters. The part of the program devoted to boundary layer processes, gust front interactions, and soil moisture detection from satellites gives insights into storm growth and behavior.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Danilovic, S.; Solanki, S. K.; Barthol, P.
Ellerman bombs (EBs) are signatures of magnetic reconnection, which is an important physical process in the solar atmosphere. How and where they occur is a subject of debate. In this paper, we analyze Sunrise/IMaX data, along with 3D MHD simulations that aim to reproduce the exact scenario proposed for the formation of these features. Although the observed event seems to be more dynamic and violent than the simulated one, the simulations clearly confirm the basic scenario for the production of EBs. The simulations also reveal the full complexity of the underlying process. The simulated observations show that the Fe I 525.02 nm line gives no information on the height where reconnection takes place. It can only give clues about the heating in the aftermath of the reconnection. However, the information on the magnetic field vector and velocity at this spatial resolution is extremely valuable because it shows what numerical models miss and how they can be improved.
A post-processing method to simulate the generalized RF sheath boundary condition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myra, James R.; Kohno, Haruhiko
2017-10-23
For applications of ICRF power in fusion devices, control of RF sheath interactions is of great importance. A sheath boundary condition (SBC) was previously developed to provide an effective surface impedance for the interaction of the RF sheath with the waves. The SBC enables the surface power flux and rectified potential energy available for sputtering to be calculated. For legacy codes which cannot easily implement the SBC, or to speed convergence in codes which do implement it, we consider here an approximate method to simulate SBCs by post-processing results obtained using other, e.g. conducting wall, boundary conditions. The basic approximation is that the modifications resulting from the generalized SBC are driven by a fixed incoming wave, which could be either a fast wave or a slow wave. Finally, the method is illustrated in slab geometry and compared with exact numerical solutions; it is shown to work very well.
NASA Astrophysics Data System (ADS)
Weng, Tongfeng; Zhang, Jie; Small, Michael; Harandizadeh, Bahareh; Hui, Pan
2018-03-01
We propose a unified framework to evaluate and quantify the search time of multiple random searchers traversing independently and concurrently on complex networks. We find that the intriguing behaviors of multiple random searchers are governed by two basic principles: the logarithmic growth pattern and the harmonic law. Specifically, the logarithmic growth pattern characterizes how the search time increases with the number of targets, while the harmonic law explores how the search time of multiple random searchers varies relative to that needed by individual searchers. Numerical and theoretical results demonstrate that these two universal principles hold across a broad range of random search processes, including generic random walks, maximal entropy random walks, intermittent strategies, and persistent random walks. Our results reveal two fundamental principles governing the search time of multiple random searchers, which are expected to facilitate investigation of diverse dynamical processes like synchronization and spreading.
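A toy simulation of the multiple-searcher setting, assuming the common convention that the parallel search ends when the first of k independent walkers reaches the target; the cycle graph and all parameters are invented for illustration and are not the paper's models:

```python
import random
from statistics import mean

def random_walk_hit_time(adj, start, targets, rng):
    """Steps taken by one random walker until it first hits any target."""
    node, steps = start, 0
    while node not in targets:
        node = rng.choice(adj[node])
        steps += 1
    return steps

def multi_searcher_time(adj, start, targets, k, rng):
    """Parallel search time of k independent walkers: the minimum of
    k i.i.d. single-walker hit times (first-to-find convention)."""
    return min(random_walk_hit_time(adj, start, targets, rng)
               for _ in range(k))

# Toy network: a cycle of 20 nodes, target diametrically opposite.
n = 20
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
rng = random.Random(1)

trials = 500
t1 = mean(multi_searcher_time(adj, 0, {10}, 1, rng) for _ in range(trials))
t4 = mean(multi_searcher_time(adj, 0, {10}, 4, rng) for _ in range(trials))
print(t1, t4)  # more searchers find the target sooner on average
```

Comparing t1 and t4 for varying k is the kind of experiment the harmonic law describes: how the multi-searcher time scales relative to the single-searcher time.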
LSSA large area silicon sheet task continuous Czochralski process development
NASA Technical Reports Server (NTRS)
Rea, S. N.
1978-01-01
A Czochralski crystal growing furnace was converted to a continuous growth facility by installation of a premelter to provide molten silicon flow into the primary crucible. The basic furnace is operational, and several trial crystals were grown in the batch mode. Numerous premelter configurations were tested, both in laboratory-scale equipment and in the actual furnace. The best arrangement tested to date is a vertical, cylindrical graphite heater containing a small fused silica test-tube liner in which the incoming silicon is melted and flows into the primary crucible. Economic modeling of the continuous Czochralski process indicates that for 10 cm diameter crystal, 100 kg furnace runs of four or five crystals each are near-optimal. Costs tend to asymptote at the 100 kg level, so little additional cost improvement occurs for larger runs. For these conditions, a crystal cost in equivalent wafer area of around $20/sq m, exclusive of polysilicon and slicing, was obtained.
NASA Technical Reports Server (NTRS)
Seymour, David C.; Martin, Michael A.; Nguyen, Huy H.; Greene, William D.
2005-01-01
The subject of mathematical modeling of the transient operation of liquid rocket engines is presented in overview form from the perspective of engineers working at the NASA Marshall Space Flight Center. The necessity of creating and utilizing accurate mathematical models as part of the liquid rocket engine development process has become well established and is likely to increase in importance in the future. The issues of design considerations for transient operation, development testing, and failure scenario simulation are discussed. An overview of the derivation of the basic governing equations is presented along with a discussion of computational and numerical issues associated with the implementation of these equations in computer codes. Also, work in the field of generating usable fluid property tables is presented along with an overview of efforts to be undertaken in the future to improve the tools used for the mathematical modeling process.
NASA Technical Reports Server (NTRS)
Martin, Michael A.; Nguyen, Huy H.; Greene, William D.; Seymour, David C.
2003-01-01
The subject of mathematical modeling of the transient operation of liquid rocket engines is presented in overview form from the perspective of engineers working at the NASA Marshall Space Flight Center. The necessity of creating and utilizing accurate mathematical models as part of the liquid rocket engine development process has become well established and is likely to increase in importance in the future. The issues of design considerations for transient operation, development testing, and failure scenario simulation are discussed. An overview of the derivation of the basic governing equations is presented along with a discussion of computational and numerical issues associated with the implementation of these equations in computer codes. Also, work in the field of generating usable fluid property tables is presented along with an overview of efforts to be undertaken in the future to improve the tools used for the mathematical modeling process.
Premixed autoignition in compressible turbulence
NASA Astrophysics Data System (ADS)
Konduri, Aditya; Kolla, Hemanth; Krisman, Alexander; Chen, Jacqueline
2016-11-01
Prediction of chemical ignition delay in an autoignition process is critical in combustion systems like compression ignition engines and gas turbines. Often, ignition delay times measured in simple homogeneous experiments or homogeneous calculations are not representative of actual autoignition processes in complex turbulent flows. This is due to the presence of turbulent mixing, which results in fluctuations in thermodynamic properties as well as chemical composition. In the present study the effect of fluctuations of thermodynamic variables on the ignition delay is quantified with direct numerical simulations of compressible isotropic turbulence. A premixed syngas-air mixture is used to remove the effects of inhomogeneity in the chemical composition. Preliminary results show a significant spatial variation in the ignition delay time. We analyze the topology of autoignition kernels and identify the influence of extreme events resulting from compressibility and intermittency. The dependence of ignition delay time on Reynolds and turbulent Mach numbers is also quantified. Supported by Basic Energy Sciences, Dept of Energy, United States.
Seasonal erosion and restoration of Mars' northern polar dunes.
Hansen, C J; Bourke, M; Bridges, N T; Byrne, S; Colon, C; Diniega, S; Dundas, C; Herkenhoff, K; McEwen, A; Mellon, M; Portyankina, G; Thomas, N
2011-02-04
Despite radically different environmental conditions, terrestrial and martian dunes bear a strong resemblance, indicating that the basic processes of saltation and grainfall (sand avalanching down the dune slipface) operate on both worlds. Here, we show that martian dunes are subject to an additional modification process not found on Earth: springtime sublimation of Mars' CO(2) seasonal polar caps. Numerous dunes in Mars' north polar region have experienced morphological changes within a Mars year, detected in images acquired by the High-Resolution Imaging Science Experiment on the Mars Reconnaissance Orbiter. Dunes show new alcoves, gullies, and dune apron extension. This is followed by remobilization of the fresh deposits by the wind, forming ripples and erasing gullies. The widespread nature of these rapid changes, and the pristine appearance of most dunes in the area, implicates active sand transport in the vast polar erg in Mars' current climate.
Tidal disruption of inviscid protoplanets
NASA Technical Reports Server (NTRS)
Boss, Alan P.; Cameron, A. G. W.; Benz, W.
1991-01-01
Roche showed that equilibrium is impossible for a small fluid body synchronously orbiting a primary within a critical radius now termed the Roche limit. Tidal disruption of orbitally unbound bodies is a potentially important process for planetary formation through collisional accumulation, because the area of the Roche limit is considerably larger than the physical cross section of a protoplanet. Several previous studies were made of dynamical tidal disruption, and different models of disruption were proposed. Because of the limitations of these analytical models, we have used a smoothed particle hydrodynamics (SPH) code to model the tidal disruption process. The code is basically the same as the one used to model giant impacts; we simply choose impact parameters large enough to avoid collisions. The primary and secondary both have iron cores and silicate mantles, and are initially isothermal at a molten temperature. The conclusions based on the analytical and numerical models are summarized.
Singular Spectrum Analysis for Astronomical Time Series: Constructing a Parsimonious Hypothesis Test
NASA Astrophysics Data System (ADS)
Greco, G.; Kondrashov, D.; Kobayashi, S.; Ghil, M.; Branchesi, M.; Guidorzi, C.; Stratta, G.; Ciszak, M.; Marino, F.; Ortolan, A.
We present a data-adaptive spectral method - Monte Carlo Singular Spectrum Analysis (MC-SSA) - and its modification to tackle astrophysical problems. Through numerical simulations we show the ability of MC-SSA to deal with 1/f^β power-law noise affected by photon counting statistics. Such a noise process is simulated by a first-order autoregressive AR(1) process corrupted by intrinsic Poisson noise. In doing so, we statistically estimate a basic stochastic variation of the source and the corresponding fluctuations due to the quantum nature of light. In addition, the MC-SSA test retains its effectiveness even when a significant percentage of the signal falls below a certain level of detection, e.g., caused by the instrument sensitivity. The parsimonious approach presented here may be broadly applied, from the search for extrasolar planets to the extraction of low-intensity coherent phenomena possibly hidden in high energy transients.
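The noise model described above, an AR(1) red-noise source observed through Poisson photon counting, can be sketched as follows; all parameter values are illustrative, and the link from the AR(1) state to the Poisson rate (via an exponential) is an assumption for the sketch:

```python
import math
import random

def ar1_poisson_series(n, phi, sigma, scale, rng):
    """Simulate a red-noise source as an AR(1) process and observe it
    through Poisson photon counting (all parameters illustrative).
    State: x_t = phi * x_{t-1} + sigma * e_t with e_t ~ N(0, 1);
    the recorded count at time t is Poisson with mean scale*exp(x_t)."""
    x, counts = 0.0, []
    for _ in range(n):
        x = phi * x + sigma * rng.gauss(0.0, 1.0)
        lam = scale * math.exp(x)
        # Poisson sampling via Knuth's method (adequate for modest lam)
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                break
            k += 1
        counts.append(k)
    return counts

series = ar1_poisson_series(1000, phi=0.8, sigma=0.3, scale=5.0,
                            rng=random.Random(0))
```

A surrogate-based test like MC-SSA would compare the spectrum of an observed series against an ensemble of such simulated realizations.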
Seasonal erosion and restoration of Mars' northern polar dunes
Hansen, C.J.; Bourke, M.; Bridges, N.T.; Byrne, S.; Colon, C.; Diniega, S.; Dundas, C.; Herkenhoff, K.; McEwen, A.; Mellon, M.; Portyankina, G.; Thomas, N.
2011-01-01
Despite radically different environmental conditions, terrestrial and martian dunes bear a strong resemblance, indicating that the basic processes of saltation and grainfall (sand avalanching down the dune slipface) operate on both worlds. Here, we show that martian dunes are subject to an additional modification process not found on Earth: springtime sublimation of Mars' CO2 seasonal polar caps. Numerous dunes in Mars' north polar region have experienced morphological changes within a Mars year, detected in images acquired by the High-Resolution Imaging Science Experiment on the Mars Reconnaissance Orbiter. Dunes show new alcoves, gullies, and dune apron extension. This is followed by remobilization of the fresh deposits by the wind, forming ripples and erasing gullies. The widespread nature of these rapid changes, and the pristine appearance of most dunes in the area, implicates active sand transport in the vast polar erg in Mars' current climate.
NASA Astrophysics Data System (ADS)
Zhao, J.; Wang, S.
2017-12-01
Gravity wave drag (GWD) is among the drivers of meridional overturning in the middle atmosphere, also known as the Brewer-Dobson Circulation, and of the quasi-biennial oscillation (QBO). The small spatial scales and complications due to wave breaking require their effects to be parameterised. GWD parameterizations are usually divided into two parts: orographic and non-orographic. The basic dynamical and physical processes of the middle atmosphere and the mechanism of the interactions between the troposphere and the middle atmosphere were studied in the framework of a general circulation model. The model for the troposphere was extended to a global model covering the middle atmosphere, with the capability of describing the basic processes in the middle atmosphere and the troposphere-middle atmosphere interactions. Currently, it is too costly to include full non-hydrostatic and rotational wave dynamics in an operational parameterization. Hydrostatic non-rotational wave dynamics, however, allow an efficient implementation that is suitably fast for operational use. The simplified parameterization of non-orographic GWD follows the WM96 scheme, in which a framework is developed using conservative propagation of gravity waves, critical level filtering, and non-linear dissipation. In order to simulate and analyze the influence of non-orographic GWD on the stratospheric wind and temperature fields, experiments based on the Stratospheric Sudden Warming (SSW) event that occurred in January 2013 were carried out, and results of objective weather forecast verifications over the two-month period were compared in detail. Verification of the monthly mean forecast anomaly correlation (ACC) and root-mean-square (RMS) errors shows a consistently positive impact of non-orographic GWD on forecast skill for days three to eight, both in the stratosphere and the troposphere, and a visible positive impact on prediction of the stratospheric wind and temperature fields.
Numerical simulation during the SSW event demonstrates that the influence on the temperature of the middle stratosphere is mainly positive, and that there were larger departures in both the wind and temperature fields when the non-orographic GWD was included during the warming process.
Analysis of the mechanics and deformation characteristics of optical fiber acceleration sensor
NASA Astrophysics Data System (ADS)
Liu, Zong-kai; Bo, Yu-ming; Zhou, Ben-mou; Wang, Jun; Huang, Ya-dong
2016-10-01
The optical fiber sensor holds many advantages, such as smaller volume, lighter weight, higher sensitivity, and stronger anti-interference ability. It can be applied to oil exploration to improve exploration efficiency, since the underground petroleum distribution can be obtained by detecting and analyzing the echo signals. In this paper, the cantilever beam optical fiber sensor was mainly investigated. Specifically, the finite element analysis method is applied to the numerical analysis of how the elongation of the optical fiber rail slot on the surface of the PC material fiber winding plate changes with time and power under the action of a sine force. The analysis results show that, when the upper and lower mass blocks are under the action of the sine force, the cantilever beam optical fiber sensor structure deforms essentially in synchrony with the force, and that the optical fiber elongation has a basically linear relationship with the sine force within the time ranges of 0.2 to 0.4 and 0.6 to 0.8, which is beneficial for subsequent signal acquisition and data processing.
The average solar wind in the inner heliosphere: Structures and slow variations
NASA Technical Reports Server (NTRS)
Schwenn, R.
1983-01-01
Measurements from the HELIOS solar probes indicated that, apart from solar-activity-related disturbances, there exist two states of the solar wind which might result from basic differences in the acceleration process: the fast solar wind (v ~ 600 km/s) emanating from magnetically open regions in the solar corona, and the "slow" solar wind (v ~ 400 km/s) correlated with the more active regions and their mainly closed magnetic structures. In a comprehensive study using all HELIOS data taken between 1974 and 1982, the average behavior of the basic plasma parameters was analyzed as a function of the solar wind speed. The long term variations of the solar wind parameters over the solar cycle were also determined and numerical estimates given. These modulations appear to be distinct though only minor. In agreement with earlier studies, it was concluded that the major modulations are in the number and size of high speed streams and in the number of interplanetary shock waves caused by coronal transients. The latter usually cause huge deviations from the averages of all parameters.
Density and pressure variability in the mesosphere and thermosphere
NASA Technical Reports Server (NTRS)
Davis, T. M.
1986-01-01
In an effort to isolate the essential physics of the mesosphere and the thermosphere, a steady one-dimensional density and pressure model has been developed in support of related NASA activities, i.e., projects such as the AOTV and the Space Station. The model incorporates a zeroth order basic state including both the three-dimensional wind field and its associated shear structure, etc. A first order wave field is also incorporated in period bands ranging from about one second to one day. Both basic state and perturbation quantities satisfy the combined constraints of mass, linear momentum and energy conservation on the midlatitude beta plane. A numerical (iterative) technique is used to solve for the vertical wind, which is coupled to the density and pressure fields. The temperature structure from 1 to 1000 km and the lower boundary conditions are specified using the U.S. Standard Atmosphere 1976. Vertical winds are initialized at the top of the Planetary Boundary Layer using Ekman pumping values over flat terrain. The model also allows for the generation of waves during the geostrophic adjustment process and incorporates wave nonlinearity effects.
Efficient numerical modeling of the cornea, and applications
NASA Astrophysics Data System (ADS)
Gonzalez, L.; Navarro, Rafael M.; Hdez-Matamoros, J. L.
2004-10-01
Corneal topography has been shown to be an essential tool in the ophthalmology clinic, both in diagnosis and in custom treatments (refractive surgery, keratoplasty), and it also has strong potential in optometry. The post-processing and analysis of corneal elevation, or local curvature data, is a necessary step to refine the data and also to extract relevant information for the clinician. In this context a parametric cornea model is proposed consisting of a surface described mathematically by two terms: a general ellipsoid corresponding to a regular base surface, expressed by a general quadric term located at an arbitrary position and free orientation in 3D space, and a second term, described by a Zernike polynomial expansion, which accounts for irregularities and departures from the basic geometry. The model has been validated, obtaining better adjustment of experimental data than other previous models. Among other potential applications, here we present the determination of the optical axis of the cornea by transforming the general quadric to its canonical form. This has permitted us to perform 3D registration of corneal topographical maps to improve the signal-to-noise ratio. Other basic and clinical applications are also explored.
NASA Astrophysics Data System (ADS)
Wu, X. L.; Xiang, X. H.; Wang, C. H.; Shao, Q. Q.
2012-04-01
Soil freezing occurs in winter in many parts of the world. The transfer of heat and moisture in freezing and thawing soil is interrelated, and this heat and moisture transport plays an important role in the hydrological activity of seasonally frozen regions, especially the Three Rivers source area of China. Soil freezing depth and ice content in the frozen zone significantly influence runoff and groundwater recharge. The purpose of this research is to develop a numerical model to simulate water and heat movement in the soil under freezing and thawing conditions. The basic elements of the model are the heat and water flow equations, namely the heat conduction equation and the unsaturated soil fluid mass conservation equation. A fully implicit finite volume scheme is used to solve the coupled equations in space. The model is calibrated and verified against the observed moisture and temperature of soil during the freezing and thawing periods from 2005 to 2007. Different characteristics of heat and moisture transfer are examined, such as frozen depth, the temperature field at 40 cm depth and topsoil moisture content. The agreement with observed values indicates that the new model can successfully simulate the coupled heat and mass transfer process in permafrost regions. By simulating the runoff generation process and the driving factors of seasonal changes, the coupled model can describe local hydrologic phenomena and provide support to local ecosystem services. This research was supported by the National Natural Science Foundation of China (No. 51009045; 40930635; 41001011; 41101018; 51079038), the National Key Program for Developing Basic Science (No. 2009CB421105), the Fundamental Research Funds for the Central Universities (No. 2009B06614; 2010B00414), the National Non Profit Research Program of China (No. 200905013-8; 201101024; 20101224).
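The implicit finite-volume treatment of the heat conduction equation mentioned in this abstract can be illustrated in a few lines. The following is a minimal sketch of one backward-Euler step with a tridiagonal (Thomas) solve; the grid size, diffusivity and boundary values are illustrative assumptions, not values from the paper:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, main diagonal b, super-diagonal c."""
    n = len(d)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_heat_step(T, alpha, dz, dt, T_top, T_bottom):
    """One backward-Euler step of dT/dt = alpha * d2T/dz2 with fixed (Dirichlet) boundaries."""
    n = len(T)
    r = alpha * dt / dz**2
    a = [-r] * n
    b = [1.0 + 2.0 * r] * n
    c = [-r] * n
    d = list(T)
    # Fold the Dirichlet boundary temperatures into the right-hand side
    d[0] += r * T_top
    d[-1] += r * T_bottom
    return thomas(a, b, c, d)
```

A uniform profile equal to the boundary temperature is a fixed point of this step, which is a quick sanity check on the discretisation.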
Breakup process of cylindrical viscous liquid specimens after a strong explosion in the core
NASA Astrophysics Data System (ADS)
Bang, B. H.; Ahn, C. S.; Kim, D. Y.; Lee, J. G.; Kim, H. M.; Jeong, J. T.; Yoon, W. S.; Al-Deyab, S. S.; Yoo, J. H.; Yoon, S. S.; Yarin, A. L.
2016-09-01
This work aims at a basic understanding and theoretical description of the expansion and breakup of cylindrical specimens of Newtonian viscous liquid after an explosion of an explosive material in the core, along with an experimental investigation of the discovered phenomena. The unperturbed motion is considered first, and then supplemented by the perturbation growth pattern in the linear approximation. It is shown that a special non-trivial case of the Rayleigh-Taylor instability sets in, triggered by the gas pressure differential between the inner and outer surfaces of the specimens. The spectrum of the growing perturbation waves is established, the growth rate is found, and the debris sizes are evaluated. An experimental study is undertaken, and both the numerical and analytical solutions developed are compared with the experimental data. A good agreement between theory and experiment is revealed. It is shown that the debris size λ, the parameter most important practically, scales with the explosion energy E as λ ~ E^(-1/2). Another practically important parameter, the number of fingers N measured in the experiments, was within 6%-9% of the values predicted numerically. Moreover, N in the experiments and numerical predictions followed the scaling law predicted theoretically, N ~ m_e^(1/2), with m_e being the explosive mass.
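The two scaling laws quoted in this abstract are easy to state in code. The prefactors below are hypothetical placeholders, not values from the paper; only the exponents come from the abstract:

```python
def debris_size(E, C_lam=1.0):
    """Characteristic debris size lambda ~ E**(-1/2); C_lam is an illustrative prefactor."""
    return C_lam * E ** -0.5

def finger_count(m_e, C_N=1.0):
    """Number of fingers N ~ m_e**(1/2); C_N is an illustrative prefactor."""
    return C_N * m_e ** 0.5
```

So quadrupling the explosion energy halves the debris size, while quadrupling the explosive mass doubles the number of fingers, regardless of the prefactors.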
Numerical modeling of the traction process in the treatment for Pierre-Robin Sequence.
Słowiński, Jakub J; Czarnecka, Aleksandra
2016-10-01
The goal of this numerical study was to identify the results of modulated growth simulation of the mandibular bone during traction in Pierre-Robin Sequence (PRS) treatment. Numerical simulation was conducted in the Ansys 16.2 environment. Two FEM (finite element method) models of a newborn's mandible (a spatial and a flat model) were developed. The procedure simulated a 20-week traction period. The adopted growth measure was mandibular length increase, defined as the distance between the Co-Pog anatomic points used in cephalometric analysis. The simulation calculations conducted on the developed models showed that modulation had a significant influence on the pace of bone growth. In each of the analyzed cases, growth modulation resulted in an increase in pace. The largest value of increase was 6.91 mm. The modulated growth with the most beneficial load variant increased the basic value of the growth by as much as 24.6%, and growth with the least beneficial variant increased it by 7.4%. Traction is a simple, minimally invasive and inexpensive procedure. The proposed algorithm may enable the development of a helpful forecasting tool, which could be of real use to doctors working on Pierre-Robin Sequence and other mandibular deformations in children. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Geometric design and mechanical behavior of a deployable cylinder with Miura origami
NASA Astrophysics Data System (ADS)
Cai, Jianguo; Deng, Xiaowei; Feng, Jian; Zhou, Ya
2015-12-01
The folding and deployment of a cylinder with Miura origami patterns are studied in this paper. First, the geometric formulation of the design problem is discussed. Then the loading case of the axial strains and corresponding external nodal loads applied on the vertices of the top polygon during the motion is investigated analytically. The influence of the angles between the diagonal and horizontal fold lines, α and β, and of the number of Miura origami elements n on the dynamic behavior of the basic segment is also discussed. The dynamic behavior is then analyzed using numerical simulations. Finally, the deployment process of a cylinder with multiple stories is discussed. The numerical results agree well with the analytical predictions. The results show that the range of motion, i.e. the maximal displacement of the top nodes, increases with the angles α and β. This cylinder, with a smaller n, may have a bistable behavior. When n is larger, the influence of n on the axial strains and external nodal loads is slight. Moreover, the deployment of the cylinder with multiple stories is non-uniform, proceeding from the upper story to the lower story.
Fractional calculus phenomenology in two-dimensional plasma models
NASA Astrophysics Data System (ADS)
Gustafson, Kyle; Del Castillo Negrete, Diego; Dorland, Bill
2006-10-01
Transport processes in confined plasmas for fusion experiments, such as ITER, are not well understood at the basic level of fully nonlinear, three-dimensional kinetic physics. Turbulent transport is invoked to describe the observed levels in tokamaks, which are orders of magnitude greater than the theoretical predictions. Recent results show the ability of a non-diffusive transport model to describe numerical observations of turbulent transport. For example, resistive MHD modeling of tracer particle transport in pressure-gradient driven turbulence for a three-dimensional plasma reveals that the superdiffusive (σ² ~ t^α with α > 1) radial transport in this system is described quantitatively by a fractional diffusion equation. Fractional calculus is a generalization involving integro-differential operators, which naturally describe non-local behaviors. Our previous work showed the quantitative agreement of special fractional diffusion equation solutions with numerical tracer particle flows in time-dependent linearized dynamics of the Hasegawa-Mima equation (for poloidal transport in a two-dimensional cold-ion plasma). In pursuit of a fractional diffusion model for transport in a gyrokinetic plasma, we now present numerical results from tracer particle transport in the nonlinear Hasegawa-Mima equation and a planar gyrokinetic model. Finite Larmor radius effects will be discussed. D. del Castillo Negrete, et al., Phys. Rev. Lett. 94, 065003 (2005).
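A generic ingredient of finite-difference solvers for fractional diffusion equations of the kind referred to here is the Grünwald-Letnikov weight sequence for a derivative of order α. The sketch below is not code from the cited work; it uses the standard recurrence g_k = g_{k-1}(1 - (α+1)/k), which for integer α collapses to the familiar difference stencils:

```python
def gl_weights(alpha, n):
    """First n Gruenwald-Letnikov weights g_k = (-1)**k * binom(alpha, k),
    computed by the standard recurrence g_k = g_{k-1} * (1 - (alpha + 1) / k)."""
    g = [1.0]
    for k in range(1, n):
        g.append(g[-1] * (1.0 - (alpha + 1.0) / k))
    return g
```

For α = 1 the weights reduce to the first-difference stencil [1, -1, 0, …], and for α = 2 to the second-difference stencil [1, -2, 1, 0, …], which is a convenient consistency check.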
Hydraulic fracturing - an attempt of DEM simulation
NASA Astrophysics Data System (ADS)
Kosmala, Alicja; Foltyn, Natalia; Klejment, Piotr; Dębski, Wojciech
2017-04-01
Hydraulic fracturing is a technique widely used in the exploitation of oil, gas and unconventional reservoirs in order to enable the oil/gas to flow more easily and enhance production. It relies on pumping a special fluid into a rock under high pressure, which creates a set of microcracks that enhance the porosity of the reservoir rock. In this research, an attempt at simulating such a hydrofracturing process using the Discrete Element Method approach is presented. The basic assumption of this approach is that the rock can be represented as an assembly of discrete particles cemented into a rigid sample (Potyondy 2004). The existence of voids among particles then simulates a pore system which can be filled by fracturing fluid, numerically represented by much smaller particles. Following this microscopic point of view and its numerical representation by the DEM method, we present preliminary results of a numerical analysis of hydrofracturing phenomena, using the ESyS-Particle software. In particular, we consider what happens in the immediate vicinity of the border between the rock sample and the fracking particles, how cracks are created and evolve by breaking bonds between particles, how acoustic/seismic energy is released, and so on. D.O. Potyondy, P.A. Cundall. A bonded-particle model for rock. International Journal of Rock Mechanics and Mining Sciences, 41 (2004), pp. 1329-1364.
NASA Technical Reports Server (NTRS)
King, J. C.
1975-01-01
The general orbit-coverage problem in a simplified physical model is investigated by application of numerical approaches derived from basic number theory. A system of basic and general properties is defined by which idealized periodic coverage patterns may be characterized, classified, and delineated. The principal common features of these coverage patterns are their longitudinal quantization, determined by the revolution number R, and their overall symmetry.
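The "longitudinal quantization" by the revolution number R described in this abstract can be illustrated with elementary number theory: a repeating orbit lays its equator crossings on a grid of 360/R degrees, and the number of distinct tracks follows from a greatest common divisor. The sketch below is a hedged illustration; the parameter name shift_revs (the number of grid slots a successive crossing advances) is an assumption for demonstration, not a quantity from the report:

```python
from math import gcd

def crossing_longitudes(R, shift_revs=1):
    """Equator-crossing longitudes (deg) for R revolutions per repeat cycle;
    successive crossings advance by shift_revs grid slots (mod R)."""
    step = 360.0 / R
    return [((k * shift_revs) % R) * step for k in range(R)]

def distinct_tracks(R, shift_revs):
    """Number of distinct crossing longitudes: R / gcd(R, shift_revs)."""
    return R // gcd(R, shift_revs)
```

When R and the slot advance are coprime, every one of the R grid longitudes is visited, which is the idealized uniform coverage pattern; otherwise the pattern collapses onto a coarser grid.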
Research and technology, fiscal year 1983
NASA Technical Reports Server (NTRS)
1983-01-01
The responsibilities and programs of the Goddard Space Flight Center range from basic research in the space and Earth sciences, through the management of numerous flight projects, to operational responsibility for the tracking of and data acquisition from NASA's Earth orbiting satellites. Progress in the areas of spacecraft technology, sensor development and data system development, as well as in the basic and applied research in the space and Earth sciences that they support, is highlighted.
On the biomechanical analysis of the calories expended in a straight boxing jab
2017-01-01
Boxing and related sports activities have become a standard workout regime at many fitness studios worldwide. Oftentimes, people are interested in the calories expended during these workouts. This note focuses on determining the calories in a boxer's jab, using kinematic vector-loop relations and basic work–energy principles. Numerical simulations are undertaken to illustrate the basic model. Multi-limb extensions of the model are also discussed. PMID:28404871
KEY ISSUES REVIEW: Insights from simulations of star formation
NASA Astrophysics Data System (ADS)
Larson, Richard B.
2007-03-01
Although the basic physics of star formation is classical, numerical simulations have yielded essential insights into how stars form. They show that star formation is a highly nonuniform runaway process characterized by the emergence of nearly singular peaks in density, followed by the accretional growth of embryo stars that form at these density peaks. Circumstellar discs often form from the gas being accreted by the forming stars, and accretion from these discs may be episodic, driven by gravitational instabilities or by protostellar interactions. Star-forming clouds typically develop filamentary structures, which may, along with the thermal physics, play an important role in the origin of stellar masses because of the sensitivity of filament fragmentation to temperature variations. Simulations of the formation of star clusters show that the most massive stars form by continuing accretion in the dense cluster cores, and this again is a runaway process that couples star formation and cluster formation. Star-forming clouds also tend to develop hierarchical structures, and smaller groups of forming objects tend to merge into progressively larger ones, a generic feature of self-gravitating systems that is common to star formation and galaxy formation. Because of the large range of scales and the complex dynamics involved, analytic models cannot adequately describe many aspects of star formation, and detailed numerical simulations are needed to advance our understanding of the subject. 'The purpose of computing is insight, not numbers.' Richard W Hamming, in Numerical Methods for Scientists and Engineers (1962) 'There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.' William Shakespeare, in Hamlet, Prince of Denmark (1604)
Verification of Gyrokinetic codes: theoretical background and applications
NASA Astrophysics Data System (ADS)
Tronko, Natalia
2016-10-01
In fusion plasmas the strong magnetic field allows the fast gyro motion to be systematically removed from the description of the dynamics, resulting in a considerable model simplification and gain of computational time. Nowadays, the gyrokinetic (GK) codes play a major role in the understanding of the development and the saturation of turbulence and in the prediction of the consequent transport. We present a new and generic theoretical framework and specific numerical applications to test the validity and the domain of applicability of existing GK codes. For a sound verification process, the underlying theoretical GK model and the numerical scheme must be considered at the same time, which makes this approach pioneering. At the analytical level, the main novelty consists in using advanced mathematical tools such as variational formulation of dynamics for systematization of basic GK code's equations to access the limits of their applicability. The indirect verification of numerical scheme is proposed via the Benchmark process. In this work, specific examples of code verification are presented for two GK codes: the multi-species electromagnetic ORB5 (PIC), and the radially global version of GENE (Eulerian). The proposed methodology can be applied to any existing GK code. We establish a hierarchy of reduced GK Vlasov-Maxwell equations using the generic variational formulation. Then, we derive and include the models implemented in ORB5 and GENE inside this hierarchy. At the computational level, detailed verification of global electromagnetic test cases based on the CYCLONE are considered, including a parametric β-scan covering the transition between the ITG to KBM and the spectral properties at the nominal β value.
Secular resonances. [of asteroidal dynamics
NASA Technical Reports Server (NTRS)
Scholl, H.; Froeschle, CH.; Kinoshita, H.; Yoshikawa, M.; Williams, J. G.
1989-01-01
Theories and numerical experiments regarding secular resonances are reviewed. The basic dynamics and the positions of secular resonances are discussed, and secular perturbation theories for the nu16 resonance case, the nu6 resonance, and the nu5 resonance are addressed. What numerical experiments have revealed about asteroids located in secular resonances, the stability of secular resonances, variations of eccentricities and inclinations, and chaotic orbits is considered. Resonant transport of meteorites is discussed.
Numerical approach to optimal portfolio in a power utility regime-switching model
NASA Astrophysics Data System (ADS)
Gyulov, Tihomir B.; Koleva, Miglena N.; Vulkov, Lubin G.
2017-12-01
We consider a system of weakly coupled degenerate semi-linear parabolic equations of optimal portfolio in a regime-switching with power utility function, derived by A.R. Valdez and T. Vargiolu [14]. First, we discuss some basic properties of the solution of this system. Then, we develop and analyze implicit-explicit, flux limited finite difference schemes for the differential problem. Numerical experiments are discussed.
Numerical analysis of ion wind flow using space charge for optimal design
NASA Astrophysics Data System (ADS)
Ko, Han Seo; Shin, Dong Ho; Baek, Soo Hong
2014-11-01
Ion wind flow has been widely studied for its advantages as a micro fluidic device. However, it is very difficult to predict the performance of the ion wind flow for various conditions because of its complicated electrohydrodynamic phenomena. Thus, a reliable numerical model is required to design an optimal ion wind generator and calculate the velocity of the ion wind for the proper performance. In this study, the numerical modeling of the ion wind has been modified and newly defined to calculate the velocity of the ion wind flow by combining three basic models: electrostatics, electrodynamics and fluid dynamics. The model includes the presence of initial space charges to calculate the energy transfer between space charges and air gas molecules using a developed space charge correlation. The simulation has been performed for a pin-to-parallel-plate electrode geometry. Finally, the results of the simulation have been compared with the experimental data for the ion wind velocity to confirm the accuracy of the modified numerical modeling and to obtain the optimal design of the ion wind generator. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Korean government (MEST) (No. 2013R1A2A2A01068653).
NASA Astrophysics Data System (ADS)
Röbke, B. R.; Vött, A.
2017-12-01
With human activity increasingly concentrating on coasts, tsunamis (from Japanese tsu = harbour, nami = wave) are a major natural hazard to today's society. Stimulated by disastrous tsunami impacts in recent years, for instance in south-east Asia (2004) or in Japan (2011), tsunami science has significantly flourished, which has brought great advances in hazard assessment and mitigation plans. Based on tsunami research of the last decades, this paper provides a thorough treatise on the tsunami phenomenon from a geoscientific point of view. Starting with the wave features, tsunamis are introduced as long shallow water waves or wave trains crossing entire oceans without major energy loss. At the coast, tsunamis typically show wave shoaling, funnelling and resonance effects as well as a significant run-up and backflow. Tsunami waves are caused by a sudden displacement of the water column due to a number of various trigger mechanisms. Such are earthquakes as the main trigger, submarine and subaerial mass wastings, volcanic activity, atmospheric disturbances (meteotsunamis) and cosmic impacts, as is demonstrated by giving corresponding examples from the past. Tsunamis are known to have a significant sedimentary and geomorphological off- and onshore response. So-called tsunamites form allochthonous high-energy deposits that are left at the coast during tsunami landfall. Tsunami deposits show typical sedimentary features, as basal erosional unconformities, fining-upward and -landward, a high content of marine fossils, rip-up clasts from underlying units and mud caps, all reflecting the hydrodynamic processes during inundation. The on- and offshore behaviour of tsunamis and related sedimentary processes can be simulated using hydro- and morphodynamic numerical models. The paper provides an overview of the basic tsunami modelling techniques, including discretisation, guidelines for appropriate temporal and spatial resolution as well as the nesting method. 
Furthermore, the Boussinesq approximation-a simplification of the three-dimensional Navier-Stokes equations-is presented as a basic theory behind numerical tsunami models, which adequately reflects the non-linear, dispersive wave behaviour of tsunamis. The fully non-linear Boussinesq equations allow the simulation of tsunamis e.g. in the form of N-waves. Based on the various subtopics presented, recommendations for future multidisciplinary tsunami research are made. It is especially discussed how the combination of sedimentary and geomorphological tsunami field traces and numerical modelling techniques can contribute to derive locally relevant tsunami sources and to improve the assessment of tsunami hazards considering the individual pre-/history and physiogeographical setting of a specific region.
Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pfeiffer, M., E-mail: mpfeiffer@irs.uni-stuttgart.de; Nizenkov, P., E-mail: nizenkov@irs.uni-stuttgart.de; Mirza, A., E-mail: mirza@irs.uni-stuttgart.de
2016-02-15
Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the newly proposed multi-mode relaxation. Differences and application areas of these two methods are discussed. Consequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.
A Scenario-Based Process for Requirements Development: Application to Mission Operations Systems
NASA Technical Reports Server (NTRS)
Bindschadler, Duane L.; Boyles, Carole A.
2008-01-01
The notion of using operational scenarios as part of requirements development during mission formulation (Phases A & B) is widely accepted as good system engineering practice. In the context of developing a Mission Operations System (MOS), there are numerous practical challenges to translating that notion into the cost-effective development of a useful set of requirements. These challenges can include such issues as a lack of Project-level focus on operations issues, insufficient or improper flowdown of requirements, flowdown of immature or poor-quality requirements from Project level, and MOS resource constraints (personnel expertise and/or dollars). System engineering theory must be translated into a practice that provides enough structure and standards to serve as guidance, but that retains sufficient flexibility to be tailored to the needs and constraints of a particular MOS or Project. We describe a detailed, scenario-based process for requirements development. Identifying a set of attributes for high quality requirements, we show how the portions of the process address many of those attributes. We also find that the basic process steps are robust, and can be effective even in challenging Project environments.
A thermodynamic approach to obtain materials properties for engineering applications
NASA Technical Reports Server (NTRS)
Chang, Y. Austin
1993-01-01
With the ever-increasing capabilities of computers for numerical computation, we are on the verge of using these tools to model manufacturing processes, improving both the efficiency of these processes and the quality of the products. One such process is casting for the production of metals. However, in order to model metal casting processes in a meaningful way it is essential to have the basic properties of these materials in their molten state, solid state, as well as in the mixed state of solid and liquid. Some of the properties needed may be considered intrinsic, such as the density, heat capacity or enthalpy of freezing of a pure metal, while others are not. For instance, the enthalpy of solidification of an alloy is not a defined thermodynamic quantity. Its value depends on the micro-segregation of the phases during the course of solidification. The objective of the present study is to present a thermodynamic approach to obtain some of the intrinsic properties and to combine thermodynamics with kinetic models to estimate quantities such as the enthalpy of solidification of an alloy.
Membrane Fluidity Changes, A Basic Mechanism of Interaction of Gravity with Cells?
NASA Astrophysics Data System (ADS)
Kohn, Florian; Hauslage, Jens; Hanke, Wolfgang
2017-10-01
All life on earth has been established under conditions of stable gravity of 1 g. Nevertheless, in numerous experiments the direct gravity dependence of biological processes has been shown at all levels of organization, from single molecules to humans. Regarding the underlying mechanisms, a variety of questions remain open, especially about gravity sensation in single cells without specialized organelles or structures for gravity sensing. Biological cell membranes are complex structures containing mainly lipids and proteins. Functional aspects of such membranes are usually attributed to membrane integral proteins. This is also true for the gravity dependence of cells and organisms, which has long been well accepted for a wide range of biological systems. However, it is also well established that parameters of the lipid matrix directly modify the function of proteins. Thus, the question must be asked whether, and how far, plain lipid membranes are affected by gravity directly. In principle, until recently no real basic mechanism for gravity perception in single cells had been presented or verified. However, it has now been shown that a basic membrane parameter, membrane fluidity, is significantly dependent on gravity. This finding might provide a real basic mechanism for gravity perception of living organisms on all scales. In this review we summarize older and more recent results to demonstrate that the finding of gravity-dependent membrane fluidity is consistent with a variety of published laboratory experiments. We additionally point out the consequences of these recent results for research in the field of life science under space conditions.
Carrying Capacity and Colonization Dynamics of Curvibacter in the Hydra Host Habitat
Wein, Tanita; Dagan, Tal; Fraune, Sebastian; Bosch, Thomas C. G.; Reusch, Thorsten B. H.; Hülter, Nils F.
2018-01-01
Most eukaryotic species are colonized by a microbial community – the microbiota – that is acquired during early life stages and is critical to host development and health. Much research has focused on the microbiota biodiversity during the host life, however, empirical data on the basic ecological principles that govern microbiota assembly is lacking. Here we quantify the contribution of colonizer order, arrival time and colonization history to microbiota assembly on a host. We established the freshwater polyp Hydra vulgaris and its dominant colonizer Curvibacter as a model system that enables the visualization and quantification of colonizer population size at the single cell resolution, in vivo, in real time. We estimate the carrying capacity of a single Hydra polyp as 2 × 10⁵ Curvibacter cells, which is robust among individuals and time. Colonization experiments reveal a clear priority effect of first colonizers that depends on arrival time and colonization history. First arriving colonizers achieve a numerical advantage over secondary colonizers within a short time lag of 24 h. Furthermore, colonizers primed for the Hydra habitat achieve a numerical advantage in the absence of a time lag. These results follow the theoretical expectations for any bacterial habitat with a finite carrying capacity. Thus, Hydra colonization and succession processes are largely determined by the habitat occupancy over time and Curvibacter colonization history. Our experiments provide empirical data on the basic steps of host-associated microbiota establishment – the colonization stage. The presented approach supplies a framework for studying habitat characteristics and colonization dynamics within the host–microbe setting. PMID:29593687
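The priority effect described in this abstract follows from any habitat with a finite carrying capacity. A toy Lotka-Volterra sketch (all rates, population sizes and the time step are illustrative assumptions, not fitted to the Hydra data) reproduces the first-colonizer advantage from a 24 h head start:

```python
def colonize(K=2e5, r=0.5, n0=10.0, lag_h=24.0, t_end_h=200.0, dt=0.01):
    """Euler integration of two identical competitors sharing one carrying capacity:
    dn_i/dt = r * n_i * (1 - (n1 + n2) / K); the second colonizer arrives after lag_h."""
    n1, n2 = n0, 0.0
    t = 0.0
    while t < t_end_h:
        if n2 == 0.0 and t >= lag_h:
            n2 = n0  # second colonizer arrives with the same inoculum
        total = n1 + n2
        n1 += dt * r * n1 * (1.0 - total / K)
        n2 += dt * r * n2 * (1.0 - total / K)
        t += dt
    return n1, n2
```

With a 24 h lag the first colonizer has nearly filled the habitat before the second arrives, so the latecomer stays a small minority; with no lag the two identical competitors end up equal.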
NASA Astrophysics Data System (ADS)
Yang, L. M.; Shu, C.; Wang, Y.; Sun, Y.
2016-08-01
The sphere function-based gas kinetic scheme (GKS), which was presented by Shu and his coworkers [23] for simulation of inviscid compressible flows, is extended to simulate 3D viscous incompressible and compressible flows in this work. Firstly, we use certain discrete points to represent the spherical surface in the phase velocity space. Then, integrals along the spherical surface for conservation forms of moments, which are needed to recover 3D Navier-Stokes equations, are approximated by integral quadrature. The basic requirement is that these conservation forms of moments can be exactly satisfied by weighted summation of distribution functions at discrete points. It was found that the integral quadrature by eight discrete points on the spherical surface, which forms the D3Q8 discrete velocity model, can exactly match the integral. In this way, the conservative variables and numerical fluxes can be computed by weighted summation of distribution functions at eight discrete points. That is, the application of complicated formulations resultant from integrals can be replaced by a simple solution process. Several numerical examples including laminar flat plate boundary layer, 3D lid-driven cavity flow, steady flow through a 90° bending square duct, transonic flow around DPW-W1 wing and supersonic flow around NACA0012 airfoil are chosen to validate the proposed scheme. Numerical results demonstrate that the present scheme can provide reasonable numerical results for 3D viscous flows.
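The claim that an eight-point quadrature on the spherical surface can exactly match the required moment integrals is easy to check for the low-order moments. The sketch below is not the authors' code; it assumes the D3Q8 discrete velocities sit at the cube vertices scaled onto the unit sphere, with equal weights, and verifies ∫ dΩ = 4π and ∫ n_i n_j dΩ = (4π/3) δ_ij:

```python
from itertools import product
from math import pi, sqrt

def d3q8_points():
    """Eight unit vectors at the scaled cube vertices (+-1, +-1, +-1)/sqrt(3)."""
    s = 1.0 / sqrt(3.0)
    return [(sx * s, sy * s, sz * s) for sx, sy, sz in product((-1, 1), repeat=3)]

def moment(points, weight, f):
    """Weighted summation approximating the sphere integral of f(n)."""
    return sum(weight * f(n) for n in points)

points = d3q8_points()
w = 4.0 * pi / 8.0  # equal weights summing to the sphere area 4*pi

zeroth = moment(points, w, lambda n: 1.0)            # should equal 4*pi
second_xx = moment(points, w, lambda n: n[0] * n[0])  # should equal 4*pi/3
second_xy = moment(points, w, lambda n: n[0] * n[1])  # should vanish by symmetry
```

The diagonal second moment comes out exactly 4π/3 and the off-diagonal one exactly zero, which is the sense in which the conservation-form moments are "exactly satisfied by weighted summation" at the eight discrete points.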
Small Group Activities for Introductory Business Classes.
ERIC Educational Resources Information Center
Mundrake, George
1999-01-01
Describes numerous small-group activities for the following areas of basic business education: consumer credit, marketing, business organization, entrepreneurship, insurance, risk management, economics, personal finance, business careers, global markets, and government regulation. (SK)
Laboratory Experiments on Rotation of Micron Size Cosmic Dust Grains with Radiation
NASA Technical Reports Server (NTRS)
Abbas, M. M.; Craven, P. D.; Spann, J. F.; Tankosic, D.; LeClair, A.; Gallagher, D. L.; West, E. A.; Weingartner, J. C.; Witherow, W. K.; Tielens, A. G. G. M.
2004-01-01
The processes and mechanisms involved in the rotation and alignment of interstellar dust grains have been of great interest in astrophysics ever since the surprising discovery of the polarization of starlight more than half a century ago. Numerous theories, detailed mathematical models and numerical studies of grain rotation and alignment with respect to the Galactic magnetic field have been presented in the literature. In particular, the subject of grain rotation and alignment by radiative torques has been shown to be of particular interest in recent years. However, despite many investigations, a satisfactory theoretical understanding of the processes involved in grain rotation and alignment has not been achieved. As there appears to be no experimental data available on this subject, we have carried out some unique experiments to illuminate the processes involved in rotation of dust grains in the interstellar medium. In this paper we present the results of some preliminary laboratory experiments on the rotation of individual micron/submicron size nonspherical dust grains levitated in an electrodynamic balance evacuated to pressures of approximately 10(exp -3) to 10(exp -5) torr. The particles are illuminated by laser light at 5320 Angstroms, and the grain rotation rates are obtained by analyzing the low frequency (approximately 0-100 kHz) signal of the scattered light detected by a photodiode detector. The rotation rates are compared with simple theoretical models to retrieve some basic rotational parameters. The results are examined in the light of the current theories of alignment.
Numerical studies of interacting vortices
NASA Technical Reports Server (NTRS)
Liu, G. C.; Hsu, C. H.
1985-01-01
To get a basic understanding of the physics of flowfields modeled by vortex filaments with finite vortical cores, systematic numerical studies of the interactions of two dimensional vortices and pairs of coaxial axisymmetric circular vortex rings were made. Finite difference solutions of the unsteady incompressible Navier-Stokes equations were carried out using vorticity and stream function as primary variables. Special emphasis was placed on the formulation of appropriate boundary conditions necessary for the calculations in a finite computational domain. Numerical results illustrate the interaction of vortex filaments, demonstrate when and how they merge with each other, and establish the region of validity for an asymptotic analysis.
Improvement and speed optimization of numerical tsunami modelling program using OpenMP technology
NASA Astrophysics Data System (ADS)
Chernov, A.; Zaytsev, A.; Yalciner, A.; Kurkin, A.
2009-04-01
A basic problem of tsunami modeling is the low speed of the calculations, which is unacceptable for operational warning services. Existing algorithms for numerical modeling of the hydrodynamics of tsunami waves were developed without taking advantage of modern computing facilities, yet parallel algorithms offer considerable acceleration. We discuss here a new approach to parallelizing a tsunami modeling code using OpenMP technology (for multiprocessor systems with shared memory). Multiprocessor systems are now easily accessible to everyone, and the cost of using them is much lower than the cost of clusters, so programmers can apply multithreaded algorithms on ordinary desktop computers. Another important advantage of this approach is the shared-memory mechanism: there is no need to send data over slow networks (for example, Ethernet), since all memory is common to all computing processes, which yields almost linear scalability of the program. In the new version of NAMI DANCE, OpenMP multithreading provides an 80% gain in speed over the single-threaded version on a dual-processor unit, and a 320% gain on a four-core PC. It was thus possible to considerably reduce computation time on scientific workstations (desktops) without a complete rewrite of the program or its user interfaces. Further modernization of the algorithms for preparing initial data and processing results using OpenMP looks reasonable. The final version of NAMI DANCE with increased computational speed can be used not only for research purposes but also in real-time tsunami warning systems.
1988-11-01
rates.6 The Hammett equation, also called the Linear Free Energy Relationship (LFER) because of the relationship of the Gibbs free energy to the... equations for numerous biological and physicochemical properties. Linear Solvation Energy Relationships (LSER), a subset of QSAR, have been used by... originates from thermodynamics, where Hammett recognized the relationship of structure to the Gibbs free energy, and ultimately to equilibria and reaction
On the biomechanical analysis of the calories expended in a straight boxing jab.
Zohdi, T I
2017-04-01
Boxing and related sports activities have become a standard workout regime at many fitness studios worldwide. Oftentimes, people are interested in the calories expended during these workouts. This note focuses on determining the calories expended in a boxer's jab, using kinematic vector-loop relations and basic work-energy principles. Numerical simulations are undertaken to illustrate the basic model. Multi-limb extensions of the model are also discussed. © 2017 The Author(s).
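A much cruder estimate than the paper's kinematic vector-loop model — charging the muscles with the jab's kinetic energy — already gives the order of magnitude. All numbers below are illustrative assumptions, not values from the paper:

```python
# Back-of-envelope sketch (not the paper's kinematic vector-loop model):
# treat the jab as accelerating an effective arm mass to peak hand speed,
# and charge the muscles with that kinetic energy on each punch.
# Mass, speed, efficiency, and punch count are illustrative assumptions.
J_PER_KCAL = 4184.0

def jab_kcal(arm_mass_kg=3.5, hand_speed_ms=7.0, efficiency=0.25, punches=600):
    """Estimated calories expended over a workout of `punches` jabs."""
    ke_per_jab = 0.5 * arm_mass_kg * hand_speed_ms**2   # joules of useful work
    metabolic_per_jab = ke_per_jab / efficiency         # muscles ~25% efficient
    return punches * metabolic_per_jab / J_PER_KCAL

print(f"{jab_kcal():.1f} kcal")
```

Roughly 50 kcal for 600 jabs under these assumptions — small per punch, consistent with the note's interest in work-energy accounting rather than dramatic calorie claims.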
Conceptual Comparison of Population Based Metaheuristics for Engineering Problems
Adekanmbi, Oluwole; Green, Paul
2015-01-01
Metaheuristic algorithms are well-known optimization tools which have been employed for solving a wide range of optimization problems. Several extensions of differential evolution have been adopted in solving constrained and nonconstrained multiobjective optimization problems, but in this study, the third version of generalized differential evolution (GDE) is used for solving practical engineering problems. GDE3 metaheuristic modifies the selection process of the basic differential evolution and extends DE/rand/1/bin strategy in solving practical applications. The performance of the metaheuristic is investigated through engineering design optimization problems and the results are reported. The comparison of the numerical results with those of other metaheuristic techniques demonstrates the promising performance of the algorithm as a robust optimization tool for practical purposes. PMID:25874265
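GDE3 extends the DE/rand/1/bin strategy mentioned in the abstract. A minimal sketch of that base strategy on a toy sphere function follows (this is plain single-objective DE, not GDE3's constrained multiobjective selection; all parameter values are conventional defaults, not the study's settings):

```python
import numpy as np

# Minimal DE/rand/1/bin on a toy sphere function -- the base strategy that
# GDE3 extends; this is NOT GDE3's constrained multiobjective selection.
rng = np.random.default_rng(0)

def de_rand_1_bin(f, dim=5, pop_size=20, F=0.8, CR=0.9, gens=200):
    pop = rng.uniform(-5, 5, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     size=3, replace=False)]
            mutant = a + F * (b - c)                # DE/rand/1 mutation
            cross = rng.random(dim) < CR            # binomial crossover
            cross[rng.integers(dim)] = True         # force one gene from mutant
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                        # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()

best, val = de_rand_1_bin(lambda x: float(np.sum(x**2)))
print(val)
```

GDE3's modification replaces the greedy selection above with Pareto-dominance and constraint-domination rules so that a whole nondominated set, rather than a single best point, survives.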
Craig, Sandra
2011-01-01
Carbohydrates in various forms play a vital role in numerous critical biological processes. The detection of such saccharides can give insight into the progression of diseases such as cancer. Boronic acids react with 1,2- and 1,3-diols of saccharides in non-aqueous or basic aqueous media. Herein, we describe the design, synthesis and evaluation of three bisboronic acid fluorescent probes, each requiring about ten linear steps in its synthesis. Among the compounds evaluated, 9b was shown to selectively label HepG2, a liver carcinoma cell line, within a concentration range of 0.5–10 μM, in comparison to COS-7, a normal fibroblast cell line. PMID:22177855
NASA Astrophysics Data System (ADS)
Přibil, Jiří; Přibilová, Anna; Ďuračková, Daniela
2014-01-01
The paper describes our experiment with using the Gaussian mixture models (GMM) for classification of speech uttered by a person wearing orthodontic appliances. For the GMM classification, the input feature vectors comprise the basic and the complementary spectral properties as well as the supra-segmental parameters. Dependence of classification correctness on the number of the parameters in the input feature vector and on the computation complexity is also evaluated. In addition, an influence of the initial setting of the parameters for GMM training process was analyzed. Obtained recognition results are compared visually in the form of graphs as well as numerically in the form of tables and confusion matrices for tested sentences uttered using three configurations of orthodontic appliances.
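The classification step can be sketched with a drastically simplified stand-in: one diagonal Gaussian per class instead of a multi-component GMM, and synthetic vectors in place of the spectral and supra-segmental speech features (all data and class names below are hypothetical):

```python
import numpy as np

# Toy sketch of GMM-style classification (one Gaussian per class, diagonal
# covariance): fit each class on training feature vectors, then assign a test
# vector to the class with the highest log-likelihood. The real system uses
# multi-component GMMs over spectral and supra-segmental speech features.
rng = np.random.default_rng(1)

def fit(X):
    return X.mean(axis=0), X.var(axis=0) + 1e-6   # mean and variance per feature

def log_lik(x, mu, var):
    return float(np.sum(-0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)))

# synthetic "feature vectors" for two hypothetical appliance configurations
A = rng.normal(0.0, 1.0, (200, 4))
B = rng.normal(2.0, 1.0, (200, 4))
models = {"A": fit(A), "B": fit(B)}

def classify(x):
    return max(models, key=lambda k: log_lik(x, *models[k]))

print(classify(np.zeros(4)), classify(np.full(4, 2.0)))
```

The paper's experiments on feature-vector size correspond to varying the dimensionality of `x` here; classification correctness is then just the fraction of test vectors assigned to the correct model.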
Numerical Investigation of the Formation of a Convective Column and a Fire Tornado by Forest Fires
NASA Astrophysics Data System (ADS)
Grishin, A. M.; Matvienko, O. V.
2014-09-01
Computational modeling of the formation of a convective column by forest fires has been carried out. It has been established that, in the case of unstable atmospheric stratification, the basic factor influencing the thermal column formation is the intensification of turbulent mixing, whereas under stable atmospheric stratification the more significant factor determining the convective column formation is the action of the buoyancy force. It has been shown that a swirling flow in the convective column is formed due to the appearance of a tangential velocity component, a consequence of the local circulation arising against the background of large-scale motion owing to the thermal and orographic inhomogeneities of the underlying surface.
ADS's Dexter Data Extraction Applet
NASA Astrophysics Data System (ADS)
Demleitner, M.; Accomazzi, A.; Eichhorn, G.; Grant, C. S.; Kurtz, M. J.; Murray, S. S.
The NASA Astrophysics Data System (ADS) now holds 1.3 million scanned pages, containing numerous plots and figures for which the original data sets are lost or inaccessible. The availability of scans of the figures can significantly ease the regeneration of the data sets. For this purpose, the ADS has developed Dexter, a Java applet that supports the user in this process. Dexter's basic functionality is to let the user manually digitize a plot by marking points and defining the coordinate transformation from the logical to the physical coordinate system. Advanced features include automatic identification of axes, tracing lines and finding points matching a template. This contribution both describes the operation of Dexter from a user's point of view and discusses some of the architectural issues we faced during implementation.
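The heart of manual digitization is the pixel-to-data coordinate transformation defined by user-marked axis reference points. A sketch for linear axes follows (the calibration numbers are hypothetical; Dexter itself goes further, e.g. automatic axis identification and logarithmic axes):

```python
# Sketch of the core of manual plot digitization (as in Dexter): given two
# marked reference points per axis with known data values, map any clicked
# pixel to data coordinates. Linear axes are assumed here.
def axis_map(p0, p1, v0, v1):
    """Return a function mapping a pixel coordinate to a data value."""
    scale = (v1 - v0) / (p1 - p0)
    return lambda p: v0 + (p - p0) * scale

# hypothetical calibration: x-axis pixels 50->450 span data 0->10,
# y-axis pixels 400->40 span data 0->100 (pixel y grows downward)
to_x = axis_map(50, 450, 0.0, 10.0)
to_y = axis_map(400, 40, 0.0, 100.0)

print(to_x(250), to_y(220))   # pixel (250, 220) -> data point
```

Note that the y-axis mapping absorbs the screen convention of downward-growing pixel coordinates automatically, since the two reference points fix both sign and scale.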
Epidemic spread on interconnected metapopulation networks
NASA Astrophysics Data System (ADS)
Wang, Bing; Tanaka, Gouhei; Suzuki, Hideyuki; Aihara, Kazuyuki
2014-09-01
Numerous real-world networks have been observed to interact with each other, resulting in interconnected networks that exhibit diverse, nontrivial behavior with dynamical processes. Here we investigate epidemic spreading on interconnected networks at the level of metapopulation. Through a mean-field approximation for a metapopulation model, we find that both the interaction network topology and the mobility probabilities between subnetworks jointly influence the epidemic spread. Depending on the interaction between subnetworks, proper controls of mobility can efficiently mitigate epidemics, whereas an extremely biased mobility to one subnetwork will typically cause a severe outbreak and promote the epidemic spreading. Our analysis provides a basic framework for better understanding of epidemic behavior in related transportation systems as well as for better control of epidemics by guiding human mobility patterns.
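The effect of biased mobility can be illustrated with a two-patch SIS sketch under density-dependent contacts. This is not the paper's full metapopulation model, and all rates are hypothetical:

```python
# Illustrative two-subnetwork SIS sketch (not the paper's metapopulation
# model): individuals move between patches 1 and 2 with per-unit-time rates
# p12, p21; contacts are density-dependent (crowding raises transmission).
def simulate(p12, p21, beta=0.3, gamma=0.1, K=5000.0, dt=0.01, steps=50_000):
    """Mean-field Euler integration; returns the overall endemic prevalence."""
    N1 = N2 = 5000.0
    I1, I2 = 10.0, 0.0
    for _ in range(steps):
        dI1 = beta * I1 * (N1 - I1) / K - gamma * I1
        dI2 = beta * I2 * (N2 - I2) / K - gamma * I2
        mN = p21 * N2 - p12 * N1      # net flow of individuals into patch 1
        mI = p21 * I2 - p12 * I1      # net flow of infected into patch 1
        I1 += dt * (dI1 + mI)
        I2 += dt * (dI2 - mI)
        N1 += dt * mN
        N2 -= dt * mN
    return (I1 + I2) / (N1 + N2)

balanced = simulate(0.05, 0.05)
biased = simulate(0.20, 0.01)   # mobility strongly biased toward patch 2
print(balanced, biased)
```

Balanced mobility leaves the endemic prevalence at the single-patch value 1 - gamma/beta, while strongly biased mobility crowds one patch and raises the overall prevalence — the "severe outbreak" regime the abstract describes.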
NASA Astrophysics Data System (ADS)
Prasad, D. V. V. Krishna; Chaitanya, G. S. Krishna; Raju, R. Srinivasa
2018-05-01
The effect of Casson fluid behavior on MHD free convective flow over an impulsively started, infinite, vertically inclined plate in the presence of thermal diffusion (Soret), thermal radiation, and heat and mass transfer effects is studied. The basic governing nonlinear coupled partial differential equations are solved numerically using the finite element method. The relevant physical parameters appearing in the velocity, temperature and concentration profiles are analyzed and discussed through graphs. Finally, the results for the velocity profiles and the reduced Nusselt and Sherwood numbers are obtained and compared with previous results in the literature, and are found to be in excellent agreement. The present study would be useful in magnetic material processing and chemical engineering systems.
BOREAS AFM-5 Level-2 Upper Air Network Standard Pressure Level Data
NASA Technical Reports Server (NTRS)
Barr, Alan; Hrynkiw, Charmaine; Hall, Forrest G. (Editor); Newcomer, Jeffrey A. (Editor); Smith, David E. (Technical Monitor)
2000-01-01
The BOREAS AFM-5 team collected and processed data from the numerous radiosonde flights during the project. The goals of the AFM-05 team were to provide large-scale definition of the atmosphere by supplementing the existing AES aerological network, both temporally and spatially. This data set includes basic upper-air parameters interpolated at 0.5 kiloPascal increments of atmospheric pressure from data collected from the network of upper-air stations during the 1993, 1994, and 1996 field campaigns over the entire study region. The data are contained in tabular ASCII files. The data files are available on a CD-ROM (see document number 20010000884) or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).
Production, depreciation and the size distribution of firms
NASA Astrophysics Data System (ADS)
Ma, Qi; Chen, Yongwang; Tong, Hui; Di, Zengru
2008-05-01
Many empirical studies indicate that firm size distributions in different industries or countries exhibit similar characteristics. Among them, the fact that many firm size distributions obey a power law, especially in the upper tail, has been most widely discussed. Here we present an agent-based model to describe the evolution of manufacturing firms. Some basic economic behaviors are taken into account: production with decreasing marginal returns, preferential allocation of investments, and stochastic depreciation. The model gives a steady size distribution of firms that obeys a power law. The effect of the parameters on the power exponent is analyzed. Theoretical results are given based on both the Fokker-Planck equation and the Kesten process; they agree well with the numerical results.
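The Kesten mechanism invoked for the theoretical results is easy to reproduce: multiplicative stochastic growth plus an additive term, s' = a·s + b, generates a heavy, power-law-like upper tail whenever E[log a] < 0 but a occasionally exceeds 1. The parameters below are illustrative, not fitted to the paper:

```python
import numpy as np

# Sketch of the Kesten-process mechanism the authors invoke: multiplicative
# shocks plus an additive term produce a heavy upper tail in firm sizes.
# All parameter values are illustrative assumptions.
rng = np.random.default_rng(42)

n_firms, steps = 20_000, 500
s = np.ones(n_firms)
for _ in range(steps):
    a = rng.lognormal(mean=-0.02, sigma=0.2, size=n_firms)  # E[log a] < 0
    b = rng.uniform(0.0, 0.1, size=n_firms)                 # additive "entry" term
    s = a * s + b

# heavy upper tail: the largest firms dwarf the median firm
print(np.median(s), s.max(), s.max() / np.median(s))
```

The additive term keeps sizes from collapsing to zero, and the stationary tail exponent is set by the moment condition E[a^mu] = 1 — the same structure the paper analyzes via the Fokker-Planck equation.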
Computer aided fixture design - A case based approach
NASA Astrophysics Data System (ADS)
Tanji, Shekhar; Raiker, Saiesh; Mathew, Arun Tom
2017-11-01
Automated fixture design plays an important role in process planning and in the integration of CAD and CAM. An automated fixture setup design system is developed in which, once fixturing surfaces and points are described, modular fixture components are automatically selected to generate fixture units and are placed into position with the assembly conditions satisfied. In the past, various knowledge-based systems have been developed to implement CAFD in practice. In this paper, to obtain an acceptable automated machining fixture design, a case-based reasoning method with a developed retrieval system is proposed. The Visual Basic (VB) programming language is used, integrated with the SolidWorks API (Application Programming Interface) module, for a better retrieval procedure with reduced computational time. These properties are incorporated in a numerical simulation to determine the best fit for practical use.
Efficient transfer of an arbitrary qutrit state in circuit quantum electrodynamics.
Liu, Tong; Xiong, Shao-Jie; Cao, Xiao-Zhi; Su, Qi-Ping; Yang, Chui-Ping
2015-12-01
Compared with a qubit, a qutrit (i.e., a three-level quantum system) has a larger Hilbert space and thus can be used to encode more information in quantum information processing and communication. Here, we propose a method to transfer an arbitrary quantum state between two flux qutrits coupled to two resonators. This scheme is simple because it requires only two basic operations. The state-transfer operation can be performed quickly because only resonant interactions are used. Numerical simulations show that high-fidelity transfer of quantum states between the two qutrits is feasible with current circuit-QED technology. This scheme is quite general and can be applied to accomplish the same task for other solid-state qutrits coupled to resonators.
Regulation of autophagy by amino acid availability in S. cerevisiae and mammalian cells.
Abeliovich, Hagai
2015-10-01
Autophagy is a catabolic membrane-trafficking process that occurs in all eukaryotic organisms analyzed to date. The study of autophagy has exploded over the last decade or so, branching into numerous aspects of cellular and organismal physiology. From basic functions in starvation and quality control, autophagy has expanded into innate immunity, aging, neurological diseases, redox regulation, and ciliogenesis, to name a few roles. In the present review, I would like to narrow the discussion to the more classical roles of autophagy in supporting viability under nutrient limitation. My aim is to provide a semblance of a historical overview, together with a concise, and perhaps subjective, mechanistic and functional analysis of the central questions in the autophagy field.
Diffusion in randomly perturbed dissipative dynamics
NASA Astrophysics Data System (ADS)
Rodrigues, Christian S.; Chechkin, Aleksei V.; de Moura, Alessandro P. S.; Grebogi, Celso; Klages, Rainer
2014-11-01
Dynamical systems having many coexisting attractors present interesting properties from both fundamental theoretical and modelling points of view. When such dynamics is under bounded random perturbations, the basins of attraction are no longer invariant and there is the possibility of transport among them. Here we introduce a basic theoretical setting which enables us to study this hopping process from the perspective of anomalous transport using the concept of a random dynamical system with holes. We apply it to a simple model by investigating the role of hyperbolicity for the transport among basins. We show numerically that our system exhibits non-Gaussian position distributions, power-law escape times, and subdiffusion. Our simulation results are reproduced consistently from stochastic continuous time random walk theory.
Single-digit arithmetic processing—anatomical evidence from statistical voxel-based lesion analysis
Mihulowicz, Urszula; Willmes, Klaus; Karnath, Hans-Otto; Klein, Elise
2014-01-01
Different specific mechanisms have been suggested for solving single-digit arithmetic operations. However, the neural correlates underlying basic arithmetic (multiplication, addition, subtraction) are still under debate. In the present study, we systematically assessed single-digit arithmetic in a group of acute stroke patients (n = 45) with circumscribed left- or right-hemispheric brain lesions. Lesion sites significantly related to impaired performance were found only in the left-hemisphere damaged (LHD) group. Deficits in multiplication and addition were related to subcortical/white matter brain regions differing from those for subtraction tasks, corroborating the notion of distinct processing pathways for different arithmetic tasks. Additionally, our results further point to the importance of investigating fiber pathways in numerical cognition. PMID:24847238
Smart Cameras for Remote Science Survey
NASA Technical Reports Server (NTRS)
Thompson, David R.; Abbey, William; Allwood, Abigail; Bekker, Dmitriy; Bornstein, Benjamin; Cabrol, Nathalie A.; Castano, Rebecca; Estlin, Tara; Fuchs, Thomas; Wagstaff, Kiri L.
2012-01-01
Communication with remote exploration spacecraft is often intermittent and bandwidth is highly constrained. Future missions could use onboard science data understanding to prioritize downlink of critical features [1], draft summary maps of visited terrain [2], or identify targets of opportunity for followup measurements [3]. We describe a generic approach to classify geologic surfaces for autonomous science operations, suitable for parallelized implementations in FPGA hardware. We map these surfaces with texture channels - distinctive numerical signatures that differentiate properties such as roughness, pavement coatings, regolith characteristics, sedimentary fabrics and differential outcrop weathering. This work describes our basic image analysis approach and reports an initial performance evaluation using surface images from the Mars Exploration Rovers. Future work will incorporate these methods into camera hardware for real-time processing.
Rumor spreading model with the different attitudes towards rumors
NASA Astrophysics Data System (ADS)
Hu, Yuhan; Pan, Qiuhui; Hou, Wenbing; He, Mingfeng
2018-07-01
Rumor spreading has a profound influence on people's well-being and social stability, and many factors influence it. In this paper, we introduce the assumption that among the common mass there are three attitudes towards rumors: liking to spread them, disliking to spread them, and being hesitant (or neutral) about spreading them. Based on this assumption, a Susceptible-Hesitating-Affected-Resistant (SHAR) model is established, which considers individuals' different attitudes towards rumor spreading. We also analyze the local and global stability of the rumor-free and rumor-existence equilibria, and calculate the basic reproduction number of the model. With numerical simulations, we illustrate the effect of parameter changes on rumor spreading and analyze the parameter sensitivity of the model. The theoretical analysis and the numerical simulations support the conclusions of this study. People with different attitudes towards rumors may play different roles in the process of rumor spreading. Surprisingly, we find that people who hesitate to spread rumors have a positive effect on the spread of rumors.
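One plausible reading of the SHAR compartments can be written down as ODEs. This is a hedged sketch, not the paper's exact equations — the transition structure and all rates below are hypothetical:

```python
# Plausible compartmental sketch of an S-H-A-R rumor model (NOT the paper's
# exact equations): contact with an Affected spreader moves Susceptibles to
# Hesitating; hesitators either start spreading (rate alpha, "like spreading")
# or refuse (rate theta, "dislike"); spreaders stop at rate gamma.
def shar(beta=0.5, alpha=0.3, theta=0.2, gamma=0.2, dt=0.01, steps=20_000):
    S, H, A, R = 0.999, 0.0, 0.001, 0.0
    peak_A = A
    for _ in range(steps):
        dS = -beta * S * A
        dH = beta * S * A - (alpha + theta) * H
        dA = alpha * H - gamma * A
        dR = theta * H + gamma * A
        S += dt * dS; H += dt * dH; A += dt * dA; R += dt * dR
        peak_A = max(peak_A, A)
    return peak_A, R

# for this sketch, R0 = beta*alpha / (gamma*(alpha+theta)): each spreader
# creates beta/gamma hesitators, a fraction alpha/(alpha+theta) of whom spread
r0 = 0.5 * 0.3 / (0.2 * (0.3 + 0.2))
peak, final_R = shar()
print(f"R0={r0:.2f}, peak spreaders={peak:.3f}, final resistant={final_R:.3f}")
```

With R0 > 1 the rumor takes off and a substantial fraction of the population ends in the Resistant class; raising theta (the "dislike" rate) lowers R0, which is one way a hesitating class can shape the outbreak.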
NASA Astrophysics Data System (ADS)
Wieczorek, Piotr; Ligor, Magdalena; Buszewski, Bogusław
Electromigration techniques, including capillary electrophoresis (CE), are widely used for the separation and identification of compounds present in food products. These techniques may also be considered alternatives and complements to commonly used analytical techniques such as high-performance liquid chromatography (HPLC) or gas chromatography (GC). Applications of CE to the determination of high-molecular-weight compounds — polyphenols (including flavonoids), pigments, vitamins, and food additives (preservatives, antioxidants, sweeteners, artificial pigments) — are presented. Methods developed for the determination of proteins and peptides composed of amino acids, which are basic components of food products, are also discussed, as are other substances such as carbohydrates, nucleic acids, biogenic amines, natural toxins, and contaminants including pesticides and antibiotics. The possibility of applying CE in food control laboratories, where analyses of the composition of food and food products are conducted, is of great importance. The CE technique may be used in the control of technological processes in the food industry and for the identification of numerous compounds present in food. Owing to its numerous advantages, the CE technique is successfully used in routine food analysis.
NASA Astrophysics Data System (ADS)
Lacaze, Guilhem; Oefelein, Joseph
2016-11-01
High-pressure flows are known to be challenging to simulate due to thermodynamic non-linearities occurring in the vicinity of the pseudo-boiling line. This study investigates the origin of this issue by analyzing the behavior of thermodynamic processes at elevated pressure and low temperature. We show that under transcritical conditions, non-linearities significantly amplify numerical errors associated with construction of fluxes. These errors affect the local density and energy balances, which in turn creates pressure oscillations. For that reason, solvers based on a conservative system of equations that transport density and total energy are subject to unphysical pressure variations in gradient regions. These perturbations hinder numerical stability and degrade the accuracy of predictions. To circumvent this problem, the governing system can be reformulated to a pressure-based treatment of energy. We present comparisons between the pressure-based and fully conservative formulations using a progressive set of canonical cases, including a cryogenic turbulent mixing layer at rocket engine conditions. Department of Energy, Office of Science, Basic Energy Sciences Program.
The Role of Computer Simulation in Nanoporous Metals—A Review
Xia, Re; Wu, Run Ni; Liu, Yi Lun; Sun, Xiao Yu
2015-01-01
Nanoporous metals (NPMs) have proven to be all-round candidates in versatile and diverse applications. In this decade, interest has grown in the fabrication, characterization and applications of these intriguing materials. Most existing reviews focus on the experimental and theoretical works rather than on numerical simulation. In fact, alongside numerous experiments and theoretical analyses, studies based on computer simulation, which can model complex microstructures in more realistic ways, play a key role in understanding and predicting the behaviors of NPMs. In this review, we present a comprehensive overview of computer simulations of NPMs prepared through chemical dealloying. Firstly, we summarize the various simulation approaches to the preparation, processing, and basic physical and chemical properties of NPMs; in this part, emphasis is placed on works involving dealloying, coarsening and mechanical properties. Then, we conclude with the latest progress as well as the future challenges in simulation studies. We believe that highlighting the importance of simulations will help to better understand the properties of novel materials and support new scientific research on these materials. PMID:28793491
DNS of Laminar-Turbulent Transition in Swept-Wing Boundary Layers
NASA Technical Reports Server (NTRS)
Duan, L.; Choudhari, M.; Li, F.
2014-01-01
Direct numerical simulation (DNS) is performed to examine laminar to turbulent transition due to high-frequency secondary instability of stationary crossflow vortices in a subsonic swept-wing boundary layer for a realistic natural-laminar-flow airfoil configuration. The secondary instability is introduced via inflow forcing and the mode selected for forcing corresponds to the most amplified secondary instability mode that, in this case, derives a majority of its growth from energy production mechanisms associated with the wall-normal shear of the stationary basic state. An inlet boundary condition is carefully designed to allow for accurate injection of instability wave modes and minimize acoustic reflections at numerical boundaries. Nonlinear parabolized stability equation (PSE) predictions compare well with the DNS in terms of modal amplitudes and modal shape during the strongly nonlinear phase of the secondary instability mode. During the transition process, the skin friction coefficient rises rather rapidly and the wall-shear distribution shows a sawtooth pattern that is analogous to the previously documented surface flow visualizations of transition due to stationary crossflow instability. Fully turbulent features are observed in the downstream region of the flow.
Numerical Studies of Boundary-Layer Receptivity
NASA Technical Reports Server (NTRS)
Reed, Helen L.
1995-01-01
Direct numerical simulations (DNS) of the acoustic receptivity process on a semi-infinite flat plate with a modified-super-elliptic (MSE) leading edge are performed. The incompressible Navier-Stokes equations are solved in stream-function/vorticity form in a general curvilinear coordinate system. The steady basic-state solution is found by solving the governing equations using an alternating direction implicit (ADI) procedure which takes advantage of the parallelism present in line-splitting techniques. Time-harmonic oscillations of the farfield velocity are applied as unsteady boundary conditions to the unsteady disturbance equations. An efficient time-harmonic scheme is used to produce the disturbance solutions. Buffer-zone techniques have been applied to eliminate wave reflection from the outflow boundary. The spatial evolution of Tollmien-Schlichting (T-S) waves is analyzed and compared with experiment and theory. The effects of nose-radius, frequency, Reynolds number, angle of attack, and amplitude of the acoustic wave are investigated. This work is being performed in conjunction with the experiments at the Arizona State University Unsteady Wind Tunnel under the direction of Professor William Saric. The simulations are of the same configuration and parameters used in the wind-tunnel experiments.
A deterministic Lagrangian particle separation-based method for advective-diffusion problems
NASA Astrophysics Data System (ADS)
Wong, Ken T. M.; Lee, Joseph H. W.; Choi, K. W.
2008-12-01
A simple and robust Lagrangian particle scheme is proposed to solve the advective-diffusion transport problem. The scheme is based on relative diffusion concepts and simulates diffusion by regulating particle separation. This new approach generates a deterministic result and requires far fewer particles than the random walk method. For the advection process, particles are simply moved according to their velocity. The general scheme is mass conservative and is free from numerical diffusion. It can be applied to a wide variety of advective-diffusion problems, but is particularly suited for ecological and water quality modelling where the definition of particle attributes (e.g., cell status for modelling algal blooms or red tides) is a necessity. The basic derivation, numerical stability and practical implementation of the NEighborhood Separation Technique (NEST) are presented. The accuracy of the method is demonstrated through a series of test cases which embrace realistic features of coastal environmental transport problems. Two field application examples on the tidal flushing of a fish farm and the dynamics of vertically migrating marine algae are also presented.
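The separation-regulation idea can be illustrated with a toy 1D sketch (our own construction, not the NEST algorithm itself): particles are advected with the flow velocity, and diffusion is emulated deterministically by rescaling particle positions about the cloud centroid so that the variance grows at the exact Fickian rate 2*D*dt per step.

```python
import numpy as np

def separation_step(x, u, D, dt):
    """Toy 1D analogue of a separation-based diffusion scheme
    (illustrative only, not the NEST algorithm itself):
    advect particles with velocity u, then rescale separations
    about the centroid so the cloud variance grows by 2*D*dt,
    the exact spreading rate of 1D Fickian diffusion."""
    x = x + u * dt                      # deterministic advection
    c = x.mean()                        # centroid, unchanged by diffusion
    v = x.var()
    return c + np.sqrt((v + 2.0 * D * dt) / v) * (x - c)

rng = np.random.default_rng(0)
x = np.sort(rng.normal(0.0, 1.0, 2000))  # initial Gaussian patch of tracer particles
u, D, dt, nsteps = 0.5, 0.1, 0.01, 500
m0, v0 = x.mean(), x.var()
for _ in range(nsteps):
    x = separation_step(x, u, D, dt)
```

Unlike a random walk, the result is reproducible and the mean and variance evolve exactly for any particle count; NEST itself regulates local neighbor separations rather than applying a global rescaling.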
Division of Energy Biosciences annual report and summaries of FY 1996 activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-04-01
The mission of the Division of Energy Biosciences is to support research that advances the fundamental knowledge necessary for the future development of biotechnologies related to the Department of Energy's mission. The departmental civilian objectives include effective and efficient energy production, energy conservation, environmental restoration, and waste management. The Energy Biosciences program emphasizes research in the microbiological and plant sciences, as these understudied areas offer numerous scientific opportunities to dramatically influence environmentally sensible energy production and conservation. The research supported is focused on the basic mechanisms affecting plant productivity, conversion of biomass and other organic materials into fuels and chemicals by microbial systems, and the ability of biological systems to replace energy-intensive or pollutant-producing processes. The Division also addresses the increasing number of new opportunities arising at the interface of biology with other basic energy-related sciences, such as biosynthesis of novel materials and the influence of soil organisms on geological processes. This report gives summaries of 225 projects on photosynthesis, membrane or ion transport, plant metabolism and biosynthesis, carbohydrate metabolism, lipid metabolism, plant growth and development, plant genetic regulation and genetic mechanisms, plant cell wall development, lignin-polysaccharide breakdown, nitrogen fixation and plant-microbial symbiosis, mechanisms of plant adaptation, fermentative microbial metabolism, one- and two-carbon microbial metabolism, extremophilic microbes, microbial respiration, nutrition and metal metabolism, and materials biosynthesis.
Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy
NASA Astrophysics Data System (ADS)
Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li
2018-03-01
In dendritic growth simulation, computational efficiency and problem scale have an extremely important influence on the simulation efficiency of a three-dimensional phase-field model. Thus, seeking a high-performance calculation method to improve computational efficiency and expand the problem scale is of great significance to research on the microstructure of materials. A high-performance calculation method based on the MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of a three-dimensional phase-field model of a binary alloy under the condition of coupled multi-physical processes. The acceleration effect of different numbers of GPU nodes on different calculation scales is explored. On the foundation of this multi-GPU calculation model, two optimization schemes are proposed: non-blocking communication optimization and overlap of MPI communication with GPU computing. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the multi-GPU calculation model clearly improves the computational efficiency of the three-dimensional phase-field model, achieving a 13-fold speedup over a single GPU, and the problem scale has been expanded to 8193. Both optimization schemes are shown to be feasible, and the overlap of MPI communication with GPU computing performs better, running 1.7 times faster than the basic multi-GPU model when 21 GPUs are used.
Numerical Error Estimation with UQ
NASA Astrophysics Data System (ADS)
Ackmann, Jan; Korn, Peter; Marotzke, Jochem
2014-05-01
Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method these local model errors are not considered deterministically but interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence only of limited use in a numerical ocean model. Our work consists in extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information is dependent on the viscosity parameter, making our uncertainty measures viscosity-dependent. 
We will show that a sensible parameter can be chosen by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process. This is especially important in the presence of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References [1] F. RAUSER: Error Estimation in Geophysical Fluid Dynamics through Learning; PhD Thesis, IMPRS-ESM, Hamburg, 2010 [2] F. RAUSER, J. MAROTZKE, P. KORN: Ensemble-type numerical uncertainty quantification from single model integrations; SIAM/ASA Journal on Uncertainty Quantification, submitted
NASA Astrophysics Data System (ADS)
Wójtowicz-Wróbel, Agnieszka
2017-10-01
The goal of this paper is to answer the question of what importance structures associated with the thermal processing of waste currently have within the space of Polish cities, and what status they can have in the functional and spatial structure of Polish cities in the future. The construction of thermal waste processing plants in Poland is currently a new and important problem, with numerous structures of this type being built owing to increasing care for the natural environment, the introduction of legal regulations, and the possibility of obtaining large external funding for pro-environmental spatial initiatives. For this reason, the paper contains research on the increase in the number of thermal waste processing plants in Poland in recent years. This data was compared with similar information from other European Union member states. For the group of Polish thermal waste processing plants, research was performed regarding the stage of construction of each plant (operating plant, plant under construction, design in a construction phase, etc.). The paper also lists the functions, other than the basic form of use, which is the incineration of waste, that the environmentally friendly waste incineration plants fulfil in Poland, similarly to numerous foreign examples. The additional forms of use are divided into "hard" elements (at the design level, requiring the expansion of a building with new elements that are not directly associated with the basic purpose of waste processing), "soft" elements (social, educational, and promotional actions, as well as other endeavours that require human involvement but do not entail significant design work on the building itself or expand its form of use), and mixed activity, which required design work, but on a relatively small scale.
Research was also conducted regarding the placement of thermal waste processing plants within the spatial structures of cities (a city’s outer zone, central zone, etc.) and their placement in relation to the more important urban units, in addition to specifying what type of urban structure they are located in. On the basis of the research, we can observe that the construction of environmentally friendly thermal waste processing plants is a valid and new problem in Poland, and the potential that lies in the construction of a new environmentally friendly structure and the possibility of using it to improve the quality of an urban space is often left untapped, bringing the construction of such a structure down to nothing but its technological function. The research can serve as a comparative study for similar experiences in other countries, or for studies related to urban structures and their elements.
Idealized numerical modeling of polar mesocyclones dynamics diagnosed by energy budget
NASA Astrophysics Data System (ADS)
Sergeev, Dennis; Stepanenko, Victor
2014-05-01
Polar mesocyclones (MC) refer to a wide class of mesoscale vortices occurring poleward of the main polar front [1]. Their subtype - the polar low - is commonly known for its intensity, which can result in windstorm damage to infrastructure in high latitudes. The sparsity of observational data and the small size of polar MCs are major limitations for the clear understanding and numerical prediction of the evolution of these objects. The origin of polar MCs is still a matter of uncertainty, though recent numerical investigations have exposed a strong dependence of polar mesocyclone development upon the magnitude of baroclinicity and upon the water vapor concentration in the atmosphere. However, most of the previous studies focused on an individual polar low (the so-called case studies), with too many factors affecting it simultaneously and none of them being dominant in polar MC generation. This study focuses on the early stages of polar MC development in idealized numerical experiments with a mesoscale atmospheric model, where it is possible to look deeper into each single physical process. Our aim is to explain the role of mechanisms such as baroclinic instability and diabatic heating by comparing their contributions to the structure and dynamics of the vortex. Baroclinic instability, as reported by many researchers [2], can be a crucial factor in an MC's life cycle, especially in polar regions. Besides baroclinic instability, several diabatic processes can contribute to the energy generation that fuels a polar mesocyclone. One of the key energy sources in polar regions is surface heat fluxes. The other is the moisture content in the atmosphere, which can affect the development of the disturbance by altering the latent heat release. To evaluate the relative importance of the diabatic and baroclinic energy sources for the development of the polar mesocyclone we apply energy diagnostics.
In other words, we examine the rate of change of the kinetic energy (which can be interpreted as the growth rate of the vortex) and energy conversion in the diagnostic equations for kinetic and available potential energy (APE). The energy budget equations are implemented in two forms. The first approach follows the scheme developed by Lorenz (1955), in which KE and APE are broken into a mean component and an eddy component, forming a well-known energy cycle. The second method is based on energy equations that are strictly derived from the governing equations of the numerical mesoscale model used. The latter approach, hence, takes into account all the approximations and numerical features used in the model. Some conclusions based on the comparison of the described methods are presented in the study. A series of high-resolution experiments is carried out using the three-dimensional non-hydrostatic limited-area sigma-coordinate numerical model ReMeDy (Research Mesoscale Dynamics), being developed at Lomonosov Moscow State University [3]. An idealized basic state condition is used for all simulations. It is composed of a zonally oriented baroclinic zone over a sea surface partly covered with ice. To realize a baroclinic channel environment, zero-gradient boundary conditions are imposed at the meridional lateral boundaries, while the zonal boundary conditions are periodic. The initialization of the mesocyclone is achieved by creating a small axisymmetric vortex in the center of the model domain. The baroclinicity and stratification of the basic state, as well as the surface parameters, are varied in the typically observed range. References 1. Heinemann G, Øyvind S. 2013. Workshop On Polar Lows. Bull. Amer. Meteor. Soc. 94: ES123-ES126. 2. Yanase W, Niino H. 2006. Dependence of Polar Low Development on Baroclinicity and Physical Processes: An Idealized High-Resolution Experiment, J. Atmos. Sci. 64: 3044-3067. 3. Chechin DG et al. 2013.
Idealized dry quasi 2-D mesoscale simulations of cold-air outbreaks over the marginal sea ice zone with fine and coarse resolution. J. Geophys. Res. 118: 8787-8813.
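The Lorenz-style partition of kinetic energy into mean and eddy components described above can be sketched in a few lines (a generic illustration on a synthetic zonal wind field, with unit density and a plain zonal average assumed; the field and numbers are invented):

```python
import numpy as np

# Synthetic zonal wind field u(y, x): a jet plus wavelike eddies
ny, nx = 64, 128
y = np.linspace(-1.0, 1.0, ny)[:, None]
x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)[None, :]
u = 10.0 * np.exp(-(y / 0.3) ** 2) + 2.0 * np.sin(3 * x) * np.cos(np.pi * y / 2)

ubar = u.mean(axis=1, keepdims=True)   # zonal-mean component
uprime = u - ubar                      # eddy component (its zonal mean is zero)

ke_total = 0.5 * (u ** 2).mean()       # KE per unit mass, unit density assumed
ke_mean = 0.5 * (ubar ** 2).mean()     # mean-flow kinetic energy
ke_eddy = 0.5 * (uprime ** 2).mean()   # eddy kinetic energy
```

Because the zonal mean of the eddy component vanishes, the cross term drops out and the total KE splits exactly into mean and eddy parts, which is what makes the Lorenz energy cycle a clean budget.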
Mechanisms of Soil Carbon Sequestration
NASA Astrophysics Data System (ADS)
Lal, Rattan
2015-04-01
Carbon (C) sequestration in soil is one of several strategies for reducing the net emission of CO2 into the atmosphere. Of the two components, soil organic C (SOC) and soil inorganic C (SIC), SOC is an important control of edaphic properties and processes. In addition to off-setting part of the anthropogenic emissions, enhancing SOC concentration above the threshold level (~1.5-2.0%) in the root zone has numerous ancillary benefits including food and nutritional security, biodiversity, and water quality, among others. Because of its critical importance to human wellbeing and nature conservancy, scientific processes must be sufficiently understood with regard to: i) the potential, attainable, and actual sink capacity of SOC and SIC, ii) permanence of the C sequestered, its turnover and mean residence time, iii) the amount of biomass C needed (Mg/ha/yr) to maintain and enhance the SOC pool, and to create a positive C budget, iv) factors governing the depth distribution of SOC, v) physical, chemical and biological mechanisms affecting the rate of decomposition by biotic and abiotic processes, vi) the role of soil aggregation in sequestration and protection of the SOC and SIC pool, vii) the importance of the root system and its exudates in transfer of biomass-C into the SOC pools, viii) the significance of biogenic processes in formation of secondary carbonates, ix) the role of dissolved organic C (DOC) in sequestration of SOC and SIC, and x) the importance of weathering of alumino-silicates (e.g., powdered olivine) in SIC sequestration. Lack of understanding of these and other basic processes leads to misunderstanding, inconsistencies in interpretation of empirical data, and futile debates. Identification of site-specific management practices is also facilitated by understanding of the basic processes of sequestration of SOC and SIC.
Sustainable intensification of agroecosystems -- producing more from less by enhancing the use efficiency and reducing losses of inputs, necessitates thorough understanding of the processes, factors and causes of SOC and SIC dynamics in soils of natural and managed ecosystems.
Space Shuttle astrodynamical constants
NASA Technical Reports Server (NTRS)
Cockrell, B. F.; Williamson, B.
1978-01-01
Basic space shuttle astrodynamic constants are reported for use in mission planning and construction of ground and onboard software input loads. The data included here are provided to facilitate the use of consistent numerical values throughout the project.
Subsurface And Surface Water Flow Interactions
In this chapter we present basic concepts and principles underlying the phenomena of groundwater and surface water interactions. Fundamental equations and analytical and numerical solutions describing stream-aquifer interactions are presented in hillslope and riparian aquifer en...
FORTRAN programming - A self-taught course
NASA Technical Reports Server (NTRS)
Blecher, S.; Butler, R. V.; Horton, M.; Norrod, V.
1971-01-01
Comprehensive programming course begins with numerical systems and basic concepts, proceeds systematically through FORTRAN language elements, and concludes with a discussion of programming techniques. The course is suitable either for individual study or for group study on an informal basis.
A technique to remove the tensile instability in weakly compressible SPH
NASA Astrophysics Data System (ADS)
Xu, Xiaoyang; Yu, Peng
2018-01-01
When smoothed particle hydrodynamics (SPH) is directly applied to the numerical simulation of transient viscoelastic free surface flows, a numerical problem called tensile instability arises. In this paper, we develop an optimized particle shifting technique to remove the tensile instability in SPH. The basic equations governing free surface flow of an Oldroyd-B fluid are considered, and approximated by an improved SPH scheme. This includes the implementation of a kernel gradient correction and the introduction of a Rusanov flux into the continuity equation. To verify the effectiveness of the optimized particle shifting technique in removing the tensile instability, three benchmark problems are simulated: the impacting drop, the injection molding of a C-shaped cavity, and the extrudate swell. The numerical results obtained are compared with those simulated by other numerical methods. A comparison among different numerical techniques (e.g., the artificial stress) for removing the tensile instability is further performed. All numerical results agree well with the available data.
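A common way to implement such a shifting correction is to move each particle down the gradient of the kernel-estimated particle concentration. The sketch below is a generic 1D, periodic-domain illustration of that idea (with a Gaussian kernel and made-up parameters), not the paper's optimized scheme:

```python
import numpy as np

def shift_once(x, h, coef):
    """One pass of a Fickian-style particle-shifting correction
    (a generic sketch, not the paper's optimized scheme):
    each particle moves down the gradient of the kernel-estimated
    particle concentration, which evens out clumps."""
    dx = x[:, None] - x[None, :]          # pairwise separations
    dx -= np.round(dx)                    # minimum image on the periodic [0, 1) domain
    gradW = -2.0 * dx / h**2 * np.exp(-(dx / h) ** 2)  # Gaussian kernel gradient
    np.fill_diagonal(gradW, 0.0)
    gradC = gradW.sum(axis=1)             # kernel concentration gradient at each particle
    return (x - coef * h**2 * gradC) % 1.0

n = 50
i = np.arange(n)
x = (i + 0.5) / n + 0.004 * np.sin(2 * np.pi * 3 * i / n)  # clumped initial particles
h, coef = 0.03, 0.1                       # smoothing length and shift coefficient (assumed)

def spacing_std(x):
    s = np.sort(x)
    gaps = np.diff(np.append(s, s[0] + 1.0))  # periodic nearest-neighbor gaps
    return gaps.std()

before = spacing_std(x)
for _ in range(300):
    x = shift_once(x, h, coef)
after = spacing_std(x)
```

Iterating the shift relaxes the particles toward uniform spacing; suppressing such clumping is the mechanism by which shifting removes the unphysical particle clustering associated with tensile instability.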
Plasma physics of extreme astrophysical environments.
Uzdensky, Dmitri A; Rightley, Shane
2014-03-01
Among the incredibly diverse variety of astrophysical objects, there are some that are characterized by very extreme physical conditions not encountered anywhere else in the Universe. Of special interest are ultra-magnetized systems that possess magnetic fields exceeding the critical quantum field of about 44 TG. There are basically only two classes of such objects: magnetars, whose magnetic activity is manifested, e.g., via their very short but intense gamma-ray flares, and central engines of supernovae (SNe) and gamma-ray bursts (GRBs)--the most powerful explosions in the modern Universe. Figuring out how these complex systems work necessarily requires understanding various plasma processes, both small-scale kinetic and large-scale magnetohydrodynamic (MHD), that govern their behavior. However, the presence of an ultra-strong magnetic field modifies the underlying basic physics to such a great extent that relying on conventional, classical plasma physics is often not justified. Instead, plasma-physical problems relevant to these extreme astrophysical environments call for constructing relativistic quantum plasma (RQP) physics based on quantum electrodynamics (QED). In this review, after briefly describing the astrophysical systems of interest and identifying some of the key plasma-physical problems important to them, we survey the recent progress in the development of such a theory. We first discuss the ways in which the presence of a super-critical field modifies the properties of vacuum and matter and then outline the basic theoretical framework for describing both non-relativistic and RQPs. We then turn to some specific astrophysical applications of relativistic QED plasma physics relevant to magnetar magnetospheres and to central engines of core-collapse SNe and long GRBs. 
Specifically, we discuss the propagation of light through a magnetar magnetosphere; large-scale MHD processes driving magnetar activity and responsible for jet launching and propagation in GRBs; energy-transport processes governing the thermodynamics of extreme plasma environments; micro-scale kinetic plasma processes important in the interaction of intense electric currents flowing through a magnetar magnetosphere with the neutron star surface; and magnetic reconnection of ultra-strong magnetic fields. Finally, we point out that future progress in applying RQP physics to real astrophysical problems will require the development of suitable numerical modeling capabilities.
NASA Astrophysics Data System (ADS)
Zakharova, Natalia; Piskovatsky, Nicolay; Gusev, Anatoly
2014-05-01
Development of Informational-Computational Systems (ICS) for data assimilation procedures is a multidisciplinary problem. To study and solve these problems one needs to apply modern results from different disciplines and recent developments in: mathematical modeling; the theory of adjoint equations and optimal control; inverse problems; numerical methods theory; numerical algebra and scientific computing. The above problems are studied at the Institute of Numerical Mathematics of the Russian Academy of Sciences (INM RAS) in ICS for personal computers. In this work the results of the Special data base development for the ICS "INM RAS - Black Sea" are presented. In the presentation the input information for the ICS is discussed, and some special data processing procedures are described. In this work the results of forecasts using the ICS "INM RAS - Black Sea" with operational observation data assimilation are presented. This study was supported by the Russian Foundation for Basic Research (project No 13-01-00753) and by the Presidium Program of the Russian Academy of Sciences (project P-23 "Black sea as an imitational ocean model"). References 1. V.I. Agoshkov, M.V. Assovskii, S.A. Lebedev, Numerical simulation of Black Sea hydrothermodynamics taking into account tide-forming forces. Russ. J. Numer. Anal. Math. Modelling (2012) 27, No.1, pp. 5-31. 2. E.I. Parmuzin, V.I. Agoshkov, Numerical solution of the variational assimilation problem for sea surface temperature in the model of the Black Sea dynamics. Russ. J. Numer. Anal. Math. Modelling (2012) 27, No.1, pp. 69-94. 3. V.B. Zalesny, N.A. Diansky, V.V. Fomin, S.N. Moshonkin, S.G. Demyshev, Numerical model of the circulation of Black Sea and Sea of Azov. Russ. J. Numer. Anal. Math. Modelling (2012) 27, No.1, pp. 95-111. 4. Agoshkov V.I., Assovsky M.B., Giniatulin S. V., Zakharova N.B., Kuimov G.V., Parmuzin E.I., Fomin V.V.
Informational Computational system of variational assimilation of observation data "INM RAS - Black sea" // Ecological safety of coastal and shelf zones and complex use of shelf resources: Collection of scientific works. Issue 26, Volume 2. National Academy of Sciences of Ukraine, Marine Hydrophysical Institute, Sebastopol, 2012. Pages 352-360. (In Russian)
Solar Process Heat Basics | NREL
Commercial and industrial buildings may use solar process heat. One common design is a black metal panel mounted on a south-facing wall to absorb the sun's heat; air passes through the panel and is warmed before entering the building. A typical system in nonresidential buildings includes solar collectors that work along with a pump and a heat exchanger.
Stability of choice in the honey bee nest-site selection process.
Nevai, Andrew L; Passino, Kevin M; Srinivasan, Parthasarathy
2010-03-07
We introduce a pair of compartment models for the honey bee nest-site selection process that lend themselves to analytic methods. The first model represents a swarm of bees deciding whether a site is viable, and the second characterizes its ability to select between two viable sites. We find that the one-site assessment process has two equilibrium states: a disinterested equilibrium (DE) in which the bees show no interest in the site and an interested equilibrium (IE) in which bees show interest. In analogy with epidemic models, we define basic and absolute recruitment numbers (R(0) and B(0)) as measures of the swarm's sensitivity to dancing by a single bee. If R(0) is less than one then the DE is locally stable, and if B(0) is less than one then it is globally stable. If R(0) is greater than one then the DE is unstable and the IE is stable under realistic conditions. In addition, there exists a critical site quality threshold Q(*) above which the site can attract some interest (at equilibrium) and below which it cannot. We also find the existence of a second critical site quality threshold Q(**) above which the site can attract a quorum (at equilibrium) and below which it cannot. The two-site discrimination process, in which we examine a swarm's ability to simultaneously consider two sites differing in both site quality and discovery time, has a stable DE if and only if both sites' individual basic recruitment numbers are less than one. Numerical experiments are performed to study the influences of site quality on quorum time and the outcome of competition between a lower quality site discovered first and a higher quality site discovered second. 2009 Elsevier Ltd. All rights reserved.
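The threshold behavior of the basic recruitment number can be reproduced with a deliberately simplified logistic caricature (our sketch, not the authors' compartment model): uninterested bees become interested through dancing at a quality-weighted rate beta_q and lose interest at rate gamma, giving R0 = beta_q/gamma, with the disinterested state stable when R0 < 1.

```python
def simulate(beta_q, gamma, n=100.0, i0=5.0, dt=0.1, steps=2000):
    """Euler integration of a minimal logistic caricature of
    recruitment to one site (a sketch, not the paper's model):
    uninterested bees become interested at rate beta_q*I*(N-I)/N
    via dancing; interested bees lose interest at rate gamma."""
    i = i0
    for _ in range(steps):
        i += dt * (beta_q * i * (n - i) / n - gamma * i)
    return i

gamma = 0.1
low = simulate(beta_q=0.05, gamma=gamma)   # R0 = 0.5 < 1: interest dies out (DE)
high = simulate(beta_q=0.3, gamma=gamma)   # R0 = 3 > 1: interested equilibrium (IE)
# IE of this caricature: I* = N * (1 - 1/R0) = 100 * (2/3)
```

In the caricature, raising site quality raises beta_q and hence R0 past one, mirroring the critical quality threshold above which a site can attract interest at equilibrium.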
Pina, Violeta; Castillo, Alejandro; Cohen Kadosh, Roi; Fuentes, Luis J.
2015-01-01
Previous studies have suggested that numerical processing relates to mathematical performance, but it seems that such relationship is more evident for intentional than for automatic numerical processing. In the present study we assessed the relationship between the two types of numerical processing and specific mathematical abilities in a sample of 109 children in grades 1–6. Participants were tested in an ample range of mathematical tests and also performed both a numerical and a size comparison task. The results showed that numerical processing related to mathematical performance only when inhibitory control was involved in the comparison tasks. Concretely, we found that intentional numerical processing, as indexed by the numerical distance effect in the numerical comparison task, was related to mathematical reasoning skills only when the task-irrelevant dimension (the physical size) was incongruent; whereas automatic numerical processing, indexed by the congruency effect in the size comparison task, was related to mathematical calculation skills only when digits were separated by small distance. The observed double dissociation highlights the relevance of both intentional and automatic numerical processing in mathematical skills, but when inhibitory control is also involved. PMID:25873909
Interactive Modelling of Salinity Intrusion in the Rhine-Meuse Delta
NASA Astrophysics Data System (ADS)
Baart, F.; Kranenburg, W.; Luijendijk, A.
2015-12-01
In many deltas of the world salinity intrusion imposes limits on fresh water availability. With increasing population and industry, the need for fresh water increases. But salinity intrusion is also expected to increase due to changes in river discharge, sea level and storm characteristics. In the Rhine-Meuse delta, salt intrusion is impacted by human activities as well, such as the deepening of waterways and the opening of delta branches closed earlier. All these developments call for increasing the understanding of the system, but also for means for policy makers, coastal planners and engineers to assess effects of changes and to explore and design measures. In our presentation we present the developments in interactive modelling of salinity intrusion in the Rhine-Meuse delta. In traditional process-based numerical modelling, impacts are investigated by researchers and engineers by following the steps of pre-defining scenarios, running the model and post-processing the results. Interactive modelling lets users adjust simulations while they run. Users can for instance change river discharges or bed levels, and can add measures such as changes to geometry. The model takes the adjustments into account immediately, and directly computes their effect. In this way, a tool becomes available with which coastal planners, policy makers and engineers together can develop and evaluate ideas and designs by interacting with the numerical model. When developing interactive numerical engines, one of the challenges is to optimize the exchange of variables such as salt concentration. In our case we exchange variables on a 3D grid every time step. For this, the numerical model adheres to the Basic Model Interface (http://csdms.colorado.edu/wiki), which allows external control and the exchange of variables through pointers while the model is running.
In our presentation we further explain our method and show examples of interactive design of salinity intrusion measures in the Rhine-Meuse delta.
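The kind of external steering described above can be sketched with a toy model exposing BMI-style methods (the method names follow the CSDMS Basic Model Interface idea; the relaxation "physics" and all numbers here are invented for illustration):

```python
class ToySalinityModel:
    """Toy 0D salinity model with a BMI-flavoured interface.
    Salinity relaxes toward s_sea / (1 + k * discharge): higher
    river discharge pushes the equilibrium fresher. The law and
    constants are made up for demonstration purposes only."""

    def initialize(self, discharge=1000.0):
        self.t = 0.0
        self.dt = 3600.0               # one-hour step [s]
        self.salinity = 20.0           # [psu]
        self.discharge = discharge     # [m3/s], external forcing

    def update(self):
        target = 30.0 / (1.0 + 0.002 * self.discharge)
        self.salinity += (target - self.salinity) * 0.1  # relax toward target
        self.t += self.dt

    def get_value(self, name):
        return getattr(self, name)

    def set_value(self, name, value):  # lets a user steer the running model
        setattr(self, name, value)

m = ToySalinityModel()
m.initialize(discharge=1000.0)
for _ in range(100):
    m.update()
s_low_discharge = m.get_value("salinity")   # equilibrium near 30/(1+2) = 10
m.set_value("discharge", 4000.0)            # user raises river flow mid-run
for _ in range(100):
    m.update()
s_high_discharge = m.get_value("salinity")  # fresher equilibrium near 30/9
```

The point of the `set_value` call is that the forcing changes while the simulation is running, which is exactly the interaction pattern the Basic Model Interface enables for the full 3D salinity model.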
NVU dynamics. I. Geodesic motion on the constant-potential-energy hypersurface.
Ingebrigtsen, Trond S; Toxvaerd, Søren; Heilmann, Ole J; Schrøder, Thomas B; Dyre, Jeppe C
2011-09-14
An algorithm is derived for computer simulation of geodesics on the constant-potential-energy hypersurface of a system of N classical particles. First, a basic time-reversible geodesic algorithm is derived by discretizing the geodesic stationarity condition and implementing the constant-potential-energy constraint via standard Lagrangian multipliers. The basic NVU algorithm is tested by single-precision computer simulations of the Lennard-Jones liquid. Excellent numerical stability is obtained if the force cutoff is smoothed and the two initial configurations have identical potential energy within machine precision. Nevertheless, just as for NVE algorithms, stabilizers are needed for very long runs in order to compensate for the accumulation of numerical errors that eventually lead to "entropic drift" of the potential energy towards higher values. A modification of the basic NVU algorithm is introduced that ensures potential-energy and step-length conservation; center-of-mass drift is also eliminated. Analytical arguments confirmed by simulations demonstrate that the modified NVU algorithm is absolutely stable. Finally, we present simulations showing that the NVU algorithm and the standard leap-frog NVE algorithm have identical radial distribution functions for the Lennard-Jones liquid. © 2011 American Institute of Physics
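The basic NVU step described above can be sketched for a single "particle" on a toy 2D potential (our reduction for illustration; the real method works in the 3N-dimensional configuration space of N particles): a position-Verlet-like update plus a Lagrange multiplier along the force, chosen so the potential energy is conserved to first order and the path stays on the U = const surface.

```python
import numpy as np

def force(r):
    """F = -grad U for the toy potential U(x, y) = (x^2 + 2*y^2) / 2."""
    return -np.array([r[0], 2.0 * r[1]])

def potential(r):
    return 0.5 * (r[0] ** 2 + 2.0 * r[1] ** 2)

def nvu_step(r_prev, r_now):
    """Basic NVU update: R_{k+1} = 2 R_k - R_{k-1} + lam * F_k, with
    lam = -2 F_k.(R_k - R_{k-1}) / |F_k|^2 cancelling the first-order
    change in potential energy between R_{k-1} and R_{k+1}."""
    f = force(r_now)
    d = r_now - r_prev
    lam = -2.0 * np.dot(f, d) / np.dot(f, f)
    return 2.0 * r_now - r_prev + lam * f

# two starting points on the same equipotential U = 0.5 (an ellipse)
delta = 0.002
r_prev = np.array([np.cos(-delta), np.sin(-delta) / np.sqrt(2.0)])
r_now = np.array([1.0, 0.0])

u0 = potential(r_now)
max_drift = 0.0
for _ in range(2000):
    r_prev, r_now = r_now, nvu_step(r_prev, r_now)
    max_drift = max(max_drift, abs(potential(r_now) - u0))
```

As the abstract notes, starting from two configurations with identical potential energy is important: the multiplier only ties U at step k+1 to U at step k-1, so the even and odd subsequences each conserve energy to first order.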
NASA Technical Reports Server (NTRS)
Felici, Helene M.; Drela, Mark
1993-01-01
A new approach based on the coupling of an Eulerian and a Lagrangian solver, aimed at reducing the numerical diffusion errors of standard Eulerian time-marching finite-volume solvers, is presented. The approach is applied to the computation of the secondary flow in two bent pipes and the flow around a 3D wing. Using convective point markers the Lagrangian approach provides a correction of the basic Eulerian solution. The Eulerian flow in turn integrates in time the Lagrangian state-vector. A comparison of coarse and fine grid Eulerian solutions makes it possible to identify numerical diffusion. It is shown that the Eulerian/Lagrangian approach is an effective method for reducing numerical diffusion errors.
NASA Astrophysics Data System (ADS)
Salha, A. A.; Stevens, D. K.
2013-12-01
This study presents the numerical application and statistical development of Stream Water Quality Modeling (SWQM) as a tool to investigate, manage, and research the transport and fate of water pollutants in the Lower Bear River, Box Elder County, Utah. The segment under study is the Bear River from Cutler Dam to its confluence with the Malad River (Subbasin HUC 16010204). Water quality problems arise primarily from high phosphorus and total suspended sediment concentrations, caused by five permitted point-source discharges and a complex network of canals and ducts of varying sizes and carrying capacities that transport water (for farming and agricultural uses) from the Bear River and then back to it. The Utah Department of Environmental Quality (DEQ) has designated the entire reach of the Bear River between Cutler Reservoir and the Great Salt Lake as impaired. Stream water quality modeling requires specification of an appropriate model structure and process formulation according to the nature of the study area and the purpose of the investigation. The current model is i) one-dimensional (1D), ii) numerical, iii) unsteady, iv) mechanistic, v) dynamic, and vi) spatial (distributed). The basic principle of the study is the use of mass balance equations and numerical methods (the Fickian advection-dispersion approach) for solving the related partial differential equations. Model error decreases and sensitivity increases as a model becomes more complex; as such, i) uncertainty (in parameters, data input, and model structure) and ii) model complexity will be under investigation. Watershed data (water quality parameters together with stream flow, seasonal variations, surrounding landscape, stream temperature, and point/nonpoint sources) were obtained mainly using HydroDesktop, a free and open-source GIS-enabled desktop application to find, download, visualize, and analyze time series of water and climate data registered with the CUAHSI Hydrologic Information System.
Processing, assessment of validity, and distribution of the time-series data were explored using the GNU R language (a statistical computing and graphics environment). The equations for physical, chemical, and biological processes were written in FORTRAN (High Performance Fortran) in order to compute and solve their hyperbolic and parabolic complexities. Post-analysis of the results was conducted using GNU R. High-performance computing (HPC) will be introduced to expedite complex computations using parallel programming. It is expected that the model will assess nonpoint-source and specific point-source data to understand the causes, transfer, dispersion, and concentration of pollutants at different locations along the Bear River. The impact of reducing or removing non-point nutrient loading on Bear River water quality management could also be addressed. Keywords: computer modeling; numerical solutions; sensitivity analysis; uncertainty analysis; ecosystem processes; high-performance computing; water quality.
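The Fickian advection-dispersion mass balance at the core of such models can be sketched with a minimal explicit scheme (parameter values are illustrative, not Bear River data):

```python
import numpy as np

# Hedged sketch of the 1D Fickian advection-dispersion mass balance:
#   dC/dt = -u dC/dx + D d2C/dx2
# solved with explicit upwind advection and central dispersion on a
# periodic grid. All values are illustrative, not Bear River data.
nx, dx, dt = 200, 100.0, 10.0       # 20 km reach, 10 s time step
u, D = 0.5, 5.0                     # velocity (m/s), dispersion (m^2/s)
C = np.zeros(nx)
C[20:40] = 10.0                     # initial pollutant slug (mg/L)

for _ in range(500):
    adv = -u * (C - np.roll(C, 1)) / dx                            # upwind
    disp = D * (np.roll(C, -1) - 2.0 * C + np.roll(C, 1)) / dx**2  # central
    C = C + dt * (adv + disp)
```

The slug travels downstream while its peak spreads and decays, and total mass is conserved, which is the qualitative behavior a calibrated SWQM must reproduce.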
Nonparallel stability of three-dimensional compressible boundary layers. Part 1: Stability analysis
NASA Technical Reports Server (NTRS)
El-Hady, N. M.
1980-01-01
A compressible linear stability theory is presented for nonparallel three-dimensional boundary-layer flows, taking into account the normal velocity component as well as the streamwise and spanwise variations of the basic flow. The method of multiple scales is used to account for the nonparallelism of the basic flow, and equations are derived for the spatial evolution of the disturbance amplitude and wavenumber. The numerical procedure for obtaining the solution of the nonparallel problem is outlined.
Math anxiety, self-efficacy, and ability in British undergraduate nursing students.
McMullan, Miriam; Jones, Ray; Lea, Susan
2012-04-01
Nurses need to be able to make drug calculations competently. In this study, involving 229 second year British nursing students, we explored the influence of mathematics anxiety, self-efficacy, and numerical ability on drug calculation ability and determined which factors would best predict this skill. Strong significant relationships (p < .001) existed between anxiety, self-efficacy, and ability. Students who failed the numerical and/or drug calculation ability tests were more anxious (p < .001) and less confident (p ≤ .002) in performing calculations than those who passed. Numerical ability made the strongest unique contribution in predicting drug calculation ability (beta = 0.50, p < .001) followed by drug calculation self-efficacy (beta = 0.16, p = .04). Early testing is recommended for basic numerical skills. Faculty are advised to refresh students' numerical skills before introducing drug calculations. Copyright © 2012 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.
1989-01-01
In response to the tremendous growth in the development of advanced materials, such as fiber-reinforced plastic (FRP) composite materials, a new numerical method is developed to analyze and predict the time-dependent properties of these materials. Basic concepts in viscoelasticity, laminated composites, and previous viscoelastic numerical methods are presented. A stable numerical method, called the nonlinear differential equation method (NDEM), is developed to calculate the in-plane stresses and strains over any time period for a general laminate constructed from nonlinear viscoelastic orthotropic plies. The method is implemented in an in-plane stress analysis computer program, called VCAP, to demonstrate its usefulness and to verify its accuracy. A number of actual experimental test results performed on Kevlar/epoxy composite laminates are compared to predictions calculated from the numerical method.
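The paper's NDEM itself is not reproduced here; as a hedged sketch of the underlying idea, time-stepping a viscoelastic constitutive ODE, consider a single Maxwell element in creep (the material constants are illustrative, not Kevlar/epoxy data):

```python
# Minimal sketch of time-stepping a viscoelastic constitutive ODE (not the
# paper's NDEM). A Maxwell element under constant stress sigma0 creeps as
#   eps(t) = sigma0/E + sigma0 * t / eta
# so a simple explicit march should reproduce the analytic creep curve.
E, eta, sigma0 = 100.0, 1000.0, 10.0   # modulus, viscosity, applied stress
dt, steps = 0.1, 1000
eps = sigma0 / E                        # instantaneous elastic strain at t=0
for _ in range(steps):
    eps += dt * sigma0 / eta            # viscous strain rate (stress constant)
t_end = dt * steps
analytic = sigma0 / E + sigma0 * t_end / eta
```

A laminate-level method such as NDEM integrates many such nonlinear, orthotropic relations simultaneously, ply by ply.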
Finite-analytic numerical solution of heat transfer in two-dimensional cavity flow
NASA Technical Reports Server (NTRS)
Chen, C.-J.; Naseri-Neshat, H.; Ho, K.-S.
1981-01-01
Heat transfer in cavity flow is numerically analyzed by a new numerical method called the finite-analytic method. The basic idea of the finite-analytic method is the incorporation of local analytic solutions in the numerical solutions of linear or nonlinear partial differential equations. In the present investigation, the local analytic solutions for temperature, stream function, and vorticity distributions are derived. When the local analytic solution is evaluated at a given nodal point, it gives an algebraic relationship between a nodal value in a subregion and its neighboring nodal points. A system of algebraic equations is solved to provide the numerical solution of the problem. The finite-analytic method is used to solve heat transfer in the cavity flow at high Reynolds number (1000) for Prandtl numbers of 0.1, 1, and 10.
Analysis of rural public transit in Alabama.
DOT National Transportation Integrated Search
2013-05-01
As rural America continues to age, access to basic necessities and health care will continue to strain rural transit providers. The state of Alabama has numerous Rural Public Transportation Providers, and while every provider is unique, each ca...
Extending HPF for advanced data parallel applications
NASA Technical Reports Server (NTRS)
Chapman, Barbara; Mehrotra, Piyush; Zima, Hans
1994-01-01
The stated goal of High Performance Fortran (HPF) was to 'address the problems of writing data parallel programs where the distribution of data affects performance'. After examining the current version of the language we are led to the conclusion that HPF has not fully achieved this goal. While the basic distribution functions offered by the language - regular block, cyclic, and block cyclic distributions - can support regular numerical algorithms, advanced applications such as particle-in-cell codes or unstructured mesh solvers cannot be expressed adequately. We believe that this is a major weakness of HPF, significantly reducing its chances of becoming accepted in the numeric community. The paper discusses the data distribution and alignment issues in detail, points out some flaws in the basic language, and outlines possible future paths of development. Furthermore, we briefly deal with the issue of task parallelism and its integration with the data parallel paradigm of HPF.
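The three regular distribution patterns the abstract names can be pictured as index-to-processor maps (a Python sketch of the mappings only, not HPF syntax; the function names are our own):

```python
# Index-to-processor maps for HPF's regular data distributions (sketch).
def block(i, n, p):
    """BLOCK: contiguous chunks of ceil(n/p) elements per processor."""
    return i // -(-n // p)             # -(-n // p) is ceil(n / p)

def cyclic(i, p):
    """CYCLIC: elements dealt round-robin across processors."""
    return i % p

def block_cyclic(i, b, p):
    """CYCLIC(b): blocks of size b dealt round-robin."""
    return (i // b) % p
```

Irregular applications such as particle-in-cell codes or unstructured meshes need maps that cannot be written as any such closed-form function of the index, which is the weakness the paper discusses.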
Mathematical model for transmission of tuberculosis in badger population with vaccination
NASA Astrophysics Data System (ADS)
Tasmi, Aldila, D.; Soewono, E.; Nuraini, N.
2016-04-01
Badgers were first identified as carriers of bovine tuberculosis in England about 30 years ago. Bovine tuberculosis can be transmitted to other species through feces, saliva, and breath. Controlling tuberculosis in badgers is therefore necessary to reduce the spread of the disease to other species. The government has taken many actions to tackle the disease, such as culling badgers with cyanide gas, but this approach destroys the natural balance and disrupts the badger population. An alternative way to eliminate tuberculosis within the badger population is vaccination. In this paper a model for the transmission of badger tuberculosis with vaccination is discussed. The existence of the endemic equilibrium, its stability, and the basic reproduction ratio are derived analytically. Numerical simulations show that with a proper vaccination level, the basic reproduction ratio can be reduced significantly. A sensitivity analysis over parameter variations is carried out numerically.
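The qualitative claim, that vaccination lowers the basic reproduction ratio, can be sketched with the standard compartmental-model relation; the parameter values below are illustrative, not the paper's badger model:

```python
# Standard compartmental-model relation (a sketch, not the paper's model):
# vaccinating a fraction v of the population scales the reproduction ratio
# to (1 - v) * R0, so coverage v >= 1 - 1/R0 drives it to 1 or below.
beta, gamma = 0.6, 0.2        # illustrative transmission and removal rates
R0 = beta / gamma             # basic reproduction ratio (3.0 here)

def effective_R0(v):
    """Reproduction ratio with a vaccinated fraction v."""
    return (1.0 - v) * R0

critical_coverage = 1.0 - 1.0 / R0   # herd-immunity threshold
```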
Basic Functional Capabilities for a Military Message Processing Service
1974-09-01
Ronald Tugender, et al., University of Southern California. Research report. Keywords: automated message processing, command and control, writer-to-reader service.
ERIC Educational Resources Information Center
Quann, Steve; Satin, Diana
This textbook leads high-beginning and intermediate English-as-a-Second-Language (ESL) students through cooperative computer-based activities that combine language learning with training in basic computer skills and word processing. Each unit concentrates on a basic concept of word processing while also focusing on a grammar topic. Skills are…
Discussion: Numerical study on the entrainment of bed material into rapid landslides
Iverson, Richard M.
2013-01-01
A paper recently published in this journal (Pirulli & Pastor, 2012) uses numerical modelling to study the important problem of entrainment of bed material by landslides. Unfortunately, some of the basic equations employed in the study are flawed, because they violate the principle of linear momentum conservation. Similar errors exist in some other studies of entrainment, and the errors appear to stem from confusion about the role of bed-sediment inertia in differing frames of reference.
A delta-rule model of numerical and non-numerical order processing.
Verguts, Tom; Van Opstal, Filip
2014-06-01
Numerical and non-numerical order processing share empirical characteristics (distance effect and semantic congruity), but there are also important differences (in the size effect and end effect). At the same time, models and theories of numerical and non-numerical order processing have developed largely separately. Here, we combine insights from two earlier models, integrating them in a common framework. We argue that the same learning principle underlies numerical and non-numerical orders, but that environmental features determine the empirical differences. Implications for current theories of order processing are pointed out. PsycINFO Database Record (c) 2014 APA, all rights reserved.
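The delta rule named in the title is, in its generic Widrow-Hoff form, an error-driven weight update; a minimal sketch follows (a toy linear setup of our own, not the paper's order-processing model):

```python
import numpy as np

# Generic delta rule (Widrow-Hoff): nudge the weights along the input in
# proportion to the prediction error. Toy noise-free linear setup; the
# paper's order-processing model is not reproduced here.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(1000, 3))
y = X @ true_w                       # targets from a known linear rule
w = np.zeros(3)
lr = 0.05                            # learning rate
for xi, yi in zip(X, y):
    w += lr * (yi - w @ xi) * xi     # delta-rule update
```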
NASA Astrophysics Data System (ADS)
Faes, Luca; Nollo, Giandomenico; Stramaglia, Sebastiano; Marinazzo, Daniele
2017-10-01
In the study of complex physical and biological systems represented by multivariate stochastic processes, an issue of great relevance is the description of the system dynamics spanning multiple temporal scales. While methods to assess the dynamic complexity of individual processes at different time scales are well established, multiscale analysis of directed interactions has never been formalized theoretically, and empirical evaluations are complicated by practical issues such as filtering and downsampling. Here we extend the very popular measure of Granger causality (GC), a prominent tool for assessing directed lagged interactions between joint processes, to quantify information transfer across multiple time scales. We show that the multiscale processing of a vector autoregressive (AR) process introduces a moving average (MA) component, and describe how to represent the resulting ARMA process using state space (SS) models and to combine the SS model parameters for computing exact GC values at arbitrarily large time scales. We exploit the theoretical formulation to identify peculiar features of multiscale GC in basic AR processes, and demonstrate with numerical simulations the much larger estimation accuracy of the SS approach compared to pure AR modeling of filtered and downsampled data. The improved computational reliability is exploited to disclose meaningful multiscale patterns of information transfer between global temperature and carbon dioxide concentration time series, both in paleoclimate and in recent years.
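Ordinary single-scale GC, the starting point that the abstract extends to multiple scales, can be sketched as a log variance ratio of restricted versus full autoregressions. This toy bivariate AR(1) example is our own and is not the state-space multiscale estimator described above:

```python
import numpy as np

# Toy single-scale Granger causality: GC(src -> dst) is the log ratio of the
# residual variance of dst predicted from its own past only, versus from its
# own past plus the past of src. Simulated pair where x drives y but not
# vice versa; a sketch of the classical measure only.
rng = np.random.default_rng(1)
n = 5000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t-1] + rng.normal()
    y[t] = 0.5 * y[t-1] + 0.8 * x[t-1] + rng.normal()  # x drives y

def gc(src, dst):
    Y = dst[1:]
    full = np.column_stack([dst[:-1], src[:-1], np.ones(n - 1)])
    restr = np.column_stack([dst[:-1], np.ones(n - 1)])
    r_full = Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]
    r_restr = Y - restr @ np.linalg.lstsq(restr, Y, rcond=None)[0]
    return float(np.log(r_restr.var() / r_full.var()))
```

The estimate is clearly positive for the true driving direction and near zero for the reverse; filtering and downsampling such series is what breaks the pure-AR form and motivates the state-space treatment.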
Cognitive analysis as a way to understand students' problem-solving process in BODMAS rule
NASA Astrophysics Data System (ADS)
Ung, Ting Su; Kiong, Paul Lau Ngee; Manaf, Badron bin; Hamdan, Anniza Binti; Khium, Chen Chee
2017-04-01
Students tend to make many careless mistakes when solving mathematics problems. To facilitate effective learning, educators have to understand which cognitive processes students use and how these processes help them solve problems. This paper aims only to determine the common errors in mathematics made by pre-diploma students who took Intensive Mathematics I (MAT037) in UiTM Sarawak, concentrating on the errors students made on the topic of the BODMAS rule and the mental processes corresponding to these errors. One class of pre-diploma students taking MAT037 taught by the researchers was selected because they had performed poorly in SPM mathematics; it is inevitable that they finished secondary education with many misconceptions in mathematics. The solution scripts for all the tutorials of the participants were collected. The study was predominantly qualitative: the solution scripts were content-analyzed to identify the common errors committed by the participants and to generate possible mental processes behind these errors, and selected students were interviewed by the researchers during the process. The BODMAS rule could be further divided into Numerical Simplification and Powers Simplification, and the erroneous processes could be attributed to the categories of Basic Arithmetic Rules, Negative Numbers, and Powers.
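The rule itself, and the typical left-to-right error such studies categorize, can be shown in two lines (our own illustrative example, not one of the study's items):

```python
# BODMAS (Brackets, Orders, Division/Multiplication, Addition/Subtraction):
# powers bind before multiplication, which binds before addition. A common
# student error is to evaluate strictly left to right instead.
correct = 2 + 3 * 4 ** 2            # = 2 + 3*16 = 50
left_to_right = ((2 + 3) * 4) ** 2  # = (5*4)^2 = 400, the erroneous reading
```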
NASA Astrophysics Data System (ADS)
Endelt, B.
2017-09-01
Forming operations are subject to external disturbances and changing operating conditions, e.g. a new material batch or increasing tool temperature due to plastic work; material properties and lubrication are sensitive to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually. Thus, an in-process feedback control scheme might not be necessary to stabilize the process; an alternative approach is to apply an iterative learning algorithm, which can learn from previously produced parts, i.e. a self-learning system which gradually reduces the error based on historical process information. What is proposed in the paper is a simple algorithm which can be applied to a wide range of sheet-metal forming processes. The input to the algorithm is the final flange edge geometry, and the basic idea is to reduce the least-square error between the current flange geometry and a reference geometry using a non-linear least-square algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet'08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
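The part-to-part learning update can be sketched in its simplest scalar form (a toy linear "process" with an assumed gain and offset, not a forming simulation):

```python
# Scalar iterative learning control (ILC) sketch: after each produced part,
# adjust the process input by a gain times the last observed error.
# The plant is a toy linear map with unknown gain and offset.
L_gain = 0.5
target = 1.0
u = 0.0
errors = []
for k in range(20):            # one iteration per produced part
    y = 0.8 * u + 0.3          # toy process response
    e = target - y
    errors.append(abs(e))
    u = u + L_gain * e         # ILC update from the previous part's error
```

The error contracts geometrically from part to part, mirroring the paper's claim that learning from previously produced parts can stabilize the process without in-process feedback.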
Nonlinear dynamic processes in modified ionospheric plasma
NASA Astrophysics Data System (ADS)
Kochetov, A.; Terina, G.
The presented work is a contribution to the experimental and theoretical study of nonlinear effects arising in ionospheric plasma under the action of powerful radio emission (G.I. Terina, J. Atm. Terr. Phys., 1995, v.57, p.273; A.V. Kochetov et al., Advances in Space Research, 2002, in press). The experimental results were obtained by the method of sounding the artificially disturbed ionosphere with short radio pulses. The amplitude and phase characteristics of the scattered signal, both of the "caviton" type (CS) (an analogue of the narrow-band component of stimulated electromagnetic emission (SEE)) and of the main signal (MS) of the probing transmitter, are considered. The theoretical model is based on the numerical solution of the driven nonlinear Schrödinger equation (NSE) in inhomogeneous plasma. The simulation allows us to study the self-consistent spatial-temporal dynamics of field and plasma. The observed evolution of the phase characteristics of MS and CS qualitatively corresponds to the results of the numerical simulation and demonstrates the penetration of the powerful electromagnetic wave into supercritical (in the linear approach) plasma regions. The modeling results also explain the periodic generation of CS, the travel of the CS maximum down the density gradient, and the aftereffect of CS. The obtained results show the excitation of strong turbulence and allow us to interpret CS, NC, and so far inexplicable phenomena such as "spikes" as well. The work was supported in part by the Russian Foundation for Basic Research (grants Nos. 99-02-16642, 99-02-16399).
Reduced order models for prediction of groundwater quality impacts from CO₂ and brine leakage
Zheng, Liange; Carroll, Susan; Bianchi, Marco; ...
2014-12-31
A careful assessment of the risk associated with geologic CO₂ storage is critical to the deployment of large-scale storage projects. A potential risk is the deterioration of groundwater quality caused by leakage of CO₂ and brine from deep subsurface reservoirs. In probabilistic risk assessment studies, numerical modeling is the primary tool employed to assess risk. However, the application of traditional numerical models to fully evaluate the impact of CO₂ leakage on groundwater can be computationally complex, demanding large processing times and resources, and involving large uncertainties. As an alternative, reduced order models (ROMs) can be used as highly efficient surrogates for the complex process-based numerical models. In this study, we represent the complex hydrogeological and geochemical conditions in a heterogeneous aquifer and the subsequent risk by developing and using two separate ROMs. The first ROM is derived from a model that accounts for the heterogeneous flow and transport conditions in the presence of complex leakage functions for CO₂ and brine. The second ROM is obtained from models that feature similar, but simplified, flow and transport conditions and allow for a more complex representation of all relevant geochemical reactions. To quantify possible impacts on groundwater aquifers, the basic risk metric is taken as the aquifer volume in which water quality may be affected by an underlying CO₂ storage project. The integration of the two ROMs provides an estimate of the impacted aquifer volume, taking into account uncertainties in flow, transport, and chemical conditions. These two ROMs can be linked in a comprehensive system-level model for quantitative risk assessment of the deep storage reservoir, wellbore leakage, and shallow aquifer impacts to assess the collective risk of CO₂ storage projects.
Fire Suppression in Low Gravity Using a Cup Burner
NASA Technical Reports Server (NTRS)
Takahashi, Fumiaki; Linteris, Gregory T.; Katta, Viswanath R.
2004-01-01
Longer duration missions to the moon, to Mars, and on the International Space Station increase the likelihood of accidental fires. The goal of the present investigation is to: (1) understand the physical and chemical processes of fire suppression in various gravity and O2 levels simulating spacecraft, Mars, and moon missions; (2) provide rigorous testing of numerical models, which include detailed combustion suppression chemistry and radiation sub-models; and (3) provide basic research results useful for advances in space fire safety technology, including new fire-extinguishing agents and approaches. The structure and extinguishment of enclosed, laminar, methane-air co-flow diffusion flames formed on a cup burner have been studied experimentally and numerically using various fire-extinguishing agents (CO2, N2, He, Ar, CF3H, and Fe(CO)5). The experiments involve both 1g laboratory testing and low-g testing (in drop towers and the KC-135 aircraft). The computation uses a direct numerical simulation with detailed chemistry and radiative heat-loss models. An agent was introduced into a low-speed coflowing oxidizing stream until extinguishment occurred under a fixed minimal fuel velocity, and thus, the extinguishing agent concentrations were determined. The extinguishment of cup-burner flames, which resemble real fires, occurred via a blowoff process (in which the flame base drifted downstream) rather than the global extinction phenomenon typical of counterflow diffusion flames. The computation revealed that the peak reactivity spot (the reaction kernel) formed in the flame base was responsible for attachment and blowoff of the trailing diffusion flame. Furthermore, the buoyancy-induced flame flickering in 1g and thermal and transport properties of the agents affected the flame extinguishment limits.
What is the philosophy of modelling soil moisture movement?
NASA Astrophysics Data System (ADS)
Chen, J.; Wu, Y.
2009-12-01
In the laboratory, soil moisture movement in different soil textures has been analysed. From field investigation at a spot, soil moisture movement in the root zone, vadose zone, and shallow aquifer has been explored. In addition, on ground slopes, the interflow in near-surface soil layers has been studied. Along regions near river reaches, the expansion and shrinkage of the saturated area due to rainfall occurrences have been observed. From those previous explorations of soil moisture movement, numerical models to represent this hydrologic process have been developed. However, due to the high heterogeneity and stratification of soil in a basin, modelling soil moisture movement is generally rather challenging. Normally, empirical equations or artificial manipulation are employed to adjust soil moisture movement in various numerical models. In this study, we inspect the soil moisture movement equations used in a watershed model, SWAT (Soil and Water Assessment Tool) (Neitsch et al., 2005), to examine the limitations of our knowledge of this hydrologic process. We then adopt features of a topographic-index-based hydrologic model, TOPMODEL (Beven and Kirkby, 1979), to enhance the representation of soil moisture movement in SWAT. The results of the study reveal, to some extent, the philosophy of modelling soil moisture movement in numerical models, which will be presented at the conference. Beven, K.J. and Kirkby, M.J., 1979. A physically based variable contributing area model of basin hydrology. Hydrol. Science Bulletin, 24: 43-69. Neitsch, S.L., Arnold, J.G., Kiniry, J.R., Williams, J.R. and King, K.W., 2005. Soil and Water Assessment Tool Theoretical Documentation, Grassland, Soil and Water Research Laboratory, Temple, TX.
Li, Shu; Du, Xue-Lei; Li, Qi; Xuan, Yan-Hua; Wang, Yun; Rao, Li-Lin
2016-01-01
Two kinds of probability expressions, verbal and numerical, have been used to characterize the uncertainty that people face. However, the question of whether verbal and numerical probabilities are cognitively processed in a similar manner remains unresolved. From a levels-of-processing perspective, verbal and numerical probabilities may be processed differently during early sensory processing but similarly in later semantic-associated operations. This event-related potential (ERP) study investigated the neural processing of verbal and numerical probabilities in risky choices. The results showed that verbal probability and numerical probability elicited different N1 amplitudes but that verbal and numerical probabilities elicited similar N2 and P3 waveforms in response to different levels of probability (high to low). These results were consistent with a levels-of-processing framework and suggest some internal consistency between the cognitive processing of verbal and numerical probabilities in risky choices. Our findings shed light on possible mechanism underlying probability expression and may provide the neural evidence to support the translation of verbal to numerical probabilities (or vice versa). PMID:26834612
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keren, Y.; Bemporad, G.A.; Rubin, H.
This paper concerns an experimental evaluation of the basic aspects of operation of the advanced solar pond (ASP). Experiments were carried out in a laboratory test section in order to assess the feasibility of density gradient maintenance in stratified flowing layers. The density stratification was caused by a non-uniform distribution of temperatures in the flow field. Results of the experiments are reported and analyzed in the paper. The experimental data were used to calibrate a numerical model able to simulate heat and momentum transfer in the ASP. The numerical results confirmed the validity of the numerical model adopted and proved its applicability for simulation of the ASP performance.
Advances in numerical and applied mathematics
NASA Technical Reports Server (NTRS)
South, J. C., Jr. (Editor); Hussaini, M. Y. (Editor)
1986-01-01
This collection of papers covers some recent developments in numerical analysis and computational fluid dynamics. Some of these studies are of a fundamental nature. They address basic issues such as intermediate boundary conditions for approximate factorization schemes, existence and uniqueness of steady states for time-dependent problems, and pitfalls of implicit time stepping. The other studies deal with modern numerical methods such as total variation diminishing schemes, higher-order variants of vortex and particle methods, spectral multidomain techniques, and front-tracking techniques. There is also a paper on adaptive grids. The fluid dynamics papers treat the classical problems of incompressible flow in helically coiled pipes, vortex breakdown, and transonic flows.
Cumulative reports and publications through December 31, 1989
NASA Technical Reports Server (NTRS)
1990-01-01
A complete list of reports from the Institute for Computer Applications in Science and Engineering (ICASE) is presented. The major categories of the current ICASE research program are: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effectual numerical methods; computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, structural analysis, and chemistry; computer systems and software, especially vector and parallel computers, microcomputers, and data management. Since ICASE reports are intended to be preprints of articles that will appear in journals or conference proceedings, the published reference is included when it is available.
Study on the tumor-induced angiogenesis using mathematical models.
Suzuki, Takashi; Minerva, Dhisa; Nishiyama, Koichi; Koshikawa, Naohiko; Chaplain, Mark Andrew Joseph
2018-01-01
We studied angiogenesis using mathematical models describing the dynamics of tip cells. We reviewed the basic ideas of angiogenesis models and their numerical simulation techniques used to produce realistic computer-graphics images of sprouting angiogenesis. We examined the classical model of Anderson-Chaplain using fundamental concepts of mass transport and chemical reaction, with ECM degradation included. We then constructed two types of numerical schemes, model-faithful and model-driven, in which new techniques of numerical simulation are introduced, such as transient probability, particle velocity, and Boolean variables. © 2017 The Authors. Cancer Science published by John Wiley & Sons Australia, Ltd on behalf of the Japanese Cancer Association.
Sleep and Nutritional Deprivation and Performance of House Officers.
ERIC Educational Resources Information Center
Hawkins, Michael R.; And Others
1985-01-01
A study to compare cognitive functioning in acutely and chronically sleep-deprived house officers is described. A multivariate analysis of variance revealed significant deficits in primary mental tasks involving basic rote memory, language, and numeric skills. (Author/MLW)
The origins of computer weather prediction and climate modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lynch, Peter
2008-03-20
Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.
A combined study of heat and mass transfer in an infant incubator with an overhead screen.
Ginalski, Maciej K; Nowak, Andrzej J; Wrobel, Luiz C
2007-06-01
The main objective of this study is to investigate the major physical processes taking place inside an infant incubator, before and after modifications have been made to its interior chamber. The modification involves the addition of an overhead screen to decrease radiation heat losses from the infant placed inside the incubator. The present study investigates the effect of these modifications on the convective heat flux from the infant's body to the surrounding environment inside the incubator. A combined analysis of airflow and heat transfer due to conduction, convection, radiation and evaporation has been performed, in order to calculate the temperature and velocity fields inside the incubator before and after the design modification. Due to the geometrical complexity of the model, computer-aided design (CAD) applications were used to generate a computer-based model. All numerical calculations have been performed using the commercial computational fluid dynamics (CFD) package FLUENT, together with in-house routines used for managing the simulations and user-defined functions (UDFs) that extend the basic solver capabilities. Numerical calculations have been performed for three different air inlet temperatures: 32, 34 and 36 degrees C. The study shows a decrease of the radiative and convective heat losses when the overhead screen is present. The results obtained were numerically verified as well as compared with results available in the literature from investigations of dry heat losses from infant manikins.
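The effect the study attributes to the overhead screen can be sketched with a simple energy balance (all numbers below are illustrative assumptions, not the paper's values): dry heat loss is the sum of a convective term and a radiative term, and a screen raises the effective wall temperature the infant radiates to, shrinking the radiative term.

```python
# Hedged sketch (illustrative values, not from the paper): dry heat loss from
# a body as the sum of convective and radiative components, the two terms the
# incubator study compares before and after adding the overhead screen.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def dry_heat_loss(T_skin, T_air, T_wall, area, h_conv, emissivity=0.95):
    """Return (convective, radiative) heat losses in watts; temperatures in C."""
    q_conv = h_conv * area * (T_skin - T_air)
    q_rad = emissivity * SIGMA * area * (
        (T_skin + 273.15) ** 4 - (T_wall + 273.15) ** 4)
    return q_conv, q_rad

# An overhead screen raises the effective wall temperature seen by the infant
# (here assumed 28 C without the screen vs 33 C with it), cutting radiation:
q_c, q_r_no_screen = dry_heat_loss(36.0, 34.0, 28.0, area=0.2, h_conv=4.0)
_, q_r_screen = dry_heat_loss(36.0, 34.0, 33.0, area=0.2, h_conv=4.0)
```

The CFD analysis in the paper resolves the convective coefficient and view factors from first principles; this balance only shows why the screen's main leverage is on the radiative term.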
How Numeracy Influences Risk Comprehension and Medical Decision Making
Reyna, Valerie F.; Nelson, Wendy L.; Han, Paul K.; Dieckmann, Nathan F.
2009-01-01
We review the growing literature on health numeracy, the ability to understand and use numerical information, and its relation to cognition, health behaviors, and medical outcomes. Despite the surfeit of health information from commercial and noncommercial sources, national and international surveys show that many people lack basic numerical skills that are essential to maintain their health and make informed medical decisions. Low numeracy distorts perceptions of risks and benefits of screening, reduces medication compliance, impedes access to treatments, impairs risk communication (limiting prevention efforts among the most vulnerable), and, based on the scant research conducted on outcomes, appears to adversely affect medical outcomes. Low numeracy is also associated with greater susceptibility to extraneous factors (i.e., factors that do not change the objective numerical information). That is, low numeracy increases susceptibility to effects of mood or how information is presented (e.g., as frequencies vs. percentages) and to biases in judgment and decision making (e.g., framing and ratio bias effects). Much of this research is not grounded in empirically supported theories of numeracy or mathematical cognition, which are crucial for designing evidence-based policies and interventions that are effective in reducing risk and improving medical decision making. To address this gap, we outline four theoretical approaches (psychophysical, computational, standard dual-process, and fuzzy trace theory), review their implications for numeracy, and point to avenues for future research. PMID:19883143
47 CFR 69.119 - Basic service element expedited approval process.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 3 2010-10-01 2010-10-01 false Basic service element expedited approval process. 69.119 Section 69.119 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) ACCESS CHARGES Computation of Charges § 69.119 Basic service element...
NASA Astrophysics Data System (ADS)
Wuttke, M. W.; Kessels, W.; Wessling, S.; Han, J.
2007-05-01
Spontaneous combustion is a worldwide problem for technical operations in mining, waste disposal and power plant facilities. The principle driving the combustion is everywhere the same, independent of the reactive material: fresh air with its normal oxygen content comes into contact with the reactive material through human operations. The ensuing reaction produces heat at a usually low but constant rate. The reactive material in operating or abandoned coal mines, or in heaps of coal, waste or reactive minerals, is usually strongly broken or fractured, so that atmospheric oxygen can penetrate deeply into the porous or fractured medium. Because air-filled pores and fractures are often combined with a low thermal conductivity of the bulk material, the produced heat accumulates and the temperature increases with time. If the reactivity increases strongly with temperature, the temperature rise accelerates up to the "combustion temperature". Once the temperature is high enough, the combustion process is governed by oxygen transport to the combustion center rather than by the chemical reactivity. Spontaneous combustion is thus a self-amplifying process in which a small initial variation in the parameters and starting conditions can create exploding combustion hot spots in an apparently homogeneous material. The phenomenon is discussed through various examples in the context of the German-Sino coal fire project. Temperature monitoring in hot fracture systems documents the strong influence of weather conditions on the combustion process. Numerical calculations show the sensitivity of the combustion to the model geometries, the boundary conditions and, above all, the permeability. The most common fire-fighting operations, covering and water injection, are discussed, and a new method of using saltwater for fire fighting is presented. References: Kessels, W., Wessling, S., Li, X., and Wuttke, M. W. Numerical element distinction for reactive transport modeling regarding reaction rate. In Proceedings of MODFLOW and More 2006: Managing Groundwater Systems, May 21-24, 2006, Golden, CO, USA (2006). Kessels, W., Wuttke, M. W., Wessling, S., and Li, X. Coal fires between self ignition and fire fighting: Numerical modeling and basic geophysical measurements. In ERSEC Ecological Book Series - 4 on Coal Fire Research (2007). Wessling, S., Litschke, T., Wiegand, J., Schlömer, S., and Kessels, W. Simulating dynamic subsurface coal fires and its applications. In ERSEC Ecological Book Series - 4 on Coal Fire Research (2007). Wessling, S., Kessels, W., Schmidt, M., and Krause, U. Investigating dynamic underground coal fires by means of numerical simulation. Geophys. J. Int. (submitted).
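The self-amplifying mechanism described above can be sketched as a zero-dimensional energy balance (all parameters are hypothetical, not from the project): Arrhenius heat generation competes with linear heat loss, and when cooling is too weak the temperature runs away toward ignition.

```python
import math

# Illustrative zero-dimensional self-heating model (parameters hypothetical,
# not from the coal fire project): heat generated by an Arrhenius reaction
# competes with Newtonian cooling; weak cooling lets temperature run away.
R = 8.314  # universal gas constant, J mol^-1 K^-1

def integrate_self_heating(h_loss, T0=300.0, A=1e6, E=5e4,
                           dt=1.0, t_end=5e4, T_ignite=600.0):
    """Explicit-Euler integration; stops early once T reaches T_ignite."""
    T, t = T0, 0.0
    while t < t_end and T < T_ignite:
        q_gen = A * math.exp(-E / (R * T))   # reaction heat source, K/s
        q_loss = h_loss * (T - T0)           # Newtonian cooling, K/s
        T += dt * (q_gen - q_loss)
        t += dt
    return T

T_stable = integrate_self_heating(h_loss=0.1)     # strong cooling: no ignition
T_runaway = integrate_self_heating(h_loss=1e-5)   # weak cooling: runaway
```

The strong temperature sensitivity of the Arrhenius term is what makes the hot-spot formation so sensitive to permeability and boundary conditions, as the abstract notes.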
The bioelectric code: An ancient computational medium for dynamic control of growth and form.
Levin, Michael; Martyniuk, Christopher J
2018-02-01
What determines large-scale anatomy? DNA does not directly specify geometrical arrangements of tissues and organs, and a process of encoding and decoding for morphogenesis is required. Moreover, many species can regenerate and remodel their structure despite drastic injury. The ability to obtain the correct target morphology from a diversity of initial conditions reveals that the morphogenetic code implements a rich system of pattern-homeostatic processes. Here, we describe an important mechanism by which cellular networks implement pattern regulation and plasticity: bioelectricity. All cells, not only nerves and muscles, produce and sense electrical signals; in vivo, these processes form bioelectric circuits that harness individual cell behaviors toward specific anatomical endpoints. We review emerging progress in reading and re-writing anatomical information encoded in bioelectrical states, and discuss the approaches to this problem from the perspectives of information theory, dynamical systems, and computational neuroscience. Cracking the bioelectric code will enable much-improved control over biological patterning, advancing basic evolutionary developmental biology as well as enabling numerous applications in regenerative medicine and synthetic bioengineering.
NASA Astrophysics Data System (ADS)
Polanský, Jiří; Kalmár, László; Gášpár, Roman
2013-12-01
The main aim of this paper is to determine the aerodynamic characteristics of a centrifugal fan with forward-curved blades based on numerical modeling. Three variants of the geometry were investigated. The first, basic variant "A" contains 12 blades. The second variant "B" contains 12 blades and 12 semi-blades of optimal length [1]. The third, control variant "C" contains 24 blades without semi-blades. Numerical calculations were performed with the CFD package ANSYS. A further aim of this paper is to compare the results of the numerical simulation with those of an approximate numerical procedure. The applied approximate procedure [2] is designed to determine the characteristics of turbulent flow in the bladed space of a centrifugal fan impeller; it is an extension of the hydrodynamical cascade theory for incompressible and inviscid fluid flow. The paper also partially compares results from the numerical simulation with results from the experimental investigation. Acoustic phenomena observed during the experiment manifested themselves in the numerical simulation as deteriorated convergence, residual oscillation, and thus flow field oscillation. Pressure pulsations are evaluated by frequency analysis for each variant and working condition.
Simulating propagation of coherent light in random media using the Fredholm type integral equation
NASA Astrophysics Data System (ADS)
Kraszewski, Maciej; Pluciński, Jerzy
2017-06-01
Studying the propagation of light in random scattering materials is important for both basic and applied research. Such studies often require numerical methods for simulating the behavior of light beams in random media. However, if these simulations must account for the coherence properties of light, they become complex numerical problems. There are well-established methods for simulating multiple scattering of light (e.g., radiative transfer theory and Monte Carlo methods), but they do not treat the coherence properties of light directly. Some variants of these methods can predict the behavior of coherent light, but only for an averaged realization of the scattering medium. This limits their application in studying physical phenomena tied to a specific distribution of scattering particles (e.g., laser speckle). In general, numerical simulation of coherent light propagation in a specific realization of a random medium is a time- and memory-consuming problem. The goal of the presented research was to develop a new, efficient method for solving this problem. The method, presented in our earlier works, is based on solving a Fredholm-type integral equation that describes the multiple-scattering process. This equation can be discretized and solved numerically using various algorithms, e.g., by directly solving the corresponding linear system, or by using iterative or Monte Carlo solvers. Here we present recent developments of this method, including its comparison with well-known analytical results and finite-difference-type simulations. We also present an extension of the method to multiple scattering of polarized light by large spherical particles, which joins the presented mathematical formalism with Mie theory.
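The discretize-and-solve route the abstract describes can be shown on a toy problem (the kernel below is a checkable textbook example, not the paper's scattering kernel): Nyström discretization turns a Fredholm equation of the second kind into a dense linear system solved directly.

```python
import numpy as np

# Sketch (toy kernel, not the scattering kernel from the paper): Nystrom
# discretization of a Fredholm equation of the second kind,
#   phi(x) = f(x) + lambda * \int_0^1 K(x, t) phi(t) dt,
# reduced to a linear system (I - lambda*K*W) phi = f and solved directly.
def solve_fredholm(K, f, lam, n=200):
    x, w = np.polynomial.legendre.leggauss(n)   # Gauss-Legendre on [-1, 1]
    x = 0.5 * (x + 1.0)                         # map nodes to [0, 1]
    w = 0.5 * w
    A = np.eye(n) - lam * K(x[:, None], x[None, :]) * w[None, :]
    return x, np.linalg.solve(A, f(x))

# Separable test kernel K(x,t) = x*t with f(x) = x has the exact solution
# phi(x) = 1.5*x when lambda = 1.
x, phi = solve_fredholm(lambda x, t: x * t, lambda x: x, lam=1.0)
```

For a realistic medium the kernel couples every scatterer to every other, which is exactly why direct solution becomes memory-bound and iterative or Monte Carlo solvers become attractive, as the abstract notes.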
Madzak, Catherine
2018-06-25
Yarrowia lipolytica is an oleaginous saccharomycetous yeast with a long history of industrial use. It aroused interest several decades ago as a host for heterologous protein production. Thanks to the development of numerous molecular and genetic tools, Y. lipolytica is now a recognized system for expressing heterologous genes and secreting the corresponding proteins of interest. As genomic and transcriptomic tools have increased our basic knowledge of this yeast, we can now envision engineering its metabolic pathways for use as a whole-cell factory in various bioconversion processes. Y. lipolytica is currently being developed as a workhorse for biotechnology, notably for single-cell oil production and the upgrading of industrial wastes into valuable products. As it becomes more and more difficult to keep up with the ever-increasing literature on Y. lipolytica engineering technology, this article aims to provide basic and up-to-date knowledge on this research area. The most useful reviews on Y. lipolytica biology, use, and safety are highlighted, together with a summary of the engineering tools available in this yeast. This mini-review then focuses on recently developed tools and engineering strategies, with particular emphasis on promoter tuning, metabolic pathway assembly, and genome-editing technologies.
Darquenne, Chantal; Fleming, John S; Katz, Ira; Martin, Andrew R; Schroeter, Jeffry; Usmani, Omar S; Venegas, Jose; Schmid, Otmar
2016-04-01
Development of a new drug for the treatment of lung disease is a complex and time consuming process involving numerous disciplines of basic and applied sciences. During the 2015 Congress of the International Society for Aerosols in Medicine, a group of experts including aerosol scientists, physiologists, modelers, imagers, and clinicians participated in a workshop aiming at bridging the gap between basic research and clinical efficacy of inhaled drugs. This publication summarizes the current consensus on the topic. It begins with a short description of basic concepts of aerosol transport and a discussion on targeting strategies of inhaled aerosols to the lungs. It is followed by a description of both computational and biological lung models, and the use of imaging techniques to determine aerosol deposition distribution (ADD) in the lung. Finally, the importance of ADD to clinical efficacy is discussed. Several gaps were identified between basic science and clinical efficacy. One gap between scientific research aimed at predicting, controlling, and measuring ADD and the clinical use of inhaled aerosols is the considerable challenge of obtaining, in a single study, accurate information describing the optimal lung regions to be targeted, the effectiveness of targeting determined from ADD, and some measure of the drug's effectiveness. Other identified gaps were the language and methodology barriers that exist among disciplines, along with the significant regulatory hurdles that need to be overcome for novel drugs and/or therapies to reach the marketplace and benefit the patient. Despite these gaps, much progress has been made in recent years to improve clinical efficacy of inhaled drugs. Also, the recent efforts by many funding agencies and industry to support multidisciplinary networks including basic science researchers, R&D scientists, and clinicians will go a long way to further reduce the gap between science and clinical efficacy.
The Nonlinear Dynamic Response of an Elastic-Plastic Thin Plate under Impulsive Loading,
1987-06-11
Among those numerical methods, the finite element method is the most effective one. The method presented in this paper is an "influence function" numerical method ... its computational time is much less than that of the finite element method, and its precision is also higher. II. Basic Assumptions and the Influence Function of a Simply Supported Plate ... calculation (Fig. 1). 2. The Influence Function of a Simply Supported Plate. The equation of motion of a thin plate can be written as D∇⁴w + ρh ∂²w/∂t² = q(t) (1)
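The modal solutions underlying influence-function methods for simply supported plates follow from the classical thin-plate equation; a minimal sketch of the closed-form natural frequencies is below (the material values are illustrative assumptions, not from this report).

```python
import math

# Hedged sketch of the classical result behind the influence-function method:
# for a simply supported rectangular plate governed by
#   D * grad^4(w) + rho*h * w_tt = q(t),
# the natural frequencies have the closed form below. Values are illustrative.
def plate_frequency(m, n, a, b, D, rho_h):
    """Natural frequency (rad/s) of mode (m, n) of a simply supported plate."""
    return math.pi ** 2 * ((m / a) ** 2 + (n / b) ** 2) * math.sqrt(D / rho_h)

# 1 m x 1 m steel-like plate, 1 cm thick (assumed): D ~ 1.83e4 N m,
# rho*h ~ 78.5 kg/m^2.
w11 = plate_frequency(1, 1, 1.0, 1.0, 1.83e4, 78.5)
w22 = plate_frequency(2, 2, 1.0, 1.0, 1.83e4, 78.5)
# For a square plate, mode (2,2) is exactly four times mode (1,1).
```

Influence-function methods superpose such modes against the impulsive load, which is why they can undercut finite element cost for simple geometries.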
Artificial neural network in cosmic landscape
NASA Astrophysics Data System (ADS)
Liu, Junyu
2017-12-01
In this paper we propose that artificial neural networks, the basis of machine learning, are useful for generating the inflationary landscape from a cosmological point of view. Traditional numerical simulations of a global cosmic landscape typically require exponential complexity when the number of fields is large. However, a basic application of an artificial neural network can solve the problem, based on the universal approximation theorem for the multilayer perceptron. A toy model of inflation with multiple light fields is investigated numerically as an example of such an application.
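The universal-approximation idea the abstract invokes can be illustrated with a deliberately simple variant (not the paper's setup): a single hidden layer of random tanh features whose output weights are fit by least squares, approximating a hypothetical two-field potential far more cheaply than tabulating it on a dense grid.

```python
import numpy as np

# Minimal illustration (not the paper's method) of universal approximation:
# a one-hidden-layer network with random tanh features, output weights fit by
# least squares, approximates a toy two-field potential V(p1, p2).
rng = np.random.default_rng(0)

def toy_potential(phi):
    # Hypothetical smooth landscape slice, purely for demonstration.
    return np.cos(phi[:, 0]) + 0.5 * phi[:, 1] ** 2

phi_train = rng.uniform(-2.0, 2.0, size=(2000, 2))
y_train = toy_potential(phi_train)

W = rng.normal(size=(2, 200))               # random, frozen hidden weights
b = rng.normal(size=200)
H = np.tanh(phi_train @ W + b)              # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y_train, rcond=None)  # fit output weights

phi_test = rng.uniform(-2.0, 2.0, size=(500, 2))
pred = np.tanh(phi_test @ W + b) @ beta
err = np.max(np.abs(pred - toy_potential(phi_test)))
```

A trained multilayer perceptron, as in the paper, would fit all weights by gradient descent; the point here is only that a modest number of neurons already captures a smooth multi-field function.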
Computer modeling of test particle acceleration at oblique shocks
NASA Technical Reports Server (NTRS)
Decker, Robert B.
1988-01-01
This paper evaluates the basic techniques and illustrative results of numerical codes suitable for modeling charged-particle acceleration at oblique, fast-mode collisionless shocks, emphasizing the treatment of ions as test particles whose dynamics are calculated by numerical integration along exact phase-space orbits. Attention is given to the acceleration of particles at planar, infinitesimally thin shocks, as well as to plasma simulations in which low-energy ions are injected and accelerated at quasi-perpendicular shocks with internal structure.
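The test-particle approach amounts to integrating each ion's equation of motion in prescribed fields. A common choice for such orbit integration is the Boris pusher, sketched below in a uniform magnetic field so the result is checkable (the shock fields of the actual codes are not reproduced here); in a pure magnetic field the Boris rotation conserves kinetic energy exactly.

```python
import numpy as np

# Sketch of test-particle orbit integration with the Boris pusher (a standard
# scheme; the abstract does not name the integrator used). Uniform B field
# chosen for checkability: the particle should gyrate at constant speed.
def boris_push(x, v, qm, E, B, dt, steps):
    """Advance position x and velocity v; qm is charge-to-mass ratio."""
    traj = []
    for _ in range(steps):
        v_minus = v + 0.5 * qm * E * dt           # first half electric kick
        t = 0.5 * qm * B * dt                     # rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)  # magnetic rotation, step 1
        v_plus = v_minus + np.cross(v_prime, s)   # magnetic rotation, step 2
        v = v_plus + 0.5 * qm * E * dt            # second half electric kick
        x = x + v * dt
        traj.append((x.copy(), v.copy()))
    return traj

traj = boris_push(np.zeros(3), np.array([1.0, 0.0, 0.0]), qm=1.0,
                  E=np.zeros(3), B=np.array([0.0, 0.0, 1.0]),
                  dt=0.05, steps=500)
speeds = [np.linalg.norm(v) for _, v in traj]
```

Shock-drift and diffusive acceleration emerge when E and B are made discontinuous across a planar shock front, which is the configuration the abstract describes.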
Biswas, Mithun; Islam, Rafiqul; Shom, Gautam Kumar; Shopon, Md; Mohammed, Nabeel; Momen, Sifat; Abedin, Anowarul
2017-06-01
BanglaLekha-Isolated, a dataset of Bangla handwritten isolated characters, is presented in this article. The dataset contains 84 different characters: 50 Bangla basic characters, 10 Bangla numerals, and 24 selected compound characters. For each of the 84 characters, 2000 handwriting samples were collected, digitized, and pre-processed. After discarding mistakes and scribbles, 166,105 handwritten character images were included in the final dataset. The dataset also includes labels indicating the age and the gender of the subjects from whom the samples were collected. It can be used not only for optical handwriting recognition research but also to explore the influence of gender and age on handwriting. The dataset is publicly available at https://data.mendeley.com/datasets/hf6sf8zrkc/2.
Memcapacitor model and its application in chaotic oscillator with memristor.
Wang, Guangyi; Zang, Shouchi; Wang, Xiaoyuan; Yuan, Fang; Iu, Herbert Ho-Ching
2017-01-01
Memristors and memcapacitors are two new nonlinear elements with memory. In this paper, we present a Hewlett-Packard memristor model and a charge-controlled memcapacitor model, and design a new chaotic oscillator based on the two models for exploring the characteristics of memristors and memcapacitors in nonlinear circuits. Furthermore, many basic dynamical behaviors of the oscillator, including equilibrium sets, Lyapunov exponent spectra, and bifurcations with various circuit parameters, are investigated theoretically and numerically. Our analysis shows that the proposed oscillator possesses complex dynamics such as an infinite number of equilibria, coexisting oscillations, and multi-stability. Finally, a discrete model of the chaotic oscillator is given, and its main statistical properties are verified via Digital Signal Processing chip experiments and National Institute of Standards and Technology tests.
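The Hewlett-Packard memristor model the paper builds on can be sketched as follows (parameter values are illustrative, and this is the widely used linear-dopant-drift form, not necessarily the exact variant in the paper): the dopant-boundary state w drives a memristance that interpolates between R_on and R_off.

```python
import numpy as np

# Sketch of the linear-dopant-drift HP memristor model: state w obeys
# dw/dt = mu * R_on / D**2 * i(t), memristance M = R_on*w/D + R_off*(1-w/D).
# Parameter values below are illustrative, not taken from the paper.
def simulate_memristor(i_of_t, dt, steps, D=1e-8, w0=0.5e-8,
                       R_on=100.0, R_off=16e3, mu=1e-14):
    w, t = w0, 0.0
    M_hist = []
    for _ in range(steps):
        w += dt * mu * R_on / D ** 2 * i_of_t(t)
        w = min(max(w, 0.0), D)                   # hard state limits [0, D]
        M_hist.append(R_on * w / D + R_off * (1.0 - w / D))
        t += dt
    return M_hist

# Two cycles of a small 50 Hz sinusoidal drive current:
M = simulate_memristor(lambda t: 1e-10 * np.sin(2 * np.pi * 50 * t),
                       dt=1e-5, steps=4000)
```

Coupling this state equation to a charge-controlled memcapacitance inside an oscillator loop is what produces the equilibrium sets and multi-stability the paper analyzes.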
The fractional dynamics of quantum systems
NASA Astrophysics Data System (ADS)
Lu, Longzhao; Yu, Xiangyang
2018-05-01
The fractional dynamics of quantum systems is a novel and complicated problem. Establishing a fractional dynamic model is a significant attempt that is expected to reveal the mechanism of fractional quantum systems. In this paper, a generalized time-fractional Schrödinger equation is proposed. To study the fractional dynamics of quantum systems, we take the two-level system as an example and derive its time-fractional equations of motion. The basic properties of the system are investigated by solving this set of equations analytically in the absence of a light field. When the system is subject to a light field, the equations are solved numerically. The results show that the two-level system described by the proposed time-fractional Schrödinger equation is a confirmable system.
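Numerical solution of time-fractional equations of motion typically rests on a discretization of the fractional derivative. A generic sketch (this is a standard scheme, not the authors' method) is the Grünwald-Letnikov discretization, shown here on the scalar fractional relaxation equation; at α = 1 it must reduce to ordinary exponential decay, which makes it checkable.

```python
import math

# Hedged numerical sketch (a generic scheme, not the paper's method):
# Grunwald-Letnikov discretization of the fractional derivative, applied to
# the fractional relaxation equation D^alpha y = -lam * y.
def gl_relaxation(alpha, lam, y0, h, steps):
    # GL binomial weights g_k = (-1)^k * C(alpha, k), built recursively.
    g = [1.0]
    for k in range(1, steps + 1):
        g.append(g[-1] * (k - 1 - alpha) / k)
    y = [y0]
    ha = h ** alpha
    for n in range(1, steps + 1):
        # GL formula: sum_{k=0}^{n} g_k y_{n-k} = h^alpha * (-lam * y_n);
        # solve for y_n using g_0 = 1.
        hist = sum(g[k] * y[n - k] for k in range(1, n + 1))
        y.append(-hist / (1.0 + lam * ha))
    return y

# Sanity check: alpha = 1 recovers implicit-Euler exponential decay,
# so y(1) should be close to exp(-1).
y = gl_relaxation(alpha=1.0, lam=1.0, y0=1.0, h=0.01, steps=100)
```

For α < 1 the full history sum is what makes fractional dynamics memory-laden and numerically expensive, which is the practical difficulty such two-level computations face.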
Medical students' note-taking in a medical biochemistry course: an initial exploration.
Morrison, Elizabeth H; McLaughlin, Calvin; Rucker, Lloyd
2002-04-01
Beginning medical students spend numerous hours every week attending basic science lectures and taking notes. Medical faculty often wonder whether they should give students pre-printed instructors' notes before lectures. Proponents of this strategy argue that provided notes enhance learning by facilitating the accurate transmission of information, while opponents counter that provided notes inhibit students' cognitive processing or even discourage students from attending lectures. Little if any research has directly addressed medical students' note-taking or the value of providing instructors' notes. The educational literature does suggest that taking lecture notes enhances university students' learning. University students perform best on post-lecture testing if they review a combination of provided notes and their own personal notes, particularly if the provided notes follow a 'skeletal' format that encourages active note-taking.
U.S. Geological Survey programs and investigations related to soil and water conservation
Osterkamp, W.R.; Gray, J.R.
2001-01-01
The U.S. Geological Survey has a rich tradition of collecting hydrologic data, especially for fluxes of water and suspended sediment, that provide a foundation for studies of soil and water conservation. Applied and basic research has included investigations of the effects of land use on rangelands, croplands, and forests; hazards mapping; derivation of flood and drought frequency, and other statistics related to streamflow and reservoir storage; development and application of models of rainfall-runoff relations, chemical quality, and sediment movement; and studies of the interactive processes of overland and channel flow with vegetation. Networks of streamgaging stations and (or) sampling sites within numerous drainage basins are yielding information that extends databases and enhances the ability to use those data for interpretive studies.
Further analytical study of hybrid rocket combustion
NASA Technical Reports Server (NTRS)
Hung, W. S. Y.; Chen, C. S.; Haviland, J. K.
1972-01-01
Analytical studies of the transient and steady-state combustion processes in a hybrid rocket system are discussed. The particular system chosen consists of a gaseous oxidizer flowing within a tube of solid fuel, resulting in heterogeneous combustion. Finite-rate chemical kinetics with appropriate reaction mechanisms were incorporated in the model. A temperature-dependent Arrhenius-type fuel surface regression rate equation was chosen for the current study. The governing mathematical equations employed for the reacting gas phase and for the solid phase are the general two-dimensional, time-dependent conservation equations in a cylindrical coordinate system. Keeping the simplifying assumptions to a minimum, these basic equations were programmed for numerical computation using two implicit finite-difference schemes: the Lax-Wendroff scheme for the gas phase and the Crank-Nicolson scheme for the solid phase.
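The Crank-Nicolson scheme the abstract assigns to the solid phase can be sketched on 1-D heat conduction (a simplification of the solid-fuel problem; geometry and values below are assumed, not from the paper): it averages the explicit and implicit updates, giving second-order accuracy and unconditional stability.

```python
import numpy as np

# Sketch of the Crank-Nicolson scheme for 1-D heat conduction
# u_t = kappa * u_xx with fixed-temperature (Dirichlet) ends.
# A dense solve is used for clarity; a tridiagonal solver would be idiomatic.
def crank_nicolson_heat(u0, kappa, dx, dt, steps):
    n = len(u0)
    r = kappa * dt / (2.0 * dx ** 2)
    A = np.eye(n) * (1 + 2 * r)               # implicit (new-time) side
    B = np.eye(n) * (1 - 2 * r)               # explicit (old-time) side
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = -r
        B[i, i + 1] = B[i + 1, i] = r
    # Dirichlet boundaries: hold the end values fixed.
    A[0, :] = 0.0; A[0, 0] = 1.0; B[0, :] = 0.0; B[0, 0] = 1.0
    A[-1, :] = 0.0; A[-1, -1] = 1.0; B[-1, :] = 0.0; B[-1, -1] = 1.0
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        u = np.linalg.solve(A, B @ u)
    return u

# Decaying sine mode with known solution u = sin(pi x) * exp(-pi^2 t):
x = np.linspace(0.0, 1.0, 51)
u_final = crank_nicolson_heat(np.sin(np.pi * x), kappa=1.0,
                              dx=x[1] - x[0], dt=0.01, steps=20)
# At t = 0.2 the midpoint should be near exp(-pi^2 * 0.2) ~ 0.139.
```

In the actual hybrid-rocket model, this conduction solve is coupled at the surface to the Arrhenius regression-rate law and the gas-phase Lax-Wendroff update.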
Structure and Function of Mammalian Carbohydrate-Lectin Interactions
NASA Astrophysics Data System (ADS)
Anderson, Kevin; Evers, David; Rice, Kevin G.
Over the past three decades the field of glycobiology has expanded beyond a basic understanding of the structure and biosynthesis of glycoproteins, proteoglycans, and glycolipids toward a more detailed picture of how these molecules afford communication through binding to mammalian lectins. Although the number of different mammalian lectin domains appears to be finite, and even much smaller than early estimates predicted based on the diversity of glycan structures, nature appears capable of using these domains in numerous combinations to fine-tune specificity. The following provides an overview of the major classes of mammalian lectins and discusses their glycan binding specificity. The review offers a snapshot of the field of glycobiology, which continues to grow and to provide an increasing number of examples of biological processes that rely upon glycan-lectin binding.
An introduction to three-dimensional climate modeling
NASA Technical Reports Server (NTRS)
Washington, W. M.; Parkinson, C. L.
1986-01-01
The development and use of three-dimensional computer models of the earth's climate are discussed. The processes and interactions of the atmosphere, oceans, and sea ice are examined. The basic theory of climate simulation, which includes the fundamental equations, models, and numerical techniques for simulating the atmosphere, oceans, and sea ice, is described. Simulated wind, temperature, precipitation, ocean current, and sea ice distribution data are presented and compared to observational data. The responses of the climate to various environmental changes, such as variations in solar output or increases in atmospheric carbon dioxide, are modeled. Future developments in climate modeling are considered. Information is also provided on the derivation of the energy equation, the finite difference barotropic forecast model, the spectral transform technique, and the finite difference shallow water wave equation model.
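The shallow water wave equations mentioned at the end are the standard pedagogical core of such texts; a minimal sketch (1-D, linearized, with assumed parameters, not the book's full model) steps them with forward-backward differencing on a periodic domain.

```python
import numpy as np

# Sketch of a finite-difference shallow-water model of the kind such texts
# derive: linearized 1-D equations u_t = -g h_x, h_t = -H u_x on a periodic
# domain, using forward-backward time differencing. Parameters are assumed.
def shallow_water_step(u, h, g, H, dx, dt):
    # Forward-backward scheme: update u from current h, then h from new u.
    u = u - g * dt * (np.roll(h, -1) - np.roll(h, 1)) / (2.0 * dx)
    h = h - H * dt * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
    return u, h

n = 200
x = np.linspace(0.0, 1.0e6, n, endpoint=False)   # 1000 km periodic channel
dx = x[1] - x[0]
g, H = 9.81, 100.0                               # gravity; mean depth (m)
dt = 0.5 * dx / np.sqrt(g * H)                   # CFL-limited time step
u = np.zeros(n)
h = np.exp(-((x - 5e5) / 5e4) ** 2)              # initial height bump
h0_total = h.sum()                               # centered fluxes conserve mass
for _ in range(200):
    u, h = shallow_water_step(u, h, g, H, dx, dt)
```

The bump splits into two gravity waves traveling at sqrt(g*H) ~ 31 m/s; the centered flux form conserves total h exactly, a property real models engineer carefully.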
Traditional Chinese Biotechnology
NASA Astrophysics Data System (ADS)
Xu, Yan; Wang, Dong; Fan, Wen Lai; Mu, Xiao Qing; Chen, Jian
The earliest industrial biotechnology originated in ancient China and developed into a vibrant industry in traditional Chinese liquor, rice wine, soy sauce, and vinegar. It is now a significant component of the Chinese economy valued annually at about 150 billion RMB. Although the production methods had existed and remained basically unchanged for centuries, modern developments in biotechnology and related fields in the last decades have greatly impacted on these industries and led to numerous technological innovations. In this chapter, the main biochemical processes and related technological innovations in traditional Chinese biotechnology are illustrated with recent advances in functional microbiology, microbial ecology, solid-state fermentation, enzymology, chemistry of impact flavor compounds, and improvements made to relevant traditional industrial facilities. Recent biotechnological advances in making Chinese liquor, rice wine, soy sauce, and vinegar are reviewed.
NASA Astrophysics Data System (ADS)
Sever, Gokhan
A series of systematic two- and three-dimensional (2D/3D) idealized numerical experiments was conducted to investigate the combined effects of dynamical and physical processes on orographic precipitation (OP) with varying incoming basic flow speed (U) and CAPE in a conditionally unstable uniform flow. The three moist flow regimes identified by Chu and Lin are reproduced using the CM1 model in low-resolution (Δx = 1 km) 2D simulations. A new flow regime, Regime IV (U > 36 m/s), is characterized by gravity waves, heavy precipitation, and a lack of upper-level wave breaking and turbulence over the lee slope. The regime transition from III to IV at about 36 m/s can be explained by the transition from upward-propagating gravity waves to evanescent flow, which can be predicted using a moist mountain wave theory. Although the basic features are captured well at low grid resolutions, high-resolution (Δx = 100 m) 2D/3D simulations are required to resolve the precipitation distribution and intensity at higher basic winds (U > 30 m/s). These findings may be applied to examine the performance of moist and turbulence parameterization schemes. Based on the 3D simulations, gravity-wave-induced severe downslope winds and turbulent mixing within the hydraulic jump reduce OP in Regime III. In Regime IV, the precipitation amount and spatial extent intensify as the upper-level wave breaking vanishes and updrafts strengthen. Similar experiments were performed with a low-CAPE sounding to assess the evolution of OP in an environment similar to that observed in tropical cyclones. These low-CAPE simulations show that precipitation is nearly doubled at high wind speeds compared with the high-CAPE results. 
Based on a microphysics budget analysis, two factors explain this difference: 1) warm-rain formation processes (autoconversion and accretion), which are more effective in the low-CAPE environment, and 2) although rain production (via graupel and snow melting) is intense at high CAPE, strong downdrafts and advection-induced evaporation tend to deplete precipitation before it reaches the ground. Overall, in both the 2D and 3D high-wind-speed simulations, the precipitation distribution resembles the bell-shaped mountain profile, with the maximum located over the mountain peak. This result has the potential to simplify the parameterization of OP in terms of two control parameters and might be applicable to global weather and climate modeling.
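The propagating-to-evanescent transition invoked for the Regime III to IV change can be sketched with linear mountain-wave theory (the stability, mountain width, and resulting 40 m/s transition speed below are illustrative assumptions, not the paper's 36 m/s fit).

```python
# Hedged sketch of the regime criterion the abstract invokes: for a mountain
# wave of horizontal wavenumber k, the vertical wavenumber satisfies
#   m^2 = (N_m / U)^2 - k^2,
# with N_m the moist Brunt-Vaisala frequency. Waves propagate vertically for
# m^2 > 0 and become evanescent for m^2 < 0. All numbers here are assumed.
def wave_regime(U, N_m, mountain_half_width):
    k = 1.0 / mountain_half_width        # characteristic wavenumber, 1/a
    m2 = (N_m / U) ** 2 - k ** 2
    return "propagating" if m2 > 0.0 else "evanescent"

N_m = 0.004        # reduced (moist) stability, s^-1 (assumed)
a = 10.0e3         # mountain half-width, m (assumed)
# With these numbers the transition sits at U = N_m * a = 40 m/s:
r_low = wave_regime(30.0, N_m, a)    # slower flow: vertically propagating
r_high = wave_regime(50.0, N_m, a)   # faster flow: evanescent (Regime IV-like)
```

Evanescent flow suppresses the upper-level wave breaking that drives the downslope winds of Regime III, consistent with the intensified precipitation the simulations find in Regime IV.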
Numerical Schemes for the Hamilton-Jacobi and Level Set Equations on Triangulated Domains
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Sethian, James A.
1997-01-01
Borrowing from techniques developed for conservation law equations, numerical schemes which discretize the Hamilton-Jacobi (H-J), level set, and Eikonal equations on triangulated domains are presented. The first scheme is a provably monotone discretization for certain forms of the H-J equations. Unfortunately, the basic scheme lacks proper Lipschitz continuity of the numerical Hamiltonian. By employing a virtual edge flipping technique, Lipschitz continuity of the numerical flux is restored on acute triangulations. Next, schemes are introduced and developed based on the weaker concept of positive coefficient approximations for homogeneous Hamiltonians. These schemes possess a discrete maximum principle on arbitrary triangulations and naturally exhibit proper Lipschitz continuity of the numerical Hamiltonian. Finally, a class of Petrov-Galerkin approximations are considered. These schemes are stabilized via a least-squares bilinear form. The Petrov-Galerkin schemes do not possess a discrete maximum principle but generalize to high order accuracy.
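The flavor of a monotone Hamiltonian discretization can be conveyed with a minimal one-dimensional structured-grid sketch (not the paper's triangulated schemes): the Godunov upwind update for the Eikonal equation |u_x| = f, iterated by Gauss-Seidel sweeping.

```python
import numpy as np

# Hypothetical 1-D illustration: monotone (Godunov) upwind update for
# |u_x| = f(x) with u(0) = u(1) = 0, solved by alternating-direction sweeps.
def eikonal_1d(f, n=101, sweeps=50):
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    u = np.full(n, np.inf)
    u[0] = u[-1] = 0.0           # boundary data
    for _ in range(sweeps):
        for order in (range(1, n - 1), range(n - 2, 0, -1)):
            for i in order:
                # monotone update: causal information comes from the
                # smaller of the two neighboring values
                u[i] = min(u[i], min(u[i - 1], u[i + 1]) + f[i] * h)
    return x, u

f = np.ones(101)                 # unit slowness: u is the distance to the boundary
x, u = eikonal_1d(f)             # u approximates min(x, 1 - x)
```

The monotonicity of the update (new values never increase, and each value depends monotonically on its neighbors) is what guarantees convergence to the viscosity solution here; the paper's contribution is extending this property to triangulated domains.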
Math, monkeys, and the developing brain.
Cantlon, Jessica F
2012-06-26
Thirty thousand years ago, humans kept track of numerical quantities by carving slashes on fragments of bone. It took approximately 25,000 y for the first iconic written numerals to emerge among human cultures (e.g., Sumerian cuneiform). Now, children acquire the meanings of verbal counting words, Arabic numerals, written number words, and the procedures of basic arithmetic operations, such as addition and subtraction, in just 6 y (between ages 2 and 8). What cognitive abilities enabled our ancestors to record tallies in the first place? Additionally, what cognitive abilities allow children to rapidly acquire the formal mathematics knowledge that took our ancestors many millennia to invent? Current research aims to discover the origins and organization of numerical information in humans using clues from child development, the organization of the human brain, and animal cognition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowery, P.S.; Lessor, D.L.
Waste glass melter and in situ vitrification (ISV) processes combine electrical, thermal, and fluid flow phenomena to produce a stable waste-form product. Computational modeling of the thermal and fluid flow aspects of these processes provides a useful tool for assessing the potential performance of proposed system designs. These computations can be performed at a fraction of the cost of experiment. Consequently, computational modeling of vitrification systems can also provide an economical means for assessing the suitability of a proposed process application. The computational model described in this paper employs finite difference representations of the basic continuum conservation laws governing the thermal, fluid flow, and electrical aspects of the vitrification process -- i.e., conservation of mass, momentum, energy, and electrical charge. The resulting code is a member of the TEMPEST family of codes developed at the Pacific Northwest Laboratory (operated by Battelle for the US Department of Energy). This paper provides an overview of the numerical approach employed in TEMPEST. In addition, results from several TEMPEST simulations of sample waste glass melter and ISV processes are provided to illustrate the insights to be gained from computational modeling of these processes. 3 refs., 13 figs.
NASA Astrophysics Data System (ADS)
Li, H; Yang, H; Zhan, M
2009-04-01
Thin-walled tube bending is an advanced technology for producing precision bent tube parts in the aerospace, aviation, and automotive industries. With increasing demands for bending tubes with a larger tube diameter and a smaller bending radius, wrinkling instability is a critical issue that must be solved urgently to improve the bending limit and forming quality in this process. In this study, using the energy principle combined with analytical and finite element (FE) numerical methods, an energy-based wrinkling prediction model for thin-walled tube bending is developed. A segment shell model is proposed to consider the critical wrinkling region, which captures the deformation features of the tube bending process. The dissipation energy created by the reaction forces at the tube-dies interface for restraining the compressive instability is also included in the prediction model, and can be numerically calculated via FE simulation. The validation of the model is performed and its physical significance is evaluated from various aspects. Then the plastic wrinkling behaviors in thin-walled tube bending are addressed. From the energy viewpoint, the effect of the basic parameters, including the geometrical and material parameters, on the onset of wrinkling is identified. In particular, the influence of multi-tool constraints such as clearance and friction at various interfaces on the wrinkling instability is obtained. The study provides an instructive understanding of plastic wrinkling instability, and the model may be suitable for the wrinkling prediction of a doubly-curved shell in complex forming processes with contact conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sherman, G.J.; Zmierski, M.L.
1994-09-01
US Steel Iron Producing Div. consists of four operating blast furnaces ranging in process control capabilities from 1950s- and 1960s-era hardware to state-of-the-art technology. The oldest control system consists of a large number of panels containing numerous relays, indicating lights, selector switches, push buttons, analog controllers, strip chart recorders and annunciators. In contrast, the state-of-the-art control system utilizes remote I/O, two sets of redundant PLCs, a redundant charge director computer, a redundant distributed control system, a high-resolution video-graphic display system and a supervisory computer for real-time data acquisition. Process data are collected and archived on two DEC VAX computers, one for No. 13 blast furnace and the other for the three south end furnaces. Historical trending, data analysis and reporting are available to iron producing personnel through terminals and PCs connected directly to the systems, dial-up modems and various network configurations. These two machines are part of the iron producing network, which allows them to pass and receive information from each other as well as numerous other sources throughout the division. This configuration allows personnel to access most pertinent furnace information from a single source. The basic objective of the control systems is to charge raw materials to the top of the furnace at aim weights and sequence, while maintaining blast conditions at the bottom of the furnace at required temperature, pressure and composition. Control changes by the operators are primarily supervisory, based on review of system-generated plots and tables.
Holloway, Ian D; Battista, Christian; Vogel, Stephan E; Ansari, Daniel
2013-03-01
The ability to process the numerical magnitude of sets of items has been characterized in many animal species. Neuroimaging data have associated this ability to represent nonsymbolic numerical magnitudes (e.g., arrays of dots) with activity in the bilateral parietal lobes. Yet the quantitative abilities of humans are not limited to processing the numerical magnitude of nonsymbolic sets. Humans have used this quantitative sense as the foundation for symbolic systems for the representation of numerical magnitude. Although numerical symbol use is widespread in human cultures, the brain regions involved in processing of numerical symbols are just beginning to be understood. Here, we investigated the brain regions underlying the semantic and perceptual processing of numerical symbols. Specifically, we used an fMRI adaptation paradigm to examine the neural response to Hindu-Arabic numerals and Chinese numerical ideographs in a group of Chinese readers who could read both symbol types and a control group who could read only the numerals. Across groups, the Hindu-Arabic numerals exhibited ratio-dependent modulation in the left IPS. In contrast, numerical ideographs were associated with activation in the right IPS, exclusively in the Chinese readers. Furthermore, processing of the visual similarity of both digits and ideographs was associated with activation of the left fusiform gyrus. Using culture as an independent variable, we provide clear evidence for differences in the brain regions associated with the semantic and perceptual processing of numerical symbols. Additionally, we reveal a striking difference in the laterality of parietal activation between the semantic processing of the two symbol types.
Gągol, Michał; Przyjazny, Andrzej; Boczkaj, Grzegorz
2018-07-01
Cavitation has become one of the most often applied methods in a number of industrial technologies. In the case of oxidation of organic pollutants occurring in the aqueous medium, cavitation forms the basis of numerous advanced oxidation processes (AOPs). This paper presents the results of investigations on the efficiency of oxidation of the following groups of organic compounds: organosulfur compounds, nitro derivatives of benzene, BTEX, and phenol and its derivatives in a basic model effluent using hydrodynamic and acoustic cavitation combined with external oxidants, i.e., hydrogen peroxide, ozone and peroxone. The studies revealed that the combination of cavitation with additional oxidants allows 100% oxidation of the investigated model compounds. However, individual treatments differed with respect to the rate of degradation. Hydrodynamic cavitation aided by peroxone was found to be the most effective treatment (100% oxidation of all the investigated compounds in 60 min). When using hydrodynamic or acoustic cavitation alone, the effectiveness of oxidation varied. Under these conditions, nitro derivatives of benzene and phenol and its derivatives were found to be resistant to oxidation. In addition, hydrodynamic cavitation was found to be more effective in degradation of the model compounds than acoustic cavitation. The results of the investigations presented in this paper compare favorably with the investigations on degradation of organic contaminants using AOPs under conditions of basic pH published thus far. Copyright © 2018 Elsevier B.V. All rights reserved.
MINE DESIGN, OPERATIONS & CLOSURE CONFERENCE 2005
A one-day short course will demonstrate the usefulness of environmental modeling with respect to understanding mining-related impacts. It will focus on the development aspects of modeling rather than the numerical computations. The course will encompass the basics of sensitivity anal...
ERIC Educational Resources Information Center
Wilson, Paul
The numerous forms filed with the Federal Communications Commission (FCC) provide information about a variety of topics. Basic licensing information that is available concerns engineering, ownership, and equal employment opportunity. The FCC's broadcast bureau collects information about programing, the ascertainment of community needs, public…
Numerical Optimization Algorithms and Software for Systems Biology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saunders, Michael
2013-02-02
The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.
Risk Aversion and the Value of Information.
ERIC Educational Resources Information Center
Eeckhoudt, Louis; Godfroid, Phillippe
2000-01-01
Explains why risk aversion does not always induce a greater information value, but may instead induce a lower information value as risk aversion increases. Presents a basic model defining the concept of perfect information value and providing a numerical illustration. Includes references. (CMK)
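The concept of perfect information value mentioned above can be sketched with a toy decision problem (the states, actions, and payoffs below are invented for illustration, not taken from the article): the expected value of perfect information (EVPI) is the gain from choosing an action after the state is revealed rather than before.

```python
# Toy EVPI computation for a risk-neutral decision maker.
p = 0.5                                   # probability of state "good"
payoff = {("invest", "good"): 100, ("invest", "bad"): -50,
          ("hold",   "good"): 10,  ("hold",   "bad"): 10}
actions, states = ("invest", "hold"), ("good", "bad")
prob = {"good": p, "bad": 1 - p}

def ev(a):
    # expected payoff of committing to action a before the state is known
    return sum(prob[s] * payoff[(a, s)] for s in states)

best_without = max(ev(a) for a in actions)   # decide first, then learn: 25
best_with = sum(prob[s] * max(payoff[(a, s)] for a in actions)
                for s in states)             # learn first, then decide: 55
evpi = best_with - best_without
print(evpi)
```

The article's point is that replacing these payoffs with a concave utility (risk aversion) does not necessarily increase this difference; making the agent more risk-averse can shrink it.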
Optimal post-experiment estimation of poorly modeled dynamic systems
NASA Technical Reports Server (NTRS)
Mook, D. Joseph
1988-01-01
Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than those of filters for applications in which the dynamic model error clearly violates the typical process noise assumptions, and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous purposes. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, E.A.; Smed, P.F.; Bryndum, M.B.
The paper describes the numerical program PIPESIN, which simulates the behavior of a pipeline placed on an erodible seabed. Pipeline-seabed interaction (hence the name) is simulated in the time domain, from installation until a stable pipeline-seabed configuration has occurred, including all important physical processes. The program is the result of the joint research project "Free Span Development and Self-lowering of Offshore Pipelines," sponsored by the EU and a group of companies and carried out by the Danish Hydraulic Institute and Delft Hydraulics. The basic modules of PIPESIN are described. The description of the scouring processes has been based on and verified through physical model tests carried out as part of the research project. The program simulates a section of the pipeline (typically 500 m) in the time domain, the main input being time series of the waves and current. The main results include predictions of the onset of free spans, their length distribution, their variation in time, and the lowering of the pipeline as a function of time.
NASA Astrophysics Data System (ADS)
Rodriguez Lucatero, C.; Schaum, A.; Alarcon Ramos, L.; Bernal-Jaquez, R.
2014-07-01
In this study, the dynamics of decisions in complex networks subject to external fields are studied within a Markov process framework using nonlinear dynamical systems theory. A mathematical discrete-time model is derived using a set of basic assumptions regarding the convincement mechanisms associated with two competing opinions. The model is analyzed with respect to the multiplicity of critical points and the stability of extinction states. Sufficient conditions for extinction are derived in terms of the convincement probabilities and the maximum eigenvalues of the associated connectivity matrices. The influences of exogenous (e.g., mass media-based) effects on decision behavior are analyzed qualitatively. The current analysis predicts: (i) the presence of fixed-point multiplicity (with a maximum number of four different fixed points), multi-stability, and sensitivity with respect to the process parameters; and (ii) the bounded but significant impact of exogenous perturbations on the decision behavior. These predictions were verified using a set of numerical simulations based on a scale-free network topology.
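The kind of spectral extinction condition described above can be sketched numerically. The threshold form below (convincement rate times the maximum eigenvalue of the connectivity matrix compared against a forgetting rate) and all parameter values are generic placeholders, not the paper's exact sufficient condition:

```python
import numpy as np

# Generic spectral extinction check on a small undirected network
# (a triangle with a pendant node).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # connectivity matrix

lam_max = max(np.linalg.eigvalsh(A))        # symmetric matrix -> real spectrum
beta, delta = 0.2, 0.6                       # assumed convincement / forgetting rates
extinct = beta * lam_max < delta             # sufficient condition (threshold form assumed)
print(lam_max, extinct)
```

The qualitative message matches the abstract: whether an opinion dies out is governed jointly by the convincement probabilities and the maximum eigenvalue of the connectivity matrix, so denser (larger-eigenvalue) networks make extinction harder.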
Hunt, Pamela S.; Burk, Joshua A.; Barnet, Robert C.
2016-01-01
Adolescence is a time of critical brain changes that pave the way for adult learning processes. However, the extent to which learning in adolescence is best characterized as a transitional linear progression from childhood to adulthood, or represents a period that differs from earlier and later developmental stages, remains unclear. Here we examine behavioral literature on associative fear conditioning and complex choice behavior with rodent models. Many aspects of fear conditioning are intact by adolescence and do not differ from adult patterns. Sufficient evidence, however, suggests that adolescent learning cannot be characterized simply as an immature precursor to adulthood. Across different paradigms assessing choice behavior, literature suggests that adolescent animals typically display more impulsive patterns of responding compared to adults. The extent to which the development of basic conditioning processes serves as a scaffold for later adult decision making is an additional research area that is important for theory, but also has widespread applications for numerous psychological conditions. PMID:27339692
NASA Astrophysics Data System (ADS)
Böberg, L.; Brösa, U.
1988-09-01
Turbulence in a pipe is derived directly from the Navier-Stokes equation. Analysis of numerical simulations revealed that small disturbances called 'mothers' induce other, much stronger disturbances called 'daughters'. Daughters determine the look of turbulence, while mothers control the transfer of energy from the basic flow to the turbulent motion. From a practical point of view, ruling mothers means ruling turbulence. For theory, the mother-daughter process represents a mechanism permitting chaotic motion in a linearly stable system. The mechanism relies on a property of the linearized problem according to which the eigenfunctions become more and more collinear as the Reynolds number increases. The mathematical methods are described, comparisons with experiments are made, mothers and daughters are analyzed in detail, including graphically, and the systematic construction of small systems of differential equations that mimic the nonlinear process as simply as possible is explained. We suggest that more than 20 but fewer than 180 essential degrees of freedom take part in the onset of turbulence.
Symbolic, Nonsymbolic and Conceptual: An Across-Notation Study on the Space Mapping of Numerals.
Zhang, Yu; You, Xuqun; Zhu, Rongjuan
2016-07-01
Previous studies have suggested interconnections between two numeral modalities, symbolic notation and nonsymbolic notation (arrays of dots); both differences and similarities in the processing and representation of the two modalities have been found. However, whether the spatial representation and numeral-space mapping differ between the two modalities has remained uninvestigated. The present study examines whether such differences exist; in particular, how zero, as both a symbolic magnitude numeral and a nonsymbolic conceptual numeral, maps onto space, and whether the mapping occurs automatically at an early stage of numeral information processing. Results of the two experiments demonstrate that the low-level processing of symbolic numerals, including zero, and of nonsymbolic numerals other than zero can map onto space, whereas the low-level processing of nonsymbolic zero as a semantic conceptual numeral cannot, indicating the special status of zero in the numeral domain. The present study indicates that the processing of non-semantic numerals can map onto space, whereas semantic conceptual numerals cannot. © The Author(s) 2016.
NASA Astrophysics Data System (ADS)
Gramelsberger, Gabriele
The scientific understanding of atmospheric processes has been rooted in the mechanical and physical view of nature ever since dynamic meteorology gained ground in the late 19th century. Conceiving the atmosphere as a giant 'air mass circulation engine' entails applying hydro- and thermodynamical theory to the subject in order to describe the atmosphere's behaviour on small scales. But when it comes to forecasting, it turns out that this view is far too complex to be computed. The limitation of analytical methods precludes an exact solution, forcing scientists to make use of numerical simulation. However, simulation introduces two prerequisites to meteorology: First, the partitioning of the theoretical view into two parts-the large-scale behaviour of the atmosphere, and the effects of smaller-scale processes on this large-scale behaviour, so-called parametrizations; and second, the dependency on computational power in order to achieve a higher resolution. The history of today's atmospheric circulation modelling can be reconstructed as the attempt to improve the handling of these basic constraints. It can be further seen as the old schism between theory and application under new circumstances, which triggers a new discussion about the question of how processes may be conceived in atmospheric modelling.
Role of Magnetic Reconnection in Heating Astrophysical Plasmas
NASA Astrophysics Data System (ADS)
Hammoud, M. M.; El Eid, M.; Darwish, M.; Dayeh, M. A.
2017-12-01
The description of plasma in the context of a fluid model reveals the important phenomenon of magnetic reconnection (MGR). This process is thought to be the cause of particle heating and acceleration in various astrophysical phenomena; examples are geomagnetic storms, solar flares, and heating of the solar corona, which is the focus of the present contribution. The magnetohydrodynamic (MHD) approach provides a basic description of MGR. However, the simulation of this process is rather challenging. Although it is not yet established whether waves or reconnection play the dominant role in heating the solar atmosphere, the present goal is to examine the tremendous increase of the temperature between the solar chromosphere and the corona in a very narrow transition region. Since we are dealing with very high-temperature plasma, the modeling of such a heating process seems to require a two-fluid description consisting of ions and electrons. This treatment is an extension of the one-fluid resistive MHD model recently developed by [Hammoud et al., 2017] using the modern OpenFOAM numerical toolbox. In this work, we outline the two-fluid approach under coronal conditions, show evidence of MGR in the two-fluid description, and investigate the temperature increase resulting from this MGR process.
Schmetz, Emilie; Magis, David; Detraux, Jean-Jacques; Barisnikov, Koviljka; Rousselle, Laurence
2018-03-02
The present study aims to assess how the processing of basic visual perceptual (VP) components (length, surface, orientation, and position) develops in typically developing (TD) children (n = 215, 4-14 years old) and adults (n = 20, 20-25 years old), and in children with cerebral palsy (CP) (n = 86, 5-14 years old) using the first four subtests of the Battery for the Evaluation of Visual Perceptual and Spatial processing in children. Experiment 1 showed that these four basic VP processes follow distinct developmental trajectories in typical development. Experiment 2 revealed that children with CP present global and persistent deficits for the processing of basic VP components when compared with TD children matched on chronological age and nonverbal reasoning abilities.
Beltrán-Navarro, Beatriz; Abreu-Mendoza, Roberto A; Matute, Esmeralda; Rosselli, Monica
2018-01-01
This article presents a tool for assessing the early numerical abilities of Spanish-speaking Mexican preschoolers. The Numerical Abilities Test, from the Evaluación Neuropsicológica Infantil-Preescolar (ENI-P), evaluates four core abilities of number development: magnitude comparison, counting, subitizing, and basic calculation. We evaluated 307 Spanish-speaking Mexican children aged 2 years 6 months to 4 years 11 months. Appropriate internal consistency and test-retest reliability were demonstrated. We also investigated the effect of age, children's school attendance, maternal education, and sex on children's numerical scores. The results showed that the four subtests captured development across ages. Critically, maternal education had an impact on children's performance in three out of the four subtests, but there was no effect associated with children's school attendance or sex. These results suggest that the Numerical Abilities Test is a reliable instrument for Spanish-speaking preschoolers. We discuss the implications of our outcomes for numerical development.
Mathematical model for HIV spreads control program with ART treatment
NASA Astrophysics Data System (ADS)
Maimunah; Aldila, Dipo
2018-03-01
In this article, using a deterministic approach in a seven-dimensional nonlinear ordinary differential equation model, we establish a mathematical model for the spread of HIV with an ART treatment intervention. In a simplified model, when no ART treatment is implemented, the disease-free and endemic equilibrium points were established analytically along with the basic reproduction number. The local stability criteria of the disease-free equilibrium and the existence criteria of the endemic equilibrium were analyzed. We find that an endemic equilibrium exists when the basic reproduction number is larger than one. From the sensitivity analysis of the basic reproduction number of the complete model (with ART treatment), we find that increasing the number of infected humans who follow the ART treatment program will reduce the basic reproduction number. We also simulate this result numerically in the autonomous system to show how the treatment intervention reduces the infected population during the intervention period.
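The threshold role of the basic reproduction number can be illustrated with a far simpler generic SIR system (a stand-in sketch, not the article's seven-dimensional HIV/ART model): with forward-Euler integration, an outbreak occurs only when R0 = beta/gamma exceeds one.

```python
# Generic SIR sketch: s and i are susceptible/infected fractions.
def simulate(beta, gamma, days=160, dt=0.1, i0=1e-3):
    s, i = 1.0 - i0, i0
    peak = i
    for _ in range(int(days / dt)):
        ds = -beta * s * i               # new infections leave S
        di = beta * s * i - gamma * i    # ...enter I, recover at rate gamma
        s, i = s + dt * ds, i + dt * di
        peak = max(peak, i)
    return peak

r0_above = simulate(beta=0.3, gamma=0.1)   # R0 = 3: epidemic takes off
r0_below = simulate(beta=0.05, gamma=0.1)  # R0 = 0.5: infection only decays
print(r0_above, r0_below)
```

Interventions such as ART act, in this caricature, by lowering the effective transmission rate beta and hence pushing R0 below one, which is the mechanism the sensitivity analysis in the article quantifies for its full model.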
Patterns of linguistic and numerical performance in aphasia.
Rath, Dajana; Domahs, Frank; Dressel, Katharina; Claros-Salinas, Dolores; Klein, Elise; Willmes, Klaus; Krinzinger, Helga
2015-02-04
Empirical research on the relationship between linguistic and numerical processing has revealed inconsistent results for different levels of cognitive processing (e.g., lexical, semantic) as well as different stimulus materials (e.g., Arabic digits, number words, letters, non-number words). Information on dissociation patterns in aphasic patients was used in order to investigate the dissociability of linguistic and numerical processes. The aim of the present prospective study was a comprehensive, specific, and systematic investigation of the relationships between linguistic and numerical processing, considering the impact of asemantic vs. semantic processing and the type of material employed (numbers compared to letters vs. words). A sample of aphasic patients (n = 60) was assessed with a battery of linguistic and numerical tasks directly comparable in their cognitive processing levels (e.g., perceptual, morpho-lexical, semantic). Mean performance differences and frequencies of (complementary) dissociations in individual patients revealed the most prominent numerical advantage for asemantic tasks when comparing the processing of numbers vs. letters, whereas the least numerical advantage was found for semantic tasks when comparing the processing of numbers vs. words. Different patient subgroups showing differential dissociation patterns were further analysed and discussed. A comprehensive model of linguistic and numerical processing should take these findings into account.
GenASiS Basics: Object-oriented utilitarian functionality for large-scale physics simulations
Cardall, Christian Y.; Budiardja, Reuben D.
2015-06-11
Aside from numerical algorithms and problem setup, large-scale physics simulations on distributed-memory supercomputers require more basic utilitarian functionality, such as physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of this sort of rudimentary functionality, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes compose the Basics division of our developing astrophysics simulation code GenASiS (General Astrophysical Simulation System), but their fundamental nature makes them useful for physics simulations in many fields.
Sokolowski, H Moriah; Fias, Wim; Bosah Ononye, Chuka; Ansari, Daniel
2017-10-01
It is currently debated whether numbers are processed using a number-specific system or a general magnitude processing system, also used for non-numerical magnitudes such as physical size, duration, or luminance. Activation likelihood estimation (ALE) was used to conduct the first quantitative meta-analysis of 93 empirical neuroimaging papers examining neural activation during numerical and non-numerical magnitude processing. Foci were compiled to generate probabilistic maps of activation for non-numerical magnitudes (e.g. physical size), symbolic numerical magnitudes (e.g. Arabic digits), and nonsymbolic numerical magnitudes (e.g. dot arrays). Conjunction analyses revealed overlapping activation for symbolic, nonsymbolic and non-numerical magnitudes in frontal and parietal lobes. Contrast analyses revealed specific activation in the left superior parietal lobule for symbolic numerical magnitudes. In contrast, small regions in the bilateral precuneus were specifically activated for nonsymbolic numerical magnitudes. No regions in the parietal lobes were activated for non-numerical magnitudes that were not also activated for numerical magnitudes. Therefore, numbers are processed using both a generalized magnitude system and format specific number regions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Chen, Dongqin; Xu, Gang; Tang, Weijiang; Jing, Yanjun; Ji, Qiang; Fei, Zhangjun; Lin, Rongcheng
2013-01-01
The critical developmental switch from heterotrophic to autotrophic growth of plants involves light signaling transduction and the production of reactive oxygen species (ROS). ROS function as signaling molecules that regulate multiple developmental processes, including cell death. However, the relationship between light and ROS signaling remains unclear. Here, we identify transcriptional modules composed of the basic helix-loop-helix and bZIP transcription factors PHYTOCHROME-INTERACTING FACTOR1 (PIF1), PIF3, ELONGATED HYPOCOTYL5 (HY5), and HY5 HOMOLOGY (HYH) that bridge light and ROS signaling to regulate cell death and photooxidative response. We show that pif mutants release more singlet oxygen and exhibit more extensive cell death than the wild type during Arabidopsis thaliana deetiolation. Genome-wide expression profiling indicates that PIF1 represses numerous ROS and stress-related genes. Molecular and biochemical analyses reveal that PIF1/PIF3 and HY5/HYH physically interact and coordinately regulate the expression of five ROS-responsive genes by directly binding to their promoters. Furthermore, PIF1/PIF3 and HY5/HYH function antagonistically during the seedling greening process. In addition, phytochromes, cryptochromes, and CONSTITUTIVE PHOTOMORPHOGENIC1 act upstream to regulate ROS signaling. Together, this study reveals that the PIF1/PIF3-HY5/HYH transcriptional modules mediate crosstalk between light and ROS signaling and sheds light on a new mechanism by which plants adapt to the light environments. PMID:23645630
NASA Astrophysics Data System (ADS)
Korpusik, Adam
2017-02-01
We present a nonstandard finite difference scheme for a basic model of cellular immune response to viral infection. The main advantage of this approach is that it preserves the essential qualitative features of the original continuous model (non-negativity and boundedness of the solution, equilibria and their stability conditions), while being easy to implement. All of the qualitative features are preserved independently of the chosen step-size. Numerical simulations of our approach and comparison with other conventional simulation methods are presented.
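The positivity-preserving idea behind such nonstandard schemes can be sketched on the classic three-variable virus-dynamics model (dT/dt = lam - d·T - b·T·V, dI/dt = b·T·V - a·I, dV/dt = k·I - u·V), used here as an assumed stand-in for the paper's cellular-immune-response model, with invented parameter values. Loss terms are discretized implicitly, so every update is a ratio of positive quantities and non-negativity holds for any step size h:

```python
# One NSFD step: production terms explicit, loss terms implicit.
def nsfd_step(T, I, V, h, lam=10.0, d=0.1, b=0.01, a=0.5, k=5.0, u=3.0):
    T_new = (T + h * lam) / (1.0 + h * (d + b * V))       # target cells
    I_new = (I + h * b * T_new * V) / (1.0 + h * a)        # infected cells
    V_new = (V + h * k * I_new) / (1.0 + h * u)            # free virus
    return T_new, I_new, V_new

state = (100.0, 0.0, 1.0)
for _ in range(1000):
    state = nsfd_step(*state, h=5.0)   # large step: solution stays positive and bounded
print(state)
```

A standard explicit Euler step with h this large could drive the populations negative; here each denominator absorbs the loss rate, which is exactly the step-size-independence property the abstract emphasizes.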
Basic linear algebra subprograms for FORTRAN usage
NASA Technical Reports Server (NTRS)
Lawson, C. L.; Hanson, R. J.; Kincaid, D. R.; Krogh, F. T.
1977-01-01
A package of 38 low level subprograms for many of the basic operations of numerical linear algebra is presented. The package is intended to be used with FORTRAN. The operations in the package are dot products, elementary vector operations, Givens transformations, vector copy and swap, vector norms, vector scaling, and the indices of components of largest magnitude. The subprograms and a test driver are available in portable FORTRAN. Versions of the subprograms are also provided in assembly language for the IBM 360/67, the CDC 6600 and CDC 7600, and the Univac 1108.
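These Level-1 operations remain the vocabulary of modern BLAS implementations. As an illustrative sketch (using SciPy's low-level BLAS wrappers rather than the original 1977 FORTRAN package, and with the index convention of `idamax` deliberately left loose since wrapper conventions differ), the dot product, norm, axpy, and largest-magnitude-index routines named in the abstract look like this:

```python
import numpy as np
from scipy.linalg import blas  # thin wrappers over the same Level-1 BLAS routines

x = np.array([3.0, -4.0, 1.0])
y = np.array([1.0, 2.0, 2.0])

dot = blas.ddot(x, y)        # dot product: 3*1 + (-4)*2 + 1*2 = -3
norm = blas.dnrm2(x)         # Euclidean norm: sqrt(9 + 16 + 1)
z = blas.daxpy(x, y, a=2.0)  # elementary vector operation: z = 2*x + y
imax = blas.idamax(x)        # index of the component of largest magnitude
```

The hand-tuned IBM 360, CDC, and Univac assembly versions mentioned in the abstract played the role that vendor-optimized BLAS libraries (OpenBLAS, MKL) play today.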
NASA Technical Reports Server (NTRS)
Demerdash, N. A. O.
1976-01-01
The modes of operation of the brushless d.c. machine and its corresponding characteristics (current flow, torque-position, etc.) are presented. The foundations and basic principles on which the preliminary numerical model is based are discussed.
Gene expression analysis between planktonic and biofilm states of Flavobacterium columnare
USDA-ARS's Scientific Manuscript database
Flavobacterium columnare, the causative agent of columnaris disease, causes substantial mortality worldwide in numerous freshwater finfish species. Due to its global significance and impact on the aquaculture industry, continual efforts to better understand basic mechanisms that contribute to disease ...
Sickeningly sweet: L-rhamnose stimulates Flavobacterium columnare biofilm formation and virulence
USDA-ARS's Scientific Manuscript database
Flavobacterium columnare, the causative agent of columnaris disease, causes substantial mortality worldwide in numerous freshwater finfish species. Due to its global significance and impact on the aquaculture industry, continual efforts to better understand basic mechanisms that contribute to disease ...
Sharifi, Hamid; Larouche, Daniel
2014-01-01
To study the variation of the mechanical behavior of binary aluminum-copper alloys with respect to their microstructure, a numerical simulation of their granular structure was carried out. The microstructures are created by repeated inclusion of predefined basic grain shapes into a representative volume element until a given volume percentage of the α-phase is reached. Depending on the grain orientations, coalescence of the grains can be performed. Different granular microstructures are created by using different basic grain shapes. By selecting a suitable set of basic grain shapes, the modeled microstructure exhibits a realistic aluminum alloy microstructure which can be adapted to a particular cooling condition. Our granular models are automatically converted to a finite element model. The effect of grain shapes and sizes on the variation of elastic modulus and plasticity of such a heterogeneous domain was investigated. Our results show that for a given α-phase fraction with different grain shapes and sizes, the elastic moduli and yield stresses are almost the same, but the ultimate stress and elongation are more strongly affected. In addition, we found that the distribution of the θ phase inside the α phase is more important than the grain shape itself. PMID:28788607
Drift-based scrape-off particle width in X-point geometry
NASA Astrophysics Data System (ADS)
Reiser, D.; Eich, T.
2017-04-01
The Goldston heuristic estimate of the scrape-off layer width (Goldston 2012 Nucl. Fusion 52 013009) is reconsidered using a fluid description for the plasma dynamics. The basic ingredient is the inclusion of a compressible diamagnetic drift for the particle cross-field transport. Instead of testing the heuristic model in a sophisticated numerical simulation including several physical mechanisms working together, the purpose of this work is to point out basic consequences of a drift-dominated cross-field transport using a reduced fluid model. To evaluate the model equations and prepare them for subsequent numerical solution, a specific analytical model for 2D magnetic field configurations with X-points is employed. In a first step, parameter scans on high-resolution grids for isothermal plasmas are done to assess the basic formulas of the heuristic model with respect to the functional dependence of the scrape-off width on the poloidal magnetic field and plasma temperature. Particular features in the 2D fluid calculations, especially the appearance of supersonic parallel flows and shock-wave-like bifurcational jumps, are discussed and can be understood partly in the framework of a reduced 1D model. The resulting semi-analytical findings may suggest directions for experimental verification and for implementation in more elaborate fluid simulations.
Thermal radiative properties: Nonmetallic solids.
NASA Technical Reports Server (NTRS)
Touloukian, Y. S.; Dewitt, D. P.
1972-01-01
The volume consists of a text on theory, estimation, and measurement, together with its bibliography, the main body of numerical data and its references, and the material index. The text material assumes a role complementary to the main body of numerical data. The physics and basic concepts of thermal radiation are discussed in detail, focusing attention on treatment of nonmetallic materials: theory, estimation, and methods of measurement. Numerical data is presented in a comprehensive manner. The scope of coverage includes the nonmetallic elements and their compounds, intermetallics, polymers, glasses, and minerals. Analyzed data graphs provide an evaluative review of the data. All data have been obtained from their original sources, and each data set is so referenced.
NASA Astrophysics Data System (ADS)
Cazzani, Antonio; Malagù, Marcello; Turco, Emilio
2016-03-01
We illustrate a numerical tool for analyzing plane arches such as those frequently used in historical masonry heritage. It is based on a refined elastic mechanical model derived from the isogeometric approach. In particular, geometry and displacements are modeled by means of non-uniform rational B-splines. After a brief introduction, outlining the basic assumptions of this approach and the corresponding modeling choices, several numerical applications to arches, which are typical of masonry structures, show the performance of this novel technique. These are discussed in detail to emphasize the advantage and potential developments of isogeometric analysis in the field of structural analysis of historical masonry buildings with complex geometries.
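The key ingredient of the isogeometric approach described above is that geometry and unknowns share one NURBS basis. As a minimal hedged sketch of that machinery (an illustration of NURBS evaluation only, not the authors' arch model), a quadratic rational B-spline with suitably chosen weights represents a circular arc exactly, something no polynomial spline can do:

```python
import numpy as np
from scipy.interpolate import BSpline

# A NURBS curve is a ratio of two ordinary B-splines:
#   C(u) = sum_i N_i(u) w_i P_i / sum_i N_i(u) w_i
# Quarter circle as a quadratic NURBS: control points P, weights w.
knots = [0, 0, 0, 1, 1, 1]                     # open (clamped) knot vector, degree 2
P = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
w = np.array([1.0, np.sqrt(2.0) / 2.0, 1.0])   # middle weight cos(45 degrees)

numerator = BSpline(knots, P * w[:, None], 2)  # B-spline of the weighted points w_i * P_i
denominator = BSpline(knots, w, 2)             # B-spline of the weights w_i

u = np.linspace(0.0, 1.0, 50)
pts = numerator(u) / denominator(u)[:, None]
radii = np.linalg.norm(pts, axis=1)            # all 1: the arc lies on the unit circle
```

In an isogeometric solver, the same rational basis functions that carry the geometry also carry the displacement unknowns, which is why the exact representation of curved members such as arches matters.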
Revisiting the Rossby-Haurwitz wave test case with contour advection
NASA Astrophysics Data System (ADS)
Smith, Robert K.; Dritschel, David G.
2006-09-01
This paper re-examines a basic test case used for spherical shallow-water numerical models and underscores the need for accurate, high-resolution models of atmospheric and ocean dynamics. The Rossby-Haurwitz test case, first proposed by Williamson et al. [D.L. Williamson, J.B. Drake, J.J. Hack, R. Jakob, P.N. Swarztrauber, A standard test set for numerical approximations to the shallow-water equations on the sphere, J. Comput. Phys. 102 (1992) 211-224], has been examined using a wide variety of shallow-water models in previous papers. Here, two contour-advective semi-Lagrangian (CASL) models are considered, and results are compared with previous test results. We go further by modifying this test case in a simple way to initiate a rapid breakdown of the basic wave state. This breakdown is accompanied by the formation of sharp potential vorticity gradients (fronts), placing far greater demands on the numerics than the original test case does. We also go further by examining other dynamical fields besides the height and potential vorticity, to assess how well the models deal with gravity waves. Such waves are sensitive to the presence or absence of sharp potential vorticity gradients, as well as to numerical parameter settings. In particular, large time steps (convenient for semi-Lagrangian schemes) can seriously affect gravity waves and can also have an adverse impact on the primary fields of height and velocity. These problems are exacerbated by a poor resolution of potential vorticity gradients.
NASA Astrophysics Data System (ADS)
de Medeiros, Ricardo; Sartorato, Murilo; Vandepitte, Dirk; Tita, Volnei
2016-11-01
The basic concept of vibration-based damage identification methods is that the dynamic behaviour of a structure changes if damage occurs. Damage in a structure can alter the structural integrity, and therefore physical properties like stiffness, mass and/or damping may change. The dynamic behaviour of a structure is a function of these physical properties and will therefore be directly affected by the damage. The dynamic behaviour can be described in terms of time, frequency and modal domain parameters. Changes in these parameters (or in properties derived from them) are used as indicators of damage. Hence, this work has two main objectives. The first is to provide an overview of structural vibration-based damage identification methods. For this purpose, a fundamental description of the structural vibration-based damage identification problem is given, followed by a short literature overview of the damage features that are commonly addressed. The second objective is to create a damage identification method for the detection of damage in composite structures. To aid in this process, two basic principles are discussed, namely the effect of the potential damage case on the dynamic behaviour, and the consequences of the information reduction involved in signal processing. Modal properties are obtained from the structural dynamic output response. In addition, experimental and computational results are presented for the application of modal analysis techniques to composite specimens with and without damage. The excitation of the structures is performed using an impact hammer and, for measuring the output data, accelerometers as well as piezoelectric sensors. Finite element models are developed with shell elements, and numerical results are compared to experimental data, showing good correlation for the response of the specimens in some specific frequency ranges. Finally, the FRFs are analysed using suitable metrics, including a new one, which are compared in terms of their capability for damage identification. The experimental and numerical results show that vibration-based damage methods combined with these metrics can be used in Structural Health Monitoring (SHM) systems to identify damage in a structure.
Biases and regularities of grapheme-colour associations in Japanese nonsynaesthetic population.
Nagai, Jun-ichi; Yokosawa, Kazuhiko; Asano, Michiko
2016-01-01
Associations between graphemes and colours in a nonsynaesthetic Japanese population were investigated. Participants chose the most suitable colour from 11 basic colour terms for each of 40 graphemes from the four categories of graphemes used in the Japanese language (kana characters, English alphabet letters, and Arabic and kanji numerals). This test was repeated after a three-week interval. In their responses, which were not as temporally consistent as those of grapheme-colour synaesthetes, participants showed biases and regularities comparable to those of synaesthetes reported in past studies. Although it had been believed that only synaesthetes, and not nonsynaesthetes, tend to associate graphemes with colours on the basis of grapheme frequency, Berlin and Kay's colour typology, and colour word frequency, participants in this study partly tended to associate graphemes with colours based on these factors. Moreover, nonsynaesthetic participants tended to associate different graphemes that share sounds and/or meanings (e.g., Arabic and kanji numerals representing the same number) with the same colours, analogous to findings in Japanese synaesthetes. These results support the view that grapheme-colour synaesthesia might have its origins in cross-modal association processes that are shared with the general population.
Water & Sanitation: An Essential Battlefront in the War on Antimicrobial Resistance.
Bürgmann, Helmut; Frigon, Dominic; Gaze, William; Manaia, Célia; Pruden, Amy; Singer, Andrew C; Smets, Barth; Zhang, Tong
2018-06-05
Water and sanitation represent a key battlefront in combating the spread of antimicrobial resistance (AMR). Basic water sanitation infrastructure is an essential first step to protecting public health, limiting both the spread of pathogens and the need for antibiotics. AMR presents unique human health risks, meriting new risk assessment frameworks specifically adapted to water- and sanitation-borne AMR. There are numerous exposure routes to AMR originating from human waste, each of which must be quantified for its relative risk to human health. Wastewater treatment plants (WWTPs) play a vital role in the centralized collection and treatment of human sewage, but there are numerous unresolved questions about the microbial ecological processes occurring within them and the extent to which they attenuate or amplify AMR. Research is needed to advance understanding of the fate of resistant bacteria and antibiotic resistance genes (ARGs) in various waste management systems, depending on the local constraints and intended re-use applications. WHO and national AMR action plans would benefit from a more holistic 'One Water' understanding. Here we provide a framework for research, policy, practice, and public engagement aimed at limiting the spread of AMR from water and sanitation in low-, medium-, and high-income countries alike.
NASA Astrophysics Data System (ADS)
Fay, R.; Kreuzer, D.; Liebich, R.; Wiedemann, T.; Werner, S.
2018-07-01
Brush seals are an efficient alternative to labyrinth seals in turbomachinery. On the one hand, brush seals provide better leakage reduction relative to their axial length and hence allow a shorter design of the machinery. On the other hand, the particularly small gap between the bristles and the engine shaft increases the risk of rotor-stator contact. The flexible brush seal mostly induces light rubs that in some cases may lead to spiral vibrations and thermo-mechanical instabilities. Spiral vibrations are caused by a thermal deflection of the rotor induced by a heat flow into the shaft. To predict areas of instability during the design process, a tool was developed at the Berlin Institute of Technology. The model combines a rotor dynamic model and a thermal model. The thermal system is reduced using a stationary solution, so that the final system, on which the stability analysis is performed, is comparable to the established Kellenberger model. The paper presents the numerical model for the prediction of unstable regions as a function of rotational speed. This is illustrated by means of an example of an axial compressor manufactured by MAN Diesel & Turbo.
Landerl, Karin
2013-01-01
Numerical processing has been demonstrated to be closely associated with arithmetic skills; however, our knowledge of the development of the relevant cognitive mechanisms is limited. The present longitudinal study investigated the developmental trajectories of numerical processing in 42 children with age-adequate arithmetic development and 41 children with dyscalculia over a 2-year period from the beginning of Grade 2, when children were 7;6 years old, to the beginning of Grade 4. A battery of numerical processing tasks (dot enumeration, non-symbolic and symbolic comparison of one- and two-digit numbers, physical comparison, number line estimation) was given five times during the study (beginning and middle of each school year). Efficiency of numerical processing was a very good indicator of development in numerical processing, while within-task effects remained largely constant and showed low long-term stability before the middle of Grade 3. Children with dyscalculia showed less efficient numerical processing, reflected in specifically prolonged response times. Importantly, they showed consistently larger slopes for dot enumeration in the subitizing range, an atypically large compatibility effect when processing two-digit numbers, and they were consistently less accurate in placing numbers on a number line. Thus, we were able to identify parameters that can be used in future research to characterize numerical processing in typical and dyscalculic development. These parameters can also be helpful for the identification of children who struggle in their numerical development. PMID:23898310
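Number line estimation accuracy of the kind measured here is conventionally scored as percent absolute error (PAE); the study's exact scoring is not given in the abstract, so the following is only the standard convention, sketched for concreteness:

```python
def percent_absolute_error(estimate, target, line_min=0.0, line_max=100.0):
    """Standard number-line score: the placement error as a percentage of
    the full line length. Lower PAE means more accurate placement."""
    scale = line_max - line_min
    return 100.0 * abs(estimate - target) / scale

# A child asked to place 18 on a 0-100 line who marks the point at 25:
pae = percent_absolute_error(25, 18)  # 7.0
```

Because PAE normalizes by line length, scores remain comparable across the different ranges (e.g., 0-100 vs. 0-1000 lines) typically used at different grade levels.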
Cooperative Education Is a Superior Strategy for Using Basic Learning Processes.
ERIC Educational Resources Information Center
Reed, V. Gerald
Cooperative education is a learning strategy that fits very well with basic laws of learning. In fact, several basic important learning processes are far better adapted to the cooperative education strategy than to methods that lean entirely on classroom instruction. For instance, cooperative education affords more opportunities for reinforcement,…
Excitatory signal flow and connectivity in a cortical column: focus on barrel cortex.
Lübke, Joachim; Feldmeyer, Dirk
2007-07-01
A basic feature of the neocortex is its organization in functional, vertically oriented columns, recurring modules of signal processing and a system of transcolumnar long-range horizontal connections. These columns, together with their network of neurons, present in all sensory cortices, are the cellular substrate for sensory perception in the brain. Cortical columns contain thousands of neurons and span all cortical layers. They receive input from other cortical areas and subcortical brain regions and in turn their neurons provide output to various areas of the brain. The modular concept presumes that the neuronal network in a cortical column performs basic signal transformations, which are then integrated with the activity in other networks and more extended brain areas. To understand how sensory signals from the periphery are transformed into electrical activity in the neocortex it is essential to elucidate the spatiotemporal dynamics of cortical signal processing and the underlying neuronal 'microcircuits'. In the last decade the 'barrel' field in the rodent somatosensory cortex, which processes sensory information arriving from the mystacial vibrissae, has become a quite attractive model system because here the columnar structure is clearly visible. In the neocortex and in particular the barrel cortex, numerous neuronal connections within or between cortical layers have been studied both at the functional and structural level. Besides similarities, clear differences with respect to both physiology and morphology of synaptic transmission and connectivity were found. It is therefore necessary to investigate each neuronal connection individually, in order to develop a realistic model of neuronal connectivity and organization of a cortical column. This review attempts to summarize recent advances in the study of individual microcircuits and their functional relevance within the framework of a cortical column, with emphasis on excitatory signal flow.
Medical innovation then and now: perspectives of innovators responsible for transformative drugs.
Xu, Shuai; Kesselheim, Aaron S
2014-01-01
Effective medical innovation is a common goal of policymakers, physicians, researchers, and patients in both the private and public sectors. With the recent slowdown in approval of new transformative prescription drugs, many have looked back to the "golden years" of the 1980s and 1990s when numerous breakthrough products emerged. We conducted a qualitative study of innovators (n=127) directly involved in creation of groundbreaking drugs during that era to determine what made their work successful and how the process of conducting medical innovation has changed over the past 3 decades. Transcripts were analyzed using standard coding techniques and the constant comparative method of qualitative data analysis to identify the positive features of and challenges posed by the past and present therapeutic innovation environments (70 of the 127 interviewees explicitly addressed these issues). Interviewees emphasized the continued central role played by individuals and the institutions they were a part of in driving innovation. In addition, respondents discussed the importance of collaboration between individuals and institutions to share resources and expertise. Strong underlying basic science was also cited as a major contributing factor to the success of an innovation. The climate for modern-day medical innovation involves a greater emphasis on patenting in academia, difficulty negotiating the technology transfer process, and funding constraints. Regulatory demands or reimbursement concerns were not commonly cited as factors that influenced transformative innovation. This study suggests that generating future transformative innovation will require a simplification of the current technology transfer process, continued commitment to basic science research, and policy changes that promote meaningful collaboration between individuals from disparate institutions. © 2014 American Society of Law, Medicine & Ethics, Inc.
Considerations in the Derivation of Water Quality Criteria for Endocrine-disrupting Chemicals
When the USEPA’s 1985 guidelines for deriving numerical water quality criteria (WQC) for the protection of aquatic life were developed, there was little anticipation that endocrine-disrupting chemicals (EDCs) would become a widespread environmental issue. While the basic guidelin...
Orthopositronium Lifetime: Analytic Results in O(α) and O(α³ ln α)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kniehl, Bernd A.; Kotikov, Anatoly V.; Veretin, Oleg L.
2008-11-07
We present the O(α) and O(α³ ln α) corrections to the total decay width of orthopositronium in closed analytic form, in terms of basic irrational numbers, which can be evaluated numerically to arbitrary precision.
ERIC Educational Resources Information Center
Western Sydney Inst. of TAFE, Blacktown (Australia).
This resource includes units of work developed by different practitioners that integrate the teaching of literacy with the teaching of numeracy in adult basic education. It is designed to provide models of integration for teachers to develop similar resources in other contexts or themes. The units follow slightly different formats. Unit…