Current and Future Flight Operating Systems
NASA Technical Reports Server (NTRS)
Cudmore, Alan
2007-01-01
This viewgraph presentation reviews the real-time operating system (RTOS) type used in current flight systems. A new RTOS model, the process model, is described. Included is a review of the challenges of migrating from the classic RTOS to the process-model type.
Integrated urban systems model with multiple transportation supply agents.
DOT National Transportation Integrated Search
2012-10-01
This project demonstrates the feasibility of developing quantitative models that can forecast future networks under current and alternative transportation planning processes. The current transportation planning process is modeled based on empiric...
An Automated Method for High-Definition Transcranial Direct Current Stimulation Modeling*
Huang, Yu; Su, Yuzhuo; Rorden, Christopher; Dmochowski, Jacek; Datta, Abhishek; Parra, Lucas C.
2014-01-01
Targeted transcranial stimulation with electric currents requires accurate models of the current flow from scalp electrodes to the human brain. Idiosyncratic anatomy of individual brains and heads leads to significant variability in such current flows across subjects, thus necessitating accurate individualized head models. Here we report on an automated processing chain that computes current distributions in the head starting from a structural magnetic resonance image (MRI). The main purpose of automating this process is to reduce the substantial effort currently required for manual segmentation, electrode placement, and solving of finite element models. In doing so, several weeks of manual labor were reduced to no more than 4 hours of computation time and minimal user interaction, while current-flow results for the automated method deviated by less than 27.9% from the manual method. Key facilitating factors are the addition of three tissue types (skull, scalp and air) to a state-of-the-art automated segmentation process, morphological processing to correct small but important segmentation errors, and automated placement of small electrodes based on easily reproducible standard electrode configurations. We anticipate that such automated processing will become an indispensable tool for individualizing transcranial direct current stimulation (tDCS) therapy. PMID:23367144
NASA Astrophysics Data System (ADS)
Abellán-Nebot, J. V.; Liu, J.; Romero, F.
2009-11-01
The State Space modelling approach has been recently proposed as an engineering-driven technique for part quality prediction in Multistage Machining Processes (MMP). Current State Space models incorporate fixture and datum variations in the multi-stage variation propagation, without explicitly considering common operation variations such as machine-tool thermal distortions, cutting-tool wear, cutting-tool deflections, etc. This paper shows the limitations of the current State Space model through an experimental case study where the effect of the spindle thermal expansion, cutting-tool flank wear and locator errors are introduced. The paper also discusses the extension of the current State Space model to include operation variations and its potential benefits.
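The State Space (stream-of-variation) formulation referred to here is typically a stage-indexed linear model, x_k = A_k x_{k-1} + B_k u_k + w_k. A minimal covariance-propagation sketch under that generic form follows; the matrices and their interpretation are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def propagate_variation(A_list, B_list, Sigma_u_list, Sigma_w_list, Sigma_x0):
    """Propagate part-quality deviation covariance through N machining stages.

    Assumes the generic linear state-space form
        x_k = A_k x_{k-1} + B_k u_k + w_k
    where x_k is the part deviation after stage k, u_k the fixture/operation
    input deviations, and w_k unmodeled noise (all zero-mean, independent).
    """
    Sigma_x = Sigma_x0
    history = [Sigma_x]
    for A, B, Su, Sw in zip(A_list, B_list, Sigma_u_list, Sigma_w_list):
        # Covariance update for a linear model with independent inputs.
        Sigma_x = A @ Sigma_x @ A.T + B @ Su @ B.T + Sw
        history.append(Sigma_x)
    return history
```

Extending the model with operation variations (thermal distortion, tool wear, deflection) amounts to adding further input terms of the same B_k u_k form.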
NASA Astrophysics Data System (ADS)
Okawa, Tsutomu; Kaminishi, Tsukasa; Kojima, Yoshiyuki; Hirabayashi, Syuichi; Koizumi, Hisao
Business process modeling (BPM) is gaining attention as a means of analyzing and improving business processes. BPM analyzes the current business process as an AS-IS model and solves problems to improve the current business; moreover, it aims to create a value-producing business process as a TO-BE model. However, techniques that seamlessly connect the business process improvements obtained by BPM to the implementation of the information system are rarely reported. If the business model obtained by BPM is converted into UML, and the implementation is carried out with UML techniques, we can expect improved efficiency in information system implementation. In this paper, we describe a system development method that converts the process model obtained by BPM into UML, and we evaluate the method by modeling a prototype of a parts procurement system. In the evaluation, we compare it with the case where the system is implemented by the conventional UML technique without going via BPM.
NASA Astrophysics Data System (ADS)
Wang, Fuliang; Zhao, Zhipeng; Wang, Feng; Wang, Yan; Nie, Nantian
2017-12-01
Through-silicon via (TSV) filling by electrochemical deposition is still a challenge for 3D IC packaging, and three-component additive systems (accelerator, suppressor, and leveler) are commonly used in industry to achieve void-free filling. However, models that consider all three additives together with the effect of current density have not been fully studied. In this paper, a novel three-component model was developed to study the TSV filling mechanism and process, in which the interaction behavior of the three additives (accelerator, suppressor, and leveler) was considered, and the adsorption, desorption, and consumption coefficients of the three additives varied with current density. Based on this new model, the three filling types (seam void, 'V' shape, and keyhole) were simulated under different current density conditions, and the filling results were verified by experiments. The effects of current density on copper ion concentration, additive surface coverage, and local current density distribution during the TSV filling process were obtained. Based on the simulation and experimental results, the diffusion-adsorption-desorption-consumption competition behavior among the suppressor, the accelerator, and the leveler was discussed. The filling mechanisms under different current densities were also analyzed.
Analysis and modeling of leakage current sensor under pulsating direct current
NASA Astrophysics Data System (ADS)
Li, Kui; Dai, Yihua; Wang, Yao; Niu, Feng; Chen, Zhao; Huang, Shaopo
2017-05-01
In this paper, the transformation characteristics of a current sensor under pulsating DC leakage current are investigated. A mathematical model of the current sensor is proposed to accurately describe the secondary-side current and the excitation current. The transformation process of the current sensor is illustrated in detail, and the transformation error is analyzed from multiple aspects. A simulation model is built and a sensor prototype is designed for comparative evaluation; both simulation and experimental results are presented to verify the correctness of the theoretical analysis.
Relationships between Lexical Processing Speed, Language Skills, and Autistic Traits in Children
ERIC Educational Resources Information Center
Abrigo, Erin
2012-01-01
According to current models of spoken word recognition, listeners understand speech as it unfolds over time. Eye tracking provides a non-invasive, on-line method to monitor attention, providing insight into the processing of spoken language. In the current project, a spoken lexical processing assessment (LPA) confirmed current theories of spoken…
Surveillance system and method having parameter estimation and operating mode partitioning
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor)
2003-01-01
A system and method for monitoring an apparatus or process asset including partitioning an unpartitioned training data set into a plurality of training data subsets each having an operating mode associated thereto; creating a process model comprised of a plurality of process submodels each trained as a function of at least one of the training data subsets; acquiring a current set of observed signal data values from the asset; determining an operating mode of the asset for the current set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a current set of estimated signal data values from the selected process submodel for the determined operating mode; and outputting the calculated current set of estimated signal data values for providing asset surveillance and/or control.
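The claimed method amounts to a mode-partitioned estimator. A minimal sketch follows, assuming a toy per-mode submodel (a per-signal mean); the patent's trained submodels are more sophisticated, and the class and method names here are ours:

```python
import numpy as np

class ModePartitionedModel:
    """Sketch of mode-partitioned surveillance: one simple submodel
    (per-signal mean) per operating mode; estimates come from the
    submodel matching the determined mode. The mean-based submodel is
    an illustrative stand-in for the patent's trained process submodels."""

    def __init__(self):
        self.submodels = {}

    def train(self, training_data, modes):
        # Partition the unpartitioned training set by operating mode,
        # then fit one submodel (here: the per-signal mean) per mode.
        modes = np.asarray(modes)
        for mode in set(modes.tolist()):
            rows = training_data[modes == mode]
            self.submodels[mode] = rows.mean(axis=0)

    def estimate(self, observed, mode):
        # Select the submodel for the determined operating mode and
        # return its estimated signal values alongside the residual.
        expected = self.submodels[mode]
        return expected, observed - expected
```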
A delta-rule model of numerical and non-numerical order processing.
Verguts, Tom; Van Opstal, Filip
2014-06-01
Numerical and non-numerical order processing share empirical characteristics (a distance effect and semantic congruity), but there are also important differences (in the size effect and end effect). At the same time, models and theories of numerical and non-numerical order processing have developed largely separately. Here, we combine insights from two earlier models to integrate them in a common framework. We argue that the same learning principle underlies numerical and non-numerical orders, but that environmental features determine the empirical differences. Implications for current theories of order processing are pointed out. PsycINFO Database Record (c) 2014 APA, all rights reserved.
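The delta rule at the core of such a model is a simple error-driven update, w ← w + η(t − y)x. A minimal sketch follows, assuming one-hot item codes and ordinal-position targets; this coding is an illustrative choice, not the authors' exact scheme:

```python
import numpy as np

def train_order(n_items, n_epochs=500, lr=0.1, seed=0):
    """Delta-rule sketch: learn a scalar 'position' output for each of
    n_items one-hot item codes. Targets are ordinal positions 0..n-1
    (an illustrative coding, not the paper's exact scheme)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=n_items)
    targets = np.arange(n_items, dtype=float)
    for _ in range(n_epochs):
        for i in range(n_items):
            x = np.zeros(n_items)
            x[i] = 1.0
            y = w @ x                          # model output for item i
            w += lr * (targets[i] - y) * x     # delta rule
    return w
```

An order comparison ("which comes later?") can then read off the learned codes; a distance effect falls out naturally because nearby positions yield smaller output differences.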
NASA Astrophysics Data System (ADS)
Hoang, M.-Q.; Le Roy, S.; Boudou, L.; Teyssedre, G.
2016-06-01
One of the difficulties in unravelling transport processes in electrically insulating materials is that the response, notably charging current transients, can have mixed contributions from orientation polarization and from space charge processes. This work aims at identifying and characterizing the polarization processes in a polar polymer in the time and frequency domains, and at implementing the contribution of the polarization into a charge transport model. To do so, Alternate Polarization Current (APC) and Dielectric Spectroscopy measurements have been performed on poly(ethylene naphthalene 2,6-dicarboxylate) (PEN), an aromatic polar polymer, providing information on polarization mechanisms in the time and frequency domains, respectively. In the frequency domain, PEN exhibits three relaxation processes, termed the β and β* (sub-glass transition) and α (glass transition) relaxations, in increasing order of temperature. Conduction was also detected at high temperatures. Dielectric responses were treated using a simplified version of the Havriliak-Negami model, the Cole-Cole (CC) model, with three temperature-dependent parameters per relaxation process. The time-dependent polarization obtained from the CC model is then added to a charge transport model. Simulated currents from the transport model implemented with the polarization are compared with the measured APCs, showing good consistency between experiments and simulations in a situation where the response comes essentially from dipolar processes.
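The Cole-Cole response fitted per relaxation process can be sketched as follows. The exponent convention (1 − α) is one common formulation of the CC model, the conduction term is the standard DC-loss addition, and the function names are ours rather than the paper's:

```python
def cole_cole(omega, eps_inf, delta_eps, tau, alpha):
    """Complex permittivity of a single Cole-Cole relaxation:
    eps*(w) = eps_inf + delta_eps / (1 + (j*w*tau)**(1 - alpha)).
    alpha = 0 recovers the Debye model."""
    return eps_inf + delta_eps / (1.0 + (1j * omega * tau) ** (1.0 - alpha))

def pen_response(omega, eps_inf, processes, sigma_dc=0.0, eps0=8.8541878128e-12):
    """Superpose several CC relaxations (e.g. beta, beta*, alpha), each given
    as (delta_eps, tau, alpha), plus an optional DC-conduction loss term."""
    eps = eps_inf + 0j
    for delta_eps, tau, alpha in processes:
        eps += delta_eps / (1.0 + (1j * omega * tau) ** (1.0 - alpha))
    eps -= 1j * sigma_dc / (eps0 * omega)  # conduction contributes to the loss
    return eps
```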
Effect of double layers on magnetosphere-ionosphere coupling
NASA Technical Reports Server (NTRS)
Lysak, Robert L.; Hudson, Mary K.
1987-01-01
The Earth's auroral zone contains dynamic processes occurring on scales from the length of an auroral zone field line, which characterizes Alfven wave propagation, to the scale of microscopic processes, which occur over a few Debye lengths. These processes interact in a time-dependent fashion, since the current carried by the Alfven waves can excite microscopic turbulence, which can in turn dissipate the Alfven wave energy. This review first describes the dynamic aspects of auroral current structures, with emphasis on the consequences for models of microscopic turbulence. A number of models of microscopic turbulence are introduced into a large-scale model of Alfven wave propagation to determine the effect of each model on the overall structure of auroral currents. In particular, the effect of a double layer electric field, which scales with the plasma temperature and Debye length, is compared with the effect of anomalous resistivity due to electrostatic ion cyclotron turbulence, in which the electric field scales with the magnetic field strength. It is found that the double layer model is less diffusive than the resistive model, leading to the possibility of narrow, intense current structures.
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Leifker, Daniel B.
1991-01-01
Current qualitative device and process models represent only the structure and behavior of physical systems. However, systems in the real world include goal-oriented activities that generally cannot be easily represented using current modeling techniques. An extension of a qualitative modeling system, known as functional modeling, which captures goal-oriented activities explicitly, is proposed, and it is shown how such models may be used to support intelligent automation and fault management.
Brugga basin's TACD Model Adaptation to current GIS PCRaster 4.1
NASA Astrophysics Data System (ADS)
Lopez Rozo, Nicolas Antonio; Corzo Perez, Gerald Augusto; Santos Granados, Germán Ricardo
2017-04-01
The process-oriented catchment model TACD (Tracer-Aided Catchment model - Distributed) was developed for the Brugga Basin (Black Forest, Germany) with a modular structure in the Geographic Information System PCRaster Version 2, in order to dynamically model the natural processes of a complex basin, such as rainfall, air temperature, solar radiation, evapotranspiration, and flow routing, among others. Further research and application of this model has been carried out, such as adapting it to other meso-scale basins and adding erosion processes to the hydrological model. However, the TACD model is computationally intensive, which has made it inefficient on large, finely discretized river basins. In addition, the current version is not compatible with the latest PCRaster Version 4.1, which offers new capabilities on 64-bit hardware architectures, improvements in hydraulic calculations and map creation, and various error and bug fixes. The current work studied and adapted the TACD model to the latest GIS PCRaster Version 4.1. This was done by editing the original scripts and replacing deprecated functionalities without losing the correctness of the TACD model. Correctness was verified by running the original Brugga Basin study case and comparing the adapted model's results with the original results obtained by Stefan Roser in 2001. Small differences were found because some hydraulic and hydrological routines have been optimized since Version 2 of GIS PCRaster; the hydraulic and hydrological processes are therefore well represented. With this new working model, further research and development on current topics such as uncertainty analysis, GCM downscaling techniques, and spatio-temporal modelling are encouraged.
Modeling and simulation: A key to future defense technology
NASA Technical Reports Server (NTRS)
Muccio, Anthony B.
1993-01-01
The purpose of this paper is to express the rationale for continued technological and scientific development of the modeling and simulation process for the defense industry. The defense industry, along with a variety of other industries, is currently being forced to make sacrifices in response to current economic hardships. These sacrifices, which must not compromise the safety of our nation nor jeopardize our current standing as the world's peace officer, must be concentrated in areas that can withstand the needs of a changing world. Therefore, cost-effective alternatives for defense issues must be examined. This paper argues that modeling and simulation is an economically feasible process that will ensure our nation's safety as well as keep up with the future technological developments and demands required by the defense industry. The outline of this paper is as follows: the introduction defines and describes the modeling and simulation process; the discussion details the purpose and benefits of modeling and simulation and provides specific examples of how the process has been successful; and the conclusion summarizes the specifics of modeling and simulation of defense issues and lends support for its continued use in the defense arena.
Electrophysiological models of neural processing.
Nelson, Mark E
2011-01-01
The brain is an amazing information processing system that allows organisms to adaptively monitor and control complex dynamic interactions with their environment across multiple spatial and temporal scales. Mathematical modeling and computer simulation techniques have become essential tools in understanding diverse aspects of neural processing ranging from sub-millisecond temporal coding in the sound localization circuitry of barn owls to long-term memory storage and retrieval in humans that can span decades. The processing capabilities of individual neurons lie at the core of these models, with the emphasis shifting upward and downward across different levels of biological organization depending on the nature of the questions being addressed. This review provides an introduction to the techniques for constructing biophysically based models of individual neurons and local networks. Topics include Hodgkin-Huxley-type models of macroscopic membrane currents, Markov models of individual ion-channel currents, compartmental models of neuronal morphology, and network models involving synaptic interactions among multiple neurons.
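The Hodgkin-Huxley-type macroscopic membrane currents mentioned above sum a handful of voltage-dependent conductances. A minimal sketch, using the classic squid-axon parameter values as defaults (these textbook numbers are an assumption, not values from this review):

```python
def hh_ionic_current(V, m, h, n,
                     g_Na=120.0, g_K=36.0, g_L=0.3,       # mS/cm^2, classic squid-axon values
                     E_Na=50.0, E_K=-77.0, E_L=-54.387):  # reversal potentials in mV
    """Total macroscopic ionic membrane current (uA/cm^2) in the classic
    Hodgkin-Huxley model, given membrane voltage V (mV) and gating
    variables m, h, n in [0, 1]."""
    I_Na = g_Na * m**3 * h * (V - E_Na)  # fast sodium current
    I_K = g_K * n**4 * (V - E_K)         # delayed-rectifier potassium current
    I_L = g_L * (V - E_L)                # passive leak current
    return I_Na + I_K + I_L
```

A full simulation would couple this to C dV/dt = I_ext − I_ionic and the first-order kinetics of m, h, and n.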
Autoplan: A self-processing network model for an extended blocks world planning environment
NASA Technical Reports Server (NTRS)
Dautrechy, C. Lynne; Reggia, James A.; Mcfadden, Frank
1990-01-01
Self-processing network models (neural/connectionist models, marker passing/message passing networks, etc.) are currently undergoing intense investigation for a variety of information processing applications. These models are potentially very powerful in that they support a large amount of explicit parallel processing, and they cleanly integrate high level and low level information processing. However they are currently limited by a lack of understanding of how to apply them effectively in many application areas. The formulation of self-processing network methods for dynamic, reactive planning is studied. The long-term goal is to formulate robust, computationally effective information processing methods for the distributed control of semiautonomous exploration systems, e.g., the Mars Rover. The current research effort is focusing on hierarchical plan generation, execution and revision through local operations in an extended blocks world environment. This scenario involves many challenging features that would be encountered in a real planning and control environment: multiple simultaneous goals, parallel as well as sequential action execution, action sequencing determined not only by goals and their interactions but also by limited resources (e.g., three tasks, two acting agents), need to interpret unanticipated events and react appropriately through replanning, etc.
Modeling of outpatient prescribing process in iran: a gateway toward electronic prescribing system.
Ahmadi, Maryam; Samadbeik, Mahnaz; Sadoughi, Farahnaz
2014-01-01
Implementation of an electronic prescribing system can overcome many problems of the paper prescribing system and provide numerous opportunities for more effective and advantageous prescribing. Successful implementation of such a system requires a complete and deep understanding of the work content, human resources, and workflow of paper prescribing. The current study was designed to model the current business process of outpatient prescribing in Iran and clarify the different actions during this process. To describe the prescribing process and the system features in Iran, the methodology of business process modeling and analysis was used. The results of the process documentation were analyzed using a conceptual model of workflow elements and the technique of modeling "As-Is" business processes. Analysis of the current (as-is) prescribing process demonstrated that Iran stands at the first level of sophistication in the graduated levels of electronic prescribing, namely electronic prescription reference, and that there are problematic areas, including bottlenecks, redundant and duplicated work, concentration of decision nodes, and communication weaknesses among the stakeholders of the process. The use of information technology in some medication prescription activities in Iran has not eliminated the stakeholders' dependence on paper-based documents and prescriptions. Therefore, proper system programming must be implemented to support change management and solve the problems in the existing prescribing process. To this end, a suitable basis should be provided for reorganizing and improving the prescribing process for future electronic systems.
Processing (Non)Compositional Expressions: Mistakes and Recovery
ERIC Educational Resources Information Center
Holsinger, Edward; Kaiser, Elsi
2013-01-01
Current models of idiom representation and processing differ with respect to the role of literal processing during the interpretation of idiomatic expressions. Word-like models (Bobrow & Bell, 1973; Swinney & Cutler, 1979) propose that idiomatic meaning can be accessed directly, whereas structural models (Cacciari & Tabossi, 1988;…
Information Processing: A Review of Implications of Johnstone's Model for Science Education
ERIC Educational Resources Information Center
St Clair-Thompson, Helen; Overton, Tina; Botton, Chris
2010-01-01
The current review is concerned with an information processing model used in science education. The purpose is to summarise the current theoretical understanding, in published research, of a number of factors that are known to influence learning and achievement. These include field independence, working memory, long-term memory, and the use of…
[The dual process model of addiction. Towards an integrated model?].
Vandermeeren, R; Hebbrecht, M
2012-01-01
Neurobiology and cognitive psychology have provided us with a dual process model of addiction. According to this model, behavior is considered to be the dynamic result of a combination of automatic and controlling processes. In cases of addiction the balance between these two processes is severely disturbed: automated processes continue to produce impulses that ensure the continuance of addictive behavior, while weak reflective or controlling processes are both the reason for and the result of the inability to forgo addiction. The aim of this paper is to identify features that are common to current neurocognitive insights into addiction and to psychodynamic views on addiction. The picture that emerges from research is not clear. There is some evidence that attentional bias has a causal effect on addiction. There is no evidence that automatic associations have a causal effect, but there is some evidence that automatic action tendencies do. Current neurocognitive views on the dual process model of addiction can be integrated with an evidence-based approach to addiction and with psychodynamic views on addiction.
Computer Models of Personality: Implications for Measurement
ERIC Educational Resources Information Center
Cranton, P. A.
1976-01-01
Current research on computer models of personality is reviewed and categorized under five headings: (1) models of belief systems; (2) models of interpersonal behavior; (3) models of decision-making processes; (4) prediction models; and (5) theory-based simulations of specific processes. The use of computer models in personality measurement is…
Louis R. Iverson; Frank R. Thompson; Stephen Matthews; Matthew Peters; Anantha Prasad; William D. Dijak; Jacob Fraser; Wen J. Wang; Brice Hanberry; Hong He; Maria Janowiak; Patricia Butler; Leslie Brandt; Chris Swanston
2016-01-01
Context. Species distribution models (SDM) establish statistical relationships between the current distribution of species and key attributes whereas process-based models simulate ecosystem and tree species dynamics based on representations of physical and biological processes. TreeAtlas, which uses DISTRIB SDM, and Linkages and LANDIS PRO, process...
ARM - Midlatitude Continental Convective Clouds
Jensen, Mike; Bartholomew, Mary Jane; Genio, Anthony Del; Giangrande, Scott; Kollias, Pavlos
2012-01-19
Convective processes play a critical role in the Earth's energy balance through the redistribution of heat and moisture in the atmosphere and their link to the hydrological cycle. Accurate representation of convective processes in numerical models is vital to improving current and future simulations of the Earth's climate system. Despite improvements in computing power, current operational weather and global climate models are unable to resolve the natural temporal and spatial scales important to convective processes and therefore must turn to parameterization schemes to represent these processes. In turn, parameterization schemes in cloud-resolving models need to be evaluated for their generality and applicability to a variety of atmospheric conditions. Data from field campaigns with appropriate forcing descriptors have traditionally been used by modelers for evaluating and improving parameterization schemes.
ARM - Midlatitude Continental Convective Clouds (comstock-hvps)
Jensen, Mike; Comstock, Jennifer; Genio, Anthony Del; Giangrande, Scott; Kollias, Pavlos
2012-01-06
Convective processes play a critical role in the Earth's energy balance through the redistribution of heat and moisture in the atmosphere and their link to the hydrological cycle. Accurate representation of convective processes in numerical models is vital to improving current and future simulations of the Earth's climate system. Despite improvements in computing power, current operational weather and global climate models are unable to resolve the natural temporal and spatial scales important to convective processes and therefore must turn to parameterization schemes to represent these processes. In turn, parameterization schemes in cloud-resolving models need to be evaluated for their generality and applicability to a variety of atmospheric conditions. Data from field campaigns with appropriate forcing descriptors have traditionally been used by modelers for evaluating and improving parameterization schemes.
Macro Level Simulation Model Of Space Shuttle Processing
NASA Technical Reports Server (NTRS)
2000-01-01
The contents include: 1) Space Shuttle Processing Simulation Model; 2) Knowledge Acquisition; 3) Simulation Input Analysis; 4) Model Applications in Current Shuttle Environment; and 5) Model Applications for Future Reusable Launch Vehicles (RLVs). This paper is presented in viewgraph form.
Experimental investigation and model verification for a GAX absorber
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmer, S.C.; Christensen, R.N.
1996-12-31
In the ammonia-water generator-absorber heat exchange (GAX) absorption heat pump, the heat and mass transfer processes which occur between the generator and absorber are the most crucial in assuring that the heat pump will achieve COPs competitive with those of current technologies. In this study, a model is developed for the heat and mass transfer processes that occur in a counter-current vertical fluted tube absorber (VFTA) with inserts. Correlations for heat and mass transfer in annuli are used to model the processes in the VFTA. Experimental data are used to validate the model for three different insert geometries. Comparison of model results with experimental data provides insight into the model corrections necessary to bring the model into agreement with the physical phenomena observed in the laboratory.
Industrial process surveillance system
Gross, Kenneth C.; Wegerich, Stephan W.; Singer, Ralph M.; Mott, Jack E.
1998-01-01
A system and method for monitoring an industrial process and/or industrial data source. The system includes generating time varying data from industrial data sources, processing the data to obtain time correlation of the data, determining the range of data, determining learned states of normal operation and using these states to generate expected values, comparing the expected values to current actual values to identify a current state of the process closest to a learned, normal state; generating a set of modeled data, and processing the modeled data to identify a data pattern and generating an alarm upon detecting a deviation from normalcy.
Industrial process surveillance system
Gross, K.C.; Wegerich, S.W.; Singer, R.M.; Mott, J.E.
1998-06-09
A system and method are disclosed for monitoring an industrial process and/or industrial data source. The system includes generating time varying data from industrial data sources, processing the data to obtain time correlation of the data, determining the range of data, determining learned states of normal operation and using these states to generate expected values, comparing the expected values to current actual values to identify a current state of the process closest to a learned, normal state; generating a set of modeled data, and processing the modeled data to identify a data pattern and generating an alarm upon detecting a deviation from normalcy. 96 figs.
Industrial Process Surveillance System
Gross, Kenneth C.; Wegerich, Stephan W; Singer, Ralph M.; Mott, Jack E.
2001-01-30
A system and method for monitoring an industrial process and/or industrial data source. The system includes generating time varying data from industrial data sources, processing the data to obtain time correlation of the data, determining the range of data, determining learned states of normal operation and using these states to generate expected values, comparing the expected values to current actual values to identify a current state of the process closest to a learned, normal state; generating a set of modeled data, and processing the modeled data to identify a data pattern and generating an alarm upon detecting a deviation from normalcy.
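The monitoring cycle described in these claims (identify the learned normal state closest to the current observations, use it to form expected values, and alarm on deviation from normalcy) can be sketched as below. The Euclidean distance metric and fixed threshold are illustrative simplifications, not the patented similarity test:

```python
import numpy as np

def surveillance_step(learned_states, current_obs, threshold):
    """One surveillance cycle: find the learned normal state closest to the
    current set of observed signal values, treat it as the expected values,
    and raise an alarm if the deviation exceeds a threshold. Distance metric
    and threshold are illustrative stand-ins for the patented method."""
    states = np.asarray(learned_states, dtype=float)
    obs = np.asarray(current_obs, dtype=float)
    # Distance from the observation to each learned normal state.
    distances = np.linalg.norm(states - obs, axis=1)
    expected = states[np.argmin(distances)]
    deviation = float(np.linalg.norm(obs - expected))
    return expected, deviation, deviation > threshold
```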
ERIC Educational Resources Information Center
Stahl, Robert J.
This review of the current status of the human information processing model presents the Stahl Perceptual Information Processing and Operations Model (SPInPrOM) as a model of how thinking, memory, and the processing of information take place within the individual learner. A related system, the Domain of Cognition, is presented as an alternative to…
NASA Technical Reports Server (NTRS)
Adrian, M. L.; Gallagher, D. L.; Khazanov, G. V.; Chsang, S. W.; Liemohn, M. W.; Perez, J. D.; Green, J. L.; Sandel, B. R.; Mitchell, D. G.; Mende, S. B.;
2002-01-01
During a geomagnetic storm on 24 May 2000, the IMAGE Extreme Ultraviolet (EUV) camera observed a plasmaspheric density trough in the evening sector at L-values inside the plasmapause. Forward modeling of this feature has indicated that plasmaspheric densities beyond the outer wall of the trough are well below model expectations. This diminished plasma condition suggests the presence of an erosion process due to the interaction of the plasmasphere with ring current plasmas. We present an overview of EUV, energetic neutral atom (ENA), and Far Ultraviolet (FUV) camera observations associated with the plasmaspheric density trough of 24 May 2000, as well as forward modeling evidence of the existence of a plasmaspheric erosion process during this period. FUV proton aurora image analysis, convolution of ENA observations, and ring current modeling are then presented in an effort to associate the observed erosion with coupling between the plasmasphere and ring-current plasmas.
EVALUATION OF PHYSIOLOGY COMPUTER MODELS, AND THE FEASIBILITY OF THEIR USE IN RISK ASSESSMENT.
This project will evaluate the current state of quantitative models that simulate physiological processes, and how these models might be used in conjunction with the current use of PBPK and BBDR models in risk assessment. The work will include a literature search to identify...
Testing Signal-Detection Models of Yes/No and Two-Alternative Forced-Choice Recognition Memory
ERIC Educational Resources Information Center
Jang, Yoonhee; Wixted, John T.; Huber, David E.
2009-01-01
The current study compared 3 models of recognition memory in their ability to generalize across yes/no and 2-alternative forced-choice (2AFC) testing. The unequal-variance signal-detection model assumes a continuous memory strength process. The dual-process signal-detection model adds a thresholdlike recollection process to a continuous…
Observational support for the current sheet catastrophe model of substorm current disruption
NASA Technical Reports Server (NTRS)
Burkhart, G. R.; Lopez, R. E.; Dusenbery, P. B.; Speiser, T. W.
1992-01-01
The principles of the current sheet catastrophe models are briefly reviewed, and observations of some of the signatures predicted by the theory are presented. The data considered here include AMPTE/CCE observations of fifteen current sheet disruption events. According to the model proposed here, the root cause of the current disruption is some process, as yet unknown, that leads to an increase in the k sub A parameter. Possible causes for the increase in k sub A are discussed.
Wind and Wave Driven Nearshore Circulation at Cape Hatteras Point
NASA Astrophysics Data System (ADS)
Kumar, N.; Voulgaris, G.; Warner, J. C.; List, J. H.
2012-12-01
We have used a combined measurement and modeling approach to identify the hydrodynamic processes responsible for the alongshore transport of sediment that can support the maintenance of Diamond Shoals, NC, a large inner-shelf sedimentary convergent feature. As part of the Carolina Coastal Change Processes project, a one-month field experiment was conducted around Cape Hatteras point during February 2010. The instrumentation consisted of 15 acoustic current meters (measuring pressure and velocity profiles) deployed in water depths of 3-10 m, and a very high frequency (VHF) beam-forming radar system providing surface waves and currents with a resolution of 150 m and a spatial coverage of 10-15 km2. Analysis of the field observations suggests that wind-driven circulation and littoral currents dominate surf zone and inner shelf processes, exceeding tidally rectified flows by at least an order of magnitude. However, the data analysis also identified that relevant processes such as non-linear advective acceleration, the pressure gradient, and the vortex force (due to interaction between wave-induced drift and mean flow vorticity) may be significant, but these were not assessed accurately due to instrument location and accuracy. To obtain a deeper physical understanding of the hydrodynamics at this study site, we applied a three-dimensional Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST) numerical model. The COAWST modeling system comprises nested, coupled, three-dimensional ocean-circulation (ROMS) and wave propagation (SWAN) models, configured for the study site to simulate wave height, direction, and period, and mean current velocities (both Eulerian and Lagrangian). The nesting follows a two-way grid refinement process for the circulation module, and one-way for the wave model. The coarsest parent grid resolved processes on the spatial and temporal scales of the mid-shelf to inner shelf, and subsequent child grids evolved at inner-shelf and surf zone scales.
Preliminary results show that the model successfully reproduces wind-driven circulation and littoral currents. Furthermore, the model simulation provides evidence for (a) a circulation pattern suggesting a mechanism for sediment movement from the littoral zone to the Diamond Shoals complex; and (b) the Diamond Shoals complex acting as an independent coastline, which restricts the littoral currents to following the coastline orientation around Cape Hatteras point. As part of this study, simulated hydrodynamic parameters will be validated against field observations of wave height, direction, and Eulerian velocities from the acoustic current meters, and against sea surface maps of wave height and Lagrangian flows provided by the VHF radar. Moreover, the model results will be analyzed to (a) identify the significance of the terms in the momentum balance that are not estimated accurately through field observations; and (b) provide a quasi-quantitative estimate of the sediment transport contributing to the shoal-building process.
Meson exchange current (MEC) models in neutrino interaction generators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katori, Teppei
2015-05-15
Understanding of the so-called 2 particle-2 hole (2p-2h) effect is an urgent program in neutrino interaction physics for current and future oscillation experiments. Such processes are believed to be responsible for the event excesses observed by recent neutrino experiments. The 2p-2h effect is dominated by the meson exchange current (MEC), and is accompanied by a 2-nucleon emission from the primary vertex, instead of a single nucleon emission from the charged-current quasi-elastic (CCQE) interaction. Current and future high resolution experiments can potentially nail down this effect. For this reason, there are world wide efforts to model and implement this process in neutrino interaction simulations. In these proceedings, I would like to describe how this channel is modeled in neutrino interaction generators.
Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing
2012-12-14
Matei Zaharia, Tathagata Das, Haoyuan Li, Timothy Hunter, Scott Shenker, Ion... However, current programming models for distributed stream processing are relatively low-level, often leaving the user to worry about consistency of
NASA Astrophysics Data System (ADS)
Kumar, T. Senthil; Balasubramanian, V.; Babu, S.; Sanavullah, M. Y.
2007-08-01
AA6061 aluminium alloy (an Al-Mg-Si alloy) has gained wide acceptance in the fabrication of food processing equipment, chemical containers, passenger cars, road tankers, and railway transport systems. The preferred process for welding these aluminium alloys is frequently gas tungsten arc (GTA) welding due to its comparatively easy applicability and lower cost. In single-pass GTA welding of thinner sections of this alloy, pulsed current has been found beneficial compared with conventional continuous-current operation. The use of pulsed current parameters improves the mechanical properties of the welds relative to continuous-current welds of this alloy, due to grain refinement occurring in the fusion zone. In this investigation, an attempt has been made to develop a mathematical model to predict the fusion zone grain diameter from the pulsed current welding parameters. Statistical tools such as design of experiments, analysis of variance, and regression analysis are used to develop the mathematical model. The developed model can be used to predict the fusion zone grain diameter at a 95% confidence level for the given pulsed current parameters. The effect of pulsed current GTA welding parameters on the fusion zone grain diameter of AA6061 aluminium alloy welds is reported in this paper.
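As an illustrative sketch only (the paper's actual regressors and measurements are not given in the abstract), a response-surface style regression of grain diameter on pulse parameters can be fitted by ordinary least squares; every value below is invented:

```python
import numpy as np

# Invented pulsed-current parameters: peak current (A), base current (A),
# pulse frequency (Hz), with hypothetical fusion zone grain diameters (um).
X_raw = np.array([
    [120, 40, 2], [140, 40, 4], [160, 40, 6], [180, 60, 2],
    [120, 60, 4], [140, 60, 6], [160, 80, 2], [180, 80, 4],
    [120, 80, 6], [140, 40, 2], [160, 60, 4], [180, 80, 6],
], dtype=float)
y = np.array([52.0, 49.5, 46.0, 47.5, 50.5, 45.5,
              48.0, 44.0, 46.5, 51.0, 46.0, 42.0])

def design_matrix(X):
    """Full second-order (response-surface) terms: intercept, linear,
    quadratic, and two-factor interaction columns."""
    n = X.shape[1]
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(n)]
    cols += [X[:, i] ** 2 for i in range(n)]
    cols += [X[:, i] * X[:, j] for i in range(n) for j in range(i + 1, n)]
    return np.column_stack(cols)

A = design_matrix(X_raw)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit

def predict(x):
    """Predicted grain diameter for one parameter setting."""
    return (design_matrix(np.atleast_2d(np.asarray(x, float))) @ coef)[0]
```

In practice the significant terms would be screened with ANOVA before settling on a final model, as the abstract describes.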
NASA Technical Reports Server (NTRS)
Metschan, Stephen L.; Wilden, Kurtis S.; Sharpless, Garrett C.; Andelman, Rich M.
1993-01-01
Textile manufacturing processes offer potential cost and weight advantages over traditional composite materials and processes for transport fuselage elements. In the current study, design cost modeling relationships between textile processes and element design details were developed. Such relationships are expected to help future aircraft designers to make timely decisions on the effect of design details and overall configurations on textile fabrication costs. The fundamental advantage of a design cost model is to ensure that the element design is cost effective for the intended process. Trade studies on the effects of processing parameters also help to optimize the manufacturing steps for a particular structural element. Two methods of analyzing design detail/process cost relationships developed for the design cost model were pursued in the current study. The first makes use of existing databases and alternative cost modeling methods (e.g. detailed estimating). The second compares design cost model predictions with data collected during the fabrication of seven-foot circumferential frames for ATCAS crown test panels. The process used in this case involves 2D dry braiding and resin transfer molding of curved 'J' cross section frame members having design details characteristic of the baseline ATCAS crown design.
Modeling Post-Accident Vehicle Egress
2013-01-01
interest for military situations may involve rolled-over vehicles for which detailed movement data are not available. In the current design process...test trials. These evaluations are expensive and time-consuming, and are often performed late in the design process when it is too difficult to...alter the design if weaknesses are discovered. Yet, due to the limitations of current software tools, digital human models (DHMs) are not yet widely
NASA Astrophysics Data System (ADS)
Liu, Lei; Chen, Hongde; Zhong, Yijiang; Wang, Jun; Xu, Changgui; Chen, Anqing; Du, Xiaofeng
2017-10-01
Sediment gravity flow deposits are common, particularly in sandy formations, but their origin has been a matter of debate and there is no consensus about the classification of such deposits. However, sediment gravity flow sandstones are economically important and have the potential to meet a growing demand in oil and gas exploration, so there is a drive to better understand them. This study focuses on sediment gravity flow deposits identified from well cores in Palaeogene deposits from the Liaodong Bay Depression in Bohai Bay Basin, China. We classify the sediment gravity flow deposits into eight lithofacies using lithological characteristics, grain size, and sedimentary structures, and interpret the associated depositional processes. Based on the scale, spatial distribution, and contact relationships of sediment gravity flow deposits, we defined six types of lithofacies associations (LAs) that reflect transformation processes and depositional morphology: LA1 (unconfined proximal breccia deposits), LA2 (confined channel deposits), LA3 (braided-channel lobe deposits), LA4 (unconfined lobe deposits), LA5 (distal sheet deposits), and LA6 (non-channelized sheet deposits). 
Finally, we established three depositional models that reflect the sedimentological characteristics and depositional processes of sediment gravity flow deposits: (1) slope-apron gravel-rich depositional model, which involves cohesive debris flows deposited as LA1 and dilute turbidity currents deposited as LA5; (2) non-channelized surge-like turbidity current depositional model, which mainly comprises sandy slumping, suspended load dominated turbidity currents, and dilute turbidity currents deposited as LA5 and LA6; and (3) channelized subaqueous-fan depositional model, which consists of non-cohesive bedload dominated turbidity currents, suspended load dominated turbidity currents, and dilute turbidity currents deposited as LA2-LA5, originating from sustained extrabasinal turbidity currents (hyperpycnal flow). The depositional models may be applicable to oil and gas exploration and production from sediment gravity flow systems in similar lacustrine depositional environments elsewhere.
Studies and comparison of currently utilized models for ablation in Electrothermal-chemical guns
NASA Astrophysics Data System (ADS)
Jia, Shenli; Li, Rui; Li, Xingwen
2009-10-01
Wall ablation is a key process in the capillary plasma generator of electrothermal-chemical (ETC) guns, and its characteristics directly determine the generator's performance. In the present article, this ablation process is studied theoretically. Widely used mathematical models of the process are analyzed and compared: a recently developed kinetic model, which accounts for the unsteady state in the plasma-wall transition region by dividing it into two sub-layers (a Knudsen layer and a collision-dominated non-equilibrium hydrodynamic layer); a model based on the Langmuir law; and a simplified model widely used for arc-wall interaction in circuit breakers, which assumes an empirically obtained proportionality factor and ablation enthalpy. The bulk plasma state and parameters are assumed to be identical when analyzing and comparing the models, so that only the differences caused by the models themselves are considered. Finally, the ablation rate is calculated with each method and the differences are discussed.
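For orientation, the Langmuir-law branch of such models reduces to the Hertz-Knudsen evaporation flux. A minimal sketch follows, with an assumed vapour pressure, species mass, and wall temperature that are illustrative only and not taken from the article:

```python
import math

def langmuir_mass_flux(p_vap, molar_mass, temperature):
    """Net evaporation mass flux (kg m^-2 s^-1) from the Langmuir
    (Hertz-Knudsen) law, Gamma_m = p_v * sqrt(m / (2*pi*k_B*T)),
    assuming negligible recondensation (sticking coefficient of 1)."""
    k_b = 1.380649e-23        # Boltzmann constant, J/K
    n_a = 6.02214076e23       # Avogadro constant, 1/mol
    m = molar_mass / n_a      # mass of one vapour molecule, kg
    return p_vap * math.sqrt(m / (2.0 * math.pi * k_b * temperature))

# Illustrative only: an assumed 1e4 Pa equilibrium vapour pressure,
# a 28 g/mol vapour species, and a 3000 K capillary wall.
flux = langmuir_mass_flux(1e4, 0.028, 3000.0)
```

The flux is linear in the vapour pressure, which is itself a strong (typically exponential) function of wall temperature; the models compared in the article differ mainly in how they treat the departure from this equilibrium picture.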
NASA Astrophysics Data System (ADS)
Shen, Yan; Ge, Jin-ming; Zhang, Guo-qing; Yu, Wen-bin; Liu, Rui-tong; Fan, Wei; Yang, Ying-xuan
2018-01-01
This paper explores the problem of signal processing in optical current transformers (OCTs). Based on the noise characteristics of OCTs, such as overlapping signals, noise frequency bands, low signal-to-noise ratios, and difficulties in acquiring statistical features of noise power, an improved standard Kalman filtering algorithm was proposed for direct current (DC) signal processing. The state-space model of the OCT DC measurement system is first established, and then mixed noise can be processed by adding mixed noise into measurement and state parameters. According to the minimum mean squared error criterion, state predictions and update equations of the improved Kalman algorithm could be deduced based on the established model. An improved central difference Kalman filter was proposed for alternating current (AC) signal processing, which improved the sampling strategy and noise processing of colored noise. Real-time estimation and correction of noise were achieved by designing AC and DC noise recursive filters. Experimental results show that the improved signal processing algorithms had a good filtering effect on the AC and DC signals with mixed noise of OCT. Furthermore, the proposed algorithm was able to achieve real-time correction of noise during the OCT filtering process.
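The DC branch of such a scheme can be illustrated with a one-state Kalman filter estimating a constant level in noise. This is a generic textbook sketch, not the authors' improved algorithm, and all tuning values are assumed:

```python
import random

def kalman_dc(measurements, q=1e-5, r=0.05):
    """One-state Kalman filter for a (nearly) constant DC level:
    state model x_k = x_{k-1} + w_k (process variance q),
    measurement z_k = x_k + v_k (measurement variance r)."""
    x, p = measurements[0], 1.0       # initial estimate and covariance
    estimates = []
    for z in measurements:
        p = p + q                     # predict: covariance grows by q
        k = p / (p + r)               # Kalman gain
        x = x + k * (z - x)           # correct with the innovation
        p = (1.0 - k) * p             # posterior covariance
        estimates.append(x)
    return estimates

# Synthetic OCT-like DC measurement: a 2.0 level with additive noise.
random.seed(0)
noisy = [2.0 + random.gauss(0.0, 0.2) for _ in range(500)]
filtered = kalman_dc(noisy)
```

The authors' improvements (mixed-noise state augmentation, the central-difference variant for AC, and recursive noise estimation) build on exactly this predict/correct structure.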
Ireland, Jane L; Adams, Christine
2015-01-01
The current study explores associations between implicit and explicit aggression in young adult male prisoners, seeking to apply the Reflection-Impulsive Model and indicate parity with elements of the General Aggression Model and social cognition. Implicit cognitive aggressive processing is not an area that has been examined among prisoners. Two hundred and sixty two prisoners completed an implicit cognitive aggression measure (Puzzle Test) and explicit aggression measures, covering current behaviour (DIPC-R) and aggression disposition (AQ). It was predicted that dispositional aggression would be predicted by implicit cognitive aggression, and that implicit cognitive aggression would predict current engagement in aggressive behaviour. It was also predicted that more impulsive implicit cognitive processing would associate with aggressive behaviour whereas cognitively effortful implicit cognitive processing would not. Implicit aggressive cognitive processing was associated with increased dispositional aggression but not current reports of aggressive behaviour. Impulsive implicit cognitive processing of an aggressive nature predicted increased dispositional aggression whereas more cognitively effortful implicit cognitive aggression did not. The article concludes by outlining the importance of accounting for implicit cognitive processing among prisoners and the need to separate such processing into facets (i.e. impulsive vs. cognitively effortful). Implications for future research and practice in this novel area of study are indicated. Copyright © 2015 Elsevier Ltd. All rights reserved.
Electron beam generation in the turbulent plasma of Z-pinch discharges
NASA Astrophysics Data System (ADS)
Vikhrev, Victor V.; Baronova, Elena O.
1997-05-01
Numerical modeling of the process of electron beam generation in z-pinch discharges is presented. The proposed model represents electron beam generation under turbulent plasma conditions. Strong inhomogeneity of the current distribution in the plasma column has been accounted for to adequately investigate the generation process. The electron beam is generated near maximum compression via the runaway mechanism and is not related to the current-break effect.
Updates on Modeling the Water Cycle with the NASA Ames Mars Global Climate Model
NASA Technical Reports Server (NTRS)
Kahre, M. A.; Haberle, R. M.; Hollingsworth, J. L.; Montmessin, F.; Brecht, A. S.; Urata, R.; Klassen, D. R.; Wolff, M. J.
2017-01-01
Global Circulation Models (GCMs) have made steady progress in simulating the current Mars water cycle. It is now widely recognized that clouds are a critical component that can significantly affect the nature of the simulated water cycle. Two processes in particular are key to implementing clouds in a GCM: the microphysical processes of formation and dissipation, and their radiative effects on heating/cooling rates. Together, these processes alter the thermal structure, change the dynamics, and regulate inter-hemispheric transport. We have made considerable progress representing these processes in the NASA Ames GCM, particularly in the presence of radiatively active water ice clouds. We present the current state of our group's water cycle modeling efforts, show results from selected simulations, highlight some of the issues, and discuss avenues for further investigation.
ERIC Educational Resources Information Center
Brysbaert, Marc; Duyck, Wouter
2010-01-01
The Revised Hierarchical Model (RHM) of bilingual language processing dominates current thinking on bilingual language processing. Recently, basic tenets of the model have been called into question. First, there is little evidence for separate lexicons. Second, there is little evidence for language selective access. Third, the inclusion of…
Sleep and memory in the making. Are current concepts sufficient in children?
Peigneux, P
2014-01-01
Memory consolidation is an active process rooted in brain plasticity. How plasticity mechanisms develop and are modulated after learning is at the core of current models of sleep-dependent memory consolidation. Nowadays, two main classes of sleep-related memory consolidation theories are proposed, namely system consolidation and synaptic homeostasis. However, novel models of consolidation are emerging that might better account for the highly dynamic and interactive processes of encoding and memory consolidation. Processing steps can take place at various temporal phases and be modulated by interactions with prior experiences and ongoing events. In this perspective, sleep might support (or not) memory consolidation processes under specific neurophysiological and environmental circumstances, leading to enduring representations in long-term memory stores. We outline here a discussion of how current and emergent models account for the complexity and apparent inconsistency of empirical data. Additionally, models aimed at understanding neurophysiological and/or cognitive processes should not only provide a satisfactory explanation for the phenomena at stake, but also account for their ontogeny and the conditions that disrupt their organisation. Looking at the available literature, this developmental condition remains unfulfilled when trying to understand the relationships between sleep, learning and memory consolidation processes, both in healthy children and in children with pathological conditions.
Carmena, Jose M.
2016-01-01
Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. 
Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter initialization. Finally, the architecture extended control to tasks beyond those used for CLDA training. These results have significant implications towards the development of clinically-viable neuroprosthetics. PMID:27035820
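The point-process observation model at the heart of such a decoder can be sketched generically: the spike probability in a small time bin is governed by a log-linear conditional intensity in the decoded kinematic variable. All parameter values below are illustrative, not the authors':

```python
import math

def spike_prob(alpha, beta, velocity, dt):
    """Spike probability in a bin of width dt (s) under a point-process
    model with log-linear intensity: lambda = exp(alpha + beta * v)."""
    return min(math.exp(alpha + beta * velocity) * dt, 1.0)

def log_likelihood(spikes, velocities, alpha, beta, dt=0.001):
    """Bernoulli-approximation log-likelihood of a binary spike train
    given a (hypothetical) intended-velocity signal."""
    ll = 0.0
    for s, v in zip(spikes, velocities):
        p = spike_prob(alpha, beta, v, dt)
        ll += math.log(p) if s else math.log(1.0 - p)
    return ll

# Toy data: the unit fires only while velocity is positive, so a tuning
# model with beta > 0 should fit better than one with beta < 0.
velocities = [1.0] * 100 + [-1.0] * 100
spikes = [1 if i % 10 == 0 else 0 for i in range(100)] + [0] * 100
ll_pos = log_likelihood(spikes, velocities, alpha=0.0, beta=1.0)
ll_neg = log_likelihood(spikes, velocities, alpha=0.0, beta=-1.0)
```

Because the likelihood is evaluated per bin, parameters can in principle be updated at every spike event rather than in batches, which is the time-scale advantage the abstract emphasizes.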
Watershed Simulation of Nutrient Processes
In this presentation, nitrogen processes simulated in watershed models were reviewed and compared. Furthermore, current researches on nitrogen losses from agricultural fields were also reviewed. Finally, applications with those models were reviewed and selected successful and u...
State of the art in pathology business process analysis, modeling, design and optimization.
Schrader, Thomas; Blobel, Bernd; García-Rojo, Marcial; Daniel, Christel; Słodkowska, Janina
2012-01-01
For analyzing current workflows and processes, for improving them, for quality management and quality assurance, for integrating hardware and software components, but also for education, training, and communication between experts from different domains, modeling business processes in a pathology department is indispensable. The authors highlight three main processes in pathology: general diagnostics, cytology diagnostics, and autopsy. In this chapter, those processes are formally modeled and described in detail. Finally, specialized processes such as immunohistochemistry and frozen section are also considered.
Physical Fitness and the Stress Process
ERIC Educational Resources Information Center
Ensel, Walter M.; Lin, Nan
2004-01-01
In the current paper we focus on the role of physical fitness in the life stress process for both psychological and physical well-being. The major research question posed in the current study is: Does physical fitness deter distress in a model containing the major components of the life stress process? That is, do individuals who exercise show…
Fully Burdened Cost of Fuel Using Input-Output Analysis
2011-12-01
…wide extension of the Bulk Fuels Distribution Model could be used to replace the current seven-step Fully Burdened Cost of Fuel process with a single step, allowing for less complex and…
Nonrational Processes in Ethical Decision Making
ERIC Educational Resources Information Center
Rogerson, Mark D.; Gottlieb, Michael C.; Handelsman, Mitchell M.; Knapp, Samuel; Younggren, Jeffrey
2011-01-01
Most current ethical decision-making models provide a logical and reasoned process for making ethical judgments, but these models are empirically unproven and rely upon assumptions of rational, conscious, and quasi-legal reasoning. Such models predominate despite the fact that many nonrational factors influence ethical thought and behavior,…
NASA Astrophysics Data System (ADS)
Ilhan, Z.; Wehner, W. P.; Schuster, E.; Boyer, M. D.; Gates, D. A.; Gerhardt, S.; Menard, J.
2015-11-01
Active control of the toroidal current density profile is crucial to achieve and maintain high-performance, MHD-stable plasma operation in NSTX-U. A first-principles-driven, control-oriented model describing the temporal evolution of the current profile has been proposed earlier by combining the magnetic diffusion equation with empirical correlations obtained at NSTX-U for the electron density, electron temperature, and non-inductive current drives. A feedforward + feedback control scheme for the regulation of the current profile is constructed by embedding the proposed nonlinear, physics-based model into the control design process. Firstly, nonlinear optimization techniques are used to design feedforward actuator trajectories that steer the plasma to a desired operating state with the objective of supporting the traditional trial-and-error experimental process of advanced scenario planning. Secondly, a feedback control algorithm to track a desired current profile evolution is developed with the goal of adding robustness to the overall control scheme. The effectiveness of the combined feedforward + feedback control algorithm for current profile regulation is tested in predictive simulations carried out in TRANSP. Supported by PPPL.
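The feedforward + feedback structure can be illustrated on a toy scalar surrogate of the profile dynamics. This is a minimal sketch under assumed first-order dynamics, not the TRANSP-validated design:

```python
def simulate(target, a=1.0, b=0.5, kp=4.0, dt=0.01, steps=2000):
    """Toy feedforward + feedback tracking of a scalar 'profile' variable
    with assumed first-order dynamics x' = -a*x + b*u. The feedforward
    input holds the desired equilibrium (0 = -a*target + b*u_ff); the
    proportional feedback term drives the tracking error to zero."""
    u_ff = a * target / b              # feedforward: equilibrium actuation
    x = 0.0                            # start away from the target
    for _ in range(steps):
        u = u_ff + kp * (target - x)   # feedforward + feedback actuation
        x += dt * (-a * x + b * u)     # forward-Euler plant update
    return x

final = simulate(1.0)   # settles at the requested target
```

The division of labor mirrors the abstract: the feedforward term does the scenario planning (steering to the desired state), while feedback adds robustness against model error and disturbances.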
NARSTO critical review of photochemical models and modeling
NASA Astrophysics Data System (ADS)
Russell, Armistead; Dennis, Robin
Photochemical air quality models play a central role both in scientific investigation of how pollutants evolve in the atmosphere and in developing policies to manage air quality. In the past 30 years, these models have evolved from rather crude representations of the physics and chemistry affecting trace species to their current state: comprehensive, but not complete. The evolution has included advancements not only in the level of process descriptions, but also in the computational implementation, including numerical methods. As part of the NARSTO Critical Reviews, this article discusses the current strengths and weaknesses of air quality models and the modeling process. Current Eulerian models are found to represent well the primary processes affecting the evolution of trace species in most cases, though some exceptions may exist. For example, sub-grid-scale processes, such as concentrated power plant plumes, are treated only approximately. It is not apparent how much such approximations affect their results and the policies based upon those results. A significant weakness has been in how investigators have addressed, and communicated, such uncertainties. Studies find that major uncertainties are due to model inputs, e.g., emissions and meteorology, more so than the model itself. One of the primary weaknesses identified is in the modeling process, not the models. Evaluation has been limited, largely due to data constraints. Seldom is there ample observational data to conduct a detailed model intercomparison using consistent data (e.g., the same emissions and meteorology). Further model advancement, and development of greater confidence in the use of models, is hampered by the lack of thorough evaluation and intercomparisons.
Model advances are seen in the use of new tools for extending the interpretation of model results (e.g., process and sensitivity analysis), modeling systems that facilitate their use, and extensions of model capabilities (e.g., aerosol dynamics and sub-grid-scale representations). Another possible direction is the development and widespread use of a community model acting as a platform for multiple groups and agencies to collaborate and progress more rapidly.
Modelling and Simulation for Requirements Engineering and Options Analysis
2010-05-01
should be performed to work successfully in the domain; and process-based techniques model the processes that occur in the work domain. There is a crisp… (DRDC Toronto CR 2010-049) Can the current technique for developing simulation models for assessments
Two-dimensional time-dependent modelling of fume formation in a pulsed gas metal arc welding process
NASA Astrophysics Data System (ADS)
Boselli, M.; Colombo, V.; Ghedini, E.; Gherardi, M.; Sanibondi, P.
2013-06-01
Fume formation in a pulsed gas metal arc welding (GMAW) process is investigated by coupling a time-dependent, axi-symmetric, two-dimensional model, which takes into account both droplet detachment and production of metal vapour, with a model for fume formation and transport based on the method of moments for the solution of the aerosol general dynamic equation. We report simulation results for a pulsed process (peak current = 350 A, background current = 30 A, period = 9 ms) for a 1 mm diameter iron wire with Ar shielding gas. Results showed that metal vapour production occurs mainly at the wire tip, whereas fume formation is concentrated in the fringes of the arc in the spatial region close to the workpiece, where metal vapours are transported by convection. The proposed modelling approach allows time-dependent tracking of fumes also in plasma processes where temperature-time variations occur faster than nanoparticle transport from the nucleation region to the surrounding atmosphere, as is the case for most pulsed GMAW processes.
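A minimal method-of-moments sketch helps fix ideas: instead of resolving the full particle size distribution, one integrates only its low-order moments. Here just the zeroth and first moments evolve under nucleation and condensational growth, coagulation is neglected, and every rate constant is invented:

```python
def integrate_moments(j_nuc=1e12, v_star=1e-27, growth=1e-28,
                      dt=1e-4, steps=1000):
    """Toy method-of-moments integration of the aerosol general dynamic
    equation, tracking particle number density M0 (m^-3) and total
    particle volume fraction M1 (dimensionless):
        dM0/dt = J             (nucleation of new particles)
        dM1/dt = J*v* + G*M0   (nucleus volume + condensational growth)
    Coagulation is neglected; rate constants are illustrative only."""
    m0, m1 = 0.0, 0.0
    for _ in range(steps):
        m0 += dt * j_nuc                       # new particles appear
        m1 += dt * (j_nuc * v_star + growth * m0)  # volume accumulates
    return m0, m1

m0, m1 = integrate_moments()
mean_volume = m1 / m0   # grows beyond the critical nucleus volume v*
```

The appeal of the moment formulation, as in the paper, is that these few ordinary differential equations can be co-advected with the plasma flow at a fraction of the cost of a sectional size-distribution model.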
2009-09-01
models are explored using the Knowledge Value Added (KVA) methodology, and the most efficient model is developed and validated by applying it to the current IA C&A process flow at the TSO-KC. Finally… models requires only one available actor from its respective group, rather than all actors in the group, to
Automated MRI segmentation for individualized modeling of current flow in the human head.
Huang, Yu; Dmochowski, Jacek P; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C
2013-12-01
High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly.
Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.
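The 7% and 18% deviation figures quoted above presumably come from comparing automated and manual results point-by-point. A minimal sketch of one plausible relative-deviation metric (a normalized Euclidean difference, assumed here for illustration rather than taken from the paper):

```python
import math

def relative_deviation(auto_field, manual_field):
    """Relative deviation between two sampled field maps:
    ||a - m|| / ||m||, expressed as a percentage."""
    num = math.sqrt(sum((a - m) ** 2 for a, m in zip(auto_field, manual_field)))
    den = math.sqrt(sum(m ** 2 for m in manual_field))
    return 100.0 * num / den

# Identical maps deviate by 0%; a uniformly 10%-scaled map deviates by 10%.
manual = [0.2, 0.5, 0.9, 1.4]
auto = [v * 1.1 for v in manual]
print(round(relative_deviation(auto, manual), 1))  # 10.0
```

Any norm-based metric behaves similarly; the choice of normalization is what makes a "7% deviation" interpretable across subjects.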
Current State of the Art Historic Building Information Modelling
NASA Astrophysics Data System (ADS)
Dore, C.; Murphy, M.
2017-08-01
In an extensive review of existing literature a number of observations were made in relation to the current approaches for recording and modelling existing buildings and environments: Data collection and pre-processing techniques are becoming increasingly automated to allow for near real-time data capture and fast processing of this data for later modelling applications. Current BIM software is almost completely focused on new buildings and has very limited tools and pre-defined libraries for modelling existing and historic buildings. The development of reusable parametric library objects for existing and historic buildings supports modelling with high levels of detail while decreasing the modelling time. Mapping these parametric objects to survey data, however, is still a time-consuming task that requires further research. Promising developments have been made towards automatic object recognition and feature extraction from point clouds for as-built BIM. However, results are currently limited to simple and planar features. Further work is required for automatic accurate and reliable reconstruction of complex geometries from point cloud data. Procedural modelling can provide an automated solution for generating 3D geometries but lacks the detail and accuracy required for most as-built applications in AEC and heritage fields.
Curing conditions to inactivate Trichinella spiralis muscle larvae in ready-to-eat pork sausage
USDA-ARS?s Scientific Manuscript database
Curing processes for ready to eat (RTE) pork products currently require individual validation of methods to demonstrate inactivation of Trichinella spiralis. This is a major undertaking for each process; currently no model of meat chemistry exists that can be correlated with inactivation of Trichin...
ERIC Educational Resources Information Center
Ricco, Robert B.; Overton, Willis F.
2011-01-01
Many current psychological models of reasoning minimize the role of deductive processes in human thought. In the present paper, we argue that deduction is an important part of ordinary cognition and we propose that a dual systems Competence-Procedural processing model conceptualized within relational developmental systems theory…
Nijp, Jelmer J; Metselaar, Klaas; Limpens, Juul; Teutschbein, Claudia; Peichl, Matthias; Nilsson, Mats B; Berendse, Frank; van der Zee, Sjoerd E A T M
2017-02-15
The water content of the topsoil is one of the key factors controlling biogeochemical processes, greenhouse gas emissions and biosphere - atmosphere interactions in many ecosystems, particularly in northern peatlands. In these wetland ecosystems, the water content of the photosynthetically active peatmoss layer is crucial for ecosystem functioning and carbon sequestration, and is sensitive to future shifts in rainfall and drought characteristics. Current peatland models differ in the degree to which hydrological feedbacks are included, but how this affects peatmoss drought projections is unknown. The aim of this paper was to systematically test whether the level of hydrological detail in models could bias projections of water content and drought stress for peatmoss in northern peatlands, using downscaled projections for rainfall and potential evapotranspiration in the current (1991-2020) and future climate (2061-2090). We considered four model variants that either include or exclude moss (rain)water storage and peat volume change, as these are two central processes in the hydrological self-regulation of peatmoss carpets. Model performance was validated using field data from a peatland in northern Sweden. Including moss water storage as well as peat volume change resulted in a significant improvement of model performance, despite the extra parameters added. The best performance was achieved when both processes were included. Including moss water storage and peat volume change consistently reduced projected peatmoss drought frequency by >50%, relative to the model excluding both processes. Projected peatmoss drought frequency in the growing season was 17% smaller under the future climate than under the current climate, but was unaffected by including the hydrological self-regulating processes.
Our results suggest that ignoring these two fine-scale processes important in hydrological self-regulation of northern peatlands will have large consequences for projected climate change impact on ecosystem processes related to topsoil water content, such as greenhouse gas emissions. Copyright © 2016 Elsevier B.V. All rights reserved.
Framework for Design of Traceability System on Organic Rice Certification
NASA Astrophysics Data System (ADS)
Purwandoko, P. B.; Seminar, K. B.; Sutrisno; Sugiyanta
2018-05-01
Nowadays, preferences for organic products such as organic rice have increased, because consumer awareness of healthy and eco-friendly food consumption has grown. It is therefore very important to ensure the organic quality of the product to be produced. Certification is a series of processes held to ensure that the quality of a product meets all criteria of the organic standards. Currently, no traceability information system for organic rice certification is available; the existing system is still conducted manually, which causes loss of information during the storage process. This paper aims to develop a traceability framework for the organic rice certification process. First, the main issue discussed is the organic certification process. Second, the unified modeling language (UML) is used to build a model of user requirements in order to develop a traceability system for all actors in the certification process. Furthermore, the information-capture model along the certification process is explained in this paper. The model shows the information flow that has to be recorded by each actor. Finally, the challenges in implementing the system are discussed.
GREENSCOPE: A Method for Modeling Chemical Process Sustainability
Current work within the U.S. Environmental Protection Agency’s National Risk Management Research Laboratory is focused on the development of a method for modeling chemical process sustainability. The GREENSCOPE methodology, defined for the four bases of Environment, Economics, Ef...
On the theory of coronal heating mechanisms
NASA Technical Reports Server (NTRS)
Kuperus, M.; Ionson, J. A.; Spicer, D. S.
1980-01-01
Theoretical models describing solar coronal heating mechanisms are reviewed in some detail. The requirements of chromospheric and coronal heating are discussed in the context of the fundamental constraints encountered in modelling the outer solar atmosphere. Heating by acoustic processes in the 'nonmagnetic' parts of the atmosphere is examined with particular emphasis on the shock wave theory. Also discussed are theories of heating by electrodynamic processes in the magnetic regions of the corona, either magnetohydrodynamic waves or current heating in the regions with large electric current densities (flare type heating). Problems associated with each of the models are addressed.
2014-04-30
cost to acquire systems as design maturity could be verified incrementally as the system was developed vice waiting for a specific large “big bang” ... Framework (MBAF) be applied to simulate or optimize process variations on programs? LSI Roles and Responsibilities A review of the roles and ... the model/process optimization process. It is the current intent that NAVAIR will use the model to run simulations on process changes in an attempt to
MRMAide: a mixed resolution modeling aide
NASA Astrophysics Data System (ADS)
Treshansky, Allyn; McGraw, Robert M.
2002-07-01
The Mixed Resolution Modeling Aide (MRMAide) technology is an effort to semi-automate the implementation of Mixed Resolution Modeling (MRM). MRMAide suggests ways of resolving differences in fidelity and resolution across diverse modeling paradigms. The goal of MRMAide is to provide a technology that will allow developers to incorporate model components into scenarios other than those for which they were designed. Currently, MRM is implemented by hand. This is a tedious, error-prone, and non-portable process. MRMAide, in contrast, will automatically suggest to a developer where and how to connect different components and/or simulations. MRMAide has three phases of operation: pre-processing, data abstraction, and validation. During pre-processing the components to be linked together are evaluated in order to identify appropriate mapping points. During data abstraction those mapping points are linked via data abstraction algorithms. During validation developers receive feedback regarding their newly created models relative to existing baselined models. The current work presents an overview of the various problems encountered during MRM and the various technologies utilized by MRMAide to overcome those problems.
Model Calibration in Watershed Hydrology
NASA Technical Reports Server (NTRS)
Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh
2009-01-01
Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This chapter reviews the current state of the art of model calibration in watershed hydrology, with special emphasis on our own contributions over the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.
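Automatic single-objective calibration, as described above, amounts to searching the parameter space for the value that best reproduces observations. A minimal sketch, using a hypothetical one-parameter linear-reservoir model and synthetic "observations" standing in for real streamflow data (all names and values invented for illustration):

```python
# Minimal single-objective calibration sketch: choose the parameter value
# that minimizes the sum of squared errors between simulated and observed
# discharge. Model, rainfall and "observations" are illustrative only.

def simulate(k, rain, s0=10.0):
    """Linear reservoir: storage S gains rainfall, releases Q = k*S per step."""
    s, q = s0, []
    for p in rain:
        s += p
        out = k * s
        s -= out
        q.append(out)
    return q

def sse(sim, obs):
    return sum((s - o) ** 2 for s, o in zip(sim, obs))

rain = [5, 0, 0, 3, 0, 0, 0, 1, 0, 0]
obs = simulate(0.3, rain)            # synthetic observations, true k = 0.3

candidates = [i / 100 for i in range(1, 100)]
best_k = min(candidates, key=lambda k: sse(simulate(k, rain), obs))
print(best_k)  # 0.3
```

Real calibrators replace the grid search with gradient-based or evolutionary optimizers and the toy reservoir with a full watershed model, but the objective-function structure is the same.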
The maturing of the quality improvement paradigm in the SEL
NASA Technical Reports Server (NTRS)
Basili, Victor R.
1993-01-01
The Software Engineering Laboratory uses a paradigm for improving the software process and product, called the quality improvement paradigm. This paradigm has evolved over the past 18 years, along with our software development processes and product. Since 1976, when we first began the SEL, we have learned a great deal about improving the software process and product, making a great many mistakes along the way. The quality improvement paradigm, as it is currently defined, can be broken up into six steps: characterize the current project and its environment with respect to the appropriate models and metrics; set the quantifiable goals for successful project performance and improvement; choose the appropriate process model and supporting methods and tools for this project; execute the processes, construct the products, and collect, validate, and analyze the data to provide real-time feedback for corrective action; analyze the data to evaluate the current practices, determine problems, record findings, and make recommendations for future project improvements; and package the experience gained in the form of updated and refined models and other forms of structured knowledge gained from this and prior projects, and save it in an experience base to be reused on future projects.
NASA Astrophysics Data System (ADS)
Lu, Mark; Liang, Curtis; King, Dion; Melvin, Lawrence S., III
2005-11-01
Model-based optical proximity correction (OPC) has become an indispensable tool for achieving wafer pattern to design fidelity at current manufacturing process nodes. Most model-based OPC is performed considering the nominal process condition, with limited consideration of through-process manufacturing robustness. This study examines the use of off-target process models - models that represent non-nominal process states such as would occur with a dose or focus variation - to understand and manipulate the final pattern correction into a more process-robust configuration. The study will first examine and validate the process of generating an off-target model, then examine the quality of the off-target model. Once the off-target model is proven, it will be used to demonstrate methods of generating process-robust corrections. The concepts are demonstrated using a 0.13 μm logic gate process. Preliminary indications show success in both off-target model production and process-robust corrections. With these off-target models as tools, mask production cycle times can be reduced.
Single neutral pion production by charged-current $\bar{\nu}_\mu$ interactions
Le, T.; Palomino, J. L.; Aliaga, L.; ...
2015-10-07
We studied single neutral pion production via muon antineutrino charged-current interactions in plastic scintillator (CH) using the MINERvA detector exposed to the NuMI low-energy, wideband antineutrino beam at Fermilab. Measurement of this process constrains models of neutral pion production in nuclei, which is important because the neutral-current analog is a background for appearance oscillation experiments. Furthermore, the differential cross sections for π⁰ momentum and production angle, for events with a single observed π⁰ and no charged pions, are presented and compared to model predictions. These results comprise the first measurement of the π⁰ kinematics for this process.
Dark forces coupled to nonconserved currents
NASA Astrophysics Data System (ADS)
Dror, Jeff A.; Lasenby, Robert; Pospelov, Maxim
2017-10-01
New light vectors with dimension-4 couplings to Standard Model states have (energy/vector mass)²-enhanced production rates unless the current they couple to is conserved. These processes allow us to derive new constraints on the couplings of such vectors that are significantly stronger than those in the previous literature for a wide variety of models. Examples include vectors with axial couplings to quarks and vectors coupled to currents (such as baryon number) that are only broken by the chiral anomaly. Our new limits arise from a range of processes, including rare Z decays and flavor-changing meson decays, and rule out a number of phenomenologically motivated proposals.
1986-09-01
differentiation between the systems. This study will investigate an appropriate Order Processing and Management Information System (OP&MIS) to link base-level ... methodology: 1. Reviewed the current order processing and information model of the TUAF Logistics System. (centralized-manual model) 2. Described the ... RDS program’s order processing and information system. (centralized-computerized model) 3. Described the order processing and information system of
NASA Astrophysics Data System (ADS)
Boutaghane, A.; Bouhadef, K.; Valensi, F.; Pellerin, S.; Benkedda, Y.
2011-04-01
This paper presents results of a theoretical and experimental investigation of the welding arc in the Gas Tungsten Arc Welding (GTAW) and Gas Metal Arc Welding (GMAW) processes. A theoretical model consisting of the simultaneous resolution of the set of conservation equations for mass, momentum, energy and current, Ohm's law and the Maxwell equations is used to predict temperature and current density distributions in argon welding arcs. A current density profile had to be assumed over the surface of the cathode as a boundary condition in order to make the theoretical calculations possible. In the stationary GTAW process, this assumption leads to fair agreement with experimental results reported in the literature, with maximum arc temperatures of ~21 000 K. In contrast to the GTAW process, in the GMAW process the electrode is consumable and non-thermionic, and a realistic boundary condition for the current density is lacking. To establish this crucial boundary condition, the current density at the melting anode electrode, an original method is set up to enable the current density to be determined experimentally. A high-speed camera (3000 images/s) is used to obtain the geometrical dimensions of the welding wire used as the anode. Once the total area of the melting anode covered by the arc plasma is determined, the current density at the anode surface can be calculated. For a 330 A arc, the current density at the melting anode surface is found to be 5 × 10⁷ A m⁻² for a 1.2 mm diameter welding electrode.
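The anode boundary condition follows from J = I/A. A quick consistency check of the numbers reported above (330 A and 5 × 10⁷ A m⁻²) recovers the arc-covered melting-anode area that the camera measurement must have yielded:

```python
# Consistency check of J = I / A using the values quoted in the abstract.
arc_current = 330.0        # arc current, A
current_density = 5e7      # reported anode current density, A/m^2

area_m2 = arc_current / current_density   # implied arc-covered anode area
area_mm2 = area_m2 * 1e6
print(round(area_mm2, 2))  # 6.6
```

An area of about 6.6 mm² is several times the ~1.13 mm² cross-section of a 1.2 mm wire, consistent with the arc attaching to an extended molten anode surface rather than only the bare wire tip.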
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-01
... dimensional measurement of the holes, and doing corrective actions if necessary; doing an eddy current... dimensional measurement of the holes, doing an eddy current inspection of the holes for cracking, doing a cold... the effective date of this AD, prior to doing any cold working process, determine if an eddy current...
Monitoring autocorrelated process: A geometric Brownian motion process approach
NASA Astrophysics Data System (ADS)
Li, Lee Siaw; Djauhari, Maman A.
2013-09-01
Autocorrelated process control is common in today's industrial process control practice. The current practice of autocorrelated process control is to eliminate the autocorrelation by using an appropriate model, such as a Box-Jenkins model, and then to conduct the process control operation based on the residuals. In this paper we show that many time series are governed by a geometric Brownian motion (GBM) process. In this case, by using the properties of a GBM process, we only need an appropriate transformation, and then to model the transformed data, to obtain the conditions needed in traditional process control. An industrial example of a cocoa powder production process in a Malaysian company is presented and discussed to illustrate the advantages of the GBM approach.
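The "appropriate transformation" for a GBM is typically the log-return: for a geometric Brownian motion, log(x[t+1]/x[t]) is independent and normally distributed, so ordinary Shewhart-style limits apply to the transformed data. A minimal sketch with made-up process readings (not the cocoa powder data):

```python
import math

def log_returns(series):
    """For a GBM, log(x[t+1]/x[t]) is i.i.d. normal, so the transformed
    data fit a traditional control chart."""
    return [math.log(b / a) for a, b in zip(series, series[1:])]

def shewhart_limits(data):
    """Three-sigma limits estimated from the data themselves."""
    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in data) / (n - 1))
    return mean - 3 * sd, mean + 3 * sd

# Illustrative in-control readings of a quality characteristic over time.
x = [100.0, 102.0, 101.0, 103.5, 102.8, 104.0, 103.1, 105.2]
r = log_returns(x)
lcl, ucl = shewhart_limits(r)
in_control = all(lcl <= v <= ucl for v in r)
print(in_control)  # True
```

In practice the limits would be estimated from a historical in-control period and then applied to new observations, rather than to the same data used to set them.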
Bottom friction. A practical approach to modelling coastal oceanography
NASA Astrophysics Data System (ADS)
Bolanos, Rodolfo; Jensen, Palle; Kofoed-Hansen, Henrik; Tornsfeldt Sørensen, Jacob
2017-04-01
Coastal processes imply the interaction of the atmosphere, the sea, the coastline and the bottom. The spatial gradients in this area are normally large, induced by orographic and bathymetric features. Although it is nowadays possible to obtain high-resolution bathymetry, details of the seabed, e.g. sediment type and the presence of biological material and living organisms, are not available. Additionally, these properties, as well as the bathymetry, can be highly dynamic. These bottom characteristics are very important for describing the boundary layer of currents and waves and control to a large degree the dissipation of flows. Bottom friction is thus typically a calibration parameter in the numerical modelling of coastal processes. In this work, we assess this process and put it in the context of other physical process uncertainties influencing wind waves and currents in coastal areas. A case study in the North Sea is used, particularly the west coast of Denmark, where water depths of less than 30 m cover a wide fringe along the coast and several offshore wind farm developments are being carried out. We use the hydrodynamic model MIKE 21 HD and the spectral wave model MIKE 21 SW to simulate atmosphere- and tide-induced flows and wind wave generation and propagation. Both models represent the state of the art and have been developed for flexible meshes, which are ideal for coastal oceanography as they can better represent coastlines and allow a variable spatial resolution within the domain. Sensitivity tests of bottom friction formulations are carried out in the context of other processes (e.g. model forcing uncertainties, wind and wave interactions, wind drag coefficient). Additionally, a map of varying bottom properties is generated based on a literature survey to explore the impact of the spatial variability. Different approaches are assessed in order to establish a best practice regarding bottom friction and coastal oceanographic modelling.
Its contribution is also assessed during storm conditions, where its most evident impact is expected as waves are affected by the bottom processes in larger areas, making bottom dissipation more efficient. We use available waves and current measurements in the North Sea (e.g. Ekofisk, Fino platforms and some other coastal stations at the west coast of Denmark) to quantify the importance of processes influencing waves and currents in the coastal zone and putting it in the context of the importance of bottom friction and other processes uncertainties.
[Neither Descartes nor Freud? Current pain models in psychosomatic medicine].
Egloff, N; Egle, U T; von Känel, R
2008-05-14
Models that explain chronic pain based on the mere presence or absence of peripheral somatic findings, or that view pain as psychological in origin when there is no somatic explanation, have their shortcomings. Current scientific knowledge calls for distinct pain concepts that integrate neurobiological and neuropsychological aspects of pain processing.
NASA Technical Reports Server (NTRS)
Williams, R. M.; Ryan, M. A.; Saipetch, C.; LeDuc, H. G.
1996-01-01
The exchange current observed at porous metal electrodes on sodium or potassium beta -alumina solid electrolytes in alkali metal vapor is quantitatively modeled with a multi-step process with good agreement with experimental results.
Allostasis and the human brain: Integrating models of stress from the social and life sciences
Ganzel, Barbara L.; Morris, Pamela A.; Wethington, Elaine
2009-01-01
We draw on the theory of allostasis to develop an integrative model of the current stress process that highlights the brain as a dynamically adapting interface between the changing environment and the biological self. We review evidence that the core emotional regions of the brain constitute the primary mediator of the well-established association between stress and health, as well as the neural focus of “wear and tear” due to ongoing adaptation. This mediation, in turn, allows us to model the interplay over time between context, current stressor exposure, internal regulation of bodily processes, and health outcomes. We illustrate how this approach facilitates the integration of current findings in human neuroscience and genetics with key constructs from stress models from the social and life sciences, with implications for future research and the design of interventions targeting individuals at risk. PMID:20063966
Wang, Yi; Lee, Sui Mae; Dykes, Gary
2015-01-01
Bacterial attachment to abiotic surfaces can be explained as a physicochemical process. Mechanisms of the process have been widely studied but are not yet well understood due to their complexity. Physicochemical processes can be influenced by various interactions and factors in attachment systems, including, but not limited to, hydrophobic interactions, electrostatic interactions and substratum surface roughness. Mechanistic models and control strategies for bacterial attachment to abiotic surfaces have been established based on the current understanding of the attachment process and the interactions involved. Due to a lack of process control and standardization in the methodologies used to study the mechanisms of bacterial attachment, however, various challenges are apparent in the development of models and control strategies. In this review, the physicochemical mechanisms, interactions and factors affecting the process of bacterial attachment to abiotic surfaces are described. Mechanistic models established based on these parameters are discussed in terms of their limitations. Currently employed methods to study these parameters and bacterial attachment are critically compared. The roles of these parameters in the development of control strategies for bacterial attachment are reviewed, and the challenges that arise in developing mechanistic models and control strategies are assessed.
Lübken, Manfred; Gehring, Tito; Wichern, Marc
2010-02-01
The anaerobic fermentation process has achieved growing importance in practice in recent years. Anaerobic fermentation is especially valuable because its end product is methane, a renewable energy source. While the use of renewable energy sources has accelerated substantially in recent years, their potential has not yet been sufficiently exploited. This is especially true for biogas technology. Biogas is created in a multistage process in which different microorganisms use the energy stored in carbohydrates, fats, and proteins for their metabolism. In order to produce biogas, any organic substrate that is microbiologically accessible can be used. The microbiological process in itself is extremely complex and still requires substantial research in order to be fully understood. Technical facilities for the production of biogas are thus generally scaled in a purely empirical manner. The efficiency of the process, therefore, corresponds to the optimum only in the rarest cases. An optimal production of biogas, as well as a stable plant operation requires detailed knowledge of the biochemical processes in the fermenter. The use of mathematical models can help to achieve the necessary deeper understanding of the process. This paper reviews both the history of model development and current state of the art in modeling anaerobic digestion processes.
The College-Choice Process of High Achieving Freshmen: A Comparative Case Study
ERIC Educational Resources Information Center
Dale, Amanda
2010-01-01
The purpose of this study was to examine the college-choice process of high achieving students. Employing current literature and previous research, it combined current models of college choice and the influential factors identified throughout the literature while utilizing the concept of bounded rationality to create a conceptual framework to…
Using eddy currents for noninvasive in vivo pH monitoring for bone tissue engineering.
Beck-Broichsitter, Benedicta E; Daschner, Frank; Christofzik, David W; Knöchel, Reinhard; Wiltfang, Jörg; Becker, Stephan T
2015-03-01
The metabolic processes that regulate bone healing and bone induction in tissue engineering models are not fully understood. Eddy current excitation is widely used in technical approaches and in the food industry. The aim of this study was to establish eddy current excitation for monitoring metabolic processes during heterotopic osteoinduction in vivo. Hydroxyapatite scaffolds were implanted into the musculus latissimus dorsi of six rats. Bone morphogenetic protein 2 (BMP-2) was applied 1 and 2 weeks after implantation. Weekly eddy current excitation measurements were performed. Additionally, invasive pH measurements were obtained from the scaffolds using fiber optic detection devices. Correlations between the eddy current measurements and the metabolic values were calculated. The eddy current measurements and pH values decreased significantly in the first 2 weeks of the study, followed by a steady increase and stabilization at higher levels towards the end of the study. The measurement curves and statistical evaluations indicated a significant correlation between the resonance frequency values of the eddy current excitation measurements and the observed pH levels (p = 0.0041). This innovative technique was capable of noninvasively monitoring metabolic processes in living tissues according to pH values, showing a direct correlation between eddy current excitation and pH in an in vivo tissue engineering model.
Model Identification of Integrated ARMA Processes
ERIC Educational Resources Information Center
Stadnytska, Tetiana; Braun, Simone; Werner, Joachim
2008-01-01
This article evaluates the Smallest Canonical Correlation Method (SCAN) and the Extended Sample Autocorrelation Function (ESACF), automated methods for the Autoregressive Integrated Moving-Average (ARIMA) model selection commonly available in current versions of SAS for Windows, as identification tools for integrated processes. SCAN and ESACF can…
A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture
NASA Technical Reports Server (NTRS)
Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.
2005-01-01
Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.
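The blackboard-and-plugins communication structure described above can be imitated with channel-style queues. The following sketch is purely illustrative (it is not Cougaar code, and the names `blackboard` and `plugin` are assumptions); it shows one CSP-like pattern in which a blackboard process broadcasts each published object over a dedicated channel per plugin:

```python
import queue
import threading

def blackboard(publish_ch, plugin_chs):
    """Blackboard process: receives published objects and broadcasts each
    one to every plugin over its own channel. None is a shutdown signal."""
    while True:
        obj = publish_ch.get()
        if obj is None:
            for ch in plugin_chs:
                ch.put(None)          # propagate shutdown to all plugins
            return
        for ch in plugin_chs:
            ch.put(obj)               # broadcast to subscribed plugins

def plugin(name, ch, results):
    """Plugin process: consumes objects from its channel until shutdown."""
    while True:
        obj = ch.get()
        if obj is None:
            return
        results.append((name, obj))

publish = queue.Queue()
channels = [queue.Queue(), queue.Queue()]
results = []

threads = [threading.Thread(target=blackboard, args=(publish, channels))]
threads += [threading.Thread(target=plugin, args=(f"plugin{i}", ch, results))
            for i, ch in enumerate(channels)]
for t in threads:
    t.start()

publish.put("task-A")
publish.put(None)                     # shut the society down
for t in threads:
    t.join()
print(sorted(results))  # [('plugin0', 'task-A'), ('plugin1', 'task-A')]
```

A CSP verification would instead express blackboard and plugins as formal processes and check properties such as deadlock freedom; the queue version only mirrors the communication topology.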
Ab initio-aided CALPHAD thermodynamic modeling of the Sn-Pb binary system under current stressing
Lin, Shih-kang; Yeh, Chao-kuei; Xie, Wei; Liu, Yu-chen; Yoshimura, Masahiro
2013-01-01
Soldering is an ancient process, having been developed 5000 years ago. It remains a crucial process with many modern applications. In electronic devices, electric currents pass through solder joints. A new physical phenomenon – the supersaturation of solders under high electric currents – has recently been observed. It involves (1) unexpected supersaturation of the solder matrix phase, and (2) the formation of unusual “ring-shaped” grains. However, the origin of these phenomena is not yet understood. Here we provide a plausible explanation of these phenomena based on the changes in the phase stability of Pb-Sn solders. Ab initio-aided CALPHAD modeling is utilized to translate the electric current-induced effect into the excess Gibbs free energies of the phases. Hence, the phase equilibrium can be shifted by current stressing. The Pb-Sn phase diagrams with and without current stressing clearly demonstrate the change in the phase stabilities of Pb-Sn solders under current stressing. PMID:24060995
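In the CALPHAD formalism, a solution phase's molar Gibbs energy is ideal mixing plus an excess term, commonly a Redlich-Kister polynomial, and a current-stressing contribution can be folded into the excess coefficients as the abstract describes. The sketch below is generic, with invented coefficients rather than assessed Pb-Sn parameters:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def g_mix(x_sn, T, L0):
    """Molar Gibbs energy of mixing for a binary (Pb,Sn) solution phase:
    ideal mixing plus a one-term Redlich-Kister excess, G_ex = x_Pb*x_Sn*L0.
    L0 is where an extra current-induced contribution could be added."""
    x_pb = 1.0 - x_sn
    ideal = R * T * (x_sn * math.log(x_sn) + x_pb * math.log(x_pb))
    return ideal + x_pb * x_sn * L0

# Raising the excess term (e.g. adding a current-stressing contribution)
# destabilizes the solution phase, which is how phase boundaries shift.
T = 400.0
g_no_current = g_mix(0.5, T, L0=5000.0)
g_current = g_mix(0.5, T, L0=7000.0)
print(g_current > g_no_current)  # True
```

A full phase-diagram calculation would minimize such Gibbs energy expressions over all competing phases; this block only illustrates the excess-energy bookkeeping.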
ERIC Educational Resources Information Center
Villacañas de Castro, Luis S.
2016-01-01
This article aims to apply Stenhouse's process model of curriculum to foreign language (FL) education, a model which is characterized by enacting "principles of procedure" which are specific to the discipline which the school subject belongs to. Rather than to replace or dissolve current approaches to FL teaching and curriculum…
Problems with Current Models of Grieving and Consequences for Older Persons.
ERIC Educational Resources Information Center
Horacek, Bruce J.
Classical models of the grieving process include Freud's concept of withdrawal of ties to the love object called decathexis, and Lindemann's emancipation from the bondage to the deceased involving adjusting to the loss in one's environment and the ability to form new relationships. Most of the models and explanations of the grieving process over…
Thomas W. Bonnot; Frank R. Thompson; Joshua J. Millspaugh
2017-01-01
The increasing need to predict how climate change will impact wildlife species has exposed limitations in how well current approaches model important biological processes at scales at which those processes interact with climate. We used a comprehensive approach that combined recent advances in landscape and population modeling into dynamic-landscape metapopulation...
Updraft Fixed Bed Gasification Aspen Plus Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
2007-09-27
The updraft fixed bed gasification model provides predictive modeling capabilities for updraft fixed bed gasifiers, when devolatilization data is available. The fixed bed model is constructed using Aspen Plus, process modeling software, coupled with a FORTRAN user kinetic subroutine. Current updraft gasification models created in Aspen Plus have limited predictive capabilities and must be "tuned" to reflect a generalized gas composition as specified in literature or by the gasifier manufacturer. This limits the applicability of the process model.
NASA Astrophysics Data System (ADS)
Adeogun, Abideen Idowu; Balakrishnan, Ramesh Babu
2017-07-01
Electrocoagulation was used for the removal of the basic dye rhodamine B from aqueous solution, and the process was carried out in a batch electrochemical cell with steel electrodes in a monopolar connection. The effects of important parameters, such as current density, pH, temperature and initial dye concentration, on the process were investigated. Equilibrium was attained after 10 min at 30 °C. Pseudo-first-order, pseudo-second-order, Elovich and Avrami kinetic models were used to test the experimental data in order to elucidate the kinetics of the adsorption process; the pseudo-first-order and Avrami models best fitted the data. Experimental data were analysed using six model equations: the Langmuir, Freundlich, Redlich-Peterson, Temkin, Dubinin-Radushkevich and Sips isotherms; the data fitted well with the Sips isotherm model. The study showed that the process depends on current density, temperature, pH and initial dye concentration. The calculated thermodynamic parameters (ΔG°, ΔH° and ΔS°) indicated that the process is spontaneous and endothermic in nature.
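As an illustration of the kinetic screening step, the following sketch fits the pseudo-first-order model qt = qe(1 - exp(-k1 t)) to synthetic uptake data; the numbers are invented for demonstration and are not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def pfo(t, qe, k1):
    # Pseudo-first-order kinetics: qt = qe * (1 - exp(-k1 * t))
    return qe * (1.0 - np.exp(-k1 * t))

# Synthetic dye-uptake data (mg/g), standing in for batch-cell measurements.
t = np.linspace(0.5, 10.0, 20)            # time, min
rng = np.random.default_rng(0)
q_obs = pfo(t, qe=48.0, k1=0.6) + rng.normal(0.0, 0.3, t.size)

# Nonlinear least-squares fit of the two kinetic parameters.
(qe_fit, k1_fit), _ = curve_fit(pfo, t, q_obs, p0=[40.0, 0.3])
print(qe_fit, k1_fit)  # recovered capacity and rate constant
```

The same `curve_fit` call handles the pseudo-second-order, Elovich and Avrami forms by swapping the model function; a goodness-of-fit measure (e.g., residual sum of squares) then ranks the candidates as the abstract describes.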
NASA Astrophysics Data System (ADS)
Reisgen, Uwe; Schleser, Markus; Mokrov, Oleg; Zabirov, Alexander
2011-06-01
A two-dimensional transient numerical analysis and computational module for the simulation of electrical and thermal characteristics during electrode melting and metal transfer in Gas-Metal-Arc-Welding (GMAW) processes is presented. The solution of the non-linear transient heat transfer equation is carried out using a control-volume finite difference technique. The computational module also includes the controlling and regulation algorithms of industrial welding power sources. The simulation results are the current and voltage waveforms, mean voltage drops at different parts of the circuit, total electric power, cathode, anode and arc powers, and arc length. We describe the application of the model to the normal process (constant voltage) and to pulsed processes with U/I and I/I modulation modes. Comparisons with experimental waveforms of current and voltage show that the model predicts current, voltage and electric power with high accuracy. The model is used in the simulation package SimWeld for the calculation of heat flux into the work-piece and of weld seam formation. From the calculated heat flux and weld pool sizes, an equivalent volumetric heat source according to the Goldak model can be generated. The method was implemented and investigated with the simulation software SimWeld developed by the ISF at RWTH Aachen University.
ARTSN: An Automated Real-Time Spacecraft Navigation System
NASA Technical Reports Server (NTRS)
Burkhart, P. Daniel; Pollmeier, Vincent M.
1996-01-01
As part of the Deep Space Network (DSN) advanced technology program, an effort is underway to design a filter to automate the deep space navigation process. The automated real-time spacecraft navigation (ARTSN) filter task is based on a prototype consisting of a FORTRAN77 package operating on an HP-9000/700 workstation running HP-UX 9.05. This will be converted to C and maintained as the operational version. The processing tasks required are: (1) read a measurement, (2) integrate the spacecraft state to the current measurement time, (3) compute the observable based on the integrated state, and (4) incorporate the measurement information into the state using an extended Kalman filter. This filter processes radiometric data collected by the DSN. The dynamic (force) models currently include point-mass gravitational terms for all planets, the Sun and Moon, solar radiation pressure, finite maneuvers, and attitude maintenance activity modeled quadratically. In addition, observable errors due to the troposphere are included. Further data types and force and observable models will be included to enhance the accuracy of the models and the capability of the package. The heart of the ARTSN is a currently available continuous-discrete extended Kalman filter. Simulated data used to test the implementation at various stages of development and the results from processing actual mission data are presented.
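The four-step loop maps naturally onto a Kalman filter recursion. The toy sketch below runs a linear Kalman filter (the linear special case of ARTSN's continuous-discrete extended Kalman filter) on a one-dimensional constant-velocity target with noisy range-like measurements; all dynamics and noise values are illustrative, not the DSN's force or observable models.

```python
import numpy as np

dt, q, r = 1.0, 1e-4, 0.5 ** 2
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])              # range-like observable
Q = q * np.eye(2)                       # process noise covariance
x = np.array([0.0, 0.9])                # initial state guess
P = np.eye(2)                           # initial state covariance

rng = np.random.default_rng(1)
for k in range(1, 11):                  # truth: unit velocity, position = k
    z = k + rng.normal(0.0, 0.5)        # (1) read a measurement
    x = F @ x                           # (2) integrate state to meas. time
    P = F @ P @ F.T + Q
    y = z - (H @ x)[0]                  # (3) computed observable -> residual
    S = (H @ P @ H.T)[0, 0] + r
    K = (P @ H.T / S).ravel()
    x = x + K * y                       # (4) incorporate measurement (update)
    P = (np.eye(2) - np.outer(K, H.ravel())) @ P

print(x)  # final [position, velocity] estimate, near [10, 1]
```

ARTSN replaces the constant-velocity prediction in step (2) with numerical integration of the force models, and the linear observable in step (3) with the computed radiometric observable, which is what makes the filter "extended".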
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peeler, D.; Edwards, T.
High-level waste (HLW) throughput (i.e., the amount of waste processed per unit of time) is primarily a function of two critical parameters: waste loading (WL) and melt rate. For the Defense Waste Processing Facility (DWPF), increasing HLW throughput would significantly reduce the overall mission life cycle costs for the Department of Energy (DOE). Significant increases in waste throughput have been achieved at DWPF since initial radioactive operations began in 1996. Key technical and operational initiatives that supported increased waste throughput included improvements in facility attainment, the Chemical Processing Cell (CPC) flowsheet, process control models and frit formulations. As a result of these key initiatives, DWPF increased WLs from a nominal 28% for Sludge Batch 2 (SB2) to approximately 34 to 38% for SB3 through SB6 while maintaining or slightly improving canister fill times. Although considerable improvements in waste throughput have been obtained, future contractual waste loading targets are nominally 40%, while canister production rates are also expected to increase (to a rate of 325 to 400 canisters per year). Although the implementation of bubblers has made a positive impact on increasing melt rate for recent sludge batches targeting WLs in the mid-30s, higher WLs will ultimately make the feeds to DWPF more challenging to process. Savannah River Remediation (SRR) recently requested that the Savannah River National Laboratory (SRNL) perform a paper study assessment using future sludge projections to evaluate whether the current Process Composition Control System (PCCS) algorithms would provide projected operating windows that allow future contractual WL targets to be met. More specifically, the objective of this study was to evaluate future sludge batch projections (based on Revision 16 of the HLW Systems Plan) with respect to projected operating windows using current PCCS models and associated constraints.
Based on the assessments, the waste loading interval over which a glass system (i.e., a projected sludge composition with a candidate frit) is predicted to be acceptable can be defined (i.e., the projected operating window), which provides insight into the ability to meet future contractual WL obligations. In this study, future contractual WL obligations are assumed to be 40%, which is the goal after all flowsheet enhancements have been implemented to support DWPF operations. For a system to be considered acceptable, candidate frits must be identified that provide access to at least 40% WL while accounting for potential variation in the sludge resulting from differences in batch-to-batch transfers into the Sludge Receipt and Adjustment Tank (SRAT) and/or analytical uncertainties. In more general terms, this study will assess whether or not the current glass formulation strategy (based on the use of the Nominal and Variation Stage assessments) and current PCCS models will allow access to the compositional regions required to target higher WLs for future operations. Some of the key questions to be considered in this study include: (1) If higher WLs are attainable with current process control models, are the models valid in these compositional regions? If the higher WL glass regions are outside current model development or validation ranges, is there existing data that could be used to demonstrate model applicability (or lack thereof)? If not, experimental data may be required to revise current models or serve as validation data with the existing models. (2) Are there compositional trends in frit space that are required by the PCCS models to obtain access to these higher WLs? If so, are there potential issues with the compositions of the associated frits (e.g., limitations on the B2O3 and/or Li2O concentrations) as they are compared to model development/validation ranges or to the term 'borosilicate' glass?
If limitations on the frit compositional range are realized, what is the impact of these restrictions on other glass properties, such as the ability to suppress nepheline formation or influence melt rate? The model-based assessments being performed make the assumption that the process control models are applicable over the glass compositional regions being evaluated. Although the glass compositional region of interest is ultimately defined by the specific frit, sludge, and WL interval used, there is no prescreening of these compositional regions with respect to the model development or validation ranges, which is consistent with current DWPF operations.
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top-oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided valid and sensible parameters. This paper also shows that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
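The distinction the paper draws can be seen in a small simulation. In the hedged sketch below, a discrete first-order thermal model T[k+1] = a T[k] + b u[k] (a stand-in for a top-oil rise model, with invented parameter values) is identified by ordinary least squares on the equation error; an output error method would instead minimize the mismatch between measured and model-simulated output, avoiding the bias that equation-error least squares incurs when the regressors themselves are noisy.

```python
import numpy as np

a_true, b_true = 0.95, 0.8               # illustrative "plant" parameters
rng = np.random.default_rng(2)
u = rng.uniform(0.5, 1.5, 200)           # load-dependent heat input (arbitrary)
T = np.zeros(201)
for k in range(200):                     # simulate the first-order model
    T[k + 1] = a_true * T[k] + b_true * u[k]
T_meas = T + rng.normal(0.0, 0.01, 201)  # sensor noise on the temperature

# Equation-error least squares: regress T[k+1] on (T[k], u[k]).
A = np.column_stack([T_meas[:-1], u])
a_hat, b_hat = np.linalg.lstsq(A, T_meas[1:], rcond=None)[0]
print(a_hat, b_hat)  # close to (0.95, 0.8) here because the noise is small
```

With larger measurement noise, `a_hat` is systematically attenuated because the noisy temperature appears on both sides of the regression; that inconsistency is the kind of error that motivates switching to an output error estimator.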
NASA Astrophysics Data System (ADS)
Sotner, R.; Kartci, A.; Jerabek, J.; Herencsar, N.; Dostal, T.; Vrba, K.
2012-12-01
Several behavioral models of current active elements for experimental purposes are introduced in this paper. These models are based on commercially available devices. They are suitable for experimental tests of current- and mixed-mode filters, oscillators, and other circuits (employing current-mode active elements) frequently used in analog signal processing, without the necessity of on-chip fabrication of a proper active element. Several methods of electronic control of the intrinsic resistance in the proposed behavioral models are discussed. All predictions and theoretical assumptions are supported by simulations and experiments. This contribution helps to find a cheaper and more effective route to preliminary laboratory tests without expensive on-chip fabrication of special active elements.
TOWARDS A MULTI-SCALE AGENT-BASED PROGRAMMING LANGUAGE METHODOLOGY
Somogyi, Endre; Hagar, Amit; Glazier, James A.
2017-01-01
Living tissues are dynamic, heterogeneous compositions of objects, including molecules, cells and extra-cellular materials, which interact via chemical, mechanical and electrical processes and reorganize via transformation, birth, death and migration processes. Current programming languages have difficulty describing the dynamics of tissues because: (1) dynamic sets of objects participate simultaneously in multiple processes, (2) processes may be either continuous or discrete, and their activity may be conditional, (3) objects and processes form complex, heterogeneous relationships and structures, (4) objects and processes may be hierarchically composed, and (5) processes may create, destroy and transform objects and processes. Some modeling languages support these concepts, but most cannot translate models into executable simulations. We present a new hybrid executable modeling language paradigm, the Continuous Concurrent Object Process Methodology (CCOPM), which naturally expresses tissue models, enabling users to visually create agent-based models of tissues, and also allows computer simulation of these models. PMID:29282379
Sakoda, Lori C; Henderson, Louise M; Caverly, Tanner J; Wernli, Karen J; Katki, Hormuzd A
2017-12-01
Risk prediction models may be useful for facilitating effective and high-quality decision-making at critical steps in the lung cancer screening process. This review provides a current overview of published lung cancer risk prediction models and their applications to lung cancer screening and highlights both challenges and strategies for improving their predictive performance and use in clinical practice. Since the 2011 publication of the National Lung Screening Trial results, numerous prediction models have been proposed to estimate the probability of developing or dying from lung cancer or the probability that a pulmonary nodule is malignant. Respective models appear to exhibit high discriminatory accuracy in identifying individuals at highest risk of lung cancer or differentiating malignant from benign pulmonary nodules. However, validation and critical comparison of the performance of these models in independent populations are limited. Little is also known about the extent to which risk prediction models are being applied in clinical practice and influencing decision-making processes and outcomes related to lung cancer screening. Current evidence is insufficient to determine which lung cancer risk prediction models are most clinically useful and how to best implement their use to optimize screening effectiveness and quality. To address these knowledge gaps, future research should be directed toward validating and enhancing existing risk prediction models for lung cancer and evaluating the application of model-based risk calculators and its corresponding impact on screening processes and outcomes.
Models of recognition: a review of arguments in favor of a dual-process account.
Diana, Rachel A; Reder, Lynne M; Arndt, Jason; Park, Heekyeong
2006-02-01
The majority of computationally specified models of recognition memory have been based on a single-process interpretation, claiming that familiarity is the only influence on recognition. There is increasing evidence that recognition is, in fact, based on two processes: recollection and familiarity. This article reviews the current state of the evidence for dual-process models, including the usefulness of the remember/know paradigm, and interprets the relevant results in terms of the source of activation confusion (SAC) model of memory. We argue that the evidence from each of the areas we discuss, when combined, presents a strong case that inclusion of a recollection process is necessary. Given this conclusion, we also argue that the dual-process claim that the recollection process is always available is, in fact, more parsimonious than the single-process claim that the recollection process is used only in certain paradigms. The value of a well-specified process model such as the SAC model is discussed with regard to other types of dual-process models.
Oguz, Temel; Macias, Diego; Tintore, Joaquin
2015-01-01
Buoyancy-induced unstable boundary currents and the accompanying retrograde density fronts are often the sites of pronounced mesoscale activity, ageostrophic frontal processes, and associated high biological production in marginal seas. Biophysical model simulations of the Catalano-Balearic Sea (Western Mediterranean) illustrated that the unstable and nonlinear southward frontal boundary current along the Spanish coast resulted in a strain-driven frontogenesis mechanism. High upwelling velocities of up to 80 m d⁻¹ injected nutrients into the photic layer and promoted enhanced production on the less dense, onshore side of the front characterized by negative relative vorticity. Additional down-front wind stress and heat flux (cooling) intensified boundary current instabilities and thus ageostrophic cross-frontal circulation and augmented production. Specifically, entrainment of nutrients by relatively strong buoyancy-induced vertical mixing gave rise to a more widespread phytoplankton biomass distribution within the onshore side of the front. Mesoscale cyclonic eddies contributed to production through an eddy pumping mechanism, but it was less effective and more limited regionally than the frontal processes. The model was configured for the Catalano-Balearic Sea, but the mechanisms and model findings apply to other marginal seas with similar unstable frontal boundary current systems. PMID:26065688
Structural Equation Modeling of Multivariate Time Series
ERIC Educational Resources Information Center
du Toit, Stephen H. C.; Browne, Michael W.
2007-01-01
The covariance structure of a vector autoregressive process with moving average residuals (VARMA) is derived. It differs from other available expressions for the covariance function of a stationary VARMA process and is compatible with current structural equation methodology. Structural equation modeling programs, such as LISREL, may therefore be…
Currently used dispersion models, such as the AMS/EPA Regulatory Model (AERMOD), process routinely available meteorological observations to construct model inputs. Thus, model estimates of concentrations depend on the availability and quality of Meteorological observations, as we...
Origin of resistivity in reconnection
NASA Astrophysics Data System (ADS)
Treumann, Rudolf A.
2001-06-01
Resistivity is believed to play an important role in reconnection, leading to the distinction between resistive and collisionless reconnection. The former is treated in the Sweet-Parker model of long current sheets and the Petschek model of a small resistive region. Both models, in spite of their different dynamics, attribute the violation of the frozen-in condition in their diffusion regions to the action of resistivity. In collisionless reconnection there is little consensus about the processes breaking the frozen-in condition. The question is whether anomalous processes generate sufficient resistivity or whether other processes free the particles from slavery by the magnetic field. In the present paper we review processes that may cause anomalous resistivity in collisionless current sheets. Our general conclusion is that in space plasma boundaries accessible to in situ spacecraft, wave levels have always been found to be high enough to explain the existence of a large enough local diffusivity for igniting local reconnection. However, other processes might take place as well. Non-resistive reconnection can be caused by inertia or diamagnetism.
Identity in agent-based models : modeling dynamic multiscale social processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozik, J.; Sallach, D. L.; Macal, C. M.
Identity-related issues play central roles in many current events, including those involving factional politics, sectarianism, and tribal conflicts. Two popular models from the computational-social-science (CSS) literature - the Threat Anticipation Program and SharedID models - incorporate notions of identity (individual and collective) and processes of identity formation. A multiscale conceptual framework that extends some ideas presented in these models and draws other capabilities from the broader CSS literature is useful in modeling the formation of political identities. The dynamic, multiscale processes that constitute and transform social identities can be mapped to expressive structures of the framework.
ERIC Educational Resources Information Center
Manne, Sharon; Winkel, Gary; Zaider, Talia; Rubin, Stephen; Hernandez, Enrique; Bergman, Cynthia
2010-01-01
Objective: Little attention has been paid to the role of nonspecific therapy processes in the efficacy of psychological interventions for individuals diagnosed with cancer. The goal of the current study was to examine the three constructs from the generic model of psychotherapy (GMP): therapeutic alliance, therapeutic realizations, and therapeutic…
History of research on modelling gypsy moth population ecology
J. J. Colbert
1991-01-01
History of research to develop models of gypsy moth population dynamics and some related studies are described. Empirical regression-based models are reviewed, and then the more comprehensive process models are discussed. Current model-related research efforts are introduced.
Leaders in Future and Current Technology Teaming Up to Improve Ethanol
The collaboration combines industry and NREL expertise to: develop improvements in process throughput and water management for dry mills, and complete an overall process engineering model of the dry mill technology that identifies new ways to improve the design and operation of "dry mill" plants that currently produce ethanol from corn starch.
Process based modeling of total longshore sediment transport
Haas, K.A.; Hanes, D.M.
2004-01-01
Waves, currents, and longshore sand transport are calculated locally as a function of position in the nearshore region using process-based numerical models. The resultant longshore sand transport is then integrated across the nearshore to provide predictions of the total longshore transport of sand due to waves and longshore currents. Model results are in close agreement with the Il-Pl correlation described by Komar and Inman (1970) and the CERC (1984) formula. Model results also indicate that the proportionality constant in the Il-Pl formula depends weakly upon the sediment size, the shape of the beach profile, and the particular local sediment flux formula that is employed. Model results indicate that the various effects and influences of sediment size tend to cancel out, resulting in little overall dependence on sediment size.
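For comparison, the CERC (1984) bulk formula that the integrated model is benchmarked against can be written down directly. The sketch below uses common textbook constants (K = 0.39 for significant wave height, quartz-density sand, porosity 0.4); these are illustrative defaults, not the paper's calibration.

```python
import numpy as np

def cerc_longshore_transport(H_b, theta_b, K=0.39, rho=1025.0,
                             rho_s=2650.0, p=0.4, g=9.81, gamma=0.78):
    # CERC-type bulk formula: immersed-weight transport Il = K * Pl,
    # with Pl the longshore component of wave energy flux at breaking.
    c_gb = np.sqrt(g * H_b / gamma)            # shallow-water group speed
    E_b = rho * g * H_b ** 2 / 16.0            # energy (significant height)
    P_l = E_b * c_gb * np.sin(theta_b) * np.cos(theta_b)
    I_l = K * P_l                              # immersed-weight rate, N/s
    # Convert to a volumetric transport rate (m^3/s):
    return I_l / ((rho_s - rho) * g * (1.0 - p))

# 1.5 m breakers arriving 10 degrees oblique to the shoreline:
Q = cerc_longshore_transport(H_b=1.5, theta_b=np.radians(10.0))
print(Q > 0.0)  # waves approaching at an angle drive positive transport
```

The abstract's finding is that the proportionality constant K, fixed here, in fact varies weakly with sediment size, beach profile shape, and the local flux formula.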
Automated MRI Segmentation for Individualized Modeling of Current Flow in the Human Head
Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.
2013-01-01
Objective High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography (HD-EEG) require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images (MRI) requires labor-intensive manual segmentation, even when leveraging available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach A fully automated segmentation technique based on Statistical Parametric Mapping 8 (SPM8), including an improved tissue probability map (TPM) and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on 4 healthy subjects and 7 stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view (FOV) extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. 
Significance Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials. PMID:24099977
Automated MRI segmentation for individualized modeling of current flow in the human head
NASA Astrophysics Data System (ADS)
Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.
2013-12-01
Objective. High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance.
Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.
Modelling the transient behaviour of pulsed current tungsten-inert-gas weldpools
NASA Astrophysics Data System (ADS)
Wu, C. S.; Zheng, W.; Wu, L.
1999-01-01
A three-dimensional model is established to simulate the pulsed current tungsten-inert-gas (TIG) welding process. The goal is to analyse the cyclic variation of fluid flow and heat transfer in weldpools under periodic arc heat input. To this end, an algorithm is developed that is capable of handling the transience, nonlinearity, multiphase flow and strong coupling encountered in this work. The numerical simulations demonstrate the transient behaviour of weldpools under pulsed current. Experimental data are compared with numerical results to show the effectiveness of the developed model.
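While the paper's model is three-dimensional and includes fluid flow, the essential transient response to pulsed heat input can be illustrated with a one-dimensional explicit finite-difference conduction sketch; all values (grid, diffusivity, pulse levels) are illustrative, not weld-metal properties, and the surface flux q is expressed in temperature-gradient units (flux divided by conductivity).

```python
import numpy as np

nx, dx, dt = 50, 1e-3, 1e-3           # grid points, spacing (m), step (s)
alpha = 5e-6                           # thermal diffusivity, m^2/s
r = alpha * dt / dx ** 2               # explicit scheme stability number
assert r <= 0.5                        # stability condition for the scheme

T = np.zeros(nx)                       # temperature rise above ambient
peak = []
for step in range(4000):
    # Pulsed heat input: high "peak current" flux alternating with a
    # low "base current" flux every 500 steps (0.5 s per half-cycle).
    q = 200.0 if (step // 500) % 2 == 0 else 40.0
    T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])  # interior conduction
    T[0] = T[1] + q * dx               # heated surface (flux boundary)
    T[-1] = 0.0                        # far boundary held at ambient
    peak.append(T[0])

# The surface temperature ramps up and oscillates with the pulse cycle.
print(max(peak) - min(peak) > 0)  # True: the response is transient
```

The full weldpool model couples such a heat equation to momentum and free-surface equations, which is why the paper needs a control-volume algorithm handling the strong coupling.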
Process modeling for carbon-phenolic nozzle materials
NASA Technical Reports Server (NTRS)
Letson, Mischell A.; Bunker, Robert C.; Remus, Walter M., III; Clinton, R. G.
1989-01-01
A thermochemical model based on the SINDA heat transfer program is developed for carbon-phenolic nozzle material processes. The model can be used to optimize cure cycles and to predict material properties based on the types of materials and the process by which these materials are used to make nozzle components. Chemical kinetic constants for Fiberite MX4926 were determined so that optimization of cure cycles for the current Space Shuttle Solid Rocket Motor nozzle rings can be determined.
ERIC Educational Resources Information Center
Miller, Lee Dee; Shell, Duane; Khandaker, Nobel; Soh, Leen-Kiat
2011-01-01
Computer games have long been used for teaching. Current reviews lack categorization and analysis using learning models which would help instructors assess the usefulness of computer games. We divide the use of games into two classes: game playing and game development. We discuss the Input-Process-Outcome (IPO) model for the learning process when…
Modelling the Air–Surface Exchange of Ammonia from the Field to Global Scale
The Working Group addressed the current understanding and uncertainties in the processes controlling ammonia (NH3) bi-directional exchange, and in the application of numerical models to describe these processes. As a starting point for the discussion, the Working Group drew on th...
New experimental developments for s- and p-process research
NASA Astrophysics Data System (ADS)
Reifarth, R.; Ershova, O.; Glorius, J.; Göbel, K.; Langer, C.; Meusel, O.; Plag, R.; Schmidt, S.; Sonnabend, K.; Heil, M.
2012-12-01
Almost all of the heavy elements are produced via neutron-induced processes in a multitude of stellar production sites. The remaining minor part is produced via photon- and proton-induced reactions. The predictive power of the underlying stellar models is currently limited because they contain poorly constrained physics components such as convection, rotation or magnetic fields. An important tool for determining such components is the comparison of observed with modeled abundance distributions based on improved nuclear physics input. The FRANZ facility at the Goethe University Frankfurt, which is currently under construction, will provide unprecedented neutron fluxes and proton currents for nuclear astrophysics. It will be possible to investigate important branch-point nuclei of the s-process nucleosynthesis path and proton-induced reactions important for p-process modeling. At GSI, close to Darmstadt, radioactive isotopes can be investigated in inverse kinematics. This allows experiments such as proton-induced cross section measurements using a heavy-ion storage ring or measurements of gamma-induced reactions using the Coulomb dissociation method. The future FAIR facility will allow similar experiments on very exotic nuclei, since radioactive ion beams orders of magnitude more intense will be available.
NASA Astrophysics Data System (ADS)
Jia, Shenli; Mo, Yongpeng; Shi, Zongqian; Li, Junliang; Wang, Lijun
2017-10-01
The post-arc dielectric recovery process has a decisive effect on the current interruption performance of a vacuum circuit breaker. The dissipation of residual plasma at the moment of current zero under the transient recovery voltage, which is the first stage of the post-arc dielectric recovery process and forms the post-arc current, has attracted much attention. In this paper, a one-dimensional particle-in-cell model is developed to simulate the measured post-arc current in a vacuum circuit breaker. First, the parameters of the residual plasma are estimated roughly from the waveform of the measured post-arc current. After that, the different components of the post-arc current, which are formed by the movement of charged particles in the residual plasma, are discussed. Then, the residual plasma density is adjusted according to the proportion of electrons and ions absorbed by the post-arc anode, derived from the particle-in-cell simulation. After this adjustment, the post-arc current waveform obtained from the simulation is closer to that obtained from measurements.
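The staged adjustment described above starts from a rough density estimate tied to the measured current. As a stand-alone, back-of-envelope illustration (not the paper's particle-in-cell code), the order of magnitude of a post-arc ion current can be sketched from a Bohm-flux estimate; the density, electron temperature, contact radius, and copper-ion mass below are assumed values:

```python
import math

E_CHARGE = 1.602e-19    # elementary charge, C
M_CU = 63.5 * 1.66e-27  # copper ion mass, kg (vacuum-arc plasma is mostly contact metal)

def bohm_speed(te_ev, m_ion):
    """Bohm (ion sound) speed for an electron temperature given in eV."""
    return math.sqrt(te_ev * E_CHARGE / m_ion)

def ion_saturation_current(n_i, te_ev, area, m_ion=M_CU):
    """Ion current drawn by the post-arc anode from the residual plasma:
    I = 0.61 * e * n_i * u_B * A (0.61 = Bohm sheath-edge density drop)."""
    return 0.61 * E_CHARGE * n_i * bohm_speed(te_ev, m_ion) * area

# Assumed residual density 1e18 m^-3, Te = 2 eV, 3 cm contact radius
I = ion_saturation_current(1e18, 2.0, math.pi * 0.03**2)
```

With these assumed numbers the estimate lands at a fraction of an ampere, the right scale for measured post-arc currents; the PIC model in the paper resolves the time-dependent sheath dynamics that this static estimate ignores.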
Top quark rare decays via loop-induced FCNC interactions in extended mirror fermion model
NASA Astrophysics Data System (ADS)
Hung, P. Q.; Lin, Yu-Xiang; Nugroho, Chrisna Setyo; Yuan, Tzu-Chiang
2018-02-01
Flavor changing neutral current (FCNC) interactions in which a top quark t decays into Xq, where X represents a neutral gauge or Higgs boson and q an up or charm quark, are highly suppressed in the Standard Model (SM) due to the Glashow-Iliopoulos-Maiani mechanism. While current limits on the branching ratios of these processes have been established at the order of 10^-4 by the Large Hadron Collider experiments, SM predictions are at least nine orders of magnitude lower. In this work, we study some of these FCNC processes in the context of an extended mirror fermion model, originally proposed to implement the electroweak-scale seesaw mechanism for non-sterile right-handed neutrinos. We show that one can probe the process t → Zc for a wide range of parameter space, with branching ratios varying from 10^-6 to 10^-8, comparable with various new physics models including the general two Higgs doublet model with or without flavor violations at tree level, the minimal supersymmetric standard model with or without R-parity, and extra dimension models.
Chasing the long tail of environmental data: PEcAn is nuts about Brown Dog
NASA Astrophysics Data System (ADS)
Dietze, M.; Cowdery, E.; Desai, A. R.; Gardella, A.; Kelly, R.; Kooper, R.; LeBauer, D.; Mantooth, J.; McHenry, K.; Serbin, S.; Shiklomanov, A. N.; Simkins, J.; Viskari, T.; Raiho, A.
2015-12-01
The Predictive Ecosystem Analyzer (PEcAn) is an ecological modeling informatics system that manages the flows of information in and out of terrestrial biosphere models, along with provenance tracking, visualization, analysis, and model-data fusion. We are in the process of scaling the PEcAn system from one that currently supports a handful of models and system nodes to one that aims to provide bottom-up connectivity across much of the model-data integration done by the terrestrial biogeochemistry community. This talk reports on the current state of PEcAn, its data processing workflows, and the near- and long-term challenges faced. Particular emphasis will be given to the tools being developed by the Brown Dog project to make unstructured, un-curated data more accessible: the Data Access Proxy (DAP) and the Data Tilling Service (DTS). The use of the DAP to process meteorological data and the DTS to read vegetation data will be demonstrated, and other Brown Dog environmental case studies will be briefly touched on. Beyond data processing, facilitating data discovery and import into PEcAn and distributing analyses across the PEcAn network (i.e., bringing models to data) are key challenges moving forward.
NASA Astrophysics Data System (ADS)
Yuanyuan, Zhang
The stochastic branching model of multi-particle production in high-energy collisions has a theoretical basis in perturbative QCD and also successfully describes experimental data over a wide energy range. However, over the years, little attention has been paid to the branching model for supersymmetric (SUSY) particles. In this thesis, a stochastic branching model is built to describe pure supersymmetric particle jet evolution. This model is a modified two-phase stochastic branching process, or more precisely a two-phase simple birth process plus a Poisson process. The general case in which the jets contain both ordinary particle jets and supersymmetric particle jets is also investigated. We obtain the multiplicity distribution of the general case, which contains a hypergeometric function in its expression. We apply this new multiplicity distribution to current experimental data on pp collisions at center-of-mass energies √s = 0.9, 2.36, and 7 TeV. The fits show that supersymmetric particles have not participated in branching at current collision energies.
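A simple birth (Yule) process is one ingredient of the two-phase branching model described above. The sketch below simulates such a process with an arbitrary branching rate and time, and checks its sample mean against the analytic value exp(λt); it illustrates the generic process only, not the thesis's two-phase SUSY model:

```python
import random
import math

def yule_multiplicity(lam, t, rng):
    """Simulate a simple (Yule) birth process from one initial particle:
    each particle independently splits at rate lam; return the count at time t."""
    n, clock = 1, 0.0
    while True:
        # with n particles the total branching rate is n*lam,
        # so the next split arrives after an Exp(n*lam) waiting time
        clock += rng.expovariate(n * lam)
        if clock > t:
            return n
        n += 1

rng = random.Random(42)          # fixed seed for reproducibility
lam, t = 1.0, 1.0                # illustrative rate and evolution time
samples = [yule_multiplicity(lam, t, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
# Analytic mean of the Yule process multiplicity is exp(lam * t)
```

The multiplicity of this process at fixed t is geometric with parameter exp(-λt); the thesis's distribution generalizes this by adding a Poisson phase and a SUSY branch.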
SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
2008-01-01
Background: It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results: This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady-state analysis, robustness analysis, and local and global sensitivity analysis of SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficients, Sobol's method, and weighted averages of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. Conclusion: SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080
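One of the methods named above, the weighted average of local sensitivity analyses, can be sketched in a few lines: compute normalized local sensitivities by central differences and average their magnitudes over randomly sampled parameter sets. The toy model, nominal values, and uniform sampling below are assumptions for illustration and are not taken from SBML-SAT:

```python
import math
import random

def model(k1, k2, t=1.0, x0=1.0):
    """Toy two-parameter production/decay model with an analytic solution:
    dx/dt = k2 - k1*x  ->  x(t) = k2/k1 + (x0 - k2/k1)*exp(-k1*t)."""
    return k2 / k1 + (x0 - k2 / k1) * math.exp(-k1 * t)

def local_sensitivity(f, params, i, h=1e-6):
    """Central-difference logarithmic sensitivity d(ln x)/d(ln p_i)."""
    p_hi = list(params); p_hi[i] *= (1 + h)
    p_lo = list(params); p_lo[i] *= (1 - h)
    return (f(*p_hi) - f(*p_lo)) / (2 * h * f(*params))

def averaged_sensitivity(f, nominal, n_samples=200, spread=0.5, seed=1):
    """Average |local sensitivity| over parameter sets drawn uniformly
    within +/-spread of nominal (a crude stand-in for a global method)."""
    rng = random.Random(seed)
    totals = [0.0] * len(nominal)
    for _ in range(n_samples):
        p = [v * (1 + spread * (2 * rng.random() - 1)) for v in nominal]
        for i in range(len(p)):
            totals[i] += abs(local_sensitivity(f, p, i))
    return [s / n_samples for s in totals]

S = averaged_sensitivity(model, [2.0, 0.5])  # sensitivity index per parameter
```

Methods like PRCC and Sobol indices go further by accounting for parameter rankings and variance decomposition, which this averaging cannot capture.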
A Combined Experimental and Analytical Modeling Approach to Understanding Friction Stir Welding
NASA Technical Reports Server (NTRS)
Nunes, Arthur C., Jr.; Stewart, Michael B.; Adams, Glynn P.; Romine, Peter
1998-01-01
In the Friction Stir Welding (FSW) process, a rotating pin tool joins the sides of a seam by stirring them together. This solid-state welding process avoids problems with melting and hot-shortness presented by some difficult-to-weld high-performance light alloys. The details of the plastic flow during the process are not well understood and are currently a subject of research. Two candidate models of the FSW process, the Mixed Zone (MZ) and the Single Slip Surface (S3) models, are presented and their predictions compared to experimental data.
NASA Astrophysics Data System (ADS)
Avila, Ricardo E.
The process of Friction Stir Welding (FSW) 6061 aluminum alloy is investigated, with focus on the forces and power being applied in the process and the material response. The main objective is to relate measurements of the forces and power applied in the process with mechanical properties of the material during the dynamic process, based on mathematical modeling and aided by computer simulations, using the LS-DYNA software for finite element modeling. Results of measurements of applied forces and power are presented. The result obtained for applied power is used in the construction of a mechanical variational model of FSW, in which minimization of a functional for the applied torque is sought, leading to an expression for shear stress in the material. The computer simulations are performed by application of the Smoothed Particle Hydrodynamics (SPH) method, in which no structured finite element mesh is used to construct a spatial discretization of the model. The current implementation of SPH in LS-DYNA allows a structural solution using a plastic kinematic material model. This work produces information useful to improve understanding of the material flow in the process, and thus adds to current knowledge about the behavior of materials under processes of severe plastic deformation, particularly those processes in which deformation occurs mainly by application of shear stress, aided by thermoplastic strain localization and dynamic recrystallization.
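The variational step described above leads from measured torque to a shear stress in the material. A minimal version of that inversion, assuming uniform interfacial shear under a flat circular shoulder (a textbook simplification, not the thesis's functional minimization), follows from T = ∫₀ᴿ τ·r·(2πr) dr = (2/3)πτR³:

```python
import math

def flow_stress_from_torque(torque, shoulder_radius):
    """Invert T = (2/3)*pi*tau*R^3 (uniform interfacial shear stress under
    a flat circular shoulder) for the material shear flow stress tau (Pa)."""
    return 3.0 * torque / (2.0 * math.pi * shoulder_radius**3)

# Assumed example: 40 N*m of spindle torque on a 10 mm radius shoulder
tau = flow_stress_from_torque(40.0, 0.010)
```

With these assumed inputs the result is on the order of 20 MPa, a plausible hot flow stress for 6061 near welding temperature; the thesis's variational model refines this by minimizing a torque functional rather than assuming uniform shear.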
A Unified Model of Cloud-to-Ground Lightning Stroke
NASA Astrophysics Data System (ADS)
Nag, A.; Rakov, V. A.
2014-12-01
The first stroke in a cloud-to-ground lightning discharge is thought to follow (or be initiated by) the preliminary breakdown process which often produces a train of relatively large microsecond-scale electric field pulses. This process is poorly understood and rarely modeled. Each lightning stroke is composed of a downward leader process and an upward return-stroke process, which are usually modeled separately. We present a unified engineering model for computing the electric field produced by a sequence of preliminary breakdown, stepped leader, and return stroke processes, serving to transport negative charge to ground. We assume that a negatively-charged channel extends downward in a stepped fashion through the relatively-high-field region between the main negative and lower positive charge centers and then through the relatively-low-field region below the lower positive charge center. A relatively-high-field region is also assumed to exist near ground. The preliminary breakdown pulse train is assumed to be generated when the negatively-charged channel interacts with the lower positive charge region. At each step, an equivalent current source is activated at the lower extremity of the channel, resulting in a step current wave that propagates upward along the channel. The leader deposits net negative charge onto the channel. Once the stepped leader attaches to ground (upward connecting leader is presently neglected), an upward-propagating return stroke is initiated, which neutralizes the charge deposited by the leader along the channel. We examine the effect of various model parameters, such as step length and current propagation speed, on model-predicted electric fields. We also compare the computed fields with pertinent measurements available in the literature.
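For the return-stroke stage, engineering models of this kind are often compared through their far-field radiation term. The sketch below uses the classic transmission-line model expression for the vertical radiation field at ground, |E| = v·i(t − D/c)/(2πε₀c²D), rather than the unified model presented here, and the triangular current waveform is invented for illustration:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m
C = 3.0e8         # speed of light, m/s

def tl_radiation_field(i_base, v_rs, distance):
    """Far-field vertical E (V/m) of the transmission-line return-stroke
    model.  i_base holds channel-base current samples (A) already shifted
    to retarded time t - D/c; v_rs is the return-stroke speed (m/s)."""
    k = v_rs / (2 * math.pi * EPS0 * C**2 * distance)
    return [k * i for i in i_base]

# Assumed 12 kA peak triangular current, viewed from 100 km
current = [0.0, 6e3, 12e3, 9e3, 6e3, 3e3, 0.0]
field = tl_radiation_field(current, v_rs=1.5e8, distance=100e3)
```

The few-V/m peak this yields at 100 km is the right scale for measured first-stroke fields; the unified model in the abstract additionally reproduces the preliminary-breakdown and leader portions of the waveform, which the pure TL model cannot.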
A template-based approach for responsibility management in executable business processes
NASA Astrophysics Data System (ADS)
Cabanillas, Cristina; Resinas, Manuel; Ruiz-Cortés, Antonio
2018-05-01
Process-oriented organisations need to manage the different types of responsibilities their employees may have w.r.t. the activities involved in their business processes. Although several approaches provide support for responsibility modelling, in current Business Process Management Systems (BPMS) the only responsibility considered at runtime is the one related to performing the work required for activity completion. Others, like accountability or consultation, must be implemented by manually adding activities to the executable process model, which is time-consuming and error-prone. In this paper, we address this limitation by enabling current BPMS to execute processes in which people with different responsibilities interact to complete the activities. We introduce a metamodel based on Responsibility Assignment Matrices (RAM) to model the responsibility assignment for each activity, and a flexible template-based mechanism that automatically transforms such information into BPMN elements, which can be interpreted and executed by a BPMS. Thus, our approach does not enforce any specific behaviour for the different responsibilities; instead, new templates can be modelled to specify the interaction that best suits the activity requirements. Furthermore, libraries of templates can be created and reused in different processes. We provide a reference implementation and build a library of templates for a well-known set of responsibilities.
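The template mechanism described above can be sketched as a registry that expands each RAM responsibility code into extra BPMN-like elements around the core task. All names below (RACI codes, template functions, task ids, element strings) are invented for illustration; the paper's actual metamodel and BPMN serialization are richer:

```python
# Hypothetical sketch: expand one RAM row into BPMN-like task elements.
def consulted_template(activity, person):
    """Template for 'C' (consulted): add a consultation task."""
    return [f'<userTask id="consult_{activity}" name="Consult {person}"/>']

def accountable_template(activity, person):
    """Template for 'A' (accountable): add a notification task."""
    return [f'<sendTask id="notify_{activity}" name="Notify {person}"/>']

# Pluggable registry: new responsibility behaviours = new templates
TEMPLATES = {"C": consulted_template, "A": accountable_template}

def expand_activity(activity, ram_row):
    """ram_row maps responsibility codes (RACI-style) to people; 'R'
    becomes the core user task, the rest expand via their templates."""
    elements = [f'<userTask id="{activity}" name="{activity} ({ram_row["R"]})"/>']
    for code, person in ram_row.items():
        if code != "R" and code in TEMPLATES:
            elements += TEMPLATES[code](activity, person)
    return elements

xml = expand_activity("ApproveOrder", {"R": "clerk", "A": "manager", "C": "legal"})
```

The design point the paper makes is visible even in this toy: responsibility behaviour lives in replaceable templates, not hard-coded in the engine, so a new interaction pattern is just a new entry in the registry.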
Cutting, Elizabeth M.; Overby, Casey L.; Banchero, Meghan; Pollin, Toni; Kelemen, Mark; Shuldiner, Alan R.; Beitelshees, Amber L.
2015-01-01
Delivering genetic test results to clinicians is a complex process. It involves many actors and multiple steps, requiring all of these to work together in order to create an optimal course of treatment for the patient. We used information gained from focus groups in order to illustrate the current process of delivering genetic test results to clinicians. We propose a business process model and notation (BPMN) representation of this process for a Translational Pharmacogenomics Project being implemented at the University of Maryland Medical Center, so that personalized medicine program implementers can identify areas to improve genetic testing processes. We found that the current process could be improved to reduce input errors, better inform and notify clinicians about the implications of certain genetic tests, and make results more easily understood. We demonstrate our use of BPMN to improve this important clinical process for CYP2C19 genetic testing in patients undergoing invasive treatment of coronary heart disease. PMID:26958179
ACTDs: Management Plans as Predictors of Transition
2007-12-01
phase. Figure 2 shows the current ACTD funding model in place today and highlights the challenges involved in the process. [Figure 2, "Current ACTD Funding Model": roughly 70% of funding comes from all other sources and about 30% from OSD AS&C cash resources, spread across Army, Navy, and USAF program elements (PEs).]
Detangling Spaghetti: Tracking Deep Ocean Currents in the Gulf of Mexico
ERIC Educational Resources Information Center
Curran, Mary Carla; Bower, Amy S.; Furey, Heather H.
2017-01-01
Creation of physical models can help students learn science by enabling them to be more involved in the scientific process of discovery and to use multiple senses during investigations. This activity achieves these goals by having students model ocean currents in the Gulf of Mexico. In general, oceans play a key role in influencing weather…
Multi-Criteria Approach in Multifunctional Building Design Process
NASA Astrophysics Data System (ADS)
Gerigk, Mateusz
2017-10-01
The paper presents a new approach to the multifunctional building design process and defines problems related to the design of complex multifunctional buildings. Contemporary urban areas are characterized by very intensive use of space: buildings are being built bigger and contain more diverse functions to meet the needs of a large number of users in one structure. These trends show the need to treat designed objects as organized structures that must meet current design criteria. The design process, viewed as a complex system, is a theoretical model that serves as the basis for optimizing solutions over the entire life cycle of the building. From the concept phase through the exploitation phase to the disposal phase, multipurpose spaces should guarantee aesthetics, functionality, system efficiency, system safety, and environmental protection in the best possible way. The result of the analysis of the design process is presented as a theoretical model of the multifunctional structure. Expressing the multi-criteria model in the form of a Cartesian product makes it possible to create a holistic representation of the designed building as a graph model. The proposed network is a theoretical base that can be used in the design process of complex engineering systems. The systematic multi-criteria approach makes it possible to maintain control over the entire design process and to provide the best possible performance. With respect to current design requirements, there are no established design rules for multifunctional buildings in relation to their operating phase. Enriching the basic criteria with a functional flexibility criterion makes it possible to extend the exploitation phase, which brings advantages on many levels.
Numerical Analysis of Surge Phenomena, Currents, and Pollution Transport in the Sea of Azov
NASA Astrophysics Data System (ADS)
Ivanov, V. A.; Shul'ga, T. Ya.
2018-04-01
Dynamic processes and features of transformation of pollution in the Sea of Azov, caused by the action of a real wind and atmospheric pressure in the presence of stationary currents, are studied using a three-dimensional nonlinear hydrodynamic model. On the basis of numerical calculations, conclusions are reached about the influence of the velocities of stationary background currents on maximal deviations and the velocities of nonstationary currents generated by wind fields in the SKIRON model. It is shown that the combined effect of the constant wind and wind in the SKIRON atmospheric model leads to a significant expansion of the polluted area and to a longer dispersion time compared to the effects of solely stationary currents.
NASA Astrophysics Data System (ADS)
Escobar-Palafox, Gustavo; Gault, Rosemary; Ridgway, Keith
2011-12-01
Shaped Metal Deposition (SMD) is an additive manufacturing process which creates parts layer by layer by weld deposition. In this work, empirical models were developed that predict part geometry (wall thickness and outer diameter) and some metallurgical aspects (i.e., surface texture and the portion of finer Widmanstätten microstructure) for the SMD process. The models are based on an orthogonal fractional factorial design of experiments with four factors at two levels. The factors considered were energy level (a relationship between heat source power and the rate of raw material input), step size, programmed diameter, and travel speed. The models were validated using previous builds; the prediction error for part geometry was under 11%. Several relationships between the factors and responses were identified. Current had a significant effect on wall thickness; thickness increases with increasing current. Programmed diameter had a significant effect on percentage of shrinkage, which decreased with increasing component size. Surface finish decreased with decreasing step size and current.
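In a two-level factorial design of the kind used above, the main effect of each factor is simply the mean response at its high level minus the mean response at its low level. A minimal sketch with an invented 2² design and made-up thickness numbers (not the paper's data):

```python
def main_effects(design, response):
    """Main effect of each factor in a two-level design coded -1/+1:
    (mean response at +1) minus (mean response at -1)."""
    n_factors = len(design[0])
    effects = []
    for j in range(n_factors):
        hi = [y for row, y in zip(design, response) if row[j] == +1]
        lo = [y for row, y in zip(design, response) if row[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# Hypothetical full 2^2 demo: factor 0 = current, factor 1 = travel speed,
# response = wall thickness (mm); all numbers invented for illustration.
design = [(-1, -1), (+1, -1), (-1, +1), (+1, +1)]
thickness = [5.0, 6.2, 4.8, 6.0]
effects = main_effects(design, thickness)
```

In this made-up data the current effect (+1.2 mm) dwarfs the travel-speed effect, mirroring the paper's finding that current dominates wall thickness; a fractional design estimates the same effects from fewer runs at the cost of aliasing.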
Modeling micro-droplet formation in near-field electrohydrodynamic jet printing
NASA Astrophysics Data System (ADS)
Popell, George Colin
Near-field electrohydrodynamic jet (E-jet) printing has recently gained significant interest within the manufacturing research community because of its ability to produce micro/sub-micron-scale droplets using a wide variety of inks and substrates. However, the process currently operates in open loop and as a result suffers from unpredictable printing quality. The use of physics-based, control-oriented process models is expected to enable closed-loop control of this printing technique. The objective of this research is to perform a fundamental study of the substrate-side droplet shape evolution in near-field E-jet printing and to develop a physics-based model of the same that links input parameters such as voltage magnitude and ink properties to the height and diameter of the printed droplet. In order to achieve this objective, a synchronized high-speed imaging and substrate-side current-detection system was implemented to enable a correlation between the droplet shape parameters and the measured current signal. The experimental data reveal characteristic process signatures and droplet spreading regimes. The results of these studies are then used as the basis for a model that predicts the droplet diameter and height using the measured current signal as the input. A unique scaling factor based on the measured current signal is used in this model instead of relying on empirical scaling laws found in the literature. For each of the three inks tested in this study, the average absolute error in the model predictions is under 4.6% for diameter predictions and under 10.6% for height predictions of the steady-state droplet. When printing under non-conducive ambient conditions of low humidity and high temperature, the use of the environmental correction factor in the model results in average absolute errors of 10.35% and 12.5% for diameter and height predictions, respectively.
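A current-based scaling of the kind described above can be sketched by integrating the substrate-side current over a jetting event and mapping the transported charge to droplet size. The cube-root scaling and calibration constant below are assumptions for illustration, not the model developed in this research:

```python
def deposited_charge(current_a, dt):
    """Trapezoidal integral of the measured substrate current (A) over
    the jetting event: the charge carried by the droplet (C)."""
    q = 0.0
    for a, b in zip(current_a, current_a[1:]):
        q += 0.5 * (a + b) * dt
    return q

def droplet_diameter(charge, k_scale):
    """Hypothetical scaling d = k * q**(1/3): droplet volume (hence d^3)
    assumed proportional to transported charge; k must be calibrated."""
    return k_scale * charge ** (1.0 / 3.0)

# Assumed triangular current pulse peaking at 100 nA, sampled every 1 us
pulse = [0.0, 50e-9, 100e-9, 50e-9, 0.0]
q = deposited_charge(pulse, 1e-6)
d = droplet_diameter(q, k_scale=1.0)  # k_scale is ink-dependent, hypothetical
```

The research's actual model also predicts height and corrects for ambient humidity and temperature, which a single charge-to-volume scaling cannot do.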
Translational research: understanding the continuum from bench to bedside.
Drolet, Brian C; Lorenzi, Nancy M
2011-01-01
The process of translating basic scientific discoveries to clinical applications, and ultimately to public health improvements, has emerged as an important, but difficult, objective in biomedical research. The process is best described as a "translation continuum" because various resources and actions are involved in this progression of knowledge, which advances discoveries from the bench to the bedside. The current model of this continuum focuses primarily on translational research, which is merely one component of the overall translation process. This approach is ineffective. A revised model addressing the entire continuum would provide a methodology to identify and describe all translational activities (e.g., implementation, adoption, translational research) as well as their place within the continuum. This manuscript reviews and synthesizes the literature to provide an overview of the current terminology and model for translation. A modification of the existing model is proposed to create a framework called the Biomedical Research Translation Continuum, which defines the translation process and describes the progression of knowledge from laboratory to health gains. This framework clarifies translation for readers who have not followed the evolving and complicated models currently described. Authors and researchers may use the continuum to understand and describe their research better, as well as the translational activities, within a conceptual framework. Additionally, the framework may increase the advancement of knowledge by refining discussions of translation and allowing more precise identification of barriers to progress.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Ba Nghiep; Holbery, Jim; Smith, Mark T.
2006-11-30
This report describes the status of current process modeling approaches to predict the behavior and flow of fiber-filled thermoplastics under injection molding conditions. Previously, models have been developed to simulate the injection molding of short-fiber thermoplastics, and an as-formed composite part or component can then be predicted that contains a microstructure resulting from the constituents' material properties and characteristics as well as the processing parameters. Our objective is to assess these models in order to determine their capabilities and limitations, and the developments needed for long-fiber injection-molded thermoplastics (LFTs). First, the concentration regimes are summarized to facilitate the understanding of different types of fiber-fiber interaction that can occur for a given fiber volume fraction. After the formulation of the fiber suspension flow problem and the simplification leading to the Hele-Shaw approach, the interaction mechanisms are discussed. Next, the establishment of the rheological constitutive equation is presented that reflects the coupled flow/orientation nature. The decoupled flow/orientation approach is also discussed, which constitutes a good simplification for many applications involving flows in thin cavities. Finally, before outlining the necessary developments for LFTs, some applications of the current orientation model and the so-called modified Folgar-Tucker model are illustrated through fiber orientation predictions for selected LFT samples.
Flight crew aiding for recovery from subsystem failures
NASA Technical Reports Server (NTRS)
Hudlicka, E.; Corker, K.; Schudy, R.; Baron, Sheldon
1990-01-01
Some of the conceptual issues associated with pilot aiding systems are discussed and an implementation of one component of such an aiding system is described. It is essential that the format and content of the information the aiding system presents to the crew be compatible with the crew's mental models of the task. It is proposed that in order to cooperate effectively, both the aiding system and the flight crew should have consistent information processing models, especially at the point of interface. A general information processing strategy, developed by Rasmussen, was selected to serve as the bridge between the human and aiding system's information processes. The development and implementation of a model-based situation assessment and response generation system for commercial transport aircraft are described. The current implementation is a prototype which concentrates on engine and control surface failure situations and consequent flight emergencies. The aiding system, termed Recovery Recommendation System (RECORS), uses a causal model of the relevant subset of the flight domain to simulate the effects of these failures and to generate appropriate responses, given the current aircraft state and the constraints of the current flight phase. Since detailed information about the aircraft state may not always be available, the model represents the domain at varying levels of abstraction and uses the less detailed abstraction levels to make inferences when exact information is not available. The structure of this model is described in detail.
Modeling Bloch oscillations in ultra-small Josephson junctions
NASA Astrophysics Data System (ADS)
Vora, Heli; Kautz, Richard; Nam, Sae Woo; Aumentado, Jose
In a seminal paper, Likharev et al. developed a theory for ultra-small Josephson junctions with Josephson coupling energy (Ej) less than the charging energy (Ec) and showed that such junctions exhibit Bloch oscillations, which could be used to make a fundamental current standard that is a dual of the Josephson volt standard. Here, based on the model of Geigenmüller and Schön, we numerically calculate the current-voltage relationship of such an ultra-small junction, including various error processes present in a nanoscale Josephson junction, such as random quasiparticle tunneling events and Zener tunneling between bands. This model allows us to explore the parameter space to see the effect of each process on the width and height of the Bloch step and serves as a guide to determine whether it is possible to build a quantum current standard of metrological precision using Bloch oscillations.
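The metrological appeal of Bloch oscillations rests on the duality relation I = 2ef: one Cooper pair (charge 2e) is transferred per Bloch oscillation period. A trivial sketch of that relation (the full Geigenmüller-Schön dynamics with quasiparticle and Zener error processes is far more involved):

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C (exact SI value)

def bloch_current(f_bloch):
    """Ideal duality relation for the Bloch current standard: one Cooper
    pair (charge 2e) per oscillation period, so I = 2*e*f."""
    return 2 * E_CHARGE * f_bloch

def bloch_frequency(current):
    """Inverse relation: the oscillation frequency a given current implies."""
    return current / (2 * E_CHARGE)

# A 1 nA standard would require roughly 3.12 GHz Bloch oscillations
f = bloch_frequency(1e-9)
```

The error processes in the abstract matter precisely because each missed or extra tunneling event breaks this exact charge-per-period counting, flattening and shifting the Bloch step.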
Schofield, Thomas; Beaumont, Kelly; Widaman, Keith; Jochem, Rachel; Robins, Richard; Conger, Rand
2013-01-01
The current study tested elements of the theoretical model of Portes and Rumbaut (1996), which proposes that parent–child differences in English fluency in immigrant families affect various family processes that, in turn, relate to changes in academic success. The current study of 674 Mexican-origin families provided support for the model in that parent–child fluency in a common language was associated with several dimensions of the parent–child relationship, including communication, role reversal, and conflict. In turn, these family processes predicted child academic performance, school problems, and academic aspirations and expectations. The current findings extend the Portes and Rumbaut (1996) model, however, inasmuch as joint fluency in either English or Spanish was associated with better parent–child relationships. The findings have implications for educational and human service issues involving Mexican Americans and other immigrant groups. PMID:23244454
Generation of action potentials in a mathematical model of corticotrophs.
LeBeau, A P; Robson, A B; McKinnon, A E; Donald, R A; Sneyd, J
1997-01-01
Corticotropin-releasing hormone (CRH) is an important regulator of adrenocorticotropin (ACTH) secretion from pituitary corticotroph cells. The intracellular signaling system that underlies this process involves modulation of voltage-sensitive Ca2+ channel activity, which leads to the generation of Ca2+ action potentials and influx of Ca2+. However, the mechanisms by which Ca2+ channel activity is modulated in corticotrophs are not currently known. We investigated this process in a Hodgkin-Huxley-type mathematical model of corticotroph plasma membrane electrical responses. We found that an increase in the L-type Ca2+ current was sufficient to generate action potentials from a previously resting state of the model. The increase in the L-type current could be elicited by either a shift in the voltage dependence of the current toward more negative potentials, or by an increase in the conductance of the current. Although either of these mechanisms is potentially responsible for the generation of action potentials, previous experimental evidence favors the former mechanism, with the magnitude of the shift required being consistent with the experimental findings. The model also shows that the T-type Ca2+ current plays a role in setting the excitability of the plasma membrane, but does not appear to contribute in a dynamic manner to action potential generation. Inhibition of a K+ conductance that is active at rest also affects the excitability of the plasma membrane. PMID:9284294
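The qualitative effect of increasing the L-type Ca²⁺ conductance can be illustrated with a generic Morris-Lecar-style membrane model integrated by forward Euler. The sketch below is not the published corticotroph model; all conductances, reversal potentials, and gating parameters are illustrative:

```python
import math

def simulate(g_ca, t_end=500.0, dt=0.05):
    """Minimal Morris-Lecar-style sketch: an instantaneous L-type Ca
    current plus a delayed-rectifier K current and a leak.  Units are
    pF, nS, mV, ms; every parameter value is illustrative only."""
    C, g_k, g_l = 20.0, 8.0, 2.0          # capacitance and conductances
    v_ca, v_k, v_l = 60.0, -80.0, -50.0   # reversal potentials, mV
    v, n = -60.0, 0.0                     # initial voltage and K gate
    v_trace = []
    t = 0.0
    while t < t_end:
        m_inf = 0.5 * (1 + math.tanh((v + 20.0) / 12.0))  # Ca activation
        n_inf = 0.5 * (1 + math.tanh((v + 25.0) / 10.0))  # K activation
        i_ca = g_ca * m_inf * (v - v_ca)
        i_k = g_k * n * (v - v_k)
        i_l = g_l * (v - v_l)
        v += dt * (-(i_ca + i_k + i_l) / C)
        n += dt * (n_inf - n) / 15.0      # K gating time constant, ms
        v_trace.append(v)
        t += dt
    return v_trace

quiet = simulate(g_ca=2.0)   # low L-type conductance: near rest
active = simulate(g_ca=6.0)  # higher conductance: more depolarized
```

With these made-up parameters, tripling the L-type conductance visibly depolarizes the membrane, the same direction of effect the corticotroph model exhibits; the published model additionally distinguishes a conductance increase from a negative shift in the current's voltage dependence, and includes the T-type current that sets excitability.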
ERIC Educational Resources Information Center
Stahl, Robert J.; Murphy, Gary T.
Weaknesses in the structure, levels, and sequence of Bloom's taxonomy of cognitive domains emphasize the need for both a new model of how individual learners process information and a new taxonomy of the different levels of memory, thinking, and learning. Both the model and the taxonomy should be consistent with current research findings. The…
Spacecraft Charging Current Balance Model Applied to High Voltage Solar Array Operations
NASA Technical Reports Server (NTRS)
Willis, Emily M.; Pour, Maria Z. A.
2016-01-01
Spacecraft charging induced by high voltage solar arrays can result in power losses and degradation of spacecraft surfaces. In some cases, it can even present safety issues for astronauts performing extravehicular activities. An understanding of the dominant processes contributing to spacecraft charging induced by solar arrays is important to current space missions, such as the International Space Station, and to any future space missions that may employ high voltage solar arrays. A common method of analyzing the factors contributing to spacecraft charging is the current balance model. Current balance models are based on the simple idea that the spacecraft will float to a potential such that the current collected by the surfaces equals the current lost from the surfaces. However, when solar arrays are involved, these currents depend on so many factors that the equation becomes quite complicated. In order for a current balance model to be applied to solar array operations, it must incorporate the time-dependent nature of the charging of dielectric surfaces in the vicinity of conductors [1-3]. This poster will present the factors which must be considered when developing a current balance model for high voltage solar array operations and will compare results of a current balance model with data from the Floating Potential Measurement Unit [4] on board the International Space Station.
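The core idea of a current balance model, stripped of the solar-array complications the abstract mentions, can be sketched as a root-finding problem. The plasma parameters below are illustrative assumptions (not ISS values), and the electron/ion current laws are the simplest textbook forms.

```python
import math

# Hypothetical LEO-like plasma parameters (illustrative, not ISS values):
k_te = 0.1        # electron temperature in eV (so potentials are in volts)
j_e0 = 1.0e-3     # electron thermal current density to the surface, A/m^2
j_i0 = 1.0e-5     # ion ram current density, A/m^2

def net_current(phi):
    """Collected electron current (Boltzmann-retarded for phi < 0)
    minus the collected ion current."""
    return j_e0 * math.exp(min(phi, 0.0) / k_te) - j_i0

# Bisection on the current balance: the body floats at the potential
# where the current collected equals the current lost.
lo, hi = -10.0, 0.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if net_current(mid) > 0.0:
        hi = mid          # too much electron current: float more negative
    else:
        lo = mid
phi_float = 0.5 * (lo + hi)

# Closed-form check for this simple model: phi_f = -kTe * ln(j_e0 / j_i0)
phi_analytic = -k_te * math.log(j_e0 / j_i0)
```

For the array case described in the abstract, `net_current` would additionally depend on time-varying dielectric surface charge, which is exactly why the equation "becomes quite complicated."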
An integrative process model of leadership: examining loci, mechanisms, and event cycles.
Eberly, Marion B; Johnson, Michael D; Hernandez, Morela; Avolio, Bruce J
2013-09-01
Utilizing the locus (source) and mechanism (transmission) of leadership framework (Hernandez, Eberly, Avolio, & Johnson, 2011), we propose and examine the application of an integrative process model of leadership to help determine the psychological interactive processes that constitute leadership. In particular, we identify the various dynamics involved in generating leadership processes by modeling how the loci and mechanisms interact through a series of leadership event cycles. We discuss the major implications of this model for advancing an integrative understanding of what constitutes leadership and its current and future impact on the field of psychological theory, research, and practice. © 2013 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Zhao, Jingyi; Wang, G.-X.; Dong, Yalin; Ye, Chang
2017-08-01
Many electrically assisted processes have been reported to induce changes in microstructure and metal plasticity. To understand the physics-based mechanisms behind these interesting phenomena, however, requires an understanding of the interaction between the electric current and heterogeneous microstructure. In this work, multiscale modeling of the electric current flow in a nanocrystalline material is reported. The cellular automata method was used to track the nanoscale grain boundaries in the matrix. Maxwell's electromagnetic equations were solved to obtain the electrical potential distribution at the macro scale. Kirchhoff's circuit equation was solved to obtain the electric current flow at the micro/nano scale. The electric current distribution at two representative locations was investigated. A significant electric current concentration was observed near the grain boundaries, particularly near the triple junctions. This higher localized electric current leads to localized resistive heating near the grain boundaries. The electric current distribution could be used to obtain critical information such as localized resistive heating rate and extra system free energy, which are critical for explaining many interesting phenomena, including microstructure evolution and plasticity enhancement in many electrically assisted processes.
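The localized-heating conclusion can be caricatured in one dimension: a series chain of "grain interior" and "grain boundary" segments solved with Kirchhoff's law. The resistance values are illustrative assumptions, not material data from the study.

```python
import numpy as np

# 1-D series-chain caricature of the Kirchhoff solve: alternating grain
# interior and grain boundary segments (resistances are assumed values).
r_interior, r_boundary = 1.0, 10.0
segments = np.array([r_interior, r_interior, r_boundary,
                     r_interior, r_boundary, r_interior])
v_applied = 1.0

current = v_applied / segments.sum()      # one series current (Kirchhoff)
drops = current * segments                # voltage drop per segment
heating = current**2 * segments           # Joule heating per segment, I^2 R
```

In this toy chain, the high-resistance boundary segments dissipate ten times the heat of the interior segments, mirroring the localized resistive heating near grain boundaries reported above; the paper's multiscale model does the analogous solve on a 2-D/3-D grain network.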
Modeling Earth's Ring Current Using The CIMI Model
NASA Astrophysics Data System (ADS)
Craven, J. D., II; Perez, J. D.; Buzulukova, N.; Fok, M. C. H.
2015-12-01
Earth's ring current results from the injection of charged particles from solar storms that become trapped in the magnetosphere. The enhancement of the ring current particles produces magnetic depressions and disturbances to the Earth's magnetic field known as geomagnetic storms, which have been modeled using the comprehensive inner magnetosphere-ionosphere (CIMI) model. The purpose of this model is to identify and understand the physical processes that control the dynamics of geomagnetic storms. The basic procedure was to use the CIMI model to simulate 15 storms since 2009. Some of the storms were run multiple times, but with varying parameters relating to the dynamics of the Earth's magnetic field, particle fluxes, and boundary conditions of the inner magnetosphere. Results and images were placed in the TWINS online catalog page for further analysis and discussion. Particular areas of interest were extreme storm events. A majority of the storms simulated had average Dst values of -100 nT; the extreme storms exceeded Dst values of -200 nT. The continued use of the CIMI model will increase knowledge of the interactions and processes of the inner magnetosphere and lead to a better understanding of extreme solar storm events for the future advancement of space weather physics.
Adapting the Transtheoretical Model of Change to the Bereavement Process
ERIC Educational Resources Information Center
Calderwood, Kimberly A.
2011-01-01
Theorists currently believe that bereaved people undergo some transformation of self rather than returning to their original state. To advance our understanding of this process, this article presents an adaptation of Prochaska and DiClemente's transtheoretical model of change as it could be applied to the journey that bereaved individuals…
The Function of Semantics in Automated Language Processing.
ERIC Educational Resources Information Center
Pacak, Milos; Pratt, Arnold W.
This paper is a survey of some of the major semantic models that have been developed for automated semantic analysis of natural language. Current approaches to semantic analysis and logical inference are based mainly on models of human cognitive processes such as Quillian's semantic memory, Simmons' Protosynthex III and others. All existing…
Self-Consistent Magnetosphere-Ionosphere Coupling and Associated Plasma Energization Processes
NASA Technical Reports Server (NTRS)
Khazanov, G. V.; Six, N. Frank (Technical Monitor)
2002-01-01
Magnetosphere-Ionosphere (MI) coupling and the associated electron and ion energization processes have interested scientists for decades and, in spite of experimental and theoretical research efforts, remain among the least well understood dynamic processes in space plasma physics. The reason for this is that the numerous physical processes associated with MI coupling occur over multiple spatial lengths and temporal scales. One typical example of MI coupling is large-scale ring current (RC) electrodynamic coupling, which includes calculation of the magnetospheric electric field that is consistent with the RC distribution. A general scheme for numerical simulation of such large-scale magnetosphere-ionosphere coupling processes has been presented earlier in many works. The mathematical formulation of these models is based on a "modified frozen-in flux theorem" for an ensemble of adiabatically drifting particles in the magnetosphere. By tracking the flow of particles through the inner magnetosphere, the bounce-averaged phase space density of the hot ions and electrons can be reconstructed and the magnetospheric electric field can be calculated such that it is consistent with the particle distribution in the magnetosphere. A new self-consistent ring current model has been developed that couples electron and ion magnetospheric dynamics with the calculation of the electric field. Two new features were taken into account. First, in addition to the RC ions, we solve an electron kinetic equation in our model, self-consistently including these results in the solution. Second, using different analytical relationships, we calculate the height-integrated ionospheric conductances as a function of the precipitating high-energy magnetospheric electrons and ions produced by our model.
This results in fundamental changes to the electric potential pattern in the inner magnetosphere, with a smaller Alfven boundary than previous potential formulations would predict but one consistent with recent satellite observations. This leads to deeper penetration of the plasma sheet ions and electrons into the inner magnetosphere and more effective energization of ring current ions and electrons.
NASA Astrophysics Data System (ADS)
Rezaei Ashtiani, Hamid Reza; Zarandooz, Roozbeh
2015-09-01
A 2D axisymmetric electro-thermo-mechanical finite element (FE) model is developed to investigate the effect of current intensity, welding time, and electrode tip diameter on temperature distributions and nugget size in the resistance spot welding (RSW) process of Inconel 625 superalloy sheets using the ABAQUS commercial software package. A coupled electro-thermal analysis and an uncoupled thermal-mechanical analysis are used to model the process. In order to improve the accuracy of the simulation, material properties, including physical, thermal, and mechanical properties, are considered to be temperature dependent. The thickness and diameter of the computed weld nuggets are compared with experimental results, and good agreement is observed. Thus, the FE model developed in this paper suitably predicts the quality and shape of the weld nuggets and the temperature distributions as each process parameter is varied. Utilizing this FE model assists in adjusting RSW parameters, so that expensive experimental trials can be avoided. The results show that increasing welding time and current intensity leads to an increase in nugget size and electrode indentation, whereas increasing electrode tip diameter decreases nugget size and electrode indentation.
The physicist's companion to current fluctuations: one-dimensional bulk-driven lattice gases
NASA Astrophysics Data System (ADS)
Lazarescu, Alexandre
2015-12-01
One of the main features of statistical systems out of equilibrium is the currents they exhibit in their stationary state: microscopic currents of probability between configurations, which translate into macroscopic currents of mass, charge, etc. Understanding the general behaviour of these currents is an important step towards building a universal framework for non-equilibrium steady states akin to the Gibbs-Boltzmann distribution for equilibrium systems. In this review, we consider one-dimensional bulk-driven particle gases, and in particular the asymmetric simple exclusion process (ASEP) with open boundaries, which is one of the most popular models of one-dimensional transport. We focus, in particular, on the current of particles flowing through the system in its steady state, and on its fluctuations. We show how one can obtain the complete statistics of that current, through its large deviation function, by combining results from various methods: exact calculation of the cumulants of the current, using the integrability of the model; direct diagonalization of a biased process in the limits of very high or low current; hydrodynamic description of the model in the continuous limit using the macroscopic fluctuation theory. We give a pedagogical account of these techniques, starting with a quick introduction to the necessary mathematical tools, as well as a short overview of the existing works relating to the ASEP. We conclude by drawing the complete dynamical phase diagram of the current. We also remark on a few possible generalizations of these results.
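The steady-state current of the open-boundary ASEP discussed above can be measured directly in a minimal Monte Carlo sketch of the totally asymmetric special case (TASEP). In the maximal-current phase (both boundary rates above 1/2), the exact current approaches J = 1/4; the system size, rates, and run length below are arbitrary illustrative choices.

```python
import numpy as np

def tasep_current(L=32, alpha=0.75, beta=0.75, sweeps=8000, burn=1000, seed=1):
    """Open-boundary TASEP with random-sequential updates; returns the
    measured steady-state particle current through the middle bond."""
    rng = np.random.default_rng(seed)
    occ = np.zeros(L, dtype=bool)
    mid = L // 2
    crossings = 0
    for sweep in range(burn + sweeps):
        for _ in range(L + 1):               # one sweep = one unit of time
            i = int(rng.integers(L + 1))
            if i == 0:                       # injection at the left boundary
                if not occ[0] and rng.random() < alpha:
                    occ[0] = True
            elif i == L:                     # extraction at the right boundary
                if occ[-1] and rng.random() < beta:
                    occ[-1] = False
            elif occ[i - 1] and not occ[i]:  # bulk hop from site i-1 to i
                occ[i - 1] = False
                occ[i] = True
                if i - 1 == mid and sweep >= burn:
                    crossings += 1
    return crossings / sweeps

current = tasep_current()   # close to 1/4 in the maximal-current phase
```

Sampling the *fluctuations* of this current over many runs is the elementary counterpart of the large deviation function that the review computes exactly via integrability and macroscopic fluctuation theory.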
Models of recognition: A review of arguments in favor of a dual-process account
DIANA, RACHEL A.; REDER, LYNNE M.; ARNDT, JASON; PARK, HEEKYEONG
2008-01-01
The majority of computationally specified models of recognition memory have been based on a single-process interpretation, claiming that familiarity is the only influence on recognition. There is increasing evidence that recognition is, in fact, based on two processes: recollection and familiarity. This article reviews the current state of the evidence for dual-process models, including the usefulness of the remember/know paradigm, and interprets the relevant results in terms of the source of activation confusion (SAC) model of memory. We argue that the evidence from each of the areas we discuss, when combined, presents a strong case that inclusion of a recollection process is necessary. Given this conclusion, we also argue that the dual-process claim that the recollection process is always available is, in fact, more parsimonious than the single-process claim that the recollection process is used only in certain paradigms. The value of a well-specified process model such as the SAC model is discussed with regard to other types of dual-process models. PMID:16724763
New figuring model based on surface slope profile for grazing-incidence reflective optics
Zhou, Lin; Huang, Lei; Bouet, Nathalie; ...
2016-08-09
Surface slope profile is widely used in the metrology of grazing-incidence reflective optics instead of surface height profile. Nevertheless, the theoretical and experimental model currently used in deterministic optical figuring processes is based on surface height, not on surface slope. This means that the raw slope profile data from metrology need to be converted to a height profile to perform the current height-based figuring processes. The inevitable measurement noise in the raw slope data will introduce significant cumulative error in the resultant height profiles. As a consequence, this conversion will degrade the determinism of the figuring processes and will have an impact on the ultimate surface figuring results. To overcome this problem, an innovative figuring model is proposed, which directly uses the raw slope profile data instead of the usual height data as input for the deterministic process. In this article, the influence of the measurement noise on the resultant height profile is first analyzed; then a new model is presented; finally a demonstration experiment is carried out using a one-dimensional ion beam figuring process to demonstrate the validity of our approach.
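The cumulative-error argument can be demonstrated numerically: converting a noisy slope profile to height integrates (cumulatively sums) the noise, so the height error random-walks and grows with scan length. The scan length, spacing, and noise level below are illustrative assumptions, not the paper's metrology figures.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dx = 1000, 1.0e-3        # 1000 samples at 1 mm spacing (illustrative)
sigma = 1.0e-7              # slope measurement noise, rad (assumed level)

true_slope = np.zeros(n)    # a perfectly flat mirror, for simplicity
measured_slope = true_slope + rng.normal(0.0, sigma, n)

# Slope-to-height conversion integrates the noise, so the height error
# random-walks and grows roughly like sigma * dx * sqrt(n), whereas the
# slope-domain error stays at sigma regardless of scan length:
height = np.cumsum(measured_slope) * dx
```

This is the motivation for the slope-based figuring model: feeding the raw slope data directly into the process avoids the accumulated height error entirely.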
Non-Hydrostatic Modelling of Waves and Currents over Subtle Bathymetric Features
NASA Astrophysics Data System (ADS)
Gomes, E.; Mulligan, R. P.; McNinch, J.
2014-12-01
Localized areas with high rates of shoreline erosion on beaches, referred to as erosional hotspots, can occur near clusters of relict shore-oblique sandbars. Wave transformation and wave-driven currents over these morphological features could provide an understanding of the hydrodynamic-morphologic coupling mechanism that connects them to the occurrence of erosional hotspots. To investigate this, we use the non-hydrostatic SWASH model, which phase-resolves the free surface and fluid motions throughout the water column, allowing for high resolution of wave propagation and breaking processes. In this study we apply a coupled system of nested models, including SWAN over a large domain of the North Carolina shelf with smaller nested SWASH domains in areas of interest, to determine the hydrodynamic processes occurring over shore-oblique bars. In this presentation we focus on a high resolution grid (10 vertical layers, 10 m horizontal resolution) applied to the Duck region, with model validation from acoustic wave and current data and observations from the Coastal Lidar And Radar Imaging System (CLARIS). By altering the bathymetry input for each model run based on bathymetric surveys and comparing the predicted and observed wave heights and current profiles, the effects that subtle bathymetric perturbations have on wave refraction, wave breaking, surf zone currents, and vorticity are investigated. The ability to predict wave breaking and hydrodynamics with a non-hydrostatic model may improve our understanding of surf zone dynamics in relation to morphologic conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Qing; Gerhardt, Michael R.; Aziz, Michael J.
We measure the polarization characteristics of a quinone-bromide redox flow battery with interdigitated flow fields, using electrochemical impedance spectroscopy and voltammetry of a full cell and of a half cell against a reference electrode. We find linear polarization behavior at 50% state of charge all the way to the short-circuit current density of 2.5 A/cm2. We uniquely identify the polarization area-specific resistance (ASR) of each electrode, the membrane ASR to ionic current, and the electronic contact ASR. We use voltage probes to deduce the electronic current density through each sheet of carbon paper in the quinone-bearing electrode. By also interpreting the results using the Newman 1-D porous electrode model, we deduce the volumetric exchange current density of the porous electrode. We uniquely evaluate the power dissipation and identify a correspondence to the contributions to the electrode ASR from the faradaic, electronic, and ionic transport processes. We find that, within the electrode, more power is dissipated in the faradaic process than in the electronic and ionic conduction processes combined, despite the observed linear polarization behavior. We examine the sensitivity of the ASR to the values of the model parameters. The greatest performance improvement is anticipated from increasing the volumetric exchange current density.
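The power-dissipation accounting described above reduces, for a linear polarization curve, to an I²R budget over the series ASR contributions. The ohm-cm² values below are invented placeholders, not the measured quinone-bromide values; the sketch only shows the bookkeeping.

```python
# Hedged series-resistance budget for one electrode; the ASR values are
# illustrative assumptions, NOT measurements from the study.
asr = {"faradaic": 0.30, "electronic": 0.10, "ionic": 0.15}  # ohm*cm^2
j = 0.5                                      # operating current density, A/cm^2

power = {k: j**2 * r for k, r in asr.items()}  # W/cm^2 dissipated per process
total = sum(power.values())
faradaic_share = power["faradaic"] / total
```

With these placeholder numbers the faradaic term dominates the electronic and ionic terms combined, which is the qualitative finding the abstract reports; in the paper the split is deduced from the Newman porous-electrode model rather than assumed.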
Novel opportunities for computational biology and sociology in drug discovery☆
Yao, Lixia; Evans, James A.; Rzhetsky, Andrey
2013-01-01
Current drug discovery is impossible without sophisticated modeling and computation. In this review we outline previous advances in computational biology and, by tracing the steps involved in pharmaceutical development, explore a range of novel, high-value opportunities for computational innovation in modeling the biological process of disease and the social process of drug discovery. These opportunities include text mining for new drug leads, modeling molecular pathways and predicting the efficacy of drug cocktails, analyzing genetic overlap between diseases and predicting alternative drug use. Computation can also be used to model research teams and innovative regions and to estimate the value of academy–industry links for scientific and human benefit. Attention to these opportunities could promise punctuated advance and will complement the well-established computational work on which drug discovery currently relies. PMID:20349528
Sturgeon, John A; Zautra, Alex J
2013-03-01
Pain is a complex construct that contributes to profound physical and psychological dysfunction, particularly in individuals coping with chronic pain. The current paper builds upon previous research, describes a balanced conceptual model that integrates aspects of both psychological vulnerability and resilience to pain, and reviews protective and exacerbating psychosocial factors to the process of adaptation to chronic pain, including pain catastrophizing, pain acceptance, and positive psychological resources predictive of enhanced pain coping. The current paper identifies future directions for research that will further enrich the understanding of pain adaptation and espouses an approach that will enhance the ecological validity of psychological pain coping models, including introduction of advanced statistical and conceptual models that integrate behavioral, cognitive, information processing, motivational and affective theories of pain.
NASA Technical Reports Server (NTRS)
Siegfried, D. E.
1982-01-01
A quartz hollow tube cathode was used to determine the operating conditions within a mercury orificed hollow cathode. Insert temperature profiles, cathode current distributions, plasma properties profile, and internal pressure-mass flow rate results are summarized and used in a phenomenological model which qualitatively describes electron emission and plasma production processes taking place within the cathode. By defining an idealized ion production region within which most of the plasma processes are concentrated, this model is expressed analytically as a simple set of equations which relate cathode dimensions and specifiable operating conditions, such as mass flow rate and discharge current, to such important parameters as emission surface temperature and internal plasma properties. Key aspects of the model are examined.
How Can Model Comparison Help Improving Species Distribution Models?
Gritti, Emmanuel Stephan; Gaucherel, Cédric; Crespo-Perez, Maria-Veronica; Chuine, Isabelle
2013-01-01
Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weakness which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie in the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate relatively accurately the current distribution of the three species. The process-based model performs almost as well as the correlative model, although parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based model could however be improved by integrating a more realistic representation of the species resistance to water stress for instance, advocating for pursuing efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes. PMID:23874779
NASA Astrophysics Data System (ADS)
Guan, Xiaofei; Pal, Uday B.; Powell, Adam C.
2014-06-01
This paper reports a solid oxide membrane (SOM) electrolysis experiment using an LSM(La0.8Sr0.2MnO3-δ)-Inconel inert anode current collector for production of magnesium and oxygen directly from magnesium oxide at 1423 K (1150 °C). The electrochemical performance of the SOM cell was evaluated by means of various electrochemical techniques including electrochemical impedance spectroscopy, potentiodynamic scan, and electrolysis. Electronic transference numbers of the flux were measured to assess the magnesium dissolution in the flux during SOM electrolysis. The effects of magnesium solubility in the flux on the current efficiency and the SOM stability during electrolysis are discussed. An inverse correlation between the electronic transference number of the flux and the current efficiency of the SOM electrolysis was observed. Based on the experimental results, a new equivalent circuit of the SOM electrolysis process is presented. A general electrochemical polarization model of SOM process for magnesium and oxygen gas production is developed, and the maximum allowable applied potential to avoid zirconia dissociation is calculated as well. The modeling results suggest that a high electronic resistance of the flux and a relatively low electronic resistance of SOM are required to achieve membrane stability, high current efficiency, and high production rates of magnesium and oxygen.
Ayllón, Daniel; Grimm, Volker; Attinger, Sabine; Hauhs, Michael; Simmer, Clemens; Vereecken, Harry; Lischeid, Gunnar
2018-05-01
Terrestrial environmental systems are characterised by numerous feedback links between their different compartments. However, scientific research is organized into disciplines that focus on processes within the respective compartments rather than on interdisciplinary links. Major feedback mechanisms between compartments might therefore have been systematically overlooked so far. Without identifying these gaps, initiatives on future comprehensive environmental monitoring schemes and experimental platforms might fail. We performed a comprehensive overview of feedbacks between compartments currently represented in environmental sciences and explored to what degree missing links have already been acknowledged in the literature. We focused on process models as they can be regarded as repositories of scientific knowledge that compile findings of numerous single studies. In total, 118 simulation models from 23 model types were analysed. Missing processes linking different environmental compartments were identified based on a meta-review of 346 published reviews, model intercomparison studies, and model descriptions. Eight disciplines of environmental sciences were considered and 396 linking processes were identified and ascribed to the physical, chemical or biological domain. There were significant differences between model types and scientific disciplines regarding implemented interdisciplinary links. The most widespread interdisciplinary links were between physical processes in meteorology, hydrology and soil science that drive or set the boundary conditions for other processes (e.g., ecological processes). In contrast, most chemical and biological processes were restricted to links within the same compartment. Integration of multiple environmental compartments and interdisciplinary knowledge was scarce in most model types.
There was a strong bias of suggested future research foci and model extensions towards reinforcing existing interdisciplinary knowledge rather than to open up new interdisciplinary pathways. No clear pattern across disciplines exists with respect to suggested future research efforts. There is no evidence that environmental research would clearly converge towards more integrated approaches or towards an overarching environmental systems theory. Copyright © 2017 Elsevier B.V. All rights reserved.
Signal processing system for electrotherapy applications
NASA Astrophysics Data System (ADS)
Płaza, Mirosław; Szcześniak, Zbigniew
2017-08-01
A signal processing system for electrotherapy applications is proposed in the paper. The system makes it possible to model the curve of threshold human sensitivity to current (Dalziel's curve) over the full medium-frequency range (1 kHz-100 kHz). Tests based on the proposed solution were conducted, and their results were compared with those obtained according to the assumptions of the High Tone Power Therapy method and referred to optimum values. The proposed system has high dynamics and precision in mapping the curve of threshold human sensitivity to current and can be used in all methods where threshold curves are modelled.
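One common way to model a threshold-sensitivity curve over a wide frequency range is interpolation in log-log space, where such curves are close to piecewise linear. The anchor points below are invented placeholders for illustration, not Dalziel's published data or the paper's method.

```python
import numpy as np

# Hypothetical anchor points for a sensitivity-threshold curve:
f_anchor = np.array([1.0e3, 1.0e4, 1.0e5])   # frequency, Hz
i_anchor = np.array([1.0, 5.0, 40.0])        # assumed threshold current, mA

def threshold_ma(f_hz):
    """Interpolate the threshold curve in log-log space; returns the
    modeled sensitivity threshold (mA) at frequency f_hz."""
    return float(np.exp(np.interp(np.log(f_hz),
                                  np.log(f_anchor), np.log(i_anchor))))
```

A generator driving the electrotherapy output could then scale its current limit from `threshold_ma` at the operating frequency; the paper's contribution is a hardware signal chain that realizes such a curve with high dynamics and mapping precision.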
A Ball Lightning Model as a Possible Explanation of Recently Reported Cavity Lights
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fryberger, David; /SLAC
The salient features of cavity lights, in particular, mobile luminous objects (MLO's), as have been experimentally observed in superconducting accelerator cavities, are summarized. A model based upon standard electromagnetic interactions between a small particle and the 1.5 GHz cavity excitation field is described. This model can explain some features of these data, in particular, the existence of particle orbits without wall contact. While this result is an important success for the model, it is detailed why the model as it stands is incomplete. It is argued that no avenues for a suitable extension of the model through established physics appear evident, which motivates an investigation of a model based upon a more exotic object, ball lightning. As discussed, further motivation derives from the fact that there are significant similarities in many of the qualitative features of ball lightning and MLO's, even though they appear in quite different circumstances and differ in scale by orders of magnitude. The ball lightning model, which incorporates electromagnetic charges and currents, is based on a symmetrized set of Maxwell's equations in which the electromagnetic sources and fields are characterized by a process called dyality rotation. It is shown that a consistent mathematical description of dyality rotation as a physical process can be achieved by adding suitable (phenomenological) current terms to supplement the usual current terms in the symmetrized Maxwell's equations. These currents, which enable the conservation of electric and magnetic charge, are called vacuum currents. It is shown that the proposed ball lightning model offers a good qualitative explanation of the perplexing aspects of the MLO data. Avenues for further study are indicated.
Measuring the emergence of tobacco dependence: the contribution of negative reinforcement models.
Eissenberg, Thomas
2004-06-01
This review of negative reinforcement models of drug dependence is part of a series that takes the position that a complete understanding of current concepts of dependence will facilitate the development of reliable and valid measures of the emergence of tobacco dependence. Other reviews within the series consider models that emphasize positive reinforcement and social learning/cognitive models. This review summarizes negative reinforcement in general and then presents four current negative reinforcement models that emphasize withdrawal, classical conditioning, self-medication and opponent-processes. For each model, the paper outlines central aspects of dependence, conceptualization of dependence development and influences that the model might have on current and future measures of dependence. Understanding how drug dependence develops will be an important part of future successful tobacco dependence measurement, prevention and treatment strategies.
Ozone Lidar Observations for Air Quality Studies
NASA Technical Reports Server (NTRS)
Wang, Lihua; Newchurch, Mike; Kuang, Shi; Burris, John F.; Huang, Guanyu; Pour-Biazar, Arastoo; Koshak, William; Follette-Cook, Melanie B.; Pickering, Kenneth E.; McGee, Thomas J.;
2015-01-01
Tropospheric ozone lidars are well suited to measuring the high spatio-temporal variability of this important trace gas. Furthermore, lidar measurements in conjunction with balloon soundings, aircraft, and satellite observations provide substantial information about a variety of atmospheric chemical and physical processes. Examples of processes elucidated by ozone-lidar measurements are presented, and modeling studies using WRF-Chem, RAQMS, and DALES/LES models illustrate our current understanding and shortcomings of these processes.
NASA Technical Reports Server (NTRS)
Swickrath, Michael J.; Anderson, Molly
2012-01-01
Through the respiration process, humans consume oxygen (O2) while producing carbon dioxide (CO2) and water (H2O) as byproducts. For long term space exploration, CO2 concentration in the atmosphere must be managed to prevent hypercapnia. Moreover, CO2 can be used as a source of oxygen through chemical reduction serving to minimize the amount of oxygen required at launch. Reduction can be achieved through a number of techniques. NASA is currently exploring the Sabatier reaction, the Bosch reaction, and co-electrolysis of CO2 and H2O for this process. Proof-of-concept experiments and prototype units for all three processes have proven capable of returning useful commodities for space exploration. All three techniques have demonstrated the capacity to reduce CO2 in the laboratory, yet there is interest in understanding how all three techniques would perform at a system level within a spacecraft. Consequently, there is an impetus to develop predictive models for these processes that can be readily rescaled and integrated into larger system models. Such analysis tools provide the ability to evaluate each technique on a comparable basis with respect to processing rates. This manuscript describes the current models for the carbon dioxide reduction processes under parallel developmental efforts. Comparison to experimental data is provided where available for verification purposes.
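Not part of the authors' models, but as a minimal illustration of the commodity bookkeeping such system-level analyses build on: the fixed stoichiometry of the Sabatier reaction (CO2 + 4 H2 -> CH4 + 2 H2O) already determines the water return and hydrogen demand per kilogram of CO2 processed. A hedged Python sketch:

```python
# Illustrative mass balance for the Sabatier reaction (CO2 + 4 H2 -> CH4 + 2 H2O).
# Molar masses in g/mol (standard values, rounded); this is a sketch of the
# bookkeeping only, not any of the system models described in the paper.
M_CO2, M_H2, M_H2O = 44.01, 2.016, 18.02

def sabatier_water_yield(co2_kg: float) -> float:
    """Mass of water (kg) produced by fully reducing co2_kg of CO2."""
    mol_co2 = co2_kg * 1000.0 / M_CO2      # moles of CO2 processed
    return 2.0 * mol_co2 * M_H2O / 1000.0  # 2 mol H2O per mol CO2

def sabatier_h2_demand(co2_kg: float) -> float:
    """Mass of hydrogen (kg) consumed per co2_kg of CO2."""
    mol_co2 = co2_kg * 1000.0 / M_CO2
    return 4.0 * mol_co2 * M_H2 / 1000.0   # 4 mol H2 per mol CO2

if __name__ == "__main__":
    print(f"Water recovered per kg CO2: {sabatier_water_yield(1.0):.3f} kg")
    print(f"H2 consumed per kg CO2:    {sabatier_h2_demand(1.0):.3f} kg")
```

The Bosch route (CO2 + 2 H2 -> C + 2 H2O) returns the same water per mole of CO2 but consumes half the hydrogen and leaves solid carbon; a comparable function could be written the same way.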
Computational modeling of soot nucleation
NASA Astrophysics Data System (ADS)
Chung, Seung-Hyun
Recent studies indicate that soot is the second most significant driver of climate change (behind CO2 but ahead of methane), and increased levels of soot particles in the air are linked to health hazards such as heart disease and lung cancer. Within the soot formation process, soot nucleation is the least understood step, and current experimental findings are still limited. This thesis presents computational modeling studies of the major pathways of the soot nucleation process. In this study, two regimes of soot nucleation, chemical growth and physical agglomeration, were evaluated and the results demonstrated that combustion conditions determine the relative importance of these two routes. Also, the dimerization process of polycyclic aromatic hydrocarbons, which has been regarded as one of the most important physical agglomeration processes in soot formation, was carefully examined with a new method for obtaining the nucleation rate using molecular dynamics simulation. The results indicate that the role of pyrene dimerization, which is the commonly accepted model, is expected to be highly dependent on various flame temperature conditions and may not be a key step in the soot nucleation process. An additional pathway, coronene dimerization in this case, needed to be included to improve the match with experimental data. The results of this thesis provide insight on the soot nucleation process and can be utilized to improve current soot formation models.
A review of clinical decision making: models and current research.
Banning, Maggi
2008-01-01
The aim of this paper was to review the current literature on clinical decision-making models and the educational application of these models to clinical practice. This was achieved by exploring the function and related research of the three available models of clinical decision making: the information-processing model, the intuitive-humanist model and the clinical decision-making model. Clinical decision making is a unique process that involves the interplay between knowledge of pre-existing pathological conditions, explicit patient information, nursing care and experiential learning. Historically, two models of clinical decision making are recognized from the literature: the information-processing model and the intuitive-humanist model. The usefulness and application of both models have been examined in relation to the provision of nursing care and care-related outcomes. More recently a third model of clinical decision making has been proposed. This new multidimensional model contains elements of the information-processing model but also examines patient-specific elements that are necessary for cue and pattern recognition. Literature review. Evaluation of the literature generated from MEDLINE, CINAHL, OVID, PUBMED and EBSCO systems and the Internet from 1980 to November 2005. The characteristics of the three models of decision making were identified and the related research discussed. Three approaches to clinical decision making were identified, each having its own attributes and uses. The most recent addition to clinical decision making is a theoretical, multidimensional model which was developed through an evaluation of current literature and the assessment of a limited number of research studies that focused on the clinical decision-making skills of inexperienced nurses in pseudoclinical settings. The components of this model and its relative merits for clinical practice are discussed.
It is proposed that clinical decision making improves as the nurse gains experience of nursing patients within a specific speciality and with experience, nurses gain a sense of saliency in relation to decision making. Experienced nurses may use all three forms of clinical decision making both independently and concurrently to solve nursing-related problems. It is suggested that O'Neill's clinical decision-making model could be tested by educators and experienced nurses to assess the efficacy of this hybrid approach to decision making.
Iwasaki, Shinsuke; Isobe, Atsuhiko; Kako, Shin'ichiro; Uchida, Keiichi; Tokai, Tadashi
2017-08-15
A numerical model was established to reproduce the oceanic transport processes of microplastics and mesoplastics in the Sea of Japan. A particle tracking model, where surface ocean currents were given by a combination of a reanalysis ocean current product and Stokes drift computed separately by a wave model, simulated particle movement. The model results corresponded with the field survey. Modeled results indicated the micro- and mesoplastics are moved northeastward by the Tsushima Current. Subsequently, Stokes drift selectively moves mesoplastics during winter toward the Japanese coast, resulting in increased contributions of mesoplastics south of 39°N. Additionally, Stokes drift also transports micro- and mesoplastics out to the sea area south of the subpolar front where the northeastward Tsushima Current carries them into the open ocean via the Tsugaru and Soya straits. Average transit time of modeled particles in the Sea of Japan is drastically reduced when including Stokes drift in the model. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
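The transport scheme described above can be sketched as a toy particle-tracking loop. The velocity fields and windage factors below are invented for illustration; they stand in for the reanalysis currents and wave-model Stokes drift used in the study:

```python
# Toy particle-tracking sketch: advect a particle with a background current
# plus a size-dependent Stokes-drift term. All velocities and the windage
# factor are hypothetical, not values from the paper.

def eulerian_current(x, y, t):
    """Hypothetical steady northeastward background current (m/s)."""
    return 0.2, 0.15

def stokes_drift(x, y, t, windage=1.0):
    """Hypothetical wave-driven drift; larger windage mimics mesoplastics."""
    return windage * 0.05, windage * 0.02

def track(x0, y0, hours, dt_s=3600.0, windage=1.0):
    """Forward-Euler trajectory; returns final (x, y) displacement in metres."""
    x, y, t = x0, y0, 0.0
    for _ in range(int(hours)):
        u1, v1 = eulerian_current(x, y, t)
        u2, v2 = stokes_drift(x, y, t, windage)
        x += (u1 + u2) * dt_s
        y += (v1 + v2) * dt_s
        t += dt_s
    return x, y

# A particle feeling stronger Stokes drift (mesoplastic) travels farther.
micro = track(0.0, 0.0, hours=24, windage=0.5)
meso = track(0.0, 0.0, hours=24, windage=2.0)
```

Swapping the invented velocity functions for gridded reanalysis currents and wave-model output, and adding a stochastic diffusion term, would bring the sketch closer to an operational particle-tracking model.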
Modeling tidal hydrodynamics of San Diego Bay, California
Wang, P.-F.; Cheng, R.T.; Richter, K.; Gross, E.S.; Sutton, D.; Gartner, J.W.
1998-01-01
In 1983, current data were collected by the National Oceanic and Atmospheric Administration using mechanical current meters. During 1992 through 1996, acoustic Doppler current profilers as well as mechanical current meters and tide gauges were used. These measurements not only document tides and tidal currents in San Diego Bay, but also provide independent data sets for model calibration and verification. A high resolution (100-m grid), depth-averaged, numerical hydrodynamic model has been implemented for San Diego Bay to describe essential tidal hydrodynamic processes in the bay. The model is calibrated using the 1983 data set and verified using the more recent 1992-1996 data. Discrepancies between model predictions and field data in both model calibration and verification are on the order of the magnitude of uncertainties in the field data. The calibrated and verified numerical model has been used to quantify residence time and dilution and flushing of contaminant effluent into San Diego Bay. Furthermore, the numerical model has become an important research tool in ongoing hydrodynamic and water quality studies and in guiding future field data collection programs.
Morphodynamic Model of Submarine Canyon Incision by Sandblasting
NASA Astrophysics Data System (ADS)
Zhang, L.; Parker, G.; Izumi, N.; Cartigny, M.; Li, T.; Wang, G.
2017-12-01
Submarine canyons are carved by turbidity currents under the deep sea. As opposed to subaerial canyons, the relevant processes are not easy to observe directly. Turbidity currents are bottom-hugging sediment gravity flows that can incise or deposit on the seafloor to create submarine canyons or fans. The triggers of turbidity currents can be storms, edge waves, internal waves, canyon wall sapping, delta failure, breaching and hyperpycnal flows. The formation and evolution mechanisms of submarine canyons are similar to those of subaerial canyons, but have substantial differences. For example, sandblasting, rather than wear due to colliding gravel clasts, is more likely to be the mechanism of bedrock incision. Submarine canyons incise downward, and often develop meander bends and levees within the canyon, so defining "fairways". Here we propose a simple model for canyon incision. The starting point of our model is the Macro Roughness Saltation Abrasion Alluviation model of Zhang et al. [2015], designed for bedrock incision by gravel clasts in mixed bedrock-alluvial rivers. We adapt this formulation to consider sandblasting as a means of wear. We use a layer-averaged model for turbidity current dynamics. The current contains a mixture of mud, which helps drive the flow but which does not cause incision, and sand, which is the agent of incision. We show that the model can successfully simulate channel downcutting, and indeed illustrate the early formation of net incisional cyclic steps, i.e. upstream-migrating undulations on the bed associated with transcritical (in the Froude sense) flow. These steps can be expected to abet the process of incision.
Elson, Edward
2009-01-01
A theory of control of cellular proliferation and differentiation in the early development of metazoan systems, postulating a system of electrical controls "parallel" to the processes of molecular biochemistry, is presented. It is argued that the processes of molecular biochemistry alone cannot explain how a developing organism defies a stochastic universe. The demonstration of current flow (charge transfer) along the long axis of DNA through the base-pairs (the "pi-way") in vitro raises the question of whether nature may employ such current flows for biological purposes. Such currents might be too small to be accessible to direct measurement in vivo but conduction has been measured in vitro, and the methods might well be extended to living systems. This has not been done because there is no reasonable model which could stimulate experimentation. We suggest several related, but detachable or independent, models for the biological utility of charge transfer, whose scope admittedly outruns current concepts of thinking about organization, growth, and development in eukaryotic, metazoan systems. The ideas are related to explanations proposed to explain the effects demonstrated on tumors and normal tissues described in Article I (this issue). Microscopic and mesoscopic potential fields and currents are well known at sub-cellular, cellular, and organ systems levels. Not only are such phenomena associated with internal cellular membranes in bioenergetics and information flow, but remarkable long-range fields over tissue interfaces and organs appear to play a role in embryonic development (Nuccitelli, 1992). The origin of the fields remains unclear and is the subject of active investigation.
We are proposing that similar processes could play a vital role at a "sub-microscopic level," at the level of the chromosomes themselves, and could play a role in organizing and directing fundamental processes of growth and development, in parallel with the more discernible fields and currents described.
NASA Astrophysics Data System (ADS)
Anderson, Brian J.; Korth, Haje; Welling, Daniel T.; Merkin, Viacheslav G.; Wiltberger, Michael J.; Raeder, Joachim; Barnes, Robin J.; Waters, Colin L.; Pulkkinen, Antti A.; Rastaetter, Lutz
2017-02-01
Two of the geomagnetic storms for the Space Weather Prediction Center Geospace Environment Modeling challenge occurred after data were first acquired by the Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE). We compare Birkeland currents from AMPERE with predictions from four models for the 4-5 April 2010 and 5-6 August 2011 storms. The four models are the Weimer (2005b) field-aligned current statistical model, the Lyon-Fedder-Mobarry magnetohydrodynamic (MHD) simulation, the Open Global Geospace Circulation Model MHD simulation, and the Space Weather Modeling Framework MHD simulation. The MHD simulations were run as described in Pulkkinen et al. (2013) and the results obtained from the Community Coordinated Modeling Center. The total radial Birkeland current, ITotal, and the distribution of radial current density, Jr, for all models are compared with AMPERE results. While the total currents are well correlated, the quantitative agreement varies considerably. The Jr distributions reveal discrepancies between the models and observations related to the latitude distribution, morphologies, and lack of nightside current systems in the models. The results motivate enhancing the simulations first by increasing the simulation resolution and then by examining the relative merits of implementing more sophisticated ionospheric conductance models, including ionospheric outflows or other omitted physical processes. Some aspects of the system, including substorm timing and location, may remain challenging to simulate, implying a continuing need for real-time specification.
NASA Astrophysics Data System (ADS)
Jiang, Dongdong; Du, Jinmei; Gu, Yan; Feng, Yujun
2012-05-01
By assuming a relaxation process for depolarization associated with the ferroelectric (FE) to antiferroelectric (AFE) phase transition in Pb0.99Nb0.02(Zr0.95Ti0.05)0.98O3 ferroelectric ceramics under shock wave compression, we build a new model for the depoling current, which is different from both the traditional constant current source (CCS) model and the phase transition kinetics (PTK) model. The characteristic relaxation time and new-equilibrated polarization are dependent on both the shock pressure and electric field. After incorporating a Maxwell equation, the relaxation model developed applies to all the depoling currents under both short-circuit and high-impedance conditions. Influences of shock pressure, load resistance, dielectric property, and electrical conductivity on the depoling current are also discussed. The relaxation model gives a good description of the suppressing effect of the self-generated electric field on the FE-to-AFE phase transition at low shock pressures, which cannot be described by the traditional models. After incorporating a time- and electric-field-dependent repolarization, this model predicts that the high-impedance current eventually becomes higher than the short-circuit current, which is consistent with the experimental results in the literature. Finally, we make the comparison between our relaxation model and the traditional CCS model and PTK model.
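A minimal numerical sketch of the relaxation idea (with invented parameter values, not those of the paper) integrates the polarization toward its new equilibrium and reads off the short-circuit current as the rate of bound-charge release:

```python
# Sketch of a relaxation-type depolarization model: the remanent polarization P
# relaxes toward a new equilibrium P_eq behind the shock front, and the external
# (short-circuit) current is the rate of bound-charge release, -A * dP/dt.
# The time constant, polarization values, and electrode area are illustrative.

def depoling_current(p0=0.30, p_eq=0.0, tau=1e-7, area=1e-4, dt=1e-9, steps=500):
    """Integrate dP/dt = -(P - P_eq)/tau (forward Euler); return currents (A)."""
    p = p0  # remanent polarization, C/m^2
    currents = []
    for _ in range(steps):
        dp_dt = -(p - p_eq) / tau
        currents.append(-area * dp_dt)  # charge released per unit time
        p += dp_dt * dt
    return currents

i = depoling_current()
# The current starts near A*P0/tau and decays roughly exponentially.
```

Making the equilibrium polarization and relaxation time functions of shock pressure and of the self-generated electric field, as the paper does, is what lets such a model capture the suppression of the FE-to-AFE transition at low pressures.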
New Approaches to Parameterizing Convection
NASA Technical Reports Server (NTRS)
Randall, David A.; Lappen, Cara-Lyn
1999-01-01
Many general circulation models (GCMs) currently use separate schemes for planetary boundary layer (PBL) processes, shallow and deep cumulus (Cu) convection, and stratiform clouds. The conventional distinctions among these processes are somewhat arbitrary. For example, in the stratocumulus-to-cumulus transition region, stratocumulus clouds break up into a combination of shallow cumulus and broken stratocumulus. Shallow cumulus clouds may be considered to reside completely within the PBL, or they may be regarded as starting in the PBL but terminating above it. Deeper cumulus clouds often originate within the PBL but can also originate aloft. To the extent that our models separately parameterize physical processes which interact strongly on small space and time scales, the currently fashionable practice of modularization may be doing more harm than good.
Del Rio-Chanona, Ehecatl A; Liu, Jiao; Wagner, Jonathan L; Zhang, Dongda; Meng, Yingying; Xue, Song; Shah, Nilay
2018-02-01
Biodiesel produced from microalgae has been extensively studied due to its potentially outstanding advantages over traditional transportation fuels. In order to facilitate its industrialization and improve the process profitability, it is vital to construct highly accurate models capable of predicting the complex behavior of the investigated biosystem for process optimization and control, which forms the current research goal. Three original contributions are described in this paper. Firstly, a dynamic model is constructed to simulate the complicated effect of light intensity, nutrient supply and light attenuation on both biomass growth and biolipid production. Secondly, chlorophyll fluorescence, an instantly measurable variable and indicator of photosynthetic activity, is embedded into the model to monitor and update model accuracy especially for the purpose of future process optimal control, and its correlation with intracellular nitrogen content is quantified, which to the best of our knowledge has not been addressed before. Thirdly, a thorough experimental verification is conducted under different scenarios including both continuous illumination and light/dark cycle conditions to test the model's predictive capability particularly for long-term operation, and it is concluded that the current model is characterized by a high level of predictive capability. Based on the model, the optimal light intensity for algal biomass growth and lipid synthesis is estimated. This work, therefore, paves the way for future process design and real-time optimization. © 2017 Wiley Periodicals, Inc.
Wiemuth, M; Junger, D; Leitritz, M A; Neumann, J; Neumuth, T; Burgert, O
2017-08-01
Medical processes can be modeled using different methods and notations. Currently used modeling systems like Business Process Model and Notation (BPMN) are not capable of describing the highly flexible and variable medical processes in sufficient detail. We combined two modeling systems, Business Process Management (BPM) and Adaptive Case Management (ACM), to be able to model non-deterministic medical processes. We used the new standards Case Management Model and Notation (CMMN) and Decision Model and Notation (DMN). First, we explain how CMMN, DMN and BPMN could be used to model non-deterministic medical processes. We applied this methodology to model 79 cataract operations provided by University Hospital Leipzig, Germany, and four cataract operations provided by University Eye Hospital Tuebingen, Germany. Our model consists of 85 tasks and about 20 decisions in BPMN. We were able to expand the system with more complex situations that might appear during an intervention. An effective modeling of the cataract intervention is possible using the combination of BPM and ACM. The combination gives the possibility to depict complex processes with complex decisions. This combination allows a significant advantage for modeling perioperative processes.
Process Correlation Analysis Model for Process Improvement Identification
Park, Sooyong
2014-01-01
Software process improvement aims at improving the development process of software systems. It is initiated by process assessment identifying strengths and weaknesses and based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used throughout the process of software process improvement as the base. CMMI defines a set of process areas involved in software development and what to be carried out in process areas in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of software development process. However, in the current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to significant efforts and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data. PMID:24977170
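The core idea can be sketched with a toy example (hypothetical practice names and scores, not the paper's empirical data): compute pairwise correlations of assessment scores and flag strongly co-varying process elements so an improvement plan can treat them together.

```python
# Toy sketch of process correlation analysis: given assessment scores for
# several process elements across projects, flag pairs whose scores co-vary
# strongly. Element names and scores below are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

scores = {  # hypothetical practice ratings from four assessments
    "REQM.SP1.1": [2, 3, 1, 4],
    "PP.SP2.1":   [2, 3, 1, 4],   # moves with REQM.SP1.1
    "CM.SP1.2":   [4, 1, 3, 2],
}

def correlated_pairs(scores, threshold=0.8):
    """Return alphabetically ordered element pairs with |r| >= threshold."""
    names = sorted(scores)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if abs(pearson(scores[a], scores[b])) >= threshold]

pairs = correlated_pairs(scores)
```

In practice the correlations would be derived from CMMI-based assessment findings and empirical improvement data rather than raw scores, but the output, pairs of process elements to plan together, is the same kind of artifact.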
Growth and characterization of high current density, high-speed InAs/AlSb resonant tunneling diodes
NASA Technical Reports Server (NTRS)
Soderstrom, J. R.; Brown, E. R.; Parker, C. D.; Mahoney, L. J.; Yao, J. Y.
1991-01-01
InAs/AlSb double-barrier resonant tunneling diodes with peak current densities up to 370,000 A/sq cm and high peak-to-valley current ratios of 3.2 at room temperature have been fabricated. The peak current density is well-explained by a stationary-state transport model with the two-band envelope function approximation. The valley current density predicted by this model is less than the experimental value by a factor that is typical of the discrepancy found in other double-barrier structures. It is concluded that threading dislocations are largely inactive in the resonant tunneling process.
Extending BPM Environments of Your Choice with Performance Related Decision Support
NASA Astrophysics Data System (ADS)
Fritzsche, Mathias; Picht, Michael; Gilani, Wasif; Spence, Ivor; Brown, John; Kilpatrick, Peter
What-if simulations have been identified as one solution for business performance related decision support. Such support is especially useful in cases where it can be automatically generated out of Business Process Management (BPM) Environments from the existing business process models and performance parameters monitored from the executed business process instances. Currently, some of the available BPM Environments offer basic-level performance prediction capabilities. However, these functionalities are normally too limited to be generally useful for performance related decision support at business process level. In this paper, an approach is presented which allows the non-intrusive integration of sophisticated tooling for what-if simulations, analytic performance prediction tools, process optimizations or a combination of such solutions into already existing BPM environments. The approach abstracts from process modelling techniques, which enables automatic decision support for processes spanning numerous BPM Environments. For instance, this enables end-to-end decision support for composite processes modelled with the Business Process Modelling Notation (BPMN) on top of existing Enterprise Resource Planning (ERP) processes modelled with proprietary languages.
General Information: Chapman Conference on Magnetospheric Current Systems
NASA Technical Reports Server (NTRS)
Spicer, Daniel S.; Curtis, Steven
1999-01-01
The goal of this conference is to address recent achievements of observational, computational, theoretical, and modeling studies, and to foster communication among people working with different approaches. Electric current systems play an important role in the energetics of the magnetosphere. This conference will target outstanding issues related to magnetospheric current systems, placing its emphasis on interregional processes and driving mechanisms of current systems.
Dunham, Kylee; Grand, James B.
2016-10-11
The Alaskan breeding population of Steller’s eiders (Polysticta stelleri) was listed as threatened under the Endangered Species Act in 1997 in response to perceived declines in abundance throughout their breeding and nesting range. Aerial surveys suggest the breeding population is small and highly variable in number, with zero birds counted in 5 of the last 25 years. Research was conducted to evaluate competing population process models of Alaskan-breeding Steller’s eiders through comparison of model projections to aerial survey data. To evaluate model efficacy and estimate demographic parameters, a Bayesian state-space modeling framework was used and each model was fit to counts from the annual aerial surveys, using sequential importance sampling and resampling. The results strongly support that the Alaskan breeding population experiences population level nonbreeding events and is open to exchange with the larger Russian-Pacific breeding population. Current recovery criteria for the Alaskan breeding population rely heavily on the ability to estimate population viability. The results of this investigation provide an informative model of the population process that can be used to examine future population states and assess the population in terms of the current recovery and reclassification criteria.
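The sequential importance sampling and resampling step can be illustrated with a deliberately simplified particle filter. The dynamics, noise levels, and counts below are invented and far cruder than the population process models evaluated in the study:

```python
import random, math

# Toy sequential importance sampling/resampling (SIR) filter fitting a
# stochastic population model to aerial counts. All parameters and counts
# are hypothetical, for illustration only.

random.seed(1)

def simulate_step(n, growth=1.0, sd=0.2):
    """One year of log-normal population change."""
    return n * math.exp(random.gauss(math.log(growth), sd))

def likelihood(count, n, obs_sd=0.3):
    """Log-normal observation weight for a positive aerial count."""
    if n <= 0 or count <= 0:
        return 1e-12
    z = (math.log(count) - math.log(n)) / obs_sd
    return math.exp(-0.5 * z * z)

def particle_filter(counts, n_particles=1000, n0=200.0):
    """Return the filtered population estimate after each survey."""
    particles = [n0 * math.exp(random.gauss(0, 0.5)) for _ in range(n_particles)]
    estimates = []
    for count in counts:
        particles = [simulate_step(p) for p in particles]            # propagate
        weights = [likelihood(count, p) for p in particles]          # weight
        particles = random.choices(particles, weights=weights,
                                   k=n_particles)                    # resample
        estimates.append(sum(particles) / n_particles)
    return estimates

est = particle_filter([180.0, 210.0, 150.0, 190.0])
```

Evaluating competing population process models would then amount to running such a filter for each candidate model and comparing the resulting fits to the survey counts.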
NASA Astrophysics Data System (ADS)
Lion, Alexander; Mittermeier, Christoph; Johlitz, Michael
2017-09-01
A novel approach to represent the glass transition is proposed. It is based on a physically motivated extension of the linear viscoelastic Poynting-Thomson model. In addition to a temperature-dependent damping element and two linear springs, two thermal strain elements are introduced. In order to take the process dependence of the specific heat into account and to model its characteristic behaviour below and above the glass transition, the Helmholtz free energy contains an additional contribution which depends on the temperature history and on the current temperature. The model describes the process-dependent volumetric and caloric behaviour of glass-forming materials, and defines a functional relationship between pressure, volumetric strain, and temperature. If a model for the isochoric part of the material behaviour is already available, for example a model of finite viscoelasticity, the caloric and volumetric behaviour can be represented with the current approach. The proposed model allows computing the isobaric and isochoric heat capacities in closed form. The difference c_p -c_v is process-dependent and tends towards the classical expression in the glassy and equilibrium ranges. Simulations and theoretical studies demonstrate the physical significance of the model.
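For reference, the isothermal linear Poynting-Thomson (standard linear solid) model that serves as the starting point, a spring $E_0$ in series with a Kelvin element ($E_1$ in parallel with a dashpot $\eta$), can be written in its usual form; the symbols here are generic, not the paper's notation:

```latex
\sigma + \frac{\eta}{E_0 + E_1}\,\dot{\sigma}
  = \frac{E_0 E_1}{E_0 + E_1}\,\varepsilon
  + \frac{E_0\,\eta}{E_0 + E_1}\,\dot{\varepsilon}
```

The extension described above adds thermal strain elements and a history-dependent free energy contribution on top of this linear skeleton.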
NASA Technical Reports Server (NTRS)
Amundsen, R. M.; Feldhaus, W. S.; Little, A. D.; Mitchum, M. V.
1995-01-01
Electronic integration of design and analysis processes was achieved and refined at Langley Research Center (LaRC) during the development of an optical bench for a laser-based aerospace experiment. Mechanical design has been integrated with thermal, structural and optical analyses. Electronic import of the model geometry eliminates the repetitive steps of geometry input to develop each analysis model, leading to faster and more accurate analyses. Guidelines for integrated model development are given. This integrated analysis process has been built around software that was already in use by designers and analysts at LaRC. The process as currently implemented uses Pro/Engineer for design, Pro/Manufacturing for fabrication, PATRAN for solid modeling, NASTRAN for structural analysis, SINDA-85 and P/Thermal for thermal analysis, and Code V for optical analysis. Currently, the only analysis model to be built manually is the Code V model; all others can be imported from the Pro/E geometry. The translator from PATRAN results to Code V optical analysis (PATCOD) was developed and tested at LaRC. Directions for use of the translator with other models are given.
Grossberg, Stephen
2009-01-01
An intimate link exists between the predictive and learning processes in the brain. Perceptual/cognitive and spatial/motor processes use complementary predictive mechanisms to learn, recognize, attend and plan about objects in the world, determine their current value, and act upon them. Recent neural models clarify these mechanisms and how they interact in cortical and subcortical brain regions. The present paper reviews and synthesizes data and models of these processes, and outlines a unified theory of predictive brain processing. PMID:19528003
Qumquad: a UML-based approach for remodeling of legacy systems in health care.
Garde, Sebastian; Knaup, Petra; Herold, Ralf
2003-07-01
Health care information systems still comprise legacy systems to a certain extent. For reengineering legacy systems a thorough remodeling is indispensable. Current modeling techniques like the Unified Modeling Language (UML) do not offer a systematic and comprehensive process-oriented method for remodeling activities. We developed a systematic method for remodeling legacy systems in health care called Qumquad. Qumquad consists of three major steps: (i) modeling the actual state of the application system, (ii) systematic identification of weak points in this model and (iii) development of a target concept for the reimplementation considering the identified weak points. We applied Qumquad for remodeling a documentation and therapy planning system for pediatric oncology (DOSPO). As a result of our remodeling activities we regained an abstract model of the system, an analysis of the current weak points of DOSPO and possible (partly alternative) solutions to overcome the weak points. Qumquad proved to be very helpful in the reengineering process of DOSPO since we now have at our disposal a comprehensive model for the reimplementation of DOSPO that current users of the system agree on. Qumquad can easily be applied to other reengineering projects in health care.
Atmospheric, Climatic, and Environmental Research
NASA Technical Reports Server (NTRS)
Broecker, Wallace S.; Gornitz, Vivien M.
1994-01-01
The climate and atmospheric modeling project involves analysis of basic climate processes, with special emphasis on studies of the atmospheric CO2 and H2O source/sink budgets and studies of the climatic role of CO2, trace gases and aerosols. These studies are carried out based in part on the use of simplified climate models and climate process models developed at GISS. The principal models currently employed are a variable resolution 3-D general circulation model (GCM) and an associated "tracer" model which simulates the advection of trace constituents using the winds generated by the GCM.
Inhibition: Mental Control Process or Mental Resource?
ERIC Educational Resources Information Center
Im-Bolter, Nancie; Johnson, Janice; Ling, Daphne; Pascual-Leone, Juan
2015-01-01
The current study tested 2 models of inhibition in 45 children with language impairment and 45 children with normally developing language; children were aged 7 to 12 years. Of interest was whether a model of inhibition as a mental-control process (i.e., executive function) or as a mental resource would more accurately reflect the relations among…
A Structural Equation Model of the Writing Process in Typically-Developing Sixth Grade Children
ERIC Educational Resources Information Center
Koutsoftas, Anthony D.; Gray, Shelley
2013-01-01
The purpose of this study was to evaluate how sixth grade children planned, translated, and revised written narrative stories using a task reflecting current instructional and assessment practices. A modified version of the Hayes and Flower (1980) writing process model was used as the theoretical framework for the study. Two hundred one…
Wen J. Wang; Hong S. He; Frank R. Thompson; Martin A. Spetich; Jacob S. Fraser
2018-01-01
Demographic processes (fecundity, dispersal, colonization, growth, and mortality) and their interactions with environmental changes are not well represented in current climate-distribution models (e.g., niche and biophysical process models) and constitute a large uncertainty in projections of future tree species distribution shifts. We investigate how species biological...
Hénault-Ethier, Louise; Martin, Jean-Philippe; Housset, Johann
2017-08-01
A dynamic systems model of organic waste management for the province of Quebec, Canada, was built. Six distinct modules taking into account social, economic and environmental issues and perspectives were included. Five scenarios were designed and tested to identify the potential consequences of different governmental and demographic combinations of decisions over time. Among these scenarios, one examines Quebec's organic waste management policy (2011-2015), while the other scenarios represent business as usual or emphasize ecology, economy or social benefits in the decision-making process. Model outputs suggest that the current governmental policy should yield favorable environmental benefits, energy production and waste valorization. The projections stemming from the current policy action plan approach the benefits gained by another scenario emphasizing the environmental aspects in the decision-making process. As expected, without the current policy and action plan in place (business as usual), little improvement is expected in waste management compared to current trends, and strictly emphasizing economic imperatives does not favor sustainable organic waste management. Copyright © 2017. Published by Elsevier Ltd.
Karimi, Leila; Ghassemi, Abbas
2016-07-01
Among the different technologies developed for desalination, the electrodialysis/electrodialysis reversal (ED/EDR) process is one of the most promising for treating brackish water with low salinity when there is a high risk of scaling. Multiple researchers have investigated ED/EDR to optimize the process, determine the effects of operating parameters, and develop theoretical/empirical models. Previously published empirical/theoretical models have evaluated the effect of the hydraulic conditions of the ED/EDR on the limiting current density using dimensionless numbers. The reason for previous studies' emphasis on limiting current density is twofold: 1) to maximize ion removal, most ED/EDR systems are operated close to limiting current conditions if there is no scaling potential in the concentrate chamber due to a high concentration of less-soluble salts; and 2) for modeling the ED/EDR system with dimensionless numbers, it is more accurate and convenient to use limiting current density, where the boundary layer's characteristics are known at constant electrical conditions. To improve knowledge of ED/EDR systems, ED/EDR models should also be developed for the Ohmic region, where operation reduces energy consumption, facilitates targeted ion removal, and prolongs membrane life compared to limiting current conditions. In this paper, theoretical/empirical models were developed for ED/EDR performance in a wide range of operating conditions. The presented ion removal and selectivity models were developed for the removal of monovalent ions and divalent ions utilizing the dominant dimensionless numbers obtained from laboratory-scale electrodialysis experiments. At any system scale, these models can predict ED/EDR performance in terms of monovalent and divalent ion removal. Copyright © 2016 Elsevier Ltd. All rights reserved.
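The boundary-layer reasoning behind the dimensionless-number approach can be sketched with a generic limiting-current-density correlation of the form common in the ED literature. This is a minimal illustration, not the authors' fitted model; the correlation form and every coefficient below are assumptions.

```python
# Hedged sketch: limiting current density from a Sherwood-number correlation
# Sh = a * Re^b * Sc^(1/3); the coefficients a, b are illustrative assumptions.

F = 96485.0          # Faraday constant, C/mol

def limiting_current_density(velocity, conc_mol_m3, d_h=1e-3,
                             diffusivity=1.6e-9, nu=1e-6,
                             z=1, a=0.29, b=0.5):
    """Estimate i_lim (A/m^2) from flow and solution properties."""
    Re = velocity * d_h / nu                 # Reynolds number
    Sc = nu / diffusivity                    # Schmidt number
    Sh = a * Re**b * Sc**(1.0 / 3.0)         # Sherwood correlation (assumed form)
    k_m = Sh * diffusivity / d_h             # mass-transfer coefficient, m/s
    return z * F * k_m * conc_mol_m3

# Faster flow thins the boundary layer and raises the limiting current:
i_slow = limiting_current_density(0.02, 10.0)
i_fast = limiting_current_density(0.10, 10.0)
```

The same correlation structure, refit in the Ohmic region, is the kind of model the abstract argues for.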
Integrated Main Propulsion System Performance Reconstruction Process/Models
NASA Technical Reports Server (NTRS)
Lopez, Eduardo; Elliott, Katie; Snell, Steven; Evans, Michael
2013-01-01
The Integrated Main Propulsion System (MPS) Performance Reconstruction process provides the MPS post-flight data files needed for post-flight reporting to the project integration management and key customers to verify flight performance. This process/model was used as the baseline for the currently ongoing Space Launch System (SLS) work. The process utilizes several methodologies, including multiple software programs, to model integrated propulsion system performance through space shuttle ascent. It is used to evaluate integrated propulsion systems, including propellant tank, feed system, rocket engine, and pressurization system performance throughout ascent based on flight pressure and temperature data. The latest revision incorporates new methods based on main engine power balance model updates to model higher mixture ratio operation at lower engine power levels.
Improving measurement technology for the design of sustainable cities
NASA Astrophysics Data System (ADS)
Pardyjak, Eric R.; Stoll, Rob
2017-09-01
This review identifies and discusses measurement technology gaps that are currently preventing major science leaps from being realized in the study of urban environmental transport processes. These scientific advances are necessary to better understand the links between atmospheric transport processes in the urban environment, human activities, and potential management strategies. We propose that with various improved and targeted measurements, it will be possible to provide technically sound guidance to policy and decision makers for the design of sustainable cities. This review focuses on full-scale in situ and remotely sensed measurements of atmospheric winds, temperature, and humidity in cities and links measurements to current modeling and simulation needs. A key conclusion of this review is that there is a need for urban-specific measurement techniques: measurements of highly resolved three-dimensional fields at sampling frequencies high enough to capture small-scale turbulence processes, yet covering spatial extents large enough to simultaneously capture key features of urban heterogeneity and boundary layer processes, while also supporting the validation of current and emerging modeling capabilities.
The need for conducting forensic analysis of decommissioned bridges.
DOT National Transportation Integrated Search
2014-01-01
A limiting factor in current bridge management programs is a lack of detailed knowledge of bridge deterioration : mechanisms and processes. The current state of the art is to predict future condition using statistical forecasting : models based upon ...
A unified engineering model of the first stroke in downward negative lightning
NASA Astrophysics Data System (ADS)
Nag, Amitabh; Rakov, Vladimir A.
2016-03-01
Each stroke in a negative cloud-to-ground lightning flash is composed of downward leader and upward return stroke processes, which are usually modeled individually. The first stroke leader is stepped and starts with preliminary breakdown (PB) which is often viewed as a separate process. We present the first unified engineering model for computing the electric field produced by a sequence of PB, stepped leader, and return stroke processes, serving to transport negative charge to ground. We assume that a negatively charged channel extends downward in a stepped fashion during both the PB and leader stages. Each step involves a current wave that propagates upward along the newly formed channel section. Once the leader attaches to ground, an upward propagating return stroke neutralizes the charge deposited along the channel. Model-predicted electric fields are in reasonably good agreement with simultaneous measurements at both near (hundreds of meters, electrostatic field component is dominant) and far (tens of kilometers, radiation field component is dominant) distances from the lightning channel. Relations between the features of computed electric field waveforms and model input parameters are examined. It appears that peak currents associated with PB pulses are similar to return stroke peak currents, and the observed variation of electric radiation field peaks produced by leader steps at different heights above ground is influenced by the ground corona space charge.
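For context, a minimal sketch of how engineering models of this family turn a channel-base current into a distant radiation field. It uses the classic transmission-line (TL) model and an assumed double-exponential current waveform, not the unified PB/leader/return-stroke model of the paper; all parameter values are illustrative.

```python
import math

EPS0 = 8.854e-12          # vacuum permittivity, F/m
C = 3.0e8                 # speed of light, m/s

def channel_base_current(t, I0=30e3, tau_rise=2e-6, tau_fall=50e-6):
    """Illustrative double-exponential return-stroke current (assumed shape)."""
    if t < 0:
        return 0.0
    return I0 * (math.exp(-t / tau_fall) - math.exp(-t / tau_rise))

def tl_radiation_field(t_after_arrival, distance, v=1.5e8):
    """Vertical radiation field at ground in the transmission-line model:
    E = v * I / (2 * pi * eps0 * c^2 * D), with v the return-stroke speed."""
    return (v * channel_base_current(t_after_arrival)
            / (2 * math.pi * EPS0 * C**2 * distance))

times = [i * 1e-7 for i in range(600)]       # 0-60 us after field arrival
peak_50km = max(tl_radiation_field(t, 50e3) for t in times)
peak_100km = max(tl_radiation_field(t, 100e3) for t in times)
# The radiation term falls off as 1/D: doubling D halves the peak field.
```

The unified model in the abstract replaces the single current wave with the full PB, stepped-leader, and return-stroke sequence, but the field computation rests on the same kind of current-to-field kernel.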
Attachment in Middle Childhood: Associations with Information Processing
ERIC Educational Resources Information Center
Zimmermann, Peter; Iwanski, Alexandra
2015-01-01
Attachment theory suggests that internal working models of self and significant others influence adjustment during development by controlling information processing and self-regulation. We provide a conceptual overview on possible mechanisms linking attachment and information processing and review the current literature in middle childhood.…
Modeling pedogenesis at multimillennium timescales: achievements and challenges
NASA Astrophysics Data System (ADS)
Finke, Peter
2013-04-01
The modeling of soil genesis is a particular case of modeling vadose zone processes, because of the variety in processes to be considered and its large (multimillennium) temporal extent. The particular relevance of pedogenetic modeling for non-pedologists is that it involves the soil compartment carbon cycle. As most of these processes are driven by water flow, modeling hydrological processes is an inevitable component of (non-empirical) modeling of soil genesis. One particular challenge is that both slow and fast pedogenetic processes need to be accounted for. This overview summarizes the state of the art in this new branch of pedology, achievements made so far and current challenges, and is largely based on one particular pedon-scale soil evolution model, SoilGen. SoilGen is essentially a pedon-scale solute transport model that simulates unsaturated water flow, chemical equilibria of various species with calcite, gypsum and gibbsite as precipitated phases, an exchange phase of Na, K, Ca, Mg, H and Al on clay and organic matter and a solution phase comprising various cations and anions. Additionally, a number of pedogenetic processes are simulated: C-cycling, chemical weathering of primary minerals, physical weathering of soil particles, bioturbation and clay migration. The model was applied to a climosequence, a chronosequence and a toposequence, and as part of a spatio-temporal soilscape reconstruction. Furthermore, the clay migration component has been calibrated and tested and so has the organic matter decomposition component. Quantitative comparisons between simulations and measurements resulted in the identification of possible improvements in the model and associated inputs, identified problems to be solved and identified the current application domain.
Major challenges for process-based modeling in the vadose zone at multimillennium timescales can be divided into 4 groups: (i) Reconstruction of initial and boundary conditions; (ii) Accounting for evolution in soil properties such as soil texture and soil structure; (iii) Developing adequate calibration techniques; (iv) Maximizing computational efficiency. Reconstruction of initial and boundary conditions requires multidisciplinary inputs either derived from proxies or from combined vegetation and climate development models. So far, the combination of pedogenetic models and combined vegetation/climate models is rare. At pedogenetic timescales, soil characteristics that are usually considered constant become dynamic: texture, OC, bulk density, precipitated salts, minerals, etc. Interactions and feedbacks between these characteristics and associated hydrological properties need attention, e.g. via pedotransfer functions. The same can be stated for the development of soil structure and associated preferential flow, which is still a challenge. At multimillennium temporal extents, the combination of long model runtime and the fact that most calibration data represent the current stage of soil development requires a special approach. Model performance can be evaluated at various timescales using unconventional proxies. Finally, recognizing the fact that matter redistribution at the landscape scale is of paramount importance at multimillennium extent requires the formulation of computationally efficient 3D models. This will surely involve analysis of the tradeoff between process detail, model accuracy, required boundary inputs and model runtime.
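As a toy illustration of the C-cycling component mentioned above, first-order decay of two soil-carbon pools can be sketched. The pool structure, rate constants, and litter input below are illustrative assumptions, not SoilGen's calibrated values.

```python
# Hedged sketch: dC/dt = input - k*C for a fast and a slow pool (kg C/m^2),
# the simplest form of a soil C-cycling submodel; all numbers are assumed.

def carbon_pools(years, input_kg=0.2, k_fast=0.8, k_slow=0.01,
                 split=0.7, dt=0.1):
    """Forward-Euler integration of two independent first-order pools."""
    fast = slow = 0.0
    steps = int(years / dt)
    for _ in range(steps):
        fast += (split * input_kg - k_fast * fast) * dt
        slow += ((1 - split) * input_kg - k_slow * slow) * dt
    return fast, slow

fast, slow = carbon_pools(1000)
# Each pool approaches its steady state input/k; the slow pool needs
# centuries to get there, which is why multimillennium runs matter.
```

Even this caricature shows the timescale separation (years versus centuries) that makes slow and fast pedogenetic processes hard to treat in one model.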
Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A
2010-05-01
Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process but its role is reliant on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable but little attention has been paid to processes in model development. Numerous error avoidance/identification strategies could be adopted but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes. The study aims to describe the current comprehension of errors in the HTA modelling community and generate a taxonomy of model errors. Four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand current processes applied by the technology assessment community for avoiding errors in development, debugging and critically appraising models for errors; (3) use HTA modellers' perceptions of model errors with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. It also describes the model development process as perceived by practitioners working within the HTA community. A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors. All qualitative data were analysed using the Framework approach. 
Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for identifying errors; and barriers and facilitators. There was no common language in the discussion of modelling errors and there was inconsistency in the perceived boundaries of what constitutes an error. Asked about the definition of model error, there was a tendency for interviewees to exclude matters of judgement from being errors and focus on 'slips' and 'lapses', but discussion of slips and lapses comprised less than 20% of the discussion on types of errors. Interviewees devoted 70% of the discussion to softer elements of the process of defining the decision question and conceptual modelling, mostly the realms of judgement, skills, experience and training. The original focus concerned model errors, but it may be more useful to refer to modelling risks. Several interviewees discussed concepts of validation and verification, with notable consistency in interpretation: verification meaning the process of ensuring that the computer model correctly implemented the intended model, whereas validation means the process of ensuring that a model is fit for purpose. Methodological literature on verification and validation of models makes reference to the Hermeneutic philosophical position, highlighting that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Interviewees demonstrated examples of all major error types identified in the literature: errors in the description of the decision problem, in model structure, in use of evidence, in implementation of the model, in operation of the model, and in presentation and understanding of results. 
The HTA error classifications were compared against existing classifications of model errors in the literature. A range of techniques and processes are currently used to avoid errors in HTA models: engaging with clinical experts, clients and decision-makers to ensure mutual understanding, producing written documentation of the proposed model, explicit conceptual modelling, stepping through skeleton models with experts, ensuring transparency in reporting, adopting standard housekeeping techniques, and ensuring that those parties involved in the model development process have sufficient and relevant training. Clarity and mutual understanding were identified as key issues. However, their current implementation is not framed within an overall strategy for structuring complex problems. Some of the questioning may have biased interviewees' responses, but as all interviewees were represented in the analysis, no rebalancing of the report was deemed necessary. A potential weakness of the literature review was its focus on spreadsheet and program development rather than specifically on model development. It should also be noted that the identified literature concerning programming errors was very narrow despite broad searches being undertaken. Published definitions of overall model validity, comprising conceptual model validation, verification of the computer model, and operational validity of the use of the model in addressing the real-world problem, are consistent with the views expressed by the HTA community and are therefore recommended as the basis for further discussions of model credibility. Such discussions should focus on risks, including errors of implementation, errors in matters of judgement and violations. Discussions of modelling risks should reflect the potentially complex network of cognitive breakdowns that lead to errors in models, and existing research on the cognitive basis of human error should be included in an examination of modelling errors.
There is a need to develop a better understanding of the skills requirements for the development, operation and use of HTA models. Interaction between modeller and client in developing mutual understanding of a model establishes that model's significance and its warranty. This highlights that model credibility is the central concern of decision-makers using models so it is crucial that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Recommendations for future research would be studies of verification and validation; the model development process; and identification of modifications to the modelling process with the aim of preventing the occurrence of errors and improving the identification of errors in models.
Melt-processing high-T{sub c} superconductors under an elevated magnetic field [Final report no. 2]
DOE Office of Scientific and Technical Information (OSTI.GOV)
John B. Vander Sande
2001-09-05
This report presents models for crystallographic texture development for high temperature superconducting oxides processed in the absence of a magnetic field and in the presence of a high magnetic field. The results of the models are confirmed through critical experiments. Processing thick films and tapes of high temperature superconducting oxides under a high magnetic field (5-10 T) improves the exhibited critical current density.
A fuzzy model for assessing risk of occupational safety in the processing industry.
Tadic, Danijela; Djapan, Marko; Misita, Mirjana; Stefanovic, Miladin; Milanovic, Dragan D
2012-01-01
Managing occupational safety in any kind of industry, especially in processing, is very important and complex. This paper develops a new method for occupational risk assessment in the presence of uncertainties. Uncertain values of hazardous factors and consequence frequencies are described with linguistic expressions defined by a safety management team. They are modeled with fuzzy sets. Consequence severities depend on current hazardous factors, and their values are calculated with the proposed procedure. The proposed model is tested with real-life data from fruit processing firms in Central Serbia.
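A minimal sketch of the kind of fuzzy-set computation the abstract describes: linguistic frequency and severity ratings modeled as triangular fuzzy numbers, combined, then defuzzified to a crisp score. The scales and the aggregation below are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of fuzzy risk scoring; linguistic scales and the product
# aggregation are illustrative, not the model proposed in the paper.

def tfn_mul(a, b):
    """Approximate product of two triangular fuzzy numbers (l, m, u)."""
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

def centroid(tfn):
    """Defuzzify a triangular fuzzy number by its centroid."""
    return sum(tfn) / 3.0

# Linguistic scales on [0, 10], defined here purely for illustration:
FREQUENCY = {"rare": (0, 1, 3), "occasional": (2, 5, 7), "frequent": (6, 9, 10)}
SEVERITY  = {"minor": (0, 1, 3), "serious": (3, 5, 8), "critical": (7, 9, 10)}

risk = tfn_mul(FREQUENCY["occasional"], SEVERITY["serious"])
score = centroid(risk)   # crisp risk score usable for ranking hazards
```

The appeal of the fuzzy formulation is exactly what the abstract notes: safety teams supply linguistic judgments, and the arithmetic preserves their uncertainty until the final ranking step.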
Assimilating the Future for Better Forecasts and Earlier Warnings
NASA Astrophysics Data System (ADS)
Du, H.; Wheatcroft, E.; Smith, L. A.
2016-12-01
Multi-model ensembles have become popular tools to account for some of the uncertainty due to model inadequacy in weather and climate simulation-based predictions. Current multi-model forecasts focus on combining single-model ensemble forecasts by means of statistical post-processing. Assuming each model is developed independently or with different primary target variables, each is likely to contain different dynamical strengths and weaknesses. Using statistical post-processing, such information is only carried by the simulations under a single model ensemble: no advantage is taken to influence simulations under the other models. A novel methodology, named Multi-model Cross Pollination in Time, is proposed as a multi-model ensemble scheme aimed at operationally integrating the dynamical information regarding the future from each individual model. The proposed approach generates model states in time by applying data assimilation scheme(s) to yield truly "multi-model trajectories". It is demonstrated to outperform traditional statistical post-processing in the 40-dimensional Lorenz96 flow. Data assimilation approaches are originally designed to improve state estimation from the past to the current time. The aim of this talk is to introduce a framework that uses data assimilation to improve model forecasts at future times (not to argue for any one particular data assimilation scheme). An illustration of applying data assimilation "in the future" to provide early warning of future high-impact events is also presented.
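The 40-dimensional Lorenz96 flow used as the testbed above can be sketched in a few lines; F = 8 is the standard chaotic setting and is an assumption here, as is the RK4 step size.

```python
# Hedged sketch: the 40-variable Lorenz96 system,
# dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, integrated with RK4.

def lorenz96_rhs(x, F=8.0):
    n = len(x)
    # Python's negative indexing gives the cyclic wrap-around for free.
    return [(x[(i + 1) % n] - x[i - 2]) * x[i - 1] - x[i] + F for i in range(n)]

def rk4_step(x, dt):
    def add(a, b, s):
        return [ai + s * bi for ai, bi in zip(a, b)]
    k1 = lorenz96_rhs(x)
    k2 = lorenz96_rhs(add(x, k1, dt / 2))
    k3 = lorenz96_rhs(add(x, k2, dt / 2))
    k4 = lorenz96_rhs(add(x, k3, dt))
    return [xi + dt / 6 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

x = [8.0] * 40
x[0] += 0.01                 # small perturbation off the unstable fixed point
for _ in range(500):         # integrate 5 model time units
    x = rk4_step(x, 0.01)
```

Because tiny perturbations grow chaotically, this system is a standard benchmark for comparing ensemble and data-assimilation schemes such as the one proposed in the abstract.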
On tridimensional rip current modeling
NASA Astrophysics Data System (ADS)
Marchesiello, Patrick; Benshila, Rachid; Almar, Rafael; Uchiyama, Yusuke; McWilliams, James C.; Shchepetkin, Alexander
2015-12-01
Do lateral shear instabilities of nearshore circulation account for a substantial part of Very Low-Frequency (VLF) variability? If so, it would promote stirring and mixing of coastal waters and surf-shelf exchanges. Another question is whether tridimensional transient processes are important for instability generation. An innovative modeling system with tridimensional wave-current interactions was designed to investigate transient nearshore currents and interactions between nearshore and inner-shelf circulations. We present here some validation of rip current modeling for the Aquitanian coast of France, using in-situ and remote video sensing. We then proceed to show the benefits of 3D versus 2D (depth-mean flow) modeling of rip currents and their low-frequency variability. It appears that a large part of VLF motions is due to intrinsic variability of the tridimensional flow. 3D models may thus provide a valuable, only marginally more expensive alternative to conventional 2D approaches that miss the vertical flow structure and its nonlinear interaction with the depth-averaged flow.
9th Annual Science and Engineering Technology Conference
2008-04-17
[Slide text fragments; original layout lost: disks; composite technology; titanium aluminides processing, microstructure, properties; curve generator; go-forward: integrated materials & process models; initiatives; current DPA/T3s: atomic layer deposition hermetic coatings (domestic ALD for electronic components; transition to fabrication process); production windows estim...; process capability fully established; production specifications in place; supply chain established; all necessary property...]
Model-Driven Useware Engineering
NASA Astrophysics Data System (ADS)
Meixner, Gerrit; Seissler, Marc; Breiner, Kai
User-oriented hardware and software development relies on a systematic development process based on a comprehensive analysis focusing on the users' requirements and preferences. Such a development process calls for the integration of numerous disciplines, from psychology and ergonomics to computer sciences and mechanical engineering. Hence, a correspondingly interdisciplinary team must be equipped with suitable software tools to allow it to handle the complexity of a multimodal and multi-device user interface development approach. An abstract, model-based development approach seems to be adequate for handling this complexity. This approach comprises different levels of abstraction requiring adequate tool support. Thus, in this chapter, we present the current state of our model-based software tool chain. We introduce the use model as the core model of our model-based process, transformation processes, and a model-based architecture, and we present different software tools that provide support for creating and maintaining the models or performing the necessary model transformations.
Effect of a Second, Parallel Capacitor on the Performance of a Pulse Inductive Plasma Thruster
NASA Technical Reports Server (NTRS)
Polzin, Kurt A.; Balla, Joseph V.
2010-01-01
Pulsed inductive plasma accelerators are electrodeless space propulsion devices where a capacitor is charged to an initial voltage and is then discharged through an inductive coil that couples energy into the propellant, ionizing and accelerating it to produce thrust. A model that employs a set of circuit equations (as illustrated in Fig. 1a) coupled to a one-dimensional momentum equation has been previously used by Lovberg and Dailey [1] and Polzin et al. [2-4] to model the plasma acceleration process in pulsed inductive thrusters. In this paper an extra capacitor, inductor, and resistor are added to the system in the manner illustrated in the schematic shown in Fig. 1b. If the second capacitor has a smaller value than the initially charged capacitor, it can serve to increase the current rise rate through the inductive coil. Increasing the current rise rate should serve to better ionize the propellant. The equation of motion is solved to find the effect of an increased current rise rate on the acceleration process. We examine the tradeoffs between enhancing the breakdown process (increasing current rise rate) and altering the plasma acceleration process. These results provide insight into the performance of modified circuits in an inductive thruster, revealing how this design permutation can affect an inductive thruster's performance.
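The circuit-model idea can be illustrated with a toy lumped-element network in the spirit of Fig. 1b: a main capacitor feeding, through a stray inductance, a node where a second capacitor sits in parallel with the acceleration coil. Every component value below is an illustrative assumption, not a thruster design point, and the plasma coupling and momentum equation are omitted.

```python
# Hedged sketch: RK4 integration of a two-capacitor discharge circuit;
# states are [V_main, I_feed, V_node, I_coil], all values assumed.

def derivs(y, C1, L1, R1, C2, Lc, Rc):
    V1, I1, V2, Ic = y
    return [-I1 / C1,                       # main capacitor discharges
            (V1 - R1 * I1 - V2) / L1,       # feed (stray) inductance
            (I1 - Ic) / C2,                 # second capacitor at coil node
            (V2 - Rc * Ic) / Lc]            # acceleration coil current

def coil_current(C2, t_end=0.5e-6, dt=1e-9):
    p = (10e-6, 100e-9, 5e-3, C2, 500e-9, 10e-3)   # C1, L1, R1, C2, Lc, Rc
    y = [5000.0, 0.0, 0.0, 0.0]                    # 5 kV initial charge
    for _ in range(int(t_end / dt)):
        k1 = derivs(y, *p)
        k2 = derivs([a + dt / 2 * b for a, b in zip(y, k1)], *p)
        k3 = derivs([a + dt / 2 * b for a, b in zip(y, k2)], *p)
        k4 = derivs([a + dt * b for a, b in zip(y, k3)], *p)
        y = [a + dt / 6 * (b + 2 * c + 2 * d + e)
             for a, b, c, d, e in zip(y, k1, k2, k3, k4)]
    return y[3]

# A smaller second capacitor lets the coil current rise faster early on:
fast = coil_current(C2=0.1e-6)
slow = coil_current(C2=5e-6)
```

In this toy network the small parallel capacitor stops shunting current almost immediately, so the coil sees nearly the full bank voltage sooner, which is the qualitative current-rise-rate effect the abstract examines.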
Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...
2015-08-07
Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Accordingly, an accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading is simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
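The dual double-exponential waveform can be sketched directly; the amplitudes and time constants below are illustrative assumptions, not parameters extracted in the paper.

```python
import math

# Hedged sketch: two double-exponential current sources in parallel, the
# waveform shape the abstract describes; all numeric values are assumed.

def double_exp(t, I0, tau_rise, tau_fall):
    """Classic double-exponential injection current."""
    if t < 0:
        return 0.0
    return I0 * (math.exp(-t / tau_fall) - math.exp(-t / tau_rise))

def dual_double_exp(t):
    # Fast prompt component plus a slower, diffusion-like tail.
    prompt = double_exp(t, I0=1.2e-3, tau_rise=5e-12, tau_fall=50e-12)
    tail = double_exp(t, I0=0.2e-3, tau_rise=50e-12, tau_fall=500e-12)
    return prompt + tail

# Collected charge is the time integral of the injected current:
dt = 1e-12
charge = sum(dual_double_exp(i * dt) * dt for i in range(5000))
```

The point of the second source is visible here: a single double-exponential cannot reproduce both the sharp prompt spike and the long tail, while their sum can, for the same total collected charge.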
NASA Astrophysics Data System (ADS)
Qu, Zilian; Meng, Yonggang; Zhao, Qian
2015-03-01
This paper proposes a new eddy current method, named the equivalent unit method (EUM), for measuring the thickness of the top copper film of multilayer interconnects in the chemical mechanical polishing (CMP) process, an important step in integrated circuit (IC) manufacturing. The influence of the underlying circuit layers on the eddy current is modeled and treated as an equivalent film thickness. By subtracting this equivalent film component, the accuracy of the thickness measurement of the top copper layer with an eddy current sensor is improved, and the absolute error is 3 nm for sample measurements.
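The subtraction idea behind EUM can be sketched in a few lines. The linear sensor calibration and every number below are illustrative assumptions; the paper's electromagnetic model of the buried layers is far more detailed.

```python
# Hedged sketch of the equivalent-unit idea: the buried interconnect layers
# contribute a fixed "equivalent thickness" to the eddy-current reading,
# calibrated on a reference die; gain and signals are illustrative.

def apparent_thickness(sensor_signal, gain=1.0e3):
    """Convert an eddy-current sensor signal (a.u.) to apparent copper
    thickness in nm with an assumed linear calibration."""
    return gain * sensor_signal

def top_layer_thickness(sensor_signal, equivalent_underlayer_nm):
    """Subtract the pre-calibrated equivalent film of the buried layers."""
    return apparent_thickness(sensor_signal) - equivalent_underlayer_nm

# Calibration on a die with no top copper isolates the equivalent component
# (here 350 nm of "phantom" copper from the underlying layers):
equivalent_nm = apparent_thickness(0.35)
top_nm = top_layer_thickness(1.15, equivalent_nm)
```

Once the equivalent component is characterized per product layout, every in-process reading reduces to one subtraction, which is what makes the method practical during CMP.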
First-Principles-Driven Model-Based Optimal Control of the Current Profile in NSTX-U
NASA Astrophysics Data System (ADS)
Ilhan, Zeki; Barton, Justin; Wehner, William; Schuster, Eugenio; Gates, David; Gerhardt, Stefan; Kolemen, Egemen; Menard, Jonathan
2014-10-01
Regulation in time of the toroidal current profile is one of the main challenges toward the realization of the next-step operational goals for NSTX-U. A nonlinear, control-oriented, physics-based model describing the temporal evolution of the current profile is obtained by combining the magnetic diffusion equation with empirical correlations obtained at NSTX-U for the electron density, electron temperature, and non-inductive current drives. In this work, the proposed model is embedded into the control design process to synthesize a time-variant, linear-quadratic-integral, optimal controller capable of regulating the safety factor profile around a desired target profile while rejecting disturbances. Neutral beam injectors and the total plasma current are used as actuators to shape the current profile. The effectiveness of the proposed controller in regulating the safety factor profile in NSTX-U is demonstrated via closed-loop predictive simulations carried out in PTRANSP. Supported by PPPL.
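The optimal-control machinery can be illustrated with the discrete-time Riccati iteration behind an LQ regulator, here on a one-state toy surrogate rather than the NSTX-U profile model; all plant and weight values are illustrative assumptions.

```python
# Hedged sketch: scalar discrete-time LQR via Riccati fixed-point iteration,
# x_{k+1} = a*x_k + b*u_k, cost sum(q*x^2 + r*u^2); all numbers assumed.

def lqr_gain(a, b, q, r, iters=500):
    """Iterate P = q + a^2*P - (a*b*P)^2 / (r + b^2*P) to convergence."""
    P = q
    for _ in range(iters):
        P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
    return a * b * P / (r + b * b * P)   # optimal feedback u = -K * x

K = lqr_gain(a=1.02, b=0.1, q=1.0, r=0.5)   # a > 1: open loop slightly unstable

# Closed loop drives the tracking error toward zero:
x = 1.0
for _ in range(200):
    x = (1.02 - 0.1 * K) * x
```

The NSTX-U design adds integral action (the "I" in LQI) and a time-varying linearized profile model, but the core computation is this same Riccati machinery in matrix form.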
Diffusion Decision Model: Current Issues and History
Ratcliff, Roger; Smith, Philip L.; Brown, Scott D.; McKoon, Gail
2016-01-01
There is growing interest in diffusion models to represent the cognitive and neural processes of speeded decision making. Sequential-sampling models like the diffusion model have a long history in psychology. They view decision making as a process of noisy accumulation of evidence from a stimulus. The standard model assumes that evidence accumulates at a constant rate during the second or two it takes to make a decision. This process can be linked to the behaviors of populations of neurons and to theories of optimality. Diffusion models have been used successfully in a range of cognitive tasks and as psychometric tools in clinical research to examine individual differences. In this article, we relate the models to both earlier and more recent research in psychology. PMID:26952739
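The noisy evidence accumulation described above can be sketched as a two-boundary random walk; the drift, boundary, and noise values are typical illustrative choices, not fits from the article.

```python
import random

# Hedged sketch: single-trial simulation of the standard diffusion decision
# model (constant drift, two absorbing boundaries); parameters are assumed.

def diffusion_trial(drift=0.3, boundary=1.0, dt=1e-3, sigma=1.0, rng=random):
    """Accumulate noisy evidence until a boundary is hit.
    Returns (choice, reaction_time)."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x > 0 else 0), t

random.seed(1)
trials = [diffusion_trial() for _ in range(2000)]
accuracy = sum(c for c, _ in trials) / len(trials)  # positive drift favors 1
```

Even this bare version reproduces the model's signature behavior: accuracy and the full reaction-time distribution both fall out of the same two parameters, which is what makes the model useful as a psychometric tool.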
Hydrological processes and model representation: impact of soft data on calibration
J.G. Arnold; M.A. Youssef; H. Yen; M.J. White; A.Y. Sheshukov; A.M. Sadeghi; D.N. Moriasi; J.L. Steiner; Devendra Amatya; R.W. Skaggs; E.B. Haney; J. Jeong; M. Arabi; P.H. Gowda
2015-01-01
Hydrologic and water quality models are increasingly used to determine the environmental impacts of climate variability and land management. Due to differing model objectives and differences in monitored data, there are currently no universally accepted procedures for model calibration and validation in the literature. In an effort to develop accepted model calibration...
Characterizing and Assessing a Large-Scale Software Maintenance Organization
NASA Technical Reports Server (NTRS)
Briand, Lionel; Melo, Walcelio; Seaman, Carolyn; Basili, Victor
1995-01-01
One important component of a software process is the organizational context in which the process is enacted. This component is often missing or incomplete in current process modeling approaches. One technique for modeling this perspective is the Actor-Dependency (AD) Model. This paper reports on a case study which used this approach to analyze and assess a large software maintenance organization. Our goal was to identify the approach's strengths and weaknesses while providing practical recommendations for improvement and research directions. The AD model was found to be very useful in capturing the important properties of the organizational context of the maintenance process, and aided in the understanding of the flaws found in this process. However, a number of opportunities for extending and improving the AD model were identified. Among others, there is a need to incorporate quantitative information to complement the qualitative model.
Predicting Student Performance in a Collaborative Learning Environment
ERIC Educational Resources Information Center
Olsen, Jennifer K.; Aleven, Vincent; Rummel, Nikol
2015-01-01
Student models for adaptive systems may not model collaborative learning optimally. Past research has either focused on modeling individual learning or for collaboration, has focused on group dynamics or group processes without predicting learning. In the current paper, we adjust the Additive Factors Model (AFM), a standard logistic regression…
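The standard AFM that the authors adjust predicts the log-odds of a correct step as a student proficiency plus, for each knowledge component (KC) the step uses, an easiness term and a learning-rate term scaled by prior practice opportunities. A minimal sketch, with all parameter values hypothetical:

```python
import math

def afm_p_correct(theta, q, beta, gamma, opportunities):
    """Additive Factors Model: P(correct) = sigmoid(theta + sum_k q_k *
    (beta_k + gamma_k * T_k)), where q marks the KCs a step exercises and
    T_k counts prior practice opportunities on KC k."""
    logit = theta + sum(qk * (b + g * t)
                        for qk, b, g, t in zip(q, beta, gamma, opportunities))
    return 1.0 / (1.0 + math.exp(-logit))

# One student (theta = 0.1) on a step using KC 0 only, after 3 practice chances
p = afm_p_correct(0.1, q=[1, 0], beta=[-0.5, 0.2], gamma=[0.3, 0.1],
                  opportunities=[3, 0])
```

Because the gamma terms grow with practice, predicted success rises across opportunities; extensions for collaboration add terms for the partner or the group configuration.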
Improvement of radiology services based on the process management approach.
Amaral, Creusa Sayuri Tahara; Rozenfeld, Henrique; Costa, Janaina Mascarenhas Hornos; Magon, Maria de Fátima de Andrade; Mascarenhas, Yvone Maria
2011-06-01
The health sector requires continuous investments to ensure the improvement of products and services from a technological standpoint, the use of new materials, equipment and tools, and the application of process management methods. Methods associated with the process management approach, such as the development of reference models of business processes, can provide significant innovations in the health sector and respond to the current market trend for modern management in this sector (Gunderman et al. (2008) [4]). This article proposes a process model for diagnostic medical X-ray imaging, from which it derives a primary reference model and describes how this information leads to gains in quality and improvements. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Modeling, simulation and control of pulsed DE-GMA welding process for joining of aluminum to steel
NASA Astrophysics Data System (ADS)
Zhang, Gang; Shi, Yu; Li, Jie; Huang, Jiankang; Fan, Ding
2014-09-01
Joining of aluminum to steel has attracted significant attention from the welding research community and from the automotive and rail transportation industries. Many welding methods have been developed and applied, but they cannot precisely control the heat input to the work-piece, suffer from high cost and low efficiency, require complex welding devices, and produce a relatively thick intermetallic compound layer at the weld bead interface. A novel pulsed double-electrode gas metal arc welding (pulsed DE-GMAW) method is developed here. To achieve a stable welding process for joining aluminum to steel, a mathematical model of the coupled arc is established, and a new control scheme is proposed in which the average feedback arc voltage of the main loop adjusts the wire feed speed to control the coupled arc length. An impulse-control simulation of coupled arc length, wire feed speed, and wire extension is then conducted to validate the mathematical model and predict the stability of the welding process as the contact-tip-to-work-piece distance (CTWD) changes. To prove the feasibility of the proposed PSO-based PID control scheme, a rapid-prototyping experimental system is set up and bead-on-plate control experiments are conducted to join aluminum to steel. The impulse-control simulation shows that the established model accurately represents the variation of coupled arc length, wire feed speed, and average main arc voltage when the welding process is disturbed, and that the developed controller responds and settles quickly, in about 0.1 s. The captured electrical signals show the main arc voltage converging to the target arc voltage within 0.8 s as the wire feed speed is adjusted. The typical current waveform demonstrates that the main current can be reduced by controlling the bypass current while maintaining a relatively large total current.
The control experiments further confirm the accuracy of the proposed model and the feasibility of the new control scheme. Smooth, well-formed weld beads are also obtained with this method. Pulsed DE-GMAW can thus be considered an alternative method for low-cost, high-efficiency joining of aluminum to steel.
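The arc-voltage-to-wire-feed regulation loop described above is, structurally, a feedback controller. A textbook discrete PID acting on a toy first-order plant gives the flavor; the gains and plant dynamics below are invented for illustration and are not the paper's PSO-tuned values.

```python
class PID:
    """Textbook discrete PID; stands in for a PSO-tuned controller that
    regulates arc voltage via wire feed speed (all gains hypothetical)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i = 0.0
        self.prev_e = None

    def step(self, setpoint, measured):
        e = setpoint - measured
        self.i += e * self.dt
        d = 0.0 if self.prev_e is None else (e - self.prev_e) / self.dt
        self.prev_e = e
        return self.kp * e + self.ki * self.i + self.kd * d

# Drive a toy first-order "arc voltage" plant toward a 20 V setpoint
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=0.01)
v = 15.0
for _ in range(3000):                  # 30 s of simulated time
    u = pid.step(20.0, v)
    v += (u - (v - 15.0)) * 0.01       # invented plant dynamics
```

The integral term removes the steady-state offset, which is the role the averaged feedback arc voltage plays in the scheme described above.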
NASA Astrophysics Data System (ADS)
Marsh, C.; Pomeroy, J. W.; Wheater, H. S.
2017-12-01
Accurate management of water resources is necessary for social, economic, and environmental sustainability worldwide. In locations with seasonal snowcovers, the accurate prediction of these water resources is further complicated due to frozen soils, solid-phase precipitation, blowing snow transport, and snowcover-vegetation-atmosphere interactions. Complex process interactions and feedbacks are a key feature of hydrological systems and may result in emergent phenomena, i.e., the arising of novel and unexpected properties within a complex system. One example is the feedback associated with blowing snow redistribution, which can lead to drifts that cause locally-increased soil moisture, thus increasing plant growth that in turn subsequently impacts snow redistribution, creating larger drifts. Attempting to simulate these emergent behaviours is a significant challenge, however, and there is concern that process conceptualizations within current models are too incomplete to represent the needed interactions. An improved understanding of the role of emergence in hydrological systems often requires high resolution distributed numerical hydrological models that incorporate the relevant process dynamics. The Canadian Hydrological Model (CHM) provides a novel tool for examining cold region hydrological systems. Key features include efficient terrain representation, allowing simulations at various spatial scales, reduced computational overhead, and a modular process representation allowing for an alternative-hypothesis framework. Using both physics-based and conceptual process representations sourced from long term process studies and the current cold regions literature allows for comparison of process representations and importantly, their ability to produce emergent behaviours. Examining the system in a holistic, process-based manner can hopefully derive important insights and aid in development of improved process representations.
NASA Astrophysics Data System (ADS)
Yu, Fei; Ma, Xiaoyu; Deng, Wanling; Liou, Juin J.; Huang, Junkai
2017-11-01
A physics-based drain current compact model for amorphous InGaZnO (a-InGaZnO) thin-film transistors (TFTs) is proposed. As a key feature, the surface potential model accounts for both exponential tail and deep trap densities of states, which are essential to describe a-InGaZnO TFT electrical characteristics. The surface potential is solved explicitly, without iterative amendment, and is suitable for circuit simulations. Furthermore, based on the surface potential, an explicit closed-form expression for the drain current is developed. For different operating voltages, the surface potential and drain current are verified against numerical results and experimental data, respectively. As a result, our model can predict the DC characteristics of a-InGaZnO TFTs.
Ultrafast dynamics of photoexcited charge and spin currents in semiconductor nanostructures
NASA Astrophysics Data System (ADS)
Meier, Torsten; Pasenow, Bernhard; Duc, Huynh Thanh; Vu, Quang Tuyen; Haug, Hartmut; Koch, Stephan W.
2007-02-01
Employing the quantum interference among one- and two-photon excitations induced by ultrashort two-color laser pulses it is possible to generate charge and spin currents in semiconductors and semiconductor nanostructures on femtosecond time scales. Here, it is reviewed how the excitation process and the dynamics of such photocurrents can be described on the basis of a microscopic many-body theory. Numerical solutions of the semiconductor Bloch equations (SBE) provide a detailed description of the time-dependent material excitations. Applied to the case of photocurrents, numerical solutions of the SBE for a two-band model including many-body correlations on the second-Born Markov level predict an enhanced damping of the spin current relative to that of the charge current. Interesting effects are obtained when the scattering processes are computed beyond the Markovian limit. Whereas the overall decay of the currents is basically correctly described already within the Markov approximation, quantum-kinetic calculations show that memory effects may lead to additional oscillatory signatures in the current transients. When transitions to coupled heavy- and light-hole valence bands are incorporated into the SBE, additional charge and spin currents, which are not described by the two-band model, appear.
Second Generation Crop Yield Models Review
NASA Technical Reports Server (NTRS)
Hodges, T. (Principal Investigator)
1982-01-01
Second generation yield models, including crop growth simulation models and plant process models, may be suitable for large area crop yield forecasting in the yield model development project. Subjective and objective criteria for model selection are defined and models which might be selected are reviewed. Models may be selected to provide submodels as input to other models; for further development and testing; or for immediate testing as forecasting tools. A plant process model may range in complexity from several dozen submodels simulating (1) energy, carbohydrates, and minerals; (2) change in biomass of various organs; and (3) initiation and development of plant organs, to a few submodels simulating key physiological processes. The most complex models cannot be used directly in large area forecasting but may provide submodels which can be simplified for inclusion into simpler plant process models. Both published and unpublished models which may be used for development or testing are reviewed. Several other models, currently under development, may become available at a later date.
NASA Astrophysics Data System (ADS)
Bastianon, E.; Viparelli, E.; Cantelli, A.; Imran, J.
2015-12-01
Primarily motivated by applications to hydrocarbon exploration, submarine minibasins have been widely studied during recent decades to understand the physical phenomena that characterize their fill process. Minibasins have been identified in seismic records in the Gulf of Mexico, Angola, Trinidad and Tobago, Ireland, and Nigeria, and also in outcrops (e.g., Tres Pasos Formation, southern Chile). The filling of minibasins is generally described as a 'fill-and-spill' process, i.e. turbidity currents enter, are reflected on the minibasin flanks, pond, and deposit suspended sediment. As the minibasin fills, the turbidity current spills over the lowermost zone of the basin flank - the spill point - and starts filling the next basin downdip. Different versions of this simplified model have been used to interpret field and laboratory data, but it is still unclear how the size of the minibasins relative to the magnitude of the turbidity currents, the position of each basin in the system, and the slope of the minibasin system affect the characteristics of the deposit (e.g., geometry, grain size). Here, we conduct a numerical study to investigate how the 'fill-and-spill' model changes with increasing slope of the minibasin system. First, we validate our numerical results against a laboratory experiment performed on two linked minibasins located on a horizontal platform by comparing measured and simulated deposit geometries, suspended sediment concentration profiles and grain sizes. We then perform numerical simulations with increasing minibasin system slope: deposit and flow characteristics are compared with the horizontal-platform case to identify how the depositional processes change. For the numerical study we used a three-dimensional numerical model of turbidity currents that solves the Reynolds-averaged Navier-Stokes equations for dilute suspensions. Turbulence is modeled by a buoyancy-modified k-ɛ closure.
The numerical model has a deforming bottom boundary, to model the changes in the bed deposit due to erosion and deposition. Preliminary two dimensional simulations show that in the early stages of the fill process the suspended sediment concentration is higher in the first basin than in the second one, the coarse grain sizes are preferentially trapped in the updip basins and the fine sediment fractions spill into downdip basins.
Dual processing model of medical decision-making.
Djulbegovic, Benjamin; Hozo, Iztok; Beckstead, Jason; Tsalatsanis, Athanasios; Pauker, Stephen G
2012-09-03
Dual processing theory of human cognition postulates that reasoning and decision-making can be described as a function of both an intuitive, experiential, affective system (system I) and/or an analytical, deliberative (system II) processing system. To date no formal descriptive model of medical decision-making based on dual processing theory has been developed. Here we postulate such a model and apply it to a common clinical situation: whether treatment should be administered to the patient who may or may not have a disease. We developed a mathematical model in which we linked a recently proposed descriptive psychological model of cognition with the threshold model of medical decision-making and show how this approach can be used to better understand decision-making at the bedside and explain the widespread variation in treatments observed in clinical practice. We show that physicians' beliefs about whether to treat at higher (lower) probability levels compared to the prescriptive therapeutic thresholds obtained via system II processing are moderated by system I and the ratio of benefits and harms as evaluated by both system I and II. Under some conditions, the system I decision maker's threshold may dramatically drop below the expected utility threshold derived by system II. This can explain the overtreatment often seen in contemporary practice. The opposite can also occur, as in situations where empirical evidence is considered unreliable, or when cognitive processes of decision-makers are biased through recent experience: the threshold will increase relative to the normative threshold value derived via system II using the expected utility threshold. This inclination toward higher diagnostic certainty may, in turn, explain undertreatment that is also documented in current medical practice.
We have developed the first dual processing model of medical decision-making that has potential to enrich the current medical decision-making field, which is still to the large extent dominated by expected utility theory. The model also provides a platform for reconciling two groups of competing dual processing theories (parallel competitive with default-interventionalist theories).
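The prescriptive (system II) therapeutic threshold referenced above is the classical expected-utility result: treat once the probability of disease exceeds harm/(harm + benefit). A minimal sketch, with illustrative numbers:

```python
def treatment_threshold(benefit, harm):
    """Expected-utility treatment threshold: treat when the probability of
    disease p satisfies p * benefit > (1 - p) * harm, which rearranges to
    p > harm / (harm + benefit)."""
    return harm / (harm + benefit)

# A treatment with large net benefit and modest harm -> low threshold
p_t = treatment_threshold(benefit=9.0, harm=1.0)   # 0.1
```

In the dual-processing account summarized above, system I effectively shifts this normative threshold up or down, which is how the model accounts for both over- and undertreatment.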
A numerical study of the South China Sea Warm Current during winter monsoon relaxation
NASA Astrophysics Data System (ADS)
Zhang, Cong; Ding, Yang; Bao, Xianwen; Bi, Congcong; Li, Ruixiang; Zhang, Cunjie; Shen, Biao; Wan, Kai
2018-03-01
Using a Finite-Volume Community Ocean Model, we investigated the dynamic mechanism of the South China Sea Warm Current (SCSWC) in the northern South China Sea (NSCS) during winter monsoon relaxation. The model reproduces the mean surface circulation of the NSCS during winter, while model-simulated subtidal currents generally capture its current pattern. The model shows that the current over the continental shelf is generally southwestward, under a strong winter monsoon condition, but a northeastward counter-wind current usually develops between 50-and 100-m isobaths, when the monsoon relaxes. Model experiments, focusing on the wind relaxation process, show that sea level is elevated in the northwestern South China Sea (SCS), related to the persistent northeasterly monsoon. Following wind relaxation, a high sea level band builds up along the mid-shelf, and a northeastward current develops, having an obvious vertical barotropic structure. Momentum balance analysis indicates that an along-shelf pressure gradient provides the initial driving force for the SCSWC during the first few days following wind relaxation. The SCSWC subsequently reaches a steady quasi-geostrophic balance in the cross-shelf direction, mainly linked to sea level adjustment over the shelf. Lagrangian particle tracking experiments show that both the southwestward coastal current and slope current contribute to the northeastward movement of the SCSWC during winter monsoon relaxation.
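The cross-shelf quasi-geostrophic balance invoked above equates the Coriolis force on the along-shelf flow with the cross-shelf pressure gradient, f*v = g*d(eta)/dx. A back-of-envelope sketch, with values chosen for illustration rather than taken from the simulations:

```python
def geostrophic_velocity(g, f, deta_dx):
    """Cross-shelf geostrophic balance f * v = g * d(eta)/dx, giving the
    along-shelf velocity v = (g / f) * d(eta)/dx."""
    return g / f * deta_dx

# ~5 cm of sea-level change over 100 km at low latitude (f ~ 5e-5 s^-1)
v = geostrophic_velocity(g=9.81, f=5e-5, deta_dx=0.05 / 100e3)   # ~0.1 m/s
```

A tilt of this size supports an along-shelf current of order 0.1 m/s, the right magnitude for a shelf counter-current.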
NASA Astrophysics Data System (ADS)
Sembiring, L.; Van Ormondt, M.; Van Dongeren, A. R.; Roelvink, J. A.
2017-07-01
Rip currents are one of the most dangerous coastal hazards for swimmers. In order to minimize the risk, an operational, process-based coastal model system can be utilized to provide forecasts of nearshore waves and currents that may endanger beach goers. In this paper, an operational model for rip current prediction utilizing nearshore bathymetry obtained from video imagery is demonstrated. For the nearshore-scale model, XBeach is used, with which tidal currents and wave-induced currents (including the effect of wave groups) can be simulated simultaneously. Up-to-date bathymetry is obtained using the video-image technique cBathy. The system is tested for the beach of Egmond aan Zee, located in the northern part of the Dutch coastline. This paper tests the applicability of bathymetry obtained from the video technique as input for the numerical modelling system by comparing simulation results using surveyed bathymetry with model results using video bathymetry. Results show that the video technique is able to produce bathymetry converging towards the ground-truth observations. This bathymetry validation is followed by an example of an operational-forecasting type of simulation for predicting rip currents. Rip current flow fields simulated over measured and modeled bathymetries are compared in order to assess the performance of the proposed forecast system.
Cardea: Providing Support for Dynamic Resource Access in a Distributed Computing Environment
NASA Technical Reports Server (NTRS)
Lepro, Rebekah
2003-01-01
The environment framing the modern authorization process spans domains of administration, relies on many different authentication sources, and manages complex attributes as part of the authorization process. Cardea facilitates dynamic access control within this environment as a central function of an inter-operable authorization framework. The system departs from the traditional authorization model by separating the authentication and authorization processes, distributing the responsibility for authorization data and allowing collaborating domains to retain control over their implementation mechanisms. Critical features of the system architecture and its handling of the authorization process differentiate the system from existing authorization components by addressing common needs not adequately addressed by existing systems. Continuing system research seeks to enhance the implementation of the current authorization model employed in Cardea, increase the robustness of current features, further the framework for establishing trust and promote interoperability with existing security mechanisms.
Mathematical Model Of Variable-Polarity Plasma Arc Welding
NASA Technical Reports Server (NTRS)
Hung, R. J.
1996-01-01
Mathematical model of variable-polarity plasma arc (VPPA) welding process developed for use in predicting characteristics of welds and thus serves as guide for selection of process parameters. Parameters include welding electric currents in, and durations of, straight and reverse polarities; rates of flow of plasma and shielding gases; and sizes and relative positions of welding electrode, welding orifice, and workpiece.
Short Pulse UV-Visible Waveguide Laser.
1980-07-01
Only OCR fragments of the report's front matter survive here: table-of-contents entries (B. Relaxation Processes; C. Equivalent Circuit; V. Kinetic Modeling), the caption of Fig. 6 ("Temporal evolution of the current, various N2+ densities, and the electron density"), and the title of Table 1 ("Relaxation reaction rates used in the He-N2 model").
Update: Validation, Edits, and Application Processing. Phase II and Error-Prone Model Report.
ERIC Educational Resources Information Center
Gray, Susan; And Others
An update to the Validation, Edits, and Application Processing and Error-Prone Model Report (Section 1, July 3, 1980) is presented. The objective is to present the most current data obtained from the June 1980 Basic Educational Opportunity Grant applicant and recipient files and to determine whether the findings reported in Section 1 of the July…
Dissection of the Voltage Losses of an Acidic Quinone Redox Flow Battery
Chen, Qing; Gerhardt, Michael R.; Aziz, Michael J.
2017-03-28
We measure the polarization characteristics of a quinone-bromide redox flow battery with interdigitated flow fields, using electrochemical impedance spectroscopy and voltammetry of a full cell and of a half cell against a reference electrode. We find linear polarization behavior at 50% state of charge all the way to the short-circuit current density of 2.5 A/cm2. We uniquely identify the polarization area-specific resistance (ASR) of each electrode, the membrane ASR to ionic current, and the electronic contact ASR. We use voltage probes to deduce the electronic current density through each sheet of carbon paper in the quinone-bearing electrode. By also interpreting the results using the Newman 1-D porous electrode model, we deduce the volumetric exchange current density of the porous electrode. We uniquely evaluate the power dissipation and identify a correspondence to the contributions to the electrode ASR from the faradaic, electronic, and ionic transport processes. We find that, within the electrode, more power is dissipated in the faradaic process than in the electronic and ionic conduction processes combined, despite the observed linear polarization behavior. We examine the sensitivity of the ASR to the values of the model parameters. The greatest performance improvement is anticipated from increasing the volumetric exchange current density.
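The series-resistance picture used above can be sketched directly: with linear polarization the cell voltage falls as V = OCV - j*ASR, and each series ASR contribution dissipates j^2 * ASR_i of power per unit area. The numbers below are hypothetical, not the paper's measured values.

```python
def cell_voltage(ocv, j, asr_total):
    """Linear polarization: V(j) = OCV - j * ASR
    (j in A/cm^2, ASR in Ohm cm^2)."""
    return ocv - j * asr_total

def dissipation_split(j, asr_components):
    """Power dissipated per unit area by each series ASR term: P_i = j^2 * ASR_i."""
    return {name: j**2 * asr for name, asr in asr_components.items()}

# Hypothetical breakdown for illustration only
asr = {"membrane": 0.2, "faradaic": 0.25, "electronic+ionic": 0.15}
v = cell_voltage(ocv=1.5, j=1.0, asr_total=sum(asr.values()))
p = dissipation_split(j=1.0, asr_components=asr)
```

Even with a linear V(j) curve, the split shows how a single dominant term (here the faradaic one) can account for most of the dissipation, which mirrors the paper's qualitative finding.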
Spike train generation and current-to-frequency conversion in silicon diodes
NASA Technical Reports Server (NTRS)
Coon, D. D.; Perera, A. G. U.
1989-01-01
A device physics model is developed to analyze spontaneous neuron-like spike train generation in current driven silicon p(+)-n-n(+) devices in cryogenic environments. The model is shown to explain the very high dynamic range (0 to the 7th) current-to-frequency conversion and experimental features of the spike train frequency as a function of input current. The devices are interesting components for implementation of parallel asynchronous processing adjacent to cryogenically cooled focal planes because of their extremely low current and power requirements, their electronic simplicity, and their pulse coding capability, and could be used to form the hardware basis for neural networks which employ biologically plausible means of information coding.
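The current-to-frequency conversion can be idealized as integrate-and-fire charging: each spike requires charging a capacitance C through a threshold swing dV, so f = I / (C * dV) and the output frequency scales linearly across many decades of input current. A sketch with hypothetical component values:

```python
def spike_frequency(current, capacitance, delta_v):
    """Idealized integrate-and-fire conversion: time to charge per spike is
    t = C * dV / I, so the spike rate is f = I / (C * dV)."""
    return current / (capacitance * delta_v)

# 1 pA into 1 pF with a 1 V swing -> 1 Hz; 7 decades more drive -> 10 MHz
f_low  = spike_frequency(1e-12, 1e-12, 1.0)
f_high = spike_frequency(1e-5, 1e-12, 1.0)
```

The seven-decade spread between the two results is the kind of dynamic range the abstract attributes to the devices.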
NASA Astrophysics Data System (ADS)
Lee, K. David; Colony, Mike
2011-06-01
Modeling and simulation has been established as a cost-effective means of supporting the development of requirements, exploring doctrinal alternatives, assessing system performance, and performing design trade-off analysis. The Army's constructive simulation for the evaluation of equipment effectiveness in small combat unit operations is currently limited to representation of situation awareness without inclusion of the many uncertainties associated with real world combat environments. The goal of this research is to provide an ability to model situation awareness and decision process uncertainties in order to improve evaluation of the impact of battlefield equipment on ground soldier and small combat unit decision processes. Our Army Probabilistic Inference and Decision Engine (Army-PRIDE) system provides this required uncertainty modeling through the application of two critical techniques that allow Bayesian network technology to be applied to real-time applications. (Object-Oriented Bayesian Network methodology and Object-Oriented Inference technique). In this research, we implement decision process and situation awareness models for a reference scenario using Army-PRIDE and demonstrate its ability to model a variety of uncertainty elements, including: confidence of source, information completeness, and information loss. We also demonstrate that Army-PRIDE improves the realism of the current constructive simulation's decision processes through Monte Carlo simulation.
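At its core, modeling confidence of source in a Bayesian network reduces to conditioning on reports whose likelihoods encode source reliability. A single-node sketch (probabilities invented, and far simpler than Army-PRIDE's object-oriented networks):

```python
def posterior_threat(prior, p_report_given_threat, p_report_given_clear):
    """Bayes' rule for a single report: P(threat | report). The two
    conditional probabilities encode the source's reliability."""
    num = prior * p_report_given_threat
    den = num + (1.0 - prior) * p_report_given_clear
    return num / den

# Reliable source (0.9 / 0.1) vs. noisy source (0.6 / 0.4) on the same prior
p_reliable = posterior_threat(0.2, 0.9, 0.1)
p_noisy    = posterior_threat(0.2, 0.6, 0.4)
```

The same 20% prior yields a much stronger belief update from the reliable source, which is precisely the "confidence of source" effect the system models.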
Tan, Shih-Wei; Lai, Shih-Wen
2012-01-01
Characterization and modeling of metal-semiconductor-metal (MSM) GaAs diodes fabricated by co-evaporating SiO2 and Pd as a mixture electrode (called M-MSM diodes) are reported and compared with diodes using an evaporated Pd electrode (called Pd-MSM diodes). The barrier height (φ b) and the Richardson constant (A*) were extracted for the thermionic-emission process, which describes the current transport of Pd-MSM diodes well when carrier transport over the metal-semiconductor barrier is considered. When carrier transport over both the metal-semiconductor barrier and the insulator-semiconductor barrier is considered simultaneously, the thermionic-emission process also describes the current transport of M-MSM diodes well. At higher applied voltages, carrier recombination is taken into account. In addition, a composite-current (CC) model is developed to validate these concepts. Our calculated results are in good agreement with the experimental ones. PMID:23226352
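The thermionic-emission transport invoked above gives a saturation current density J0 = A* T^2 exp(-q*phi_B / kT). A quick numerical sketch; the barrier height and Richardson constant below are illustrative textbook-style values, not the paper's extracted ones:

```python
import math

def thermionic_j0(phi_b_eV, T, A_star=8.16):
    """Thermionic-emission saturation current density
    J0 = A* T^2 exp(-q*phi_B / kT), with A* in A cm^-2 K^-2
    (8.16 is a commonly quoted value for n-GaAs) and phi_B in eV."""
    k_eV = 8.617e-5   # Boltzmann constant in eV/K
    return A_star * T**2 * math.exp(-phi_b_eV / (k_eV * T))

j0 = thermionic_j0(phi_b_eV=0.9, T=300.0)   # sub-nA/cm^2 at room temperature
```

The exponential dependence on barrier height is why extracting phi_B and A* from measured I-V curves, as the authors do, is so sensitive a probe of the electrode interface.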
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haurykiewicz, John Paul; Dinehart, Timothy Grant; Parker, Robert Young
2016-05-12
The purpose of this process analysis was to analyze the Badge Offices’ current processes from a systems perspective and consider ways of pursuing objectives set forth by SEC-PS, namely increased customer flow (throughput) and reduced customer wait times. Information for the analysis was gathered for the project primarily through Badge Office Subject Matter Experts (SMEs), and in-person observation of prevailing processes. Using the information gathered, a process simulation model was constructed to represent current operations and allow assessment of potential process changes relative to factors mentioned previously. The overall purpose of the analysis was to provide SEC-PS management with information and recommendations to serve as a basis for additional focused study and areas for potential process improvements in the future.
NASA Technical Reports Server (NTRS)
Swickrath, Michael J.; Anderson, Molly
2011-01-01
Through the respiration process, humans consume oxygen (O2) while producing carbon dioxide (CO2) and water (H2O) as byproducts. For long term space exploration, CO2 concentration in the atmosphere must be managed to prevent hypercapnia. Moreover, CO2 can be used as a source of oxygen through chemical reduction, serving to minimize the amount of oxygen required at launch. Reduction can be achieved through a number of techniques. The National Aeronautics and Space Administration (NASA) is currently exploring the Sabatier reaction, the Bosch reaction, and co-electrolysis of CO2 and H2O for this process. Proof-of-concept experiments and prototype units for all three processes have proven capable of returning useful commodities for space exploration. While all three techniques have demonstrated the capacity to reduce CO2 in the laboratory, there is interest in understanding how all three techniques would perform at a system level within a spacecraft. Consequently, there is an impetus to develop predictive models for these processes that can be readily re-scaled and integrated into larger system models. Such analysis tools provide the ability to evaluate each technique on a comparable basis with respect to processing rates. This manuscript describes the current models for the carbon dioxide reduction processes under parallel development efforts. Comparison to experimental data is provided where available for verification purposes.
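Of the three reduction routes, the Sabatier reaction illustrates the mass bookkeeping such system models track: CO2 + 4 H2 -> CH4 + 2 H2O, with the product water available for electrolysis back to O2. A stoichiometric sketch:

```python
# Sabatier reaction: CO2 + 4 H2 -> CH4 + 2 H2O
# (the recovered water can then be electrolyzed back to O2 and H2)
M_CO2, M_H2O = 44.01, 18.02   # molar masses, g/mol

def water_from_co2(kg_co2):
    """Mass of water produced per mass of CO2 fully reduced
    (2 mol H2O per mol CO2)."""
    return kg_co2 / M_CO2 * 2 * M_H2O

w = water_from_co2(1.0)   # ~0.82 kg H2O recovered per kg CO2 reduced
```

System-level models of the kind described above layer reaction kinetics, separation efficiencies, and recycle streams on top of this basic stoichiometry.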
High-resolution modelling of waves, currents and sediment transport in the Catalan Sea.
NASA Astrophysics Data System (ADS)
Sánchez-Arcilla, Agustín; Grifoll, Manel; Pallares, Elena; Espino, Manuel
2013-04-01
In order to investigate coastal shelf dynamics, a sequence of high-resolution multi-scale models has been implemented for the Catalan shelf (north-western Mediterranean Sea). The suite consists of a set of increasing-resolution nested models, based on the circulation model ROMS (Regional Ocean Modelling System), the wave model SWAN (Simulating Waves Nearshore) and the sediment transport model CSTM (Community Sediment Transport Model), covering different ranges of spatial scales (from ~1 km at shelf-slope regions to ~40 m around river mouths or local beaches) and temporal scales (from storm events to seasonal variability). Contributions to the understanding of local processes such as along-shelf dynamics in the inner shelf, sediment dispersal from river discharge, and bi-directional wave-current interactions under different synoptic conditions and resolutions have been obtained using the Catalan coast as a pilot site. Numerical results have been compared with "ad-hoc" intensive field campaigns, data from observational models and remote sensing products. The results exhibit acceptable agreement with observations, and the investigation has allowed the development of generic knowledge and more efficient (process-based) strategies for coastal and shelf management.
Ecosystem Restoration: Fact or Fancy?
John A. Stanturf; Callie J. Schweitzer; Stephen H. Schoenholtz; James P. Barnett; Charles K. McMahon; Donald J. Tomszak
1998-01-01
Ecological restoration is generally accepted as the reestablishment of natural ecological processes that produce certain dynamic ecosystem properties of structure, function, and processes. But restore to what? The most frequently used conceptual model for the restoration process is the shift of conditions from some current (degraded) dynamic state to some past dynamic...
DEVELOPMENT OF THE U.S. EPA'S METAL FINISHING FACILITY POLLUTION PREVENTION TOOL
Metal finishing processes are a type of chemical process and can be modeled using Computer Aided Process Engineering (CAPE). Currently, the U.S. EPA is developing the Metal Finishing Facility Pollution Prevention Tool (MFFP2T), a pollution prevention software tool for the meta...
Word Recognition in Auditory Cortex
ERIC Educational Resources Information Center
DeWitt, Iain D. J.
2013-01-01
Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…
Hiller, Rachel M; Johnston, Anna; Dohnt, Hayley; Lovato, Nicole; Gradisar, Michael
2015-10-01
Cognitive processes play an important role in the maintenance and treatment of sleep difficulties, including insomnia. In 2002, a comprehensive model was proposed by Harvey. Since its inception, the model has received >300 citations and has provided researchers and clinicians with a framework for understanding and treating insomnia. The aim of this review is two-fold. First, we review the current literature investigating each factor proposed in Harvey's cognitive model of insomnia. Second, we summarise the psychometric properties of key measures used to assess the model's factors and mechanisms. From these aims, we demonstrate both strengths and limitations in current knowledge of the measures associated with the model. This review aims to stimulate and guide future research in this area, and to provide an understanding of the resources available to measure, target, and resolve cognitive factors that may maintain chronic insomnia. Copyright © 2014 Elsevier Ltd. All rights reserved.
Adapting viral safety assurance strategies to continuous processing of biological products.
Johnson, Sarah A; Brown, Matthew R; Lute, Scott C; Brorson, Kurt A
2017-01-01
There has been a recent drive in commercial large-scale production of biotechnology products to convert current batch-mode processing to continuous processing manufacturing. There have been reports of model systems capable of adapting and linking upstream and downstream technologies into a continuous manufacturing pipeline. However, in many of these proposed continuous processing model systems, viral safety has not been comprehensively addressed. Viral safety and detection is a highly important and often expensive regulatory requirement for any new biological product. To ensure success in the adaptation of continuous processing to large-scale production, there is a need to consider the development of approaches that allow for seamless incorporation of viral testing and clearance/inactivation methods. In this review, we outline potential strategies to apply current viral testing and clearance/inactivation technologies to continuous processing, as well as modifications of existing unit operations to ensure the successful integration of viral clearance into the continuous processing of biological products. Biotechnol. Bioeng. 2017;114: 21-32. © 2016 Wiley Periodicals, Inc.
Design of experiments enhanced statistical process control for wind tunnel check standard testing
NASA Astrophysics Data System (ADS)
Phillips, Ben D.
The current wind tunnel check standard testing program at NASA Langley Research Center is focused on increasing data quality, uncertainty quantification, and overall control and improvement of wind tunnel measurement processes. The statistical process control (SPC) methodology employed in the check standard testing program allows for the tracking of variations in measurements over time as well as an overall assessment of facility health. While the SPC approach can and does provide researchers with valuable information, it has certain limitations in the areas of process improvement and uncertainty quantification. It is thought that by utilizing design of experiments methodology in conjunction with current SPC practices, one can more efficiently and robustly characterize uncertainties and develop enhanced process improvement procedures. In this research, methodologies were developed to generate regression models for wind tunnel calibration coefficients, balance force coefficients, and wind tunnel flow angularities. The coefficients of these regression models were then tracked in statistical process control charts, giving a higher level of understanding of the processes. The methodology outlined is sufficiently generic that this research can be applied to any wind tunnel check standard testing program.
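The coefficient-tracking idea described above can be sketched with a standard Shewhart individuals chart: estimate sigma from the average moving range and flag runs whose tracked coefficient leaves the ±3-sigma limits. The coefficient series below is invented for illustration (it is not NASA check-standard data); this is a minimal sketch, not the program's actual SPC tooling.

```python
import numpy as np

# Hypothetical record of one tracked regression coefficient from repeated
# check-standard runs (illustrative values only).
coef = np.array([0.512, 0.508, 0.515, 0.510, 0.509,
                 0.511, 0.507, 0.513, 0.540, 0.510])

# Shewhart individuals chart: center line +/- 3 sigma, with sigma estimated
# from the average moving range (MR-bar / d2, d2 = 1.128 for n = 2).
center = coef.mean()
mr_bar = np.abs(np.diff(coef)).mean()
sigma = mr_bar / 1.128
ucl, lcl = center + 3 * sigma, center - 3 * sigma

out_of_control = np.where((coef > ucl) | (coef < lcl))[0]
print(f"center={center:.4f}  UCL={ucl:.4f}  LCL={lcl:.4f}")
print("out-of-control runs (0-based):", out_of_control)
```

An out-of-control signal on a tracked coefficient would prompt investigation of the measurement process itself, rather than of any single data point.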
Bridging groundwater models and decision support with a Bayesian network
Fienen, Michael N.; Masterson, John P.; Plant, Nathaniel G.; Gutierrez, Benjamin T.; Thieler, E. Robert
2013-01-01
Resource managers need to make decisions to plan for future environmental conditions, particularly sea level rise, in the face of substantial uncertainty. Many interacting processes factor in to the decisions they face. Advances in process models and the quantification of uncertainty have made models a valuable tool for this purpose. Long-simulation runtimes and, often, numerical instability make linking process models impractical in many cases. A method for emulating the important connections between model input and forecasts, while propagating uncertainty, has the potential to provide a bridge between complicated numerical process models and the efficiency and stability needed for decision making. We explore this using a Bayesian network (BN) to emulate a groundwater flow model. We expand on previous approaches to validating a BN by calculating forecasting skill using cross validation of a groundwater model of Assateague Island in Virginia and Maryland, USA. This BN emulation was shown to capture the important groundwater-flow characteristics and uncertainty of the groundwater system because of its connection to island morphology and sea level. Forecast power metrics associated with the validation of multiple alternative BN designs guided the selection of an optimal level of BN complexity. Assateague island is an ideal test case for exploring a forecasting tool based on current conditions because the unique hydrogeomorphological variability of the island includes a range of settings indicative of past, current, and future conditions. The resulting BN is a valuable tool for exploring the response of groundwater conditions to sea level rise in decision support.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tiskumara, R.; Joshi, R. P., E-mail: ravi.joshi@ttu.edu; Mauch, D.
A model-based analysis of the steady-state, current-voltage response of semi-insulating 4H-SiC is carried out to probe the internal mechanisms, focusing on electric-field-driven effects. Relevant physical processes, such as multiple defects, repulsive potential barriers to electron trapping, band-to-trap impact ionization, and field-dependent detrapping, are comprehensively included. Results of our model match the available experimental data fairly well over orders of magnitude of variation in the current density. A number of important parameters are also extracted in the process through comparisons with available data. Finally, based on our analysis, the possible presence of holes in the samples can be discounted up to applied fields as high as ∼275 kV/cm.
Modelling tidewater glacier calving: from detailed process models to simple calving laws
NASA Astrophysics Data System (ADS)
Benn, Doug; Åström, Jan; Zwinger, Thomas; Todd, Joe; Nick, Faezeh
2017-04-01
The simple calving laws currently used in ice sheet models do not adequately reflect the complexity and diversity of calving processes. To be effective, calving laws must be grounded in a sound understanding of how calving actually works. We have developed a new approach to formulating calving laws, using a) the Helsinki Discrete Element Model (HiDEM) to explicitly model fracture and calving processes, and b) the full-Stokes continuum model Elmer/Ice to identify critical stress states associated with HiDEM calving events. A range of observed calving processes emerges spontaneously from HiDEM in response to variations in ice-front buoyancy and the size of subaqueous undercuts, and we show that HiDEM calving events are associated with characteristic stress patterns simulated in Elmer/Ice. Our results open the way to developing calving laws that properly reflect the diversity of calving processes, and provide a framework for a unified theory of the calving process continuum.
Computational Modeling in Structural Materials Processing
NASA Technical Reports Server (NTRS)
Meyyappan, Meyya; Arnold, James O. (Technical Monitor)
1997-01-01
High-temperature materials such as silicon carbide, a variety of nitrides, and ceramic matrix composites find use in the aerospace, automotive, and machine tool industries and in high-speed civil transport applications. Chemical vapor deposition (CVD) is widely used in processing such structural materials. Variations of CVD include deposition on substrates, coating of fibers, inside cavities and on complex objects, and infiltration within preforms, called chemical vapor infiltration (CVI). Our current knowledge of the process mechanisms, ability to optimize processes, and capacity to scale up for large-scale manufacturing are limited. In this regard, computational modeling of the processes is valuable, since a validated model can be used as a design tool. The effort is similar to traditional chemically reacting flow modeling, with emphasis on multicomponent diffusion, thermal diffusion, large sets of homogeneous reactions, and surface chemistry. In the case of CVI, models for pore infiltration are needed. In the present talk, examples of SiC, nitride, and boron deposition from the author's past work will be used to illustrate the utility of computational process modeling.
Modeling Nitrogen Oxides in the Lower Stratosphere
NASA Technical Reports Server (NTRS)
Kawa, S. Randy; Einaudi, Franco (Technical Monitor)
2001-01-01
This talk will focus on the status of current understanding (not a historical review) as regards modeling nitrogen oxides (NOy) in the lower stratosphere (LS). The presentation will be organized around three major areas of process understanding: 1) NOy sources, sinks, and transport to the LS, 2) NOy species partitioning, and 3) polar multiphase processes. In each area, process topics will be identified with an estimate of the degree of confidence associated with their representation in numerical models. Several exotic and/or speculative processes will also be discussed. Those topics associated with low confidence or knowledge gaps, weighted by their prospective importance in stratospheric chemical modeling, will be collected into recommendations for further study. Suggested approaches to further study will be presented for discussion.
Bridging paradigms: hybrid mechanistic-discriminative predictive models.
Doyle, Orla M; Tsaneva-Atansaova, Krasimira; Harte, James; Tiffin, Paul A; Tino, Peter; Díaz-Zuccarini, Vanessa
2013-03-01
Many disease processes are extremely complex and characterized by multiple stochastic processes interacting simultaneously. Current analytical approaches have included mechanistic models and machine learning (ML), which are often treated as orthogonal viewpoints. However, to facilitate truly personalized medicine, new perspectives may be required. This paper reviews the use of both mechanistic models and ML in healthcare as well as emerging hybrid methods, which are an exciting and promising approach for biologically based, yet data-driven advanced intelligent systems.
Pseudo and conditional score approach to joint analysis of current count and current status data.
Wen, Chi-Chung; Chen, Yi-Hau
2018-04-17
We develop a joint analysis approach for recurrent and nonrecurrent event processes subject to case I interval censorship, which are also known in literature as current count and current status data, respectively. We use a shared frailty to link the recurrent and nonrecurrent event processes, while leaving the distribution of the frailty fully unspecified. Conditional on the frailty, the recurrent event is assumed to follow a nonhomogeneous Poisson process, and the mean function of the recurrent event and the survival function of the nonrecurrent event are assumed to follow some general form of semiparametric transformation models. Estimation of the models is based on the pseudo-likelihood and the conditional score techniques. The resulting estimators for the regression parameters and the unspecified baseline functions are shown to be consistent with rates of square and cubic roots of the sample size, respectively. Asymptotic normality with closed-form asymptotic variance is derived for the estimator of the regression parameters. We apply the proposed method to a fracture-osteoporosis survey data to identify risk factors jointly for fracture and osteoporosis in elders, while accounting for association between the two events within a subject. © 2018, The International Biometric Society.
Equivalence of MAXENT and Poisson point process models for species distribution modeling in ecology.
Renner, Ian W; Warton, David I
2013-03-01
Modeling the spatial distribution of a species is a fundamental problem in ecology. A number of modeling methods have been developed, an extremely popular one being MAXENT, a maximum entropy modeling approach. In this article, we show that MAXENT is equivalent to a Poisson regression model and hence is related to a Poisson point process model, differing only in the intercept term, which is scale-dependent in MAXENT. We illustrate a number of improvements to MAXENT that follow from these relations. In particular, a point process model approach facilitates methods for choosing the appropriate spatial resolution, assessing model adequacy, and choosing the LASSO penalty parameter, all currently unavailable to MAXENT. The equivalence result represents a significant step in the unification of the species distribution modeling literature. Copyright © 2013, The International Biometric Society.
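The equivalence the authors describe can be illustrated numerically: a Poisson point-process model fit over gridded quadrature cells is just a Poisson regression with a log-area offset. The sketch below simulates counts from a known intensity and recovers the coefficients by iteratively reweighted least squares; all data and parameter values are made up for illustration, and this is not MAXENT's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spatial domain gridded into cells, with one environmental covariate.
n_cells = 400
x = rng.normal(size=n_cells)     # covariate, e.g. standardized elevation
area = np.full(n_cells, 0.5)     # equal cell areas (quadrature weights)

# Simulate presence counts from an inhomogeneous Poisson process with
# intensity lambda(s) = exp(b0 + b1 * x(s)); counts ~ Poisson(lambda * area).
b0_true, b1_true = -1.0, 0.8
lam = np.exp(b0_true + b1_true * x)
y = rng.poisson(lam * area)

# Fit the point-process model as a Poisson regression with a log-area
# offset, using iteratively reweighted least squares (IRLS).
X = np.column_stack([np.ones(n_cells), x])
beta = np.zeros(2)
for _ in range(25):
    eta = X @ beta + np.log(area)            # linear predictor with offset
    mu = np.exp(eta)                         # fitted means (always > 0)
    z = eta - np.log(area) + (y - mu) / mu   # working response minus offset
    beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))

print("estimated (b0, b1):", beta)  # should be close to (-1.0, 0.8)
```

The slope estimate is unaffected by the cell size, while the intercept absorbs the area scaling — the scale-dependence of the MAXENT intercept that the article points out.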
Utilizing DMAIC six sigma and evidence-based medicine to streamline diagnosis in chest pain.
Kumar, Sameer; Thomas, Kory M
2010-01-01
The purpose of this study was to quantify the difference between the current process flow model for a typical patient workup for chest pain and development of a new process flow model that incorporates DMAIC (define, measure, analyze, improve, control) Six Sigma and evidence-based medicine in a best practices model for diagnosis and treatment. The first stage, DMAIC Six Sigma, is used to highlight areas of variability and unnecessary tests in the current process flow for a patient presenting to the emergency department or physician's clinic with chest pain (also known as angina). The next stage, patient process flow, utilizes DMAIC results in the development of a simulated model that represents real-world variability in the diagnosis and treatment of a patient presenting with angina. The third and final stage is used to analyze the evidence-based output and quantify the factors that drive physician diagnosis accuracy and treatment, as well as review the potential for a broad national evidence-based database. Because of the collective expertise captured within the computer-oriented evidence-based model, the study has introduced an innovative approach to health care delivery by bringing expert-level care to any physician triaging a patient for chest pain anywhere in the world. Similar models can be created for other ailments as well, such as headache, gastrointestinal upset, and back pain. This updated way of looking at diagnosing patients stemming from an evidence-based best practice decision support model may improve workflow processes and cost savings across the health care continuum.
Reuse: A knowledge-based approach
NASA Technical Reports Server (NTRS)
Iscoe, Neil; Liu, Zheng-Yang; Feng, Guohui
1992-01-01
This paper describes our research in automating the reuse process through the use of application domain models. Application domain models are explicit formal representations of the application knowledge necessary to understand, specify, and generate application programs. Furthermore, they provide a unified repository for the operational structure, rules, policies, and constraints of a specific application area. In our approach, domain models are expressed in terms of a transaction-based meta-modeling language. This paper has described in detail the creation and maintenance of hierarchical structures. These structures are created through a process that includes reverse engineering of data models with supplementary enhancement from application experts. Source code is also reverse engineered but is not a major source of domain model instantiation at this time. In the second phase of the software synthesis process, program specifications are interactively synthesized from an instantiated domain model. These specifications are currently integrated into a manual programming process but will eventually be used to derive executable code with mechanically assisted transformations. This research is performed within the context of programming-in-the-large types of systems. Although our goals are ambitious, we are implementing the synthesis system in an incremental manner through which we can realize tangible results. The client/server architecture is capable of supporting 16 simultaneous X/Motif users and tens of thousands of attributes and classes. Domain models have been partially synthesized from five different application areas. As additional domain models are synthesized and additional knowledge is gathered, we will inevitably add to and modify our representation. However, our current experience indicates that it will scale and expand to meet our modeling needs.
Assessing Understanding of Biological Processes: Elucidating Students' Models of Meiosis.
ERIC Educational Resources Information Center
Kindfield, Ann C.
1994-01-01
Presents a meiosis reasoning problem that provides direct access to students' current models of chromosomes and meiosis. Also included in the article are tips for classroom implementation and a summary of the solution evaluation. (ZWH)
Flightdeck Automation Problems (FLAP) Model for Safety Technology Portfolio Assessment
NASA Technical Reports Server (NTRS)
Ancel, Ersin; Shih, Ann T.
2014-01-01
NASA's Aviation Safety Program (AvSP) develops and advances methodologies and technologies to improve air transportation safety. The Safety Analysis and Integration Team (SAIT) conducts a safety technology portfolio assessment (PA) to analyze the program content, to examine the benefits and risks of products with respect to program goals, and to support programmatic decision making. The PA process includes systematic identification of current and future safety risks as well as tracking several quantitative and qualitative metrics to ensure the program goals are addressing prominent safety risks accurately and effectively. One of the metrics within the PA process involves using quantitative aviation safety models to gauge the impact of the safety products. This paper demonstrates the role of aviation safety modeling by providing model outputs and evaluating a sample of portfolio elements using the Flightdeck Automation Problems (FLAP) model. The model enables not only ranking of the quantitative relative risk reduction impact of all portfolio elements, but also highlighting the areas with high potential impact via sensitivity and gap analyses in support of the program office. Although the model outputs are preliminary and products are notional, the process shown in this paper is essential to a comprehensive PA of NASA's safety products in the current program and future programs/projects.
Quegan, Shaun; Banwart, Steven A.
2017-01-01
Enhanced weathering (EW) aims to amplify a natural sink for CO2 by incorporating powdered silicate rock with high reactive surface area into agricultural soils. The goal is to achieve rapid dissolution of minerals and release of alkalinity with accompanying dissolution of CO2 into soils and drainage waters. EW could counteract phosphorus limitation and greenhouse gas (GHG) emissions in tropical soils, and soil acidification, a common agricultural problem studied with numerical process models over several decades. Here, we review the processes leading to soil acidification in croplands and how the soil weathering CO2 sink is represented in models. Mathematical models capturing the dominant processes and human interventions governing cropland soil chemistry and GHG emissions neglect weathering, while most weathering models neglect agricultural processes. We discuss current approaches to modelling EW and highlight several classes of model having the potential to simulate EW in croplands. Finally, we argue for further integration of process knowledge in mathematical models to capture feedbacks affecting both longer-term CO2 consumption and crop growth and yields. PMID:28381633
Painting a new picture of personalised medicine for diabetes.
McCarthy, Mark I
2017-05-01
The current focus on delivery of personalised (or precision) medicine reflects the expectation that developments in genomics, imaging and other domains will extend our diagnostic and prognostic capabilities, and enable more effective targeting of current and future preventative and therapeutic options. The clinical benefits of this approach are already being realised in rare diseases and cancer but the impact on management of complex diseases, such as type 2 diabetes, remains limited. This may reflect reliance on inappropriate models of disease architecture, based around rare, high-impact genetic and environmental exposures that are poorly suited to our emerging understanding of type 2 diabetes. This review proposes an alternative 'palette' model, centred on a molecular taxonomy that focuses on positioning an individual with respect to the major pathophysiological processes that contribute to diabetes risk and progression. This model anticipates that many individuals with diabetes will have multiple parallel defects that affect several of these processes. One corollary of this model is that research efforts should, at least initially, be targeted towards identifying and characterising individuals whose adverse metabolic trajectory is dominated by perturbation in a restricted set of processes.
NASA Astrophysics Data System (ADS)
Yiran, P.; Li, J.; von Salzen, K.; Dai, T.; Liu, D.
2014-12-01
Mineral dust is a significant contributor to the global and Asian aerosol burden. Currently, large uncertainties still exist in simulated aerosol processes in global climate models (GCMs), which lead to a diversity in dust mass loading and spatial distribution across GCM projections. In this study, satellite measurements from CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) and observed aerosol data from Asian stations are compared with modelled aerosol in the Canadian Atmospheric Global Climate Model (CanAM4.2). Both seasonal and annual variations in Asian dust distribution are investigated. The vertical profile of simulated aerosol in the troposphere is evaluated against CALIOP Level 3 products and locally observed extinction for dust and total aerosols. Physical processes in the GCM such as horizontal advection, vertical mixing, and dry and wet removal are analyzed according to the model simulation and available aerosol measurements. This work aims to improve current understanding of Asian dust transport and vertical exchange on a large scale, which may help to increase the accuracy of GCM simulations of aerosols.
Small aquarium fishes provide a model organism that recapitulates the development, physiology and specific disease processes present in humans without the many limitations of rodent-based models currently in use. Fish models offer advantages in cost, rapid life-cycles, and extern...
Modelling the growth process of porous aluminum oxide film during anodization
NASA Astrophysics Data System (ADS)
Aryslanova, E. M.; Alfimov, A. V.; Chivilikhin, S. A.
2015-11-01
Obtaining regular self-assembled structures has become important for the development of metamaterials and nanotechnology. One such structure is porous anodic alumina film, which consists of hexagonally packed cylindrical pores. In this work we model the anodization process, taking into account the influence of the aluminum and electrolyte layers on the growth rate of the aluminum oxide, as well as the effect of surface diffusion. As a result, our model yields the minimum distance between the centers of alumina pores at the beginning of the anodizing process.
Sinusoidal current and stress evolutions in lithium-ion batteries
NASA Astrophysics Data System (ADS)
Yang, Xiao-Guang; Bauer, Christoph; Wang, Chao-Yang
2016-09-01
Mechanical breakdown of graphite materials due to diffusion-induced stress (DIS) is a key aging mechanism of lithium-ion batteries. In this work, an electrochemical-thermal coupled model, together with a DIS model, is developed to study the DIS distribution across the anode thickness. Special attention is paid to the evolution of surface tangential stress (STS) during discharge for graphite at different locations of the anode. For the first time, we report that the STS, as well as the local current, at all locations of the anode evolves like a sinusoidal wave during discharge, with several crests and troughs. The staging behavior of the graphite active material, in particular the sharp change of the open-circuit potential (OCP) of graphite in the region between two plateaus, is found to be the root cause of the sinusoidal patterns of current and stress evolution. Furthermore, the effects of various parameters, such as starting state of charge, discharge C-rate and electrode thickness, on the current and stress evolutions are investigated.
The PEcAn Project: Accessible Tools for On-demand Ecosystem Modeling
NASA Astrophysics Data System (ADS)
Cowdery, E.; Kooper, R.; LeBauer, D.; Desai, A. R.; Mantooth, J.; Dietze, M.
2014-12-01
Ecosystem models play a critical role in understanding the terrestrial biosphere and forecasting changes in the carbon cycle; however, current forecasts have considerable uncertainty. The amount of data being collected and produced is increasing on a daily basis as we enter the "big data" era, but only a fraction of this data is being used to constrain models. Until we can improve the problems of model accessibility and model-data communication, none of these resources can be used to their full potential. The Predictive Ecosystem Analyzer (PEcAn) is an ecoinformatics toolbox and a set of workflows that wrap around an ecosystem model and manage the flow of information in and out of regional-scale TBMs. Here we present new modules developed in PEcAn to manage the processing of meteorological data, one of the primary driver dependencies for ecosystem models. The module downloads, reads, extracts, and converts meteorological observations to the Unidata Climate Forecast (CF) NetCDF community standard, a convention used for most climate forecast and weather models. The module also automates the conversion from NetCDF to model-specific formats, including basic merging, gap-filling, and downscaling procedures. PEcAn currently supports tower-based micrometeorological observations at Ameriflux and FluxNET sites, site-level CSV-formatted data, and regional and global reanalysis products such as the North American Regional Reanalysis and CRU-NCEP. The workflow is easily extensible to additional products and processing algorithms. These meteorological workflows have been coupled with the PEcAn web interface and now allow anyone to run multiple ecosystem models for any location on Earth by simply clicking on an intuitive Google-map based interface. This will allow users to more readily compare models to observations at those sites, leading to better calibration and validation.
Current work is extending these workflows to also process field, remotely-sensed, and historical observations of vegetation composition and structure. The processing of heterogeneous met and veg data within PEcAn is made possible using the Brown Dog cyberinfrastructure tools for unstructured data.
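As a toy illustration of the unit/name harmonization step such a met-conversion module performs before writing CF-compliant NetCDF: native field names and units are mapped onto CF standard names and canonical units. The input record and the `to_cf` helper are hypothetical; the real PEcAn module additionally handles downloading, NetCDF I/O, gap-filling, and downscaling.

```python
# Hypothetical raw record from a site-level CSV met file.
raw = {"Tair_degC": 21.3, "precip_mm_hr": 0.4, "rh_percent": 65.0}

def to_cf(record):
    """Map native names/units onto CF standard names and canonical units."""
    return {
        # CF standard name "air_temperature" uses kelvin
        "air_temperature": record["Tair_degC"] + 273.15,
        # CF "precipitation_flux" uses kg m-2 s-1;
        # 1 mm/hr of liquid water = 1 kg m-2 hr-1 = 1/3600 kg m-2 s-1
        "precipitation_flux": record["precip_mm_hr"] / 3600.0,
        # CF "relative_humidity" (commonly stored as percent)
        "relative_humidity": record["rh_percent"],
    }

cf = to_cf(raw)
print(cf["air_temperature"])  # -> 294.45
```

Standardizing on one naming/unit convention at ingest is what lets every downstream model-specific converter be written once against a single interface.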
Clarification process: Resolution of decision-problem conditions
NASA Technical Reports Server (NTRS)
Dieterly, D. L.
1980-01-01
A model of a general process which occurs in both decision-making and problem-solving tasks is presented. It is called the clarification model and is highly dependent on information flow. The model addresses the possible constraints of individual differences and experience in achieving success in resolving decision-problem conditions. As indicated, the application of the clarification process model is only necessary for certain classes of the basic decision-problem condition. With less complex decision-problem conditions, certain phases of the model may be omitted. The model may be applied across a wide range of decision-problem conditions. The model consists of two major components: (1) the five-phase prescriptive sequence (based on previous approaches to both concepts) and (2) the information manipulation function (which draws upon current ideas in the areas of information processing, computer programming, memory, and thinking). The two components are linked together to provide a structure that assists in understanding the process of resolving problems and making decisions.
NASA Technical Reports Server (NTRS)
Gallagher, Dennis
2018-01-01
Outline - Inner Magnetosphere Effects: Historical Background; Main regions and transport processes: Ionosphere, Plasmasphere, Plasma sheet, Ring current, Radiation belt; Geomagnetic Activity: Storms, Substorm; Models.
Diffusion Decision Model: Current Issues and History.
Ratcliff, Roger; Smith, Philip L; Brown, Scott D; McKoon, Gail
2016-04-01
There is growing interest in diffusion models to represent the cognitive and neural processes of speeded decision making. Sequential-sampling models like the diffusion model have a long history in psychology. They view decision making as a process of noisy accumulation of evidence from a stimulus. The standard model assumes that evidence accumulates at a constant rate during the second or two it takes to make a decision. This process can be linked to the behaviors of populations of neurons and to theories of optimality. Diffusion models have been used successfully in a range of cognitive tasks and as psychometric tools in clinical research to examine individual differences. In this review, we relate the models to both earlier and more recent research in psychology. Copyright © 2016. Published by Elsevier Ltd.
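The constant-rate noisy accumulation process described above can be sketched in a few lines: evidence starts at zero and drifts toward an upper (correct) or lower (error) boundary, with Gaussian noise added at each step. The drift, boundary, and noise values are illustrative assumptions, not parameters from the review.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ddm(drift, boundary, noise=1.0, dt=0.002, max_t=5.0, n=500):
    """Simulate evidence accumulation between boundaries at +boundary
    (correct response) and -boundary (error), starting from zero."""
    rts, choices = [], []
    for _ in range(n):
        x, t = 0.0, 0.0
        while abs(x) < boundary and t < max_t:
            # Euler-Maruyama step of the diffusion process
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        choices.append(1 if x >= boundary else 0)
    return np.array(rts), np.array(choices)

rts, choices = simulate_ddm(drift=1.0, boundary=1.0)
print(f"accuracy: {choices.mean():.2f}, mean decision time: {rts.mean():.2f} s")
```

Lowering the boundary in this sketch produces faster but less accurate decisions, the speed-accuracy trade-off that diffusion models are typically used to decompose.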
Jian Yang; Peter J. Weisberg; Thomas E. Dilts; E. Louise Loudermilk; Robert M. Scheller; Alison Stanton; Carl Skinner
2015-01-01
Strategic fire and fuel management planning benefits from detailed understanding of how wildfire occurrences are distributed spatially under current climate, and from predictive models of future wildfire occurrence given climate change scenarios. In this study, we fitted historical wildfire occurrence data from 1986 to 2009 to a suite of spatial point process (SPP)...
Human Factors in the Design and Evaluation of Air Traffic Control Systems
1995-04-01
the controller must filter through and decipher. Fortunately, some of this is done without the need for conscious attention; for example, a clear... components of an information-processing model? ... What is ... processing? ... support of our performance of daily activities, including our job tasks. Two models of attention currently in use assume that human infor...
NASA Astrophysics Data System (ADS)
Jelínek, P.; Karlický, M.; Van Doorsselaere, T.; Bárta, M.
2017-10-01
Using the FLASH code, which solves the full set of the 2D non-ideal (resistive) time-dependent magnetohydrodynamic (MHD) equations, we study processes during magnetic reconnection in a vertical, gravitationally stratified current sheet. We show that during these processes, which correspond to processes in solar flares, plasmoids are formed due to the tearing mode instability of the current sheet. These plasmoids move upward or downward along the vertical current sheet and some of them merge into larger plasmoids. We study the density and temperature structure of these plasmoids and their time evolution in detail. We found that during the merging of two plasmoids, the resulting larger plasmoid starts to oscillate with a period largely determined by L/c_A, where L is the size of the plasmoid and c_A is the Alfvén speed in the lateral parts of the plasmoid. In our model, L/c_A evaluates to ~25 s. Furthermore, the plasmoid moving downward merges with the underlying flare arcade, which causes oscillations of the arcade. In our model, the period of this arcade oscillation is ~35 s, which also corresponds to L/c_A, but here L means the length of the loop and c_A is the average Alfvén speed in the loop. We also show that the merging process of the plasmoid with the flare arcade is a complex process, as evidenced by the complex density and temperature structures of the oscillating arcade. Moreover, all these processes are associated with magnetoacoustic waves produced by the motion and merging of plasmoids.
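The period estimate T ~ L/c_A above can be checked with a one-line calculation. The plasmoid size and Alfvén speed below are illustrative round numbers chosen to land near the ~25 s scale reported in the abstract, not values taken from the paper:

```python
def oscillation_period(L_m, c_A_m_s):
    """Characteristic plasmoid oscillation period T ~ L / c_A, in seconds."""
    return L_m / c_A_m_s

# Illustrative values only: a 25 Mm plasmoid and a 1000 km/s Alfven speed
# give a period of the order reported in the paper.
T = oscillation_period(25e6, 1e6)  # 25.0 s
```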
Discharge processes and an electrical model of atmospheric pressure plasma jets in argon
NASA Astrophysics Data System (ADS)
Fang, Zhi; Shao, Tao; Yang, Jing; Zhang, Cheng
2016-01-01
In this paper, an atmospheric pressure plasma discharge in argon was generated using a needle-to-ring electrode configuration driven by a sinusoidal excitation voltage. The electric discharge processes and discharge characteristics were investigated by inspecting the voltage-current waveforms, Lissajous curves and light-emission images. The change in discharge mode with applied voltage amplitude was studied and characterised, and three modes of corona discharge, dielectric barrier discharge (DBD) and jet discharge were identified, which appeared in turn with increasing applied voltage and can be distinguished clearly from the measured voltage-current waveforms, light-emission images and the changing gradient of discharge power with applied voltage. Based on the experimental results and discharge mechanism analysis, an equivalent electrical model and the corresponding equivalent circuit for characterising the whole discharge process accurately were proposed, and the three discharge stages were characterised separately. A voltage-controlled current source (VCCS) associated with a resistance and a capacitance was used to represent the DBD stage, and the plasma plume and corona discharge were modelled by a variable capacitor in series with a variable resistor. Other factors that can influence the discharge, such as the lead and stray capacitance values of the circuit, were also considered in the proposed model. Contribution to the Topical Issue "Recent Breakthroughs in Microplasma Science and Technology", edited by Kurt Becker, Jose Lopez, David Staack, Klaus-Dieter Weltmann and Wei Dong Zhu.
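The capacitive branches of such an equivalent circuit carry a displacement current i = C dV/dt under the sinusoidal drive. A minimal sketch of that one term, with illustrative parameter values that are not taken from the paper:

```python
import math

def capacitive_current(C, V0, f, t):
    """Displacement current i = C * dV/dt through a capacitor
    driven by V(t) = V0 * sin(2*pi*f*t)."""
    return C * V0 * 2 * math.pi * f * math.cos(2 * math.pi * f * t)

# Illustrative values (not the paper's fit): 10 pF cell, 5 kV, 10 kHz drive.
i_peak = capacitive_current(10e-12, 5e3, 10e3, 0.0)  # peak, ~3.1 mA
```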
Transactions in domain-specific information systems
NASA Astrophysics Data System (ADS)
Zacek, Jaroslav
2017-07-01
A substantial number of current information system (IS) implementations are based on a transaction approach. In addition, most implementations are domain-specific (e.g. accounting IS, resource planning IS). Therefore, a generic transaction model is needed to build and verify domain-specific IS. The paper proposes a new transaction model for domain-specific ontologies. This model is based on a value-oriented business process modelling technique. The transaction model is formalized using Petri Net theory. The first part of the paper presents common business processes and analyses related to business process modeling. The second part defines the transactional model delimited by the REA enterprise ontology paradigm and introduces the states of the generic transaction model. The generic model proposal is defined and visualized with a Petri Net modelling tool. The third part shows an application of the generic transaction model. The last part of the paper concludes the results and discusses the practical usability of the generic transaction model.
1990-06-01
the "levels-of-processing" (Craik & Lockhart, 1972) and "working memory" (Baddeley & Hitch, 1974) approaches explain these phenomena better...of storing information in LTS. 2.2 The levels-of-processing framework Craik and Lockhart (1972), in a very influential paper, proposed what they...Cognitive theory. (Vol. 1). (pp. 151-171). Hillsdale, N.J.: Erlbaum. Craik, F.I.M. & Lockhart, R.S. (1972). Levels of processing: A framework
NASA Astrophysics Data System (ADS)
Maslovskaya, A. G.; Barabash, T. K.
2018-03-01
The paper presents the results of fractal and multifractal analysis of the polarization switching current in ferroelectrics under electron irradiation, which allows statistical memory effects in the dynamics of the domain structure to be estimated. A mathematical model of the formation of the electron beam-induced polarization current in ferroelectrics was suggested, taking into account the fractal nature of the domain structure dynamics. To realize the model, a computational scheme was constructed using a numerical approximation of the fractional differential equation. Evidence of the electron beam-induced polarization switching process in ferroelectrics was examined under variation of the control model parameters.
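Fractional differential equations of this kind are commonly discretized with the Grünwald-Letnikov approximation. The sketch below shows that generic scheme only; it is not the paper's specific discretization of the polarization-current equation:

```python
def gl_coeffs(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k),
    computed by the standard recursion w_k = w_{k-1} * (1 - (alpha+1)/k)."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_fractional_derivative(f_samples, alpha, h):
    """Approximate the order-alpha derivative at the last sample:
    D^alpha f(t_n) ~ h**(-alpha) * sum_k w_k * f(t_{n-k})."""
    n = len(f_samples) - 1
    w = gl_coeffs(alpha, n)
    return sum(w[k] * f_samples[n - k] for k in range(n + 1)) / h ** alpha

# Sanity check: for alpha = 1 the weights reduce to a backward difference,
# so the derivative of f(t) = t comes out as 1.
h = 0.01
samples = [k * h for k in range(101)]
d1 = gl_fractional_derivative(samples, 1.0, h)  # ~1.0
```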
Quantitative Analysis of the Efficiency of OLEDs.
Sim, Bomi; Moon, Chang-Ki; Kim, Kwon-Hyeon; Kim, Jang-Joo
2016-12-07
We present a comprehensive model for the quantitative analysis of factors influencing the efficiency of organic light-emitting diodes (OLEDs) as a function of the current density. The model takes into account the contribution made by the charge carrier imbalance, quenching processes, and optical design loss of the device arising from various optical effects including the cavity structure, location and profile of the excitons, effective radiative quantum efficiency, and out-coupling efficiency. Quantitative analysis of the efficiency can be performed with an optical simulation using material parameters and experimental measurements of the exciton profile in the emission layer and the lifetime of the exciton as a function of the current density. This method was applied to three phosphorescent OLEDs based on a single host, mixed host, and exciplex-forming cohost. The three factors (charge carrier imbalance, quenching processes, and optical design loss) were influential in different ways, depending on the device. The proposed model can potentially be used to optimize OLED configurations on the basis of an analysis of the underlying physical processes.
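The factorization described above can be sketched as a product of a charge-balance factor, an exciton-survival factor under quenching, a radiative efficiency, and an out-coupling efficiency. The simple roll-off term and all parameter values below are illustrative assumptions, standing in for the measured exciton lifetimes the paper actually uses:

```python
def external_quantum_efficiency(J, gamma, eta_rad, eta_out, J0):
    """Hedged sketch of a factorized OLED efficiency model:
    EQE(J) = charge balance * exciton survival * radiative eff. * out-coupling.
    The J0/(J0+J) term is a generic stand-in for current-dependent
    quenching; J0 is a hypothetical roll-off current density."""
    quench = J0 / (J0 + J)
    return gamma * quench * eta_rad * eta_out

# Illustrative parameters: balance 0.95, radiative eff. 0.9, out-coupling 0.25
eqe_low = external_quantum_efficiency(1.0, 0.95, 0.9, 0.25, 100.0)
eqe_high = external_quantum_efficiency(100.0, 0.95, 0.9, 0.25, 100.0)
# Efficiency rolls off with current density: eqe_high < eqe_low
```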
NASA Astrophysics Data System (ADS)
Aguilar-Arevalo, A. A.; Brown, B. C.; Bugel, L.; Cheng, G.; Church, E. D.; Conrad, J. M.; Dharmapalan, R.; Djurcic, Z.; Finley, D. A.; Ford, R.; Garcia, F. G.; Garvey, G. T.; Grange, J.; Huelsnitz, W.; Ignarra, C.; Imlay, R.; Johnson, R. A.; Karagiorgi, G.; Katori, T.; Kobilarcik, T.; Louis, W. C.; Mariani, C.; Marsh, W.; Mills, G. B.; Mirabal, J.; Moore, C. D.; Mousseau, J.; Nienaber, P.; Osmanov, B.; Pavlovic, Z.; Perevalov, D.; Polly, C. C.; Ray, H.; Roe, B. P.; Russell, A. D.; Shaevitz, M. H.; Spitz, J.; Stancu, I.; Tayloe, R.; Van de Water, R. G.; Wascko, M. O.; White, D. H.; Wickremasinghe, D. A.; Zeller, G. P.; Zimmerman, E. D.
2013-08-01
The largest sample ever recorded of ν̄_μ charged-current quasielastic (CCQE, ν̄_μ + p → μ⁺ + n) candidate events is used to produce the minimally model-dependent, flux-integrated double-differential cross section d²σ/(dT_μ dcosθ_μ) for ν̄_μ CCQE on a mineral oil target. This measurement exploits the large statistics of the MiniBooNE antineutrino mode sample and provides the most complete information on this process to date. In order to facilitate historical comparisons, the flux-unfolded total cross section σ(E_ν) and single-differential cross section dσ/dQ² on both mineral oil and carbon are also reported. The observed cross section is somewhat higher than the cross section predicted by a model assuming independently acting nucleons in carbon with canonical form factor values. The shape of the data is also discrepant with this model. These results have implications for intranuclear processes and can help constrain signal and background processes for future neutrino oscillation measurements.
Prospects for Precision Neutrino Cross Section Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, Deborah A.
2016-01-28
The need for precision cross section measurements is more urgent now than ever before, given the central role neutrino oscillation measurements play in the field of particle physics. The definition of precision is something worth considering, however. In order to build the best model for an oscillation experiment, cross section measurements should span a broad range of energies, neutrino interaction channels, and target nuclei. Precision might better be defined not in the final uncertainty associated with any one measurement but rather with the breadth of measurements that are available to constrain models. Current experience shows that models are better constrained by 10 measurements across different processes and energies with 10% uncertainties than by one measurement of one process on one nucleus with a 1% uncertainty. This article describes the current status of and future prospects for the field of precision cross section measurements considering the metric of how many processes, energies, and nuclei have been studied.
The NASA Commercial Crew Program (CCP) Mission Assurance Process
NASA Technical Reports Server (NTRS)
Canfield, Amy
2016-01-01
In 2010, NASA established the Commercial Crew Program in order to provide human access to the International Space Station and low Earth orbit via the commercial (non-governmental) sector. A particular challenge for NASA has been how to determine that a commercial provider's transportation system complies with Programmatic safety requirements. The process used in this determination is the Safety Technical Review Board, which reviews and approves provider-submitted Hazard Reports. One significant product of the review is a set of hazard control verifications. In past NASA programs, 100 percent of these safety-critical verifications were typically confirmed by NASA. The traditional Safety and Mission Assurance (SMA) model does not support the nature of the Commercial Crew Program. To that end, NASA SMA is implementing a Risk Based Assurance (RBA) process to determine which hazard control verifications require NASA authentication. Additionally, a Shared Assurance Model is being developed to use the available resources efficiently in executing the verifications. This paper will describe the evolution of the CCP Mission Assurance process from the beginning of the Program to its current incarnation. Topics to be covered include a short history of the CCP; the development of the Programmatic mission assurance requirements; the current safety review process; and a description of the RBA process and its products, ending with a description of the Shared Assurance Model.
A Tri-network Model of Human Semantic Processing
Xu, Yangwen; He, Yong; Bi, Yanchao
2017-01-01
Humans process the meaning of the world via both verbal and nonverbal modalities. It has been established that widely distributed cortical regions are involved in semantic processing, yet the global wiring pattern of this brain system has not been considered in the current neurocognitive semantic models. We review evidence from the brain-network perspective, which shows that the semantic system is topologically segregated into three brain modules. Revisiting previous region-based evidence in light of these new network findings, we postulate that these three modules support multimodal experiential representation, language-supported representation, and semantic control. A tri-network neurocognitive model of semantic processing is proposed, which generates new hypotheses regarding the network basis of different types of semantic processes. PMID:28955266
Introducing GHOST: The Geospace/Heliosphere Observation & Simulation Tool-kit
NASA Astrophysics Data System (ADS)
Murphy, J. J.; Elkington, S. R.; Schmitt, P.; Wiltberger, M. J.; Baker, D. N.
2013-12-01
Simulation models of the heliospheric and geospace environments can provide key insights into the geoeffective potential of solar disturbances such as coronal mass ejections and high-speed solar wind streams. Advanced post-processing of the results of these simulations greatly enhances the utility of these models for scientists and other researchers. Currently, no supported centralized tool exists for performing these processing tasks. With GHOST, we introduce a toolkit for the ParaView visualization environment that provides a centralized suite of tools for space physics post-processing. Building on the work of the Center for Integrated Space Weather Modeling (CISM) Knowledge Transfer group, GHOST is an open-source tool suite for ParaView. The tool-kit plugin currently provides readers for LFM and Enlil data sets, along with automated tools for data comparison with NASA's CDAWeb database. As work progresses, many additional tools will be added and, through open-source collaboration, we hope to add readers for additional model types, as well as any additional tools deemed necessary by the scientific public. The ultimate goal of this work is to provide a complete Sun-to-Earth model analysis toolset.
NASA Astrophysics Data System (ADS)
Zhang, Daojie; Nastac, Laurentiu
2016-12-01
In the present study, 6061- and A356-based nano-composites are fabricated using ultrasonic stirring technology (UST) in a coreless induction furnace. SiC nanoparticles are used as the reinforcement. The nanoparticles are added into the molten metal and then dispersed by ultrasonic cavitation and acoustic streaming, assisted by electromagnetic stirring. The applied UST parameters in the current experiments are used to validate a recently developed magneto-hydro-dynamics (MHD) model, which is capable of modeling cavitation and nanoparticle dispersion during UST processing. The MHD model accounts for turbulent fluid flow, heat transfer and solidification, and the electromagnetic field, as well as the complex interaction between the nanoparticles and both the molten and solidified alloys, using ANSYS Maxwell and ANSYS Fluent. Molecular dynamics (MD) simulations are conducted to analyze the complex interactions between the nanoparticles and the liquid/solid interface. The current modeling results demonstrate that a strong flow can disperse the nanoparticles relatively well during melting and solidification. MD simulation results show that ultrafine particles (10 nm) will be engulfed by the solidification front instead of being pushed, which is beneficial for nano-dispersion.
Oxygen production System Models for Lunar ISRU
NASA Technical Reports Server (NTRS)
Santiago-Maldonado, Edgardo
2007-01-01
In-Situ Resource Utilization (ISRU) seeks to make human space exploration feasible by using available resources from a planet or the Moon to produce consumables, parts, and structures that would otherwise be brought from Earth. Producing these in situ reduces the mass that must be launched and thereby allows more payload mass for each mission. The production of oxygen from lunar regolith, for life support and propellant, is one of the tasks being studied under ISRU. NASA is currently funding three processes that have shown technical merit for the production of oxygen from regolith: Molten Salt Electrolysis, Hydrogen Reduction of Ilmenite, and Carbothermal Reduction. The ISRU program is currently developing system models of the abovementioned processes to: (1) help NASA in the evaluation process to select the most cost-effective and efficient process for further prototype development, (2) identify key parameters, (3) optimize the oxygen production process, (4) provide estimates of energy and power requirements, mass and volume of the system, oxygen production rate, mass of regolith required, mass of consumables, and other important parameters, and (5) integrate into the overall end-to-end ISRU system model, which could in turn be integrated with mission architecture models. The oxygen production system model is divided into modules that represent unit operations (e.g., reactor, water electrolyzer, heat exchanger). Each module is modeled theoretically using Excel and Visual Basic for Applications (VBA), and will be validated using experimental data from on-going laboratory work. This modularity (plug-and-play) feature of each unit operation allows the use of the same model in different oxygen production system simulations, yielding comparable results. In this presentation, preliminary results for mass, power, and volume will be presented along with a brief description of the oxygen production system model.
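The plug-and-play unit-operation idea can be sketched by chaining two modules for the hydrogen-reduction route: a reactor following FeTiO3 + H2 → Fe + TiO2 + H2O and an electrolyzer splitting the product water. The regolith ilmenite fraction and conversion below are illustrative assumptions, not the model's values:

```python
M_O2 = 31.998e-3       # molar mass of O2, kg/mol
M_ILMENITE = 151.7e-3  # molar mass of FeTiO3, kg/mol

def ilmenite_reduction(regolith_kg, ilmenite_frac, conversion):
    """Reactor module: FeTiO3 + H2 -> Fe + TiO2 + H2O.
    Returns mol of water produced; fraction and conversion are illustrative."""
    mol_ilmenite = regolith_kg * ilmenite_frac / M_ILMENITE
    return mol_ilmenite * conversion

def water_electrolysis(mol_h2o):
    """Electrolyzer module: H2O -> H2 + 1/2 O2. Returns kg of O2."""
    return 0.5 * mol_h2o * M_O2

# 100 kg regolith, 10% ilmenite, 90% conversion -> just under 1 kg of O2
o2_kg = water_electrolysis(ilmenite_reduction(100.0, 0.10, 0.9))
```

Chaining module outputs to module inputs this way is what lets the same unit-operation model be reused across different production-system simulations.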
NASA Astrophysics Data System (ADS)
Kapitan, Loginn
This research created a new model which provides an integrated approach to planning the effective selection and employment of airborne sensor systems in response to accidental or intentional chemical vapor releases. The approach taken was to use systems engineering and decision analysis methods to construct a model architecture which produced a modular structure for integrating both new and existing components into a logical procedure to assess the application of airborne sensor systems to chemical vapor hazards. The resulting integrated process model includes an internal aggregation model which allows differentiation among alternative airborne sensor systems. Both models were developed and validated by experts and demonstrated using appropriate hazardous chemical release scenarios. The resulting prototype integrated process model fills a current gap in capability, allowing improved planning, training and exercising for HAZMAT teams and first responders when considering the selection and employment of airborne sensor systems. Through the research process, insights were generated into the current response structure and how current airborne capability may be most effectively used. Furthermore, the prototype system is tailorable for local, state, and federal application, and can potentially be modified to help evaluate investments in new airborne sensor technology and systems. Better planning, training and preparedness exercising hold the prospect of effective application of airborne assets for improved response to large-scale chemical release incidents. Improved response will result in fewer casualties and lives lost, reduced economic impact, and increased protection of critical infrastructure when faced with accidental or intentional terrorist releases of hazardous industrial chemicals.
With the prospect of more airborne sensor systems becoming available, this prototype system integrates existing and new tools into an effective process for the selection and employment of airborne sensors to better plan, train and exercise ahead of potential chemical release events.
Kumarapeli, P; De Lusignan, S; Ellis, T; Jones, B
2007-03-01
The Primary Care Data Quality programme (PCDQ) is a quality-improvement programme which processes routinely collected general practice computer data. Patient data collected from a wide range of different brands of clinical computer systems are aggregated, processed, and fed back to practices in an educational context to improve the quality of care. Process modelling is a well-established approach used to gain understanding, systematically appraise, and identify areas of improvement of a business process. Unified Modelling Language (UML) is a general-purpose modelling technique used for this purpose. We used UML to appraise the PCDQ process to see if the efficiency and predictability of the process could be improved. Activity analysis and thinking-aloud sessions were used to collect data to generate UML diagrams. The UML model highlighted the sequential nature of the current process as a barrier to efficiency gains. It also identified the uneven distribution of process controls, lack of symmetric communication channels, critical dependencies among processing stages, and failure to implement all the lessons learned in the piloting phase. It also suggested that improved structured reporting at each stage - especially from the pilot phase - together with parallel processing of data and correctly positioned process controls should improve the efficiency and predictability of research projects. Process modelling provided a rational basis for the critical appraisal of a clinical data processing system; its potential may be underutilized within health care.
A Lumped Computational Model for Sodium Sulfur Battery Analysis
NASA Astrophysics Data System (ADS)
Wu, Fan
Due to the cost of materials and time-consuming testing procedures, development of new batteries is a slow and expensive practice. The purpose of this study is to develop a computational model and assess the capabilities of such a model designed to aid in the design process and control of sodium sulfur batteries. To this end, a transient lumped computational model derived from an integral analysis of the transport of species, energy and charge throughout the battery has been developed. The computation is coupled with Faraday's law, and solutions for the species concentrations, electrical potential and current are produced in a time-marching fashion. Properties required for solving the governing equations are calculated and updated as a function of time based on the composition of each control volume. The proposed model is validated against multi-dimensional simulations and experimental results from the literature, and simulation results using the proposed model are presented and analyzed. The computational model and electrochemical model used to solve the equations for the lumped model are compared with similar ones found in the literature. The results obtained from the current model compare favorably with those from experiments and other models.
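The time-marching coupling with Faraday's law can be sketched as a single explicit update of a species amount, dn/dt = I/(zF). This is a generic one-step illustration, not the paper's full multi-species scheme, and the current, valence, and step size are illustrative:

```python
F = 96485.33  # Faraday constant, C/mol

def faraday_step(n_mol, I_amp, z, dt):
    """One explicit time-marching update of a species amount in a
    control volume via Faraday's law: dn/dt = I / (z * F)."""
    return n_mol + I_amp * dt / (z * F)

# Illustrative step: 10 A through a 2-electron reaction for 1 s
# converts about 5.2e-5 mol of the species.
n_new = faraday_step(1.0, 10.0, 2, 1.0)
```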
Development of a New VLBI Data Analysis Software
NASA Technical Reports Server (NTRS)
Bolotin, Sergei; Gipson, John M.; MacMillan, Daniel S.
2010-01-01
We present an overview of a new VLBI analysis software under development at NASA GSFC. The new software will replace CALC/SOLVE and many related utility programs. It will have the capabilities of the current system as well as incorporate new models and data analysis techniques. In this paper we give a conceptual overview of the new software. We formulate the main goals of the software. The software should be flexible and modular to implement models and estimation techniques that currently exist or will appear in the future. On the other hand, it should be reliable and of production quality for processing standard VLBI sessions. Also, it needs to be capable of processing observations from a fully deployed network of VLBI2010 stations in a reasonable time. We describe the software development process and outline the software architecture.
NASA Astrophysics Data System (ADS)
Buzulukova, Natalia; Fok, Mei-Ching; Glocer, Alex; Moore, Thomas E.
2013-04-01
We report studies of the storm-time ring current and its influence on the radiation belts, plasmasphere and global magnetospheric dynamics. The near-Earth space environment is described by multiscale physics that reflects a variety of processes and conditions that occur in magnetospheric plasma. For a successful description of such a plasma, a complex solution is needed which allows multiple physics domains to be described using multiple physical models. A key population of the inner magnetosphere is the ring current plasma. Ring current dynamics affects magnetic and electric fields in the entire magnetosphere, the distribution of cold ionospheric plasma (plasmasphere), and radiation belt particles. To study the electrodynamics of the inner magnetosphere, we present an MHD model (BATSRUS code) coupled with an ionospheric solver for the electric field and with a ring current-radiation belt model (CIMI code). The model will be used as a tool to reveal details of the coupling between different regions of the Earth's magnetosphere. A model validation will also be presented based on comparison with data from the THEMIS, POLAR, GOES, and TWINS missions. INVITED TALK
USAF Logistics Process Optimization Study for the Aircraft Asset Sustainment Process. Volume 1.
1998-12-31
solely to have a record that could be matched with the CMOS receipt data. (This problem is caused by DLA systems that currently do not populate CMOS with...unable to obtain passwords to the Depot D035 systems. Figure 16 shows daily savings as of 30 September 1998 (current time frame) and projects savings...Engineering, modeling, and systems/software development company LAN Local Area Network LFA Large Frame Aircraft LMA Logistics Management Agency LMR
Thinking outside the channel: modeling nitrogen cycling in networked river ecosystems
Ashley M. Helton; Geoffrey C. Poole; Judy L. Meyer; Wilfred M. Wollheim; Bruce J. Peterson; Patrick J. Mulholland; Emily S. Bernhardt; Jack A. Stanford; Clay Arango; Linda R. Ashkenas; Lee W. Cooper; Walter K. Dodds; Stanley V. Gregory; Robert O. Hall; Stephen K. Hamilton; Sherri L. Johnson; William H. McDowell; Jody D. Potter; Jennifer L. Tank; Suzanne M. Thomas; H. Maurice Valett; Jackson R. Webster; Lydia Zeglin
2011-01-01
Agricultural and urban development alters nitrogen and other biogeochemical cycles in rivers worldwide. Because such biogeochemical processes cannot be measured empirically across whole river networks, simulation models are critical tools for understanding river-network biogeochemistry. However, limitations inherent in current models restrict our ability to simulate...
A Stable Clock Error Model Using Coupled First and Second Order Gauss-Markov Processes
NASA Technical Reports Server (NTRS)
Carpenter, Russell; Lee, Taesul
2008-01-01
Long data outages may occur in applications of global navigation satellite system technology to orbit determination for missions that spend significant fractions of their orbits above the navigation satellite constellation(s). Current clock error models based on the random walk idealization may not be suitable in these circumstances, since the covariance of the clock errors may become large enough to overflow flight computer arithmetic. A model that is stable, but which approximates the existing models over short time horizons is desirable. A coupled first- and second-order Gauss-Markov process is such a model.
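The stability property described above can be illustrated with the first-order half of such a model: a first-order Gauss-Markov process x_{k+1} = φ x_k + w_k with φ = exp(-Δt/τ) has a bounded steady-state variance q/(1-φ²), whereas a random walk (φ = 1) grows without limit. This is a generic sketch with illustrative parameters, not the paper's coupled first- and second-order formulation:

```python
import math
import random

def simulate_fogm(tau, q, dt, steps, seed=0):
    """Simulate a first-order Gauss-Markov process
    x_{k+1} = phi * x_k + w_k, with phi = exp(-dt/tau) and
    w_k ~ N(0, q). Unlike a random walk, it stays bounded."""
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)
    x = 0.0
    for _ in range(steps):
        x = phi * x + rng.gauss(0.0, math.sqrt(q))
    return x

def fogm_steady_state_var(tau, q, dt):
    """Steady-state variance q / (1 - phi^2); finite whenever phi < 1,
    which is what keeps the covariance from overflowing over long outages."""
    phi = math.exp(-dt / tau)
    return q / (1.0 - phi * phi)

# Illustrative: 100 s correlation time, 1 s steps -> variance ~5e-3, bounded
var_ss = fogm_steady_state_var(100.0, 1e-4, 1.0)
x_end = simulate_fogm(100.0, 1e-4, 1.0, 1000)
```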
Statistical error model for a solar electric propulsion thrust subsystem
NASA Technical Reports Server (NTRS)
Bantell, M. H.
1973-01-01
The solar electric propulsion thrust subsystem statistical error model was developed as a tool for investigating the effects of thrust subsystem parameter uncertainties on navigation accuracy. The model is currently being used to evaluate the impact of electric engine parameter uncertainties on navigation system performance for a baseline mission to Encke's Comet in the 1980s. The data given represent the next generation in statistical error modeling for low-thrust applications. Principal improvements include the representation of thrust uncertainties and random process modeling in terms of random parametric variations in the thrust vector process for a multi-engine configuration.
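The idea of random parametric variations in the thrust vector for a multi-engine configuration can be sketched as a small Monte Carlo: each engine draws an independent magnitude error and pointing error, and the net axial thrust is summed. The engine count, nominal thrust, and uncertainty values below are illustrative assumptions, not the model's calibrated statistics:

```python
import math
import random

def sample_net_axial_thrust(nominal_T, sigma_frac, sigma_angle_rad,
                            n_engines, rng):
    """One Monte Carlo draw of the net axial thrust of a multi-engine
    configuration, with independent random parametric variations in
    each engine's thrust magnitude and pointing angle."""
    total = 0.0
    for _ in range(n_engines):
        T = nominal_T * (1.0 + rng.gauss(0.0, sigma_frac))
        theta = rng.gauss(0.0, sigma_angle_rad)
        total += T * math.cos(theta)  # axial component only
    return total

# Illustrative: three 0.1 N engines, 1% magnitude and 5 mrad pointing errors
rng = random.Random(1)
samples = [sample_net_axial_thrust(0.1, 0.01, 0.005, 3, rng)
           for _ in range(1000)]
mean_T = sum(samples) / len(samples)  # close to the nominal 0.3 N
```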
Self-consistent radiation-based simulation of electric arcs: II. Application to gas circuit breakers
NASA Astrophysics Data System (ADS)
Iordanidis, A. A.; Franck, C. M.
2008-07-01
An accurate and robust method for radiative heat transfer simulation for arc applications was presented in the previous paper (part I). In this paper a self-consistent mathematical model based on computational fluid dynamics and a rigorous radiative heat transfer model is described. The model is applied to simulate switching arcs in high voltage gas circuit breakers. The accuracy of the model is proven by comparison with experimental data for all arc modes. The ablation-controlled arc model is used to simulate high current PTFE arcs burning in cylindrical tubes. Model accuracy for the lower current arcs is evaluated using experimental data on the axially blown SF6 arc in steady state and arc resistance measurements close to current zero. The complete switching process with the arc going through all three phases is also simulated and compared with the experimental data from an industrial circuit breaker switching test.
Formal Definition of Measures for BPMN Models
NASA Astrophysics Data System (ADS)
Reynoso, Luis; Rolón, Elvira; Genero, Marcela; García, Félix; Ruiz, Francisco; Piattini, Mario
Business process models are currently attaining more relevance, and more attention is therefore being paid to their quality. This situation led us to define a set of measures for the understandability of BPMN models, presented in a previous work. We focus on understandability since a model must be well understood before any changes are made to it. These measures were originally defined informally in natural language. As is well known, natural language is ambiguous and may lead to misunderstandings and a misinterpretation of the concepts captured by a measure and the way in which the measure value is obtained. This has motivated us to provide a formal definition of the proposed measures using OCL (Object Constraint Language) upon the BPMN (Business Process Modeling Notation) metamodel presented in this paper. The main advantages and lessons learned (obtained both from the current work and from previous work on the formal definition of other measures) are also summarized.
The design of multi-core DSP parallel model based on message passing and multi-level pipeline
NASA Astrophysics Data System (ADS)
Niu, Jingyu; Hu, Jian; He, Wenjing; Meng, Fanrong; Li, Chuanrong
2017-10-01
Currently, the design of embedded signal processing systems is often based on a specific application, but this approach is not conducive to the rapid development of signal processing technology. In this paper, a parallel processing model architecture based on a multi-core DSP platform is designed, mainly suited for complex algorithms composed of different modules. This model combines the ideas of multi-level pipeline parallelism and message passing, and draws on the advantages of the mainstream multi-core DSP models (the Master-Slave model and the Data Flow model), so that it achieves better performance. This paper uses a three-dimensional image generation algorithm to validate the efficiency of the proposed model by comparison with the Master-Slave and the Data Flow models.
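The combination of pipeline parallelism and message passing can be sketched with queues between stages, each stage running concurrently and forwarding its result downstream. Threads stand in for DSP cores here, and the three toy stage functions are illustrative, not the paper's image-generation modules:

```python
import queue
import threading

def stage(in_q, out_q, fn):
    """One pipeline stage: receive a message, process it, pass it on.
    A None message is the shutdown signal, forwarded downstream."""
    while True:
        item = in_q.get()
        if item is None:
            out_q.put(None)
            return
        out_q.put(fn(item))

# Illustrative three-stage pipeline; each stage could be a different module
q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
stage_specs = [(q0, q1, lambda x: x + 1),
               (q1, q2, lambda x: x * 2),
               (q2, q3, lambda x: x - 3)]
threads = [threading.Thread(target=stage, args=spec) for spec in stage_specs]
for t in threads:
    t.start()
for x in [1, 2, 3]:
    q0.put(x)
q0.put(None)  # shut the pipeline down after the data
results = []
while (r := q3.get()) is not None:
    results.append(r)
for t in threads:
    t.join()
# results == [1, 3, 5], i.e. (x + 1) * 2 - 3 for each input
```

Because every stage works on a different message at the same time, throughput is set by the slowest stage rather than by the sum of all stages.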
The influence of decadal-scale climatic events on the transport of larvae.
NASA Astrophysics Data System (ADS)
Rasmuson, L. K.; Edwards, C. A.; Shanks, A.
2016-02-01
Understanding the processes that influence larval transport remains an important, yet difficult, task. This is especially true as more studies demonstrate that biological and physical oceanographic processes vary at long (e.g. decadal+) time scales. We used individual-based biophysical models to study transport of Dungeness crab larvae (the most economically valuable fishery on the West Coast of the continental United States) over a 10-year period, during both positive and negative phases of the Pacific decadal oscillation (PDO). A physical oceanographic model of the California Current was developed using the Regional Ocean Modeling System with 1/30-degree resolution. Measured and modeled PDO indices were positively correlated. The biological model was implemented using the Lagrangian Transport Model, and modified to incorporate temperature-dependent development and stage-specific behaviors. Super-individuals were used to scale production and incorporate mortality. Models were validated using time series statistics to compare measured and modeled daily recruitment. More larvae recruited, in both our measured and modeled time series, during negative PDOs. Our work suggests larvae exhibit a daily vertical migration to or almost to the bottom. During positive PDO years larvae were competent to settle earlier than in negative PDO years; however, pelagic larval durations did not differ. The southern end of the population appears to be a sink population, which likely explains the decline in commercial catch. Ultimately, the population is much more demographically closed than previously thought. We hypothesize that the stronger flow in the California Current during negative PDOs enhances membership of larvae in the current. Further, migrating almost to the bottom causes larvae to enter the benthic boundary layer on the continental shelf and the California Undercurrent on the continental slope, both of which decrease net alongshore advection.
These factors result in a higher number of larvae completing their larval phase within the California Current. We hypothesize that Dungeness crabs have evolved to complete their larval phase within the oceanographic context of the California Current, and that differences in the oceanography of the Alaska Current may explain the difficulties in managing fisheries there.
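The two biological ingredients highlighted above, temperature-dependent larval development and a near-bottom diel vertical migration, can be sketched in a few lines. Everything here is a hypothetical illustration: the functional forms and all parameter values are invented, not taken from the study.

```python
def stage_duration_days(temp_c, a=120.0, b=4.0):
    # Belehradek-style rule: development accelerates as water warms.
    # a and b are hypothetical constants, not fitted values.
    return a / max(temp_c - b, 0.1)

def daily_depth(hour, surface_depth=5.0, bottom_depth=80.0):
    # Diel vertical migration: near the bottom by day, shallower at night.
    return bottom_depth if 6 <= hour < 18 else surface_depth

def apply_mortality(weight, daily_mortality=0.05):
    # A super-individual carries a weight (the number of larvae it
    # represents) that a constant daily mortality rate reduces.
    return weight * (1.0 - daily_mortality)
```

In a full individual-based model these rules would be evaluated per particle per time step while the ocean model advects the particles; this sketch only shows the shape of the behavioral sub-models.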
Improving operational anodising process performance using simulation approach
NASA Astrophysics Data System (ADS)
Liong, Choong-Yeun; Ghazali, Syarah Syahidah
2015-10-01
The use of aluminium is very widespread, especially in the transportation, electrical and electronics, architectural, automotive and engineering sectors. The anodizing process is therefore important for making aluminium durable, attractive and weather resistant. This research focuses on anodizing operations in the manufacture and supply of aluminium extrusions. The data required for developing the model were collected through observations and interviews conducted during the study. To study the current system, the anodizing line was modeled using Arena 14.5 simulation software. The line consists of five main processes, namely degreasing, etching, desmutting, anodizing and sealing, together with 16 other processes. The results were analyzed to identify bottlenecks and to propose improvement methods that could be implemented on the original model. Comparison of the improvement methods showed that productivity could be increased by reallocating workers and reducing loading time.
Modeling of the Electric Characteristics of Solar Cells
NASA Astrophysics Data System (ADS)
Logan, Benjamin; Tzolov, Marian
The purpose of a solar cell is to convert solar energy, by means of photovoltaic action, into a sustainable electrical current that produces usable electricity. The electrical characteristics of solar cells can be modeled to better understand how they function. As an electrical device, a solar cell can be conveniently represented by an equivalent circuit with an ideal diode, an ideal current source for the photovoltaic action, a shunt resistor for recombination, a series resistor to account for contact resistance, and a resistor modeling external power consumption. The values of these elements have been modified to model dark and illuminated states. Fitting the model to the experimental current-voltage characteristics makes it possible to determine the values of the equivalent circuit elements. Comparing the open-circuit voltage, short-circuit current, and shunt resistance can reveal factors such as the amount of recombination and thereby diagnose problems in solar cells. The many measurable characteristics of a solar cell give guidance for design when they are related to microscopic processes.
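A minimal numerical sketch of the single-diode equivalent circuit described in the abstract, solved by damped fixed-point iteration because the series resistance makes the current implicit. All element values are illustrative, not fitted to any measured cell.

```python
import math

def diode_current(v, i_ph=3.0, i_0=1e-9, n=1.5, rs=0.02, rsh=50.0,
                  vt=0.02585):
    """Terminal current at voltage v for the single-diode circuit:
    I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh.
    Valid for voltages below open circuit."""
    i = i_ph
    for _ in range(500):
        vd = v + i * rs                      # voltage across the diode
        f = i_ph - i_0 * (math.exp(vd / (n * vt)) - 1.0) - vd / rsh
        i += 0.5 * (f - i)                   # damped fixed-point update
    return i

# The short-circuit current (v = 0) is slightly below the photocurrent
# because some current is lost through the shunt resistor.
```

Fitting these five parameters to a measured current-voltage curve, as the abstract describes, would wrap this evaluation in a least-squares optimizer.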
NASA Astrophysics Data System (ADS)
Pappas, C.
2017-12-01
Terrestrial ecosystem processes respond differently to hydrometeorological variability across timescales, and so does our scientific understanding of the underlying mechanisms. Process-based modeling of ecosystem functioning is therefore challenging, especially when long-term predictions are envisioned. Here we analyze the statistical properties of hydrometeorological and ecosystem variability, i.e., the variability of ecosystem processes related to vegetation carbon dynamics, from hourly to decadal timescales. We examine 23 extra-tropical forest sites covering different climatic zones and vegetation characteristics. Micrometeorological and reanalysis data of precipitation, air temperature, shortwave radiation and vapor pressure deficit are used to describe hydrometeorological variability. Ecosystem variability is quantified using long-term eddy covariance flux data of hourly net ecosystem exchange of CO2 between land surface and atmosphere, monthly remote sensing vegetation indices, annual tree-ring widths and above-ground biomass increment estimates. We find that across sites and timescales ecosystem variability is confined within a hydrometeorological envelope that describes the range of variability of the available resources, i.e., water and energy. Furthermore, ecosystem variability demonstrates long-term persistence, highlighting ecological memory and slow ecosystem recovery rates after disturbances. We derive an analytical model, combining deterministic harmonics and stochastic processes, that represents major mechanisms and uncertainties and mimics the observed pattern of hydrometeorological and ecosystem variability. This stochastic framework offers a parsimonious and mathematically tractable approach for modelling ecosystem functioning and for understanding its response and resilience to environmental changes.
Furthermore, this framework reflects well the observed ecological memory, an inherent property of ecosystem functioning that is currently not captured by simulation results with process-based models. Our analysis offers a perspective for terrestrial ecosystem modelling, combining current process understanding with stochastic methods, and paves the way for new model-data integration opportunities in Earth system sciences.
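As a toy version of the model structure the abstract describes (deterministic harmonics plus a persistent stochastic component), one might write something like the following. AR(1) noise is used here only as a simple stand-in for the long-memory process; the function name and all parameter values are illustrative.

```python
import math
import random

def simulate_series(n_days=3650, amp=5.0, phi=0.9, sigma=1.0, seed=1):
    """Daily series = seasonal harmonic + persistent AR(1) anomaly."""
    rng = random.Random(seed)
    x, series = 0.0, []
    for t in range(n_days):
        harmonic = amp * math.sin(2.0 * math.pi * t / 365.25)
        x = phi * x + rng.gauss(0.0, sigma)   # slowly decaying memory
        series.append(harmonic + x)
    return series
```

In the paper's framework the stochastic term has genuine long-range persistence (a Hurst-type process), which decays much more slowly than AR(1); this sketch only shows how the deterministic and stochastic parts combine.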
Analysis of Co-Tunneling Current in Fullerene Single-Electron Transistor
NASA Astrophysics Data System (ADS)
KhademHosseini, Vahideh; Dideban, Daryoosh; Ahmadi, MohammadTaghi; Ismail, Razali
2018-05-01
Single-electron transistors (SETs) are nanodevices that can be used in low-power electronic systems. They operate on the Coulomb blockade effect, a phenomenon that controls single-electron tunneling and switches the current in the SET. The co-tunneling process, however, increases leakage current and so reduces the main current and the reliability of the SET. Accounting for the co-tunneling phenomenon, the main characteristics of a fullerene SET with multiple islands are modelled in this research. Its performance is compared with a silicon SET, and the results show that the fullerene SET has lower leakage current and higher reliability than its silicon counterpart. Based on the presented model, a lower co-tunneling current is achieved by selecting fullerene as the SET island material, which leads to a smaller leakage current. Moreover, the island length and the number of islands affect co-tunneling and thereby tune the current flow in the SET.
NASA Astrophysics Data System (ADS)
Dlesk, A.; Raeva, P.; Vach, K.
2018-05-01
Processing analog photogrammetric negatives with current methods brings new challenges and possibilities, for example, the creation of a 3D model from archival images, which enables comparison of the historical and current states of cultural heritage objects. The main purpose of this paper is to present the possibilities of processing archival analog images captured by the photogrammetric camera Rollei 6006 metric. In 1994, the Czech company EuroGV s.r.o. carried out photogrammetric measurements of the former limestone quarry Great America, located in the Central Bohemian Region of the Czech Republic. All the negatives of the photogrammetric images, complete documentation, coordinates of geodetically measured ground control points, calibration reports and the exterior orientations of the images calculated in the Combined Adjustment Program are preserved and were available for the current processing. The negatives were scanned and processed using the structure-from-motion (SfM) method. The result of the research is a statement of what accuracy can be expected when applying the proposed methodology to Rollei metric images originally obtained for terrestrial intersection photogrammetry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rutqvist, Jonny; Blanco Martin, Laura; Mukhopadhyay, Sumit
In this report, we present FY2014 progress by Lawrence Berkeley National Laboratory (LBNL) related to modeling of coupled thermal-hydrological-mechanical-chemical (THMC) processes in salt and their effect on brine migration at high temperatures. LBNL's work on the modeling of coupled THMC processes in salt was initiated in FY2012, focusing on exploring and demonstrating the capabilities of an existing LBNL modeling tool (TOUGH-FLAC) for simulating temperature-driven coupled flow and geomechanical processes in salt. This work includes development related to, and implementation of, essential capabilities, as well as testing the model against relevant information and published experimental data related to the fate and transport of water. We provide more details on the FY2014 work, first presenting updated tools and improvements made to the TOUGH-FLAC simulator, and the use of this updated tool in a new model simulation of long-term THM behavior within a generic repository in a salt formation. This is followed by a description of current benchmarking and validation efforts, including the TSDE experiment. We then present the current status of the development of constitutive relationships and the dual-continuum model for brine migration. We conclude with an outlook for FY2015, which will focus on model validation against field experiments and on the use of the model for design studies related to a proposed heater experiment.
Wind Sensing, Analysis, and Modeling
NASA Technical Reports Server (NTRS)
Corvin, Michael A.
1995-01-01
The purpose of this task was to begin development of a unified approach to the sensing, analysis, and modeling of the wind environments in which launch systems operate. The initial activity was to examine the current usage and requirements for wind modeling for the Titan 4 vehicle. This was to be followed by joint technical efforts with NASA Langley Research Center to develop applicable analysis methods. This work was to be performed in, and demonstrate the use of, prototype tools implementing an environment in which to realize a unified system. At the convenience of the customer, due to resource limitations, the task was descoped. The survey of Titan 4 processes was accomplished and is reported in this document. A summary of general requirements is provided. Current versions of prototype Process Management Environment tools are being provided to the customer.
A Review of Current Investigations of Urban-Induced Rainfall and Recommendations for the Future
NASA Technical Reports Server (NTRS)
Shepherd, J. Marshall
2004-01-01
Precipitation is a key link in the global water cycle and a proxy for changing climate; therefore, proper assessment of the urban environment's impact on precipitation (land use, aerosols, thermal properties) will be increasingly important in ongoing climate diagnostics and prediction, Global Water and Energy Cycle (GWEC) analysis and modeling, weather forecasting, freshwater resource management, urban planning and design, and land-atmosphere-ocean interface processes. These facts are particularly critical if current projections for global urban growth are accurate. The goal of this paper is to provide a concise review of recent (1990-present) studies related to how the urban environment affects precipitation. In addition to providing a synopsis of current work, recent findings are placed in context with historical investigations such as the METROMEX studies. Both observational and modeling studies of urban-induced rainfall are discussed. Additionally, a discussion of the relative roles of urban dynamic and microphysical (e.g. aerosol) processes is presented. The paper closes with a set of recommendations for the observations and capabilities needed in the future to advance our understanding of these processes.
Guidance: Strategies to Achieve Timely Settlement and Implementation of RD/RA at Superfund Sites
Memorandum recommends strategies to encourage PRPs to enter into a settlement using the model RD/RA Consent Decree; discusses the current model UAO; and suggests practical alternatives to expedite Superfund settlements and the cleanup process.
An update on pharmaceutical film coating for drug delivery.
Felton, Linda A; Porter, Stuart C
2013-04-01
Pharmaceutical coating processes have generally been transformed from what was essentially an art form in the mid-twentieth century to a much more technology-driven process. This review article provides a basic overview of current film coating processes, including a discussion on polymer selection, coating formulation additives and processing equipment. Substrate considerations for pharmaceutical coating processes are also presented. While polymeric coating operations are commonplace in the pharmaceutical industry, film coating processes are still not fully understood, which presents serious challenges with current regulatory requirements. Novel analytical technologies and various modeling techniques that are being used to better understand film coating processes are discussed. This review article also examines the challenges of implementing process analytical technologies in coating operations, active pharmaceutical ingredients in polymer film coatings, the use of high-solids coating systems and continuous coating and other novel coating application methods.
Mavritsaki, Eirini; Heinke, Dietmar; Humphreys, Glyn W; Deco, Gustavo
2006-01-01
In the real world, visual information is selected over time as well as space, when we prioritise new stimuli for attention. Watson and Humphreys [Watson, D., Humphreys, G.W., 1997. Visual marking: prioritizing selection for new objects by top-down attentional inhibition of old objects. Psychological Review 104, 90-122] presented evidence that new information in search tasks is prioritised by (amongst other processes) active ignoring of old items - a process they termed visual marking. In this paper we present, for the first time, an explicit computational model of visual marking using biologically plausible activation functions. The "spiking search over time and space" model (sSoTS) incorporates different synaptic components (NMDA, AMPA, GABA) and a frequency adaptation mechanism based on [Ca(2+)] sensitive K(+) current. This frequency adaptation current can act as a mechanism that suppresses the previously attended items. We show that, when coupled with a process of active inhibition applied to old items, frequency adaptation leads to old items being de-prioritised (and new items prioritised) across time in search. Furthermore, the time course of these processes mimics the time course of the preview effect in human search. The results indicate that the sSoTS model can provide a biologically plausible account of human search over time as well as space.
Jiang, Dong; Hao, Mengmeng; Wang, Qiao; Huang, Yaohuan; Fu, Xinyu
2014-01-01
The main purpose for developing biofuel is to reduce GHG (greenhouse gas) emissions, but the comprehensive environmental impact of such fuels is not clear. Life cycle analysis (LCA), as a complete comprehensive analysis method, has been widely used in bioenergy assessment studies. Great efforts have been directed toward establishing an efficient method for comprehensively estimating the greenhouse gas (GHG) emission reduction potential from the large-scale cultivation of energy plants by combining LCA with ecosystem/biogeochemical process models. LCA presents a general framework for evaluating the energy consumption and GHG emission from energy crop planting, yield acquisition, production, product use, and postprocessing. Meanwhile, ecosystem/biogeochemical process models are adopted to simulate the fluxes and storage of energy, water, carbon, and nitrogen in the soil-plant (energy crop)-atmosphere continuum. Although clear progress has been made in recent years, some problems still exist in current studies and should be addressed. This paper reviews the state-of-the-art method for estimating GHG emission reduction through developing energy crops and introduces in detail a new approach for assessing GHG emission reduction by combining LCA with biogeochemical process models. The main achievements of this study along with the problems in current studies are described and discussed. PMID:25045736
Context-dependent decision-making: a simple Bayesian model
Lloyd, Kevin; Leslie, David S.
2013-01-01
Many phenomena in animal learning can be explained by a context-learning process whereby an animal learns about different patterns of relationship between environmental variables. Differentiating between such environmental regimes or ‘contexts’ allows an animal to rapidly adapt its behaviour when context changes occur. The current work views animals as making sequential inferences about current context identity in a world assumed to be relatively stable but also capable of rapid switches to previously observed or entirely new contexts. We describe a novel decision-making model in which contexts are assumed to follow a Chinese restaurant process with inertia and full Bayesian inference is approximated by a sequential-sampling scheme in which only a single hypothesis about current context is maintained. Actions are selected via Thompson sampling, allowing uncertainty in parameters to drive exploration in a straightforward manner. The model is tested on simple two-alternative choice problems with switching reinforcement schedules and the results compared with rat behavioural data from a number of T-maze studies. The model successfully replicates a number of important behavioural effects: spontaneous recovery, the effect of partial reinforcement on extinction and reversal, the overtraining reversal effect, and serial reversal-learning effects. PMID:23427101
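A heavily condensed sketch of the three ingredients named in the abstract: a "sticky" Chinese restaurant process over contexts, a single maintained context hypothesis, and Thompson sampling from per-context Beta-Bernoulli beliefs. All class and parameter names here are invented for illustration; the published model is substantially richer.

```python
import random

class ContextBandit:
    """Toy context-dependent bandit: sticky CRP over contexts, one
    maintained context hypothesis, Thompson-sampled actions."""

    def __init__(self, n_arms=2, alpha=1.0, inertia=0.95, seed=0):
        self.n_arms, self.alpha, self.inertia = n_arms, alpha, inertia
        self.rng = random.Random(seed)
        self.counts = []      # visits per known context (CRP "table" sizes)
        self.beliefs = []     # per context: [successes+1, failures+1] per arm
        self.context = self._new_context()

    def _new_context(self):
        self.counts.append(1)
        self.beliefs.append([[1.0, 1.0] for _ in range(self.n_arms)])
        return len(self.counts) - 1

    def act(self):
        # Thompson sampling within the current context hypothesis.
        draws = [self.rng.betavariate(a, b)
                 for a, b in self.beliefs[self.context]]
        return max(range(self.n_arms), key=lambda k: draws[k])

    def update(self, arm, reward):
        a_b = self.beliefs[self.context][arm]
        a_b[0] += reward            # Beta posterior: count successes...
        a_b[1] += 1 - reward        # ...and failures
        if self.rng.random() > self.inertia:   # inertia: rarely reconsider
            self.context = self._sample_crp()

    def _sample_crp(self):
        # Chinese restaurant process: revisit old contexts in proportion
        # to their counts, or open a new context with weight alpha.
        r = self.rng.random() * (sum(self.counts) + self.alpha)
        for c, n in enumerate(self.counts):
            r -= n
            if r < 0:
                self.counts[c] += 1
                return c
        return self._new_context()
```

Run against a switching reinforcement schedule, this toy version tends to reinstate a previously used context after a switch rather than relearn from scratch, which is the mechanism the paper uses to reproduce effects such as spontaneous recovery.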
Vicente, Tiago; Mota, José P B; Peixoto, Cristina; Alves, Paula M; Carrondo, Manuel J T
2011-01-01
The advent of advanced therapies in the pharmaceutical industry has moved the spotlight into virus-like particles and viral vectors produced in cell culture holding great promise in a myriad of clinical targets, including cancer prophylaxis and treatment. Even though a couple of cases have reached the clinic, these products have yet to overcome a number of biological and technological challenges before broad utilization. Concerning the manufacturing processes, there is significant research focusing on the optimization of current cell culture systems and, more recently, on developing scalable downstream processes to generate material for pre-clinical and clinical trials. We review the current options for downstream processing of these complex biopharmaceuticals and underline current advances on knowledge-based toolboxes proposed for rational optimization of their processing. Rational tools developed to increase the yet scarce knowledge on the purification processes of complex biologicals are discussed as alternative to empirical, "black-boxed" based strategies classically used for process development. Innovative methodologies based on surface plasmon resonance, dynamic light scattering, scale-down high-throughput screening and mathematical modeling for supporting ion-exchange chromatography show great potential for a more efficient and cost-effective process design, optimization and equipment prototyping.
Better Instructional Design Theory: Process Improvement or Reengineering?
ERIC Educational Resources Information Center
Dick, Walter
1997-01-01
Discusses three ways that instructional design theories can change over time: (1) revision via evolution of models to reflect the outcomes that are being achieved with its current use; (2) revision to reflect current understanding of technology; and (3) complete replacement of present theory with another more powerful theory. Describes the…
USDA-ARS?s Scientific Manuscript database
Current state of agricultural lands is defined under influence of processes in soil, plants and atmosphere and is described by observation data, complicated models and subjective opinion of experts. Problem-oriented indicators summarize this information in useful form for decision of the same specif...
Multiphysics and Multiscale Model Coupling Using Gerris
NASA Astrophysics Data System (ADS)
Keen, T. R.; Dykes, J. D.; Campbell, T. J.
2012-12-01
This work implements oceanographic processes encompassing multiple physics and scales using the Gerris Flow Solver (GFS) in order to examine their interdependence and sensitivity to changes in the physical environment. The processes include steady flow due to tides and the wind, phase-averaged wave-forced flow and oscillatory currents, and sediment transport. The 2D steady flow is calculated by the Ocean module contained within GFS. This model solves the Navier-Stokes (N-S) equations using the finite volume method. The model domain is represented by quad-tree adaptive mesh refinement (AMR). A stationary wave field is computed by distributing a specified wave spectrum uniformly over the domain as a tracer, with local wind input parameterized as a source and dissipation by friction and breaking as a sink. Alongshore flow is included by a radiation stress term; this current is added to the steady flow component from tides and wind. Wave-current interaction is parameterized using a bottom boundary layer model. Sediment transport as suspended and bed load is implemented using tracers that are transported via the advection equations. A bed-conservation equation is implemented to allow changes in seafloor elevation to be used in adjusting the AMR refinement. These processes are being coupled using programming methods that are inherent to GFS and that do not require modification or recompiling of the code. These techniques include passive tracers, C functions that operate as plug-ins, and user-defined C-type macros included with GFS. Our results suggest that the AMR model coupling method is useful for problems where the dynamics are governed by several processes. This study is examining the relative influence of the steady currents, wave field, and sedimentation. Hydrodynamic and sedimentation interaction in nearshore environments is being studied for an idealized beach and for the Sandy Duck storm of Oct. 1998.
The potential behavior of muddy sediments on the inner shelf is being evaluated for cold fronts near Atchafalaya Bay in the Gulf of Mexico. Due to the complexity of the model output, fields are loaded into ArcMAP, a GIS-based application developed by Environmental Systems Research Institute (ESRI), with additional software that facilitates analysis of the results and assessment of model performance. GFS provides output with sufficient georeferencing information to be suitable for nearly seamless ingestion by ArcMAP. Analysis tools include comparisons between data layers; these may be intra-model, inter-model, or model-observation comparisons. The comparisons become new data layers with additional products such as enhancement curves, time series, and statistics.
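As a minimal, hypothetical illustration of two of the coupled pieces described above, a tracer-style suspended-sediment step with an Exner-type bed update might look as follows (first-order upwind advection on a 1-D grid; all values are invented, and no claim is made that this mirrors the GFS implementation):

```python
def step(conc, bed, u, dx, dt, settling=0.01, porosity=0.4):
    """One explicit step: upwind advection of suspended sediment,
    deposition, and an Exner-type bed update (u > 0, CFL u*dt/dx <= 1)."""
    new = list(conc)
    for i in range(len(conc)):
        upstream = conc[i - 1] if i > 0 else conc[0]   # zero-gradient inflow
        new[i] = conc[i] - u * dt / dx * (conc[i] - upstream)
        dep = settling * new[i] * dt          # settling flux leaves the water
        new[i] -= dep
        bed[i] += dep / (1.0 - porosity)      # deposited mass raises the bed
    return new, bed
```

The bed array updated here is the kind of quantity the abstract describes feeding back into the AMR refinement criterion.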
A first packet processing subdomain cluster model based on SDN
NASA Astrophysics Data System (ADS)
Chen, Mingyong; Wu, Weimin
2017-08-01
This work addresses the packet-processing performance bottlenecks and controller-downtime problems of current controller clusters. A model is proposed in which an SDN (Software Defined Network) controller assigns a priority to each device in the network. A domain contains several network devices and a controller; the controller is responsible for managing the network devices within the domain, and the switches deliver data according to the load of the controller. The experimental results show that the model can effectively address the risk of a single point of failure of the controller and can relieve the performance bottleneck of first-packet processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langton, C.; Kosson, D.
2009-11-30
Cementitious barriers for nuclear applications are one of the primary controls for preventing or limiting radionuclide release into the environment. At the present time, performance and risk assessments do not fully incorporate the effectiveness of engineered barriers because the processes that influence performance are coupled and complicated. Better understanding the behavior of cementitious barriers is necessary to evaluate and improve the design of materials and structures used for radioactive waste containment, life extension of current nuclear facilities, and design of future nuclear facilities, including those needed for nuclear fuel storage and processing, nuclear power production and waste management. The focus of the Cementitious Barriers Partnership (CBP) literature review is to document the current level of knowledge with respect to: (1) mechanisms and processes that directly influence the performance of cementitious materials, (2) methodologies for modeling the performance of these mechanisms and processes, and (3) approaches to addressing and quantifying uncertainties associated with performance predictions. This will serve as an important reference document for the professional community responsible for the design and performance assessment of cementitious materials in nuclear applications. This review also provides a multi-disciplinary foundation for identification, research, development and demonstration of improvements in conceptual understanding, measurements and performance modeling that would lead to significant reductions in uncertainty and improved confidence in estimating the long-term performance of cementitious materials in nuclear applications. This report identifies: (1) technology gaps that may be filled by the CBP project and (2) information and computational methods that are currently being applied in related fields but have not yet been incorporated into performance assessments of cementitious barriers.
The various chapters each contain a description of the mechanism or process and a discussion of the current approaches to modeling the phenomena.
Regulatory ozone modeling: status, directions, and research needs.
Georgopoulos, P G
1995-01-01
The Clean Air Act Amendments (CAAA) of 1990 have established selected comprehensive, three-dimensional, Photochemical Air Quality Simulation Models (PAQSMs) as the required regulatory tools for analyzing the urban and regional problem of high ambient ozone levels across the United States. These models are currently applied to study and establish strategies for meeting the National Ambient Air Quality Standard (NAAQS) for ozone in nonattainment areas; State Implementation Plans (SIPs) resulting from these efforts must be submitted to the U.S. Environmental Protection Agency (U.S. EPA) in November 1994. The following presentation provides an overview and discussion of the regulatory ozone modeling process and its implications. First, the PAQSM-based ozone attainment demonstration process is summarized in the framework of the 1994 SIPs. Then, following a brief overview of the representation of physical and chemical processes in PAQSMs, the essential attributes of standard modeling systems currently in regulatory use are presented in a nonmathematical, self-contained format, intended to provide a basic understanding of both model capabilities and limitations. The types of air quality, emission, and meteorological data needed for applying and evaluating PAQSMs are discussed, as well as the sources, availability, and limitations of existing databases. The issue of evaluating a model's performance in order to accept it as a tool for policy making is discussed, and various methodologies for implementing this objective are summarized. Selected interim results from diagnostic analyses, which are performed as a component of the regulatory ozone modeling process for the Philadelphia-New Jersey region, are also presented to provide some specific examples related to the general issues discussed in this work. 
Finally, research needs related to a) the evaluation and refinement of regulatory ozone modeling, b) the characterization of uncertainty in photochemical modeling, and c) the improvement of the model-based ozone-attainment demonstration process are presented to identify future directions in this area. PMID:7614934
Carotene Degradation and Isomerization during Thermal Processing: A Review on the Kinetic Aspects.
Colle, Ines J P; Lemmens, Lien; Knockaert, Griet; Van Loey, Ann; Hendrickx, Marc
2016-08-17
Kinetic models are important tools for process design and optimization to balance desired and undesired reactions taking place in complex food systems during food processing and preservation. This review covers the state of the art on kinetic models available to describe heat-induced conversion of carotenoids, in particular lycopene and β-carotene. First, relevant properties of these carotenoids are discussed. Second, some general aspects of kinetic modeling are introduced, including both empirical single-response modeling and mechanism-based multi-response modeling. The merits of multi-response modeling to simultaneously describe carotene degradation and isomerization are demonstrated. The future challenge in this research field lies in the extension of the current multi-response models to better approach the real reaction pathway and in the integration of kinetic models with mass transfer models in case of reaction in multi-phase food systems.
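A minimal multi-response sketch of the kind of scheme this review covers: reversible all-trans to cis isomerization with first-order degradation of both species, integrated with explicit Euler. The rate constants are invented placeholders, not literature values.

```python
def simulate_carotene(c_trans=1.0, c_cis=0.0, k_iso=0.05, k_rev=0.02,
                      k_deg=0.01, dt=0.1, t_end=100.0):
    """Explicit-Euler integration of: trans <-> cis (k_iso, k_rev),
    with first-order degradation (k_deg) of both species."""
    t = 0.0
    while t < t_end:
        iso = k_iso * c_trans - k_rev * c_cis   # net isomerization flux
        c_trans += dt * (-iso - k_deg * c_trans)
        c_cis += dt * (iso - k_deg * c_cis)
        t += dt
    return c_trans, c_cis
```

Multi-response fitting, as advocated in the review, would estimate all three rate constants simultaneously from measured trans and cis concentration curves rather than fitting each response in isolation.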
Ion Electrodiffusion Governs Silk Electrogelation.
Kojic, Nikola; Panzer, Matthew J; Leisk, Gary G; Raja, Waseem K; Kojic, Milos; Kaplan, David L
2012-07-14
Silk electrogelation involves the transition of an aqueous silk fibroin solution to a gel state (E-gel) in the presence of an electric current. The process is based on local pH changes as a result of water electrolysis - generating H(+) and OH(-) ions at the (+) and (-) electrodes, respectively. Silk fibroin has a pI=4.2 and when local pH
Future Warming Patterns Linked to Today's Climate Variability.
Dai, Aiguo
2016-01-11
The reliability of model projections of greenhouse gas (GHG)-induced future climate change is often assessed based on models' ability to simulate the current climate, but there has been little evidence that connects the two. In fact, this practice has been questioned because the GHG-induced future climate change may involve additional physical processes that are not important for the current climate. Here I show that the spatial patterns of the GHG-induced future warming in the 21st century are highly correlated with the patterns of the year-to-year variations of surface air temperature for today's climate, with areas of larger variations during 1950-1979 having more GHG-induced warming in the 21st century in all climate models. Such a relationship also exists in other climate fields such as atmospheric water vapor, and it is evident in observed temperatures from 1950-2010. The results suggest that many physical processes may work similarly in producing the year-to-year climate variations in the current climate and the GHG-induced long-term changes in the 21st century in models and in the real world. They support the notion that models that simulate present-day climate variability better are likely to make more reliable predictions of future climate change.
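The pattern correlation underlying this result is a Pearson correlation between two spatial maps, computed over grid cells. A minimal synthetic sketch (the fields below are fabricated to illustrate the calculation, not climate data; real analyses also area-weight grid cells by the cosine of latitude):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: a map of interannual temperature standard deviation
# (the "today's variability" field) and a projected-warming map that shares
# part of its spatial pattern with it.
variability = rng.random((36, 72))                      # e.g. 5-degree grid
warming = 2.0 + 1.5 * variability + 0.1 * rng.random((36, 72))

# Pattern correlation: Pearson r between the two maps, flattened over cells
r = np.corrcoef(variability.ravel(), warming.ravel())[0, 1]
```

A high r indicates that regions with large year-to-year variations coincide with regions of large projected warming, which is the relationship the abstract reports across models.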
NASA Astrophysics Data System (ADS)
Hirata, H.; Kawamura, R.; Kato, M.; Shinoda, T.
2014-12-01
We investigated how the moisture supply from the Kuroshio Current/Kuroshio Extension affects the rapid intensification of an explosive cyclone using a coupled atmosphere-ocean non-hydrostatic model, CReSS-NHOES. The Cloud-Resolving Storm Simulator (CReSS) and the Non-Hydrostatic Ocean model for the Earth Simulator (NHOES) have been developed by the Hydrospheric Atmospheric Research Center of Nagoya University and the Japan Agency for Marine-Earth Science and Technology, respectively. We performed a numerical simulation of an extratropical cyclone migrating along the southern periphery of the Kuroshio Current on January 14, 2013, one of the most rapidly developing cyclones in the vicinity of Japan in recent years. The evolutions of surface fronts related to the cyclone simulated by CReSS-NHOES closely resemble the Shapiro-Keyser model. In the lower troposphere, the cyclone's bent-back front and the associated frontal T-bone structure become evident as the cyclone develops. A cold conveyor belt (CCB) is also well organized over the northern part of the cyclone. During the developing stage, since the CCB lies just over the Kuroshio Current/Kuroshio Extension, a large amount of moisture is efficiently supplied from the warm current into the CCB. The vapor evaporated from the underlying warm current is transported into the bent-back front by the CCB and converges horizontally in the vicinity of the front. As a result, strong diabatic heating arises over the corresponding moisture convergence area, indicating that the abundant moisture from the warm current plays a vital role in the rapid development of the cyclone through latent heat release. Both the moisture transport from the warm current into the cyclone system via the CCB and the latent heat release around the bent-back front are confirmed by trajectory analyses.
The rapid SLP decrease of the cyclone center can in turn increase the moisture supply from the warm current through enhancement of the CCB. We anticipate that such a feedback process plays a key role in the rapid intensification of the cyclone highlighted in this study.
Simulation and optimization of pressure swing adsorption systems using reduced-order modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, A.; Biegler, L.; Zitney, S.
2009-01-01
Over the past three decades, pressure swing adsorption (PSA) processes have been widely used as energy-efficient gas separation techniques, especially for high-purity hydrogen purification from refinery gases. Models for PSA processes are multiple instances of partial differential equations (PDEs) in time and space with periodic boundary conditions that link the processing steps together. The solution of this coupled stiff PDE system is governed by steep fronts moving with time. As a result, the optimization of such systems represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Model reduction is one approach to generate cost-efficient low-order models which can be used as surrogate models in optimization problems. This study develops a reduced-order model (ROM) based on proper orthogonal decomposition (POD), which is a low-dimensional approximation to a dynamic PDE-based model. The proposed method leads to a DAE system of significantly lower order, thus replacing the one obtained from spatial discretization and making the optimization problem computationally efficient. The method has been applied to the dynamic coupled PDE-based model of a two-bed four-step PSA process for separation of hydrogen from methane. Separate ROMs have been developed for each operating step with different POD modes for each of them. A significant reduction in the number of states has been achieved. The reduced-order model has been successfully used to maximize hydrogen recovery by manipulating operating pressures, step times, and feed and regeneration velocities, while meeting product purity and tight bounds on these parameters. Current results indicate that the proposed ROM methodology is a promising surrogate modeling technique for cost-effective optimization purposes.
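The POD step described above extracts a low-dimensional basis from solution snapshots via a singular value decomposition. A minimal sketch on a synthetic snapshot matrix (the smooth field below stands in for a PDE solution; it is not the PSA model itself):

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """POD via SVD of a snapshot matrix: one state vector per column.
    Returns the leading n_modes left singular vectors and singular values."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :n_modes], s

# Synthetic snapshots: 200 spatial points, 50 time steps of a smooth field
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 1.0, 50)
snapshots = np.sin(np.pi * np.outer(x, t)) + 0.01 * rng.random((200, 50))

Phi, s = pod_basis(snapshots, n_modes=5)
a = Phi.T @ snapshots            # 5 reduced coordinates instead of 200 states
reconstruction = Phi @ a
err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
```

In the ROM, the governing PDEs are then projected onto `Phi`, so the optimizer integrates 5 ODEs per step rather than the full spatially discretized system.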
Nitrogen cycling models and their application to forest harvesting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, D.W.; Dale, V.H.
1986-01-01
The characterization of forest nitrogen- (N-) cycling processes by several N-cycling models (FORCYTE, NITCOMP, FORTNITE, and LINKAGES) is briefly reviewed and evaluated against current knowledge of N cycling in forests. Some important processes (e.g., translocation within trees, N dynamics in decaying leaf litter) appear to be well characterized, whereas others (e.g., N mineralization from soil organic matter, N fixation, N dynamics in decaying wood, nitrification, and nitrate leaching) are poorly characterized, primarily because of a lack of knowledge rather than an oversight by model developers. It is remarkable how well the forest models do work in the absence of data on some key processes. For those systems in which the poorly understood processes could cause major changes in N availability or productivity, the accuracy of model predictions should be examined. However, the development of N-cycling models represents a major step beyond the much simpler, classic conceptual models of forest nutrient cycling developed by early investigators. The new generation of computer models will surely improve as research reveals how key nutrient-cycling processes operate.
Microphysics, Radiation and Surface Processes in the Goddard Cumulus Ensemble (GCE) Model
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Simpson, J.; Baker, D.; Braun, S.; Chou, M.-D.; Ferrier, B.; Johnson, D.; Khain, A.; Lang, S.; Lynn, B.
2001-01-01
The response of cloud systems to their environment is an important link in a chain of processes responsible for monsoons, frontal depression, El Nino Southern Oscillation (ENSO) episodes and other climate variations (e.g., 30-60 day intra-seasonal oscillations). Numerical models of cloud properties provide essential insights into the interactions of clouds with each other, with their surroundings, and with land and ocean surfaces. Significant advances are currently being made in the modeling of rainfall and rain-related cloud processes, ranging in scales from the very small up to the simulation of an extensive population of raining cumulus clouds in a tropical- or midlatitude-storm environment. The Goddard Cumulus Ensemble (GCE) model is a multi-dimensional nonhydrostatic dynamic/microphysical cloud resolving model. It has been used to simulate many different mesoscale convective systems that occurred in various geographic locations. In this paper, recent GCE model improvements (microphysics, radiation and surface processes) will be described as well as their impact on the development of precipitation events from various geographic locations. The performance of these new physical processes will be examined by comparing the model results with observations. In addition, the explicit interactive processes between cloud, radiation and surface processes will be discussed.
A framework for testing and comparing binaural models.
Dietz, Mathias; Lestang, Jean-Hugues; Majdak, Piotr; Stern, Richard M; Marquardt, Torsten; Ewert, Stephan D; Hartmann, William M; Goodman, Dan F M
2018-03-01
Auditory research has a rich history of combining experimental evidence with computational simulations of auditory processing in order to deepen our theoretical understanding of how sound is processed in the ears and in the brain. Despite significant progress in the amount of detail and breadth covered by auditory models, for many components of the auditory pathway there are still different model approaches that are often not equivalent but rather in conflict with each other. Similarly, some experimental studies yield conflicting results, which has led to controversies. This can be best resolved by a systematic comparison of multiple experimental data sets and model approaches. Binaural processing is a prominent example of how the development of quantitative theories can advance our understanding of the phenomena, but there remain several unresolved questions for which competing model approaches exist. This article discusses a number of current unresolved or disputed issues in binaural modelling, as well as some of the significant challenges in comparing binaural models with each other and with the experimental data. We introduce an auditory model framework, which we believe can become a useful infrastructure for resolving some of the current controversies. It operates models over the same paradigms that are used experimentally. The core of the proposed framework is an interface that connects three components irrespective of their underlying programming language: the experiment software, an auditory pathway model, and task-dependent decision stages called artificial observers that provide the same output format as the test subject.
Conditions and limitations on learning in the adaptive management of mallard harvests
Johnson, F.A.; Kendall, W.L.; Dubovsky, J.A.
2002-01-01
In 1995, the United States Fish and Wildlife Service adopted a protocol for the adaptive management of waterfowl hunting regulations (AHM) to help reduce uncertainty about the magnitude of sustainable harvests. To date, the AHM process has focused principally on the midcontinent population of mallards (Anas platyrhynchos), whose dynamics are described by 4 alternative models. Collectively, these models express uncertainty (or disagreement) about whether harvest is an additive or a compensatory form of mortality and whether the reproductive process is weakly or strongly density-dependent. Each model is associated with a probability or 'weight,' which describes its relative ability to predict changes in population size. These Bayesian probabilities are updated annually using a comparison of population size predicted under each model with that observed by a monitoring program. The current AHM process is passively adaptive, in the sense that there is no a priori consideration of how harvest decisions might affect discrimination among models. We contrast this approach with an actively adaptive approach, in which harvest decisions are used in part to produce the learning needed to increase long-term management performance. Our investigation suggests that the passive approach is expected to perform nearly as well as an optimal actively adaptive approach, particularly considering the nature of the model set, management objectives and constraints, and current regulatory alternatives. We offer some comments about the nature of the biological hypotheses being tested and describe some of the inherent limitations on learning in the AHM process.
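The annual model-weight update described in this abstract is a standard Bayes rule: each model's weight is multiplied by the likelihood of the observed population size under that model's prediction, then renormalized. A minimal numerical sketch (population figures and the monitoring uncertainty are hypothetical, not taken from the study):

```python
import numpy as np

def update_weights(weights, predictions, observed, sigma):
    """Bayes update of model weights from one year's monitoring observation.

    The likelihood of each model is a Gaussian density of the observed
    population size around that model's prediction, with sigma representing
    monitoring (observation) uncertainty. Constant factors cancel on
    renormalization, so only the exponential term is needed.
    """
    lik = np.exp(-0.5 * ((observed - np.asarray(predictions)) / sigma) ** 2)
    posterior = np.asarray(weights, dtype=float) * lik
    return posterior / posterior.sum()

# Four alternative mallard models, initially equally weighted
w = update_weights([0.25, 0.25, 0.25, 0.25],
                   predictions=[7.1, 7.8, 8.4, 9.0],   # millions of birds
                   observed=7.9, sigma=0.3)
```

Repeating this each year concentrates weight on the model that best predicts observed changes in population size, which is the passive learning mechanism the AHM process relies on.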
Using dark current data to estimate AVIRIS noise covariance and improve spectral analyses
NASA Technical Reports Server (NTRS)
Boardman, Joseph W.
1995-01-01
Starting in 1994, all AVIRIS data distributions include a new product useful for quantification and modeling of the noise in the reported radiance data. The 'postcal' file contains approximately 100 lines of dark current data collected at the end of each data acquisition run. In essence this is a regular spectral-image cube, with 614 samples, 100 lines and 224 channels, collected with a closed shutter. Since there is no incident radiance signal, the recorded DN measure only the DC signal level and the noise in the system. Similar dark current measurements, made at the end of each line, are used, with a 100 line moving average, to remove the DC signal offset. Therefore, the pixel-by-pixel fluctuations about the mean of this dark current image provide an excellent model for the additive noise that is present in AVIRIS reported radiance data. The 61,400 dark current spectra can be used to calculate the noise levels in each channel and the noise covariance matrix. Both of these noise parameters should be used to improve spectral processing techniques. Some processing techniques, such as spectral curve fitting, will benefit from a robust estimate of the channel-dependent noise levels. Other techniques, such as automated unmixing and classification, will be improved by the stable and scene-independent noise covariance estimate. Future imaging spectrometry systems should have a similar ability to record dark current data, permitting this noise characterization and modeling.
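Estimating the two noise parameters from the 'postcal' cube amounts to flattening the 100 x 614 pixels into 61,400 spectra and computing per-channel standard deviations and the channel-by-channel covariance. A sketch on a synthetic stand-in cube (the DC levels and noise amplitude below are invented for illustration):

```python
import numpy as np

# Synthetic stand-in for the AVIRIS 'postcal' dark current cube:
# 100 lines x 614 samples x 224 channels of DC offset plus additive noise.
rng = np.random.default_rng(2)
n_lines, n_samples, n_channels = 100, 614, 224
dc_offset = rng.uniform(490.0, 510.0, n_channels)     # per-channel DC level
cube = dc_offset + rng.normal(0.0, 1.5, (n_lines, n_samples, n_channels))

# 61,400 dark current spectra, one per pixel
spectra = cube.reshape(-1, n_channels)

# Per-channel noise level and the full channel noise covariance matrix
noise_std = spectra.std(axis=0, ddof=1)               # shape (224,)
noise_cov = np.cov(spectra, rowvar=False)             # shape (224, 224)
```

The diagonal of `noise_cov` gives the channel-dependent noise variances for curve fitting; the full matrix feeds noise-whitening steps in unmixing and classification.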
Can Steady Magnetospheric Convection Events Inject Plasma into the Ring Current?
NASA Astrophysics Data System (ADS)
Lemon, C.; Chen, M. W.; Guild, T. B.
2009-12-01
Steady Magnetospheric Convection (SMC) events are characterized by several-hour periods of enhanced convection that are devoid of substorm signatures. There has long been a debate about whether substorms are necessary to inject plasma into the ring current, or whether enhanced convection is sufficient. If ring current injections occur during SMC intervals, this would suggest that substorms are unnecessary. We use a combination of simulations and data observations to examine this topic. Our simulation model computes the energy-dependent plasma drift in a self-consistent electric and magnetic field, which allows us to accurately model the transport of plasma from the plasma sheet (where the plasma pressure is much larger than the magnetic pressure) into the inner magnetosphere (where plasma pressure is much less than the magnetic pressure). In regions where the two pressures are comparable (i.e. the inner plasma sheet), feedback between the plasma and magnetic field is critical for accurately modeling the physical evolution of the system. Our previous work has suggested that entropy losses in the plasma sheet (such as caused by substorms) may be necessary to inject a ring current. However, it is not yet clear whether other small-scale processes (e.g. bursty bulk flows) can provide sufficient entropy loss in the plasma sheet to allow for the penetration of plasma into the ring current. We combine our simulation results with data observations in order to better understand the physical processes required to inject a ring current.
The Therapeutic Process in Clinical Social Work.
ERIC Educational Resources Information Center
Siporin, Max
1983-01-01
Suggests that current outmoded and inadequate conceptions of the therapeutic process are a major obstacle to the advancement of clinical social work practice. Presents an integrative ecosystem model that expresses the distinctive social work concern with person, situation, and helping relationship, in their reciprocal psychodynamic and…
Origin of the main r-process elements
NASA Astrophysics Data System (ADS)
Otsuki, K.; Truran, J.; Wiescher, M.; Gorres, J.; Mathews, G.; Frekers, D.; Mengoni, A.; Bartlett, A.; Tostevin, J.
2006-07-01
The r-process is supposed to be a primary process which assembles heavy nuclei from a photo-dissociated nucleon gas. Hence, the reaction flow through light elements can be important as a constraint on the conditions for the r-process. We have studied the impact of di-neutron capture and the neutron-capture of light (Z<10) elements on r-process nucleosynthesis in three different environments: neutrino-driven winds in Type II supernovae; the prompt explosion of low mass supernovae; and neutron star mergers. Although the effect of di-neutron capture is not significant for the neutrino-driven wind model or low-mass supernovae, it becomes significant in the neutron-star merger model. The neutron-capture of light elements, which has been studied extensively for neutrino-driven wind models, also impacts the other two models. We show that it may be possible to identify the astrophysical site for the main r-process if the nuclear physics uncertainties in current r-process calculations could be reduced.
Modeling and Analysis of Power Processing Systems (MAPPS). Volume 1: Technical report
NASA Technical Reports Server (NTRS)
Lee, F. C.; Rahman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.
1980-01-01
Computer aided design and analysis techniques were applied to power processing equipment. Topics covered include: (1) discrete time domain analysis of switching regulators for performance analysis; (2) design optimization of power converters using augmented Lagrangian penalty function technique; (3) investigation of current-injected multiloop controlled switching regulators; and (4) application of optimization for Navy VSTOL energy power system. The generation of the mathematical models and the development and application of computer aided design techniques to solve the different mathematical models are discussed. Recommendations are made for future work that would enhance the application of the computer aided design techniques for power processing systems.
Improving the Aircraft Design Process Using Web-Based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)
2000-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
Improving the Aircraft Design Process Using Web-based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.
2003-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
ERIC Educational Resources Information Center
Davis, Tyler; Love, Bradley C.; Preston, Alison R.
2012-01-01
Category learning is a complex phenomenon that engages multiple cognitive processes, many of which occur simultaneously and unfold dynamically over time. For example, as people encounter objects in the world, they simultaneously engage processes to determine their fit with current knowledge structures, gather new information about the objects, and…
Advanced process control framework initiative
NASA Astrophysics Data System (ADS)
Hill, Tom; Nettles, Steve
1997-01-01
The semiconductor industry, one of the world's most fiercely competitive industries, is driven by increasingly complex process technologies and global competition to improve cycle time, quality, and process flexibility. Due to the complexity of these problems, current process control techniques are generally nonautomated, time-consuming, reactive, nonadaptive, and focused on individual fabrication tools and processes. As the semiconductor industry moves into higher density processes, radical new approaches are required. To address the need for advanced factory-level process control in this environment, Honeywell, Advanced Micro Devices (AMD), and SEMATECH formed the Advanced Process Control Framework Initiative (APCFI) joint research project. The project defines and demonstrates an Advanced Process Control (APC) approach based on SEMATECH's Computer Integrated Manufacturing (CIM) Framework. Its scope includes the coordination of Manufacturing Execution Systems, process control tools, and wafer fabrication equipment to provide necessary process control capabilities. Moreover, it takes advantage of the CIM Framework to integrate and coordinate applications from other suppliers that provide services necessary for the overall system to function. This presentation discusses the key concept of model-based process control that differentiates the APC Framework. This major improvement over current methods enables new systematic process control by linking the knowledge of key process settings to desired product characteristics that reside in models created with commercial model development tools. The unique framework-based approach facilitates integration of commercial tools and reuse of their data by tying them together in an object-based structure. The presentation also explores the perspective of each organization's involvement in the APCFI project.
Each has complementary goals and expertise to contribute; Honeywell represents the supplier viewpoint, AMD represents the user with 'real customer requirements', and SEMATECH provides a consensus-building organization that widely disseminates technology to suppliers and users in the semiconductor industry that face similar equipment and factory control systems challenges.
Test of electrical resistivity and current diffusion modelling on MAST and JET
NASA Astrophysics Data System (ADS)
Keeling, D. L.; Challis, C. D.; Jenkins, I.; Hawkes, N. C.; Lupelli, I.; Michael, C.; de Bock, M. F. M.; the MAST Team; JET contributors
2018-01-01
Experiments have been carried out on the MAST and JET tokamaks intended to compare the electrical resistivity of the plasma with theoretical formulations. The tests consist of obtaining motional Stark effect (MSE) measurements in MHD-free plasmas during plasma current ramp-up (JET and MAST), ramp-down (MAST) and in stationary state (JET and MAST). Simulations of these plasmas are then performed in which the current profile evolution is calculated according to the poloidal field diffusion equation (PFDE) with classical or neoclassical resistivity. Synthetic MSE data are produced in the simulations for direct comparison with the experimental data. It is found that the toroidal current profile evolution modelled using neoclassical resistivity did not match the experimental observations on either device during current ramp-up or ramp-down, as concluded from comparison of experimental and synthetic MSE profiles. In these phases, use of neoclassical resistivity in the modelling systematically overestimates the rate of current profile evolution. During the stationary state, however, the modelled toroidal current profile matched experimental observations to a high degree of accuracy on both devices using neoclassical resistivity. Whilst no solution to the mismatch in the dynamic phases of the plasma is proposed, it is suggested that some physical process other than MHD, not captured by the simple diffusive model of current profile evolution, is responsible.
Three-Dimensional Finite-Element Simulation for a Thermoelectric Generator Module
NASA Astrophysics Data System (ADS)
Hu, Xiaokai; Takazawa, Hiroyuki; Nagase, Kazuo; Ohta, Michihiro; Yamamoto, Atsushi
2015-10-01
A three-dimensional closed-circuit numerical model of a thermoelectric generator (TEG) module has been constructed with COMSOL® Multiphysics to verify a module test system. The Seebeck, Peltier, and Thomson effects and Joule heating are included in the thermoelectric conversion model. The TEG model is employed to simulate the operation of a 16-leg TEG module based on bismuth telluride with temperature-dependent material properties. The module is mounted on a test platform, and simulated by combining the heat conduction process and thermoelectric conversion process. Simulation results are obtained for the terminal voltage, output power, heat flow, and efficiency as functions of the electric current; the results are compared with measurement data. The Joule and Thomson heats in all the thermoelectric legs, as functions of the electric current, are calculated by finite-element volume integration over the entire legs. The Peltier heat pumped at the hot side and released at the cold side of the module is also presented in relation to the electric current. The energy balance relations between heat and electricity are verified to support the simulation.
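Behind the finite-element model sits a simple equivalent circuit for the terminal quantities: the open-circuit Seebeck voltage minus the ohmic drop, with power peaking at half the short-circuit current. A sketch with hypothetical module parameters (not the measured values from the study):

```python
import numpy as np

def teg_output(I, n_legs, seebeck, dT, R_internal):
    """Terminal voltage and output power of a TEG equivalent circuit.

    V = n * S * dT - I * R_int   (Seebeck voltage minus ohmic drop)
    P = V * I
    """
    V = n_legs * seebeck * dT - I * R_internal
    return V, V * I

# Hypothetical 16-leg Bi2Te3 module: S = 200 uV/K per leg,
# dT = 150 K across the module, internal resistance 0.1 ohm
I = np.linspace(0.0, 5.0, 201)
V, P = teg_output(I, n_legs=16, seebeck=200e-6, dT=150.0, R_internal=0.1)
I_opt = I[np.argmax(P)]          # maximum power near I = V_oc / (2 * R_int)
```

The full 3D model refines this picture with temperature-dependent properties and the Peltier/Thomson heat terms, but the current sweep and the power maximum follow the same circuit logic.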
NASA Technical Reports Server (NTRS)
Mitra, Debasis; Thomas, Ajai; Hemminger, Joseph; Sakowski, Barbara
2001-01-01
In this research we have developed an algorithm for the purpose of constraint processing by utilizing relational algebraic operators. Van Beek and others have previously investigated this type of constraint processing from within a relational algebraic framework, producing some unique results. Apart from providing new theoretical angles, this approach also gives the opportunity to use the existing efficient implementations of relational database management systems as the underlying data structures for any relevant algorithm. Our algorithm here enhances that framework. The algorithm is quite general in its current form. Weak heuristics (like forward checking) developed within the constraint-satisfaction problem (CSP) area could also be plugged easily into this algorithm for further enhancements of efficiency. The algorithm as developed here is targeted toward a component-oriented modeling problem that we are currently working on, namely, the problem of interactive modeling for batch-simulation of engineering systems (IMBSES). However, it could be adopted for many other CSP problems as well. The research addresses the algorithm and many aspects of the problem IMBSES that we are currently handling.
Dual processing model of medical decision-making
2012-01-01
Background Dual processing theory of human cognition postulates that reasoning and decision-making can be described as a function of both an intuitive, experiential, affective system (system I) and/or an analytical, deliberative (system II) processing system. To date no formal descriptive model of medical decision-making based on dual processing theory has been developed. Here we postulate such a model and apply it to a common clinical situation: whether treatment should be administered to the patient who may or may not have a disease. Methods We developed a mathematical model in which we linked a recently proposed descriptive psychological model of cognition with the threshold model of medical decision-making and show how this approach can be used to better understand decision-making at the bedside and explain the widespread variation in treatments observed in clinical practice. Results We show that physicians' beliefs about whether to treat at higher (lower) probability levels compared to the prescriptive therapeutic thresholds obtained via system II processing are moderated by system I and the ratio of benefits and harms as evaluated by both system I and II. Under some conditions, the system I decision maker's threshold may dramatically drop below the expected utility threshold derived by system II. This can explain the overtreatment often seen in contemporary practice. The opposite can also occur, as in situations where empirical evidence is considered unreliable, or when cognitive processes of decision-makers are biased through recent experience: the threshold will increase relative to the normative threshold value derived via system II using the expected utility threshold. This inclination toward higher diagnostic certainty may, in turn, explain undertreatment that is also documented in current medical practice.
Conclusions We have developed the first dual processing model of medical decision-making that has potential to enrich the current medical decision-making field, which is still to the large extent dominated by expected utility theory. The model also provides a platform for reconciling two groups of competing dual processing theories (parallel competitive with default-interventionalist theories). PMID:22943520
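The system II baseline in this model is the classic expected-utility treatment threshold: treat when the probability of disease exceeds harm / (benefit + harm). A minimal sketch, with a toy additive bias term standing in for system I moderation (the bias formulation here is a simplification for illustration, not the paper's actual model):

```python
def treatment_threshold(benefit, harm):
    """Expected-utility (system II) threshold: treat when
    P(disease) > harm / (benefit + harm)."""
    return harm / (benefit + harm)

def decide(p_disease, benefit, harm, bias=0.0):
    """Toy system I moderation: an affective bias shifts the effective
    threshold up (undertreatment) or down (overtreatment) relative to
    the normative system II value."""
    threshold = treatment_threshold(benefit, harm) + bias
    return p_disease > threshold

# Hypothetical numbers: treatment benefit 8x its harm -> threshold ~ 0.11
t = treatment_threshold(benefit=8.0, harm=1.0)
treat_neutral = decide(p_disease=0.2, benefit=8.0, harm=1.0)
treat_biased = decide(p_disease=0.2, benefit=8.0, harm=1.0, bias=0.2)
```

With no bias, a 20% disease probability exceeds the ~11% threshold and treatment is indicated; a positive bias raises the effective threshold above 20%, reproducing the undertreatment pattern the abstract describes.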
Developing the Mathematics Learning Management Model for Improving Creative Thinking in Thailand
ERIC Educational Resources Information Center
Sriwongchai, Arunee; Jantharajit, Nirat; Chookhampaeng, Sumalee
2015-01-01
The study purposes were: 1) To study current states and problems of relevant secondary students in developing mathematics learning management model for improving creative thinking, 2) To evaluate the effectiveness of model about: a) efficiency of learning process, b) comparisons of pretest and posttest on creative thinking and achievement of…
The Development of a Model of Culturally Responsive Science and Mathematics Teaching
ERIC Educational Resources Information Center
Hernandez, Cecilia M.; Morales, Amanda R.; Shroyer, M. Gail
2013-01-01
This qualitative theoretical study was conducted in response to the current need for an inclusive and comprehensive model to guide the preparation and assessment of teacher candidates for culturally responsive teaching. The process of developing a model of culturally responsive teaching involved three steps: a comprehensive review of the…
ERIC Educational Resources Information Center
Reid, Maurice; Brown, Steve; Tabibzadeh, Kambiz
2012-01-01
For the past decade teaching models have been changing, reflecting the dynamics, complexities, and uncertainties of today's organizations. The traditional and the more current active models of learning have disadvantages. Simulation provides a platform to combine the best aspects of both types of teaching practices. This research explores the…
An Activity Model for Scientific Inquiry
ERIC Educational Resources Information Center
Harwood, William
2004-01-01
Most people are frustrated with the current scientific method presented in textbooks. The scientific method--a simplistic model of the scientific inquiry process--fails in most cases to provide a successful guide to how science is done. This is not shocking, really. Many simple models used in science are quite useful within their limitations. When…
Using Dirichlet Processes for Modeling Heterogeneous Treatment Effects across Sites
ERIC Educational Resources Information Center
Miratrix, Luke; Feller, Avi; Pillai, Natesh; Pati, Debdeep
2016-01-01
Modeling the distribution of site-level effects is an important problem, but it is also an incredibly difficult one. Current methods rely on the distributional assumptions of multilevel models for estimation. There it is hoped that the partial pooling of site-level estimates with overall estimates, designed to take into account individual variation as…
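As an illustration of the Dirichlet process prior named in the title (a generic sketch, not the authors' estimator), the truncated stick-breaking construction below draws a set of mixture weights; the concentration parameter and truncation level are arbitrary choices.

```python
import numpy as np

def stick_breaking(alpha: float, n_atoms: int, rng) -> np.ndarray:
    """Truncated stick-breaking weights for a Dirichlet process prior:
    break a unit stick into pieces, each a Beta(1, alpha) fraction of
    what remains."""
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining

rng = np.random.default_rng(0)
w = stick_breaking(alpha=2.0, n_atoms=50, rng=rng)
print(w.sum())  # close to 1 for a sufficiently deep truncation
```

Smaller `alpha` concentrates mass on a few atoms (few distinct site-effect clusters); larger `alpha` spreads it out.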
RECOGNIZING AND IDENTIFYING PEOPLE: A neuropsychological review
Barton, Jason J S; Corrow, Sherryse L
2016-01-01
Recognizing people is a classic example of a cognitive function that involves multiple processing stages and parallel routes of information. Neuropsychological data have provided important evidence for models of this process, particularly from case reports; however, the quality and extent of the data vary widely between studies. In this review we first discuss the requirements and logical basis of the types of neuropsychological evidence needed to support conclusions about the modules in this process. We then survey the adequacy of the current body of reports to address two key issues. First is the question of which cognitive operation generates a sense of familiarity: the current debate revolves around whether familiarity arises in modality-specific recognition units or in later amodal processes. Key evidence on this point comes from the search for dissociations between familiarity for faces, voices and names. The second question is whether lesions can differentially affect the abilities to link diverse sources of person information (e.g. face, voice, name, biographic data). Dissociations of these linkages may favour a distributed-only model of the organization of semantic knowledge, whereas a ‘person-hub’ model would predict uniform impairments of all linkages. While we conclude that there is reasonable evidence for dissociations in name, voice and face familiarity with regard to the first question, the evidence for or against dissociated linkages between information stores with regard to the second is tenuous at best. We identify deficiencies in the current literature that should motivate and inform the design of future studies. PMID:26773237
Fuel ethanol production: process design trends and integration opportunities.
Cardona, Carlos A; Sánchez, Oscar J
2007-09-01
Current fuel ethanol research and development deals with process engineering trends for improving the biotechnological production of ethanol. In this work, the key role that process design plays during the development of cost-effective technologies is recognized through the analysis of major trends in process synthesis, modeling, simulation and optimization related to ethanol production. Main directions in the techno-economic evaluation of fuel ethanol processes are described, as well as some prospective configurations. The most promising alternatives for offsetting ethanol production costs by the generation of valuable co-products are analyzed. Opportunities for integration of fuel ethanol production processes and their implications are underlined. Main ways of process intensification through reaction-reaction, reaction-separation and separation-separation processes are analyzed in the case of bioethanol production. Some examples of energy integration during ethanol production are also highlighted. Finally, some concluding considerations on current and future research tendencies in fuel ethanol production regarding process design and integration are presented.
USDA-ARS?s Scientific Manuscript database
There is a need to develop a scale-explicit understanding of erosion to overcome existing conceptual and methodological flaws in the modelling methods currently applied to understand the processes of erosion, transport and deposition at the catchment scale. These models need to be based on a sound under...
Izaguirre, Eder; Lin, Tongyan; Shuve, Brian
2017-03-15
Here, we propose new searches for axion-like particles (ALPs) produced in flavor-changing neutral current (FCNC) processes. This proposal exploits the often-overlooked coupling of ALPs to W± bosons, which leads to FCNC production of ALPs even in the absence of a direct coupling to fermions. Our proposed searches for resonant ALP production in decays such as B→K(*)a, a→γγ, and K→πa, a→γγ could greatly improve upon the current sensitivity to ALP couplings to Standard Model particles. Finally, we also determine analogous constraints and discovery prospects for invisibly decaying ALPs.
NASA Astrophysics Data System (ADS)
Roughan, M.
2016-02-01
The East Australian Current (EAC) flows as a jet over the narrow shelf of southeastern Australia, dominating shelf circulation and shedding vast eddies at the highly variable separation point. These characteristics alone make it a dynamically challenging region to measure, model and predict. In recent years a significant effort has been placed on understanding continental shelf processes along the coast of SE Australia, adjacent to the EAC, our major western boundary current. We have used a multi-pronged approach combining state-of-the-art in situ observations and data-assimilation modelling. Observations are obtained from a network of moorings, HF radar and ocean gliders deployed in shelf waters along SE Australia, made possible through Australia's Integrated Marine Observing System (IMOS). We have also developed a high-resolution reanalysis of the East Australian Current using ROMS and 4DVar data assimilation. In addition to the traditional data streams (SST, SSH and Argo), we assimilate the newly available IMOS observations in the region. These include velocity and hydrographic observations from the EAC transport array, 1 km HF radar measurements of surface currents, CTD casts from ocean gliders, and temperature, salinity and velocity measurements from a network of shelf mooring arrays. We use these vast data sets and numerical modelling tools, combined with satellite remote-sensed data, to understand the spatio-temporal variability of shelf processes and water mass distributions on synoptic, seasonal and inter-annual timescales. We have quantified the cross-shelf transport variability inshore of the EAC, its driving mechanisms, the seasonal cycles in shelf waters and, to some extent, variability in the biological (phytoplankton) response. I will present a review of some of the key results from a number of recent studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puskar, Joseph David; Quintana, Michael A.; Sorensen, Neil Robert
A program is underway at Sandia National Laboratories to predict long-term reliability of photovoltaic (PV) systems. The vehicle for the reliability predictions is a Reliability Block Diagram (RBD), which models system behavior. Because this model is based mainly on field failure and repair times, it can be used to predict current reliability, but it cannot currently be used to accurately predict lifetime. In order to be truly predictive, physics-informed degradation processes and failure mechanisms need to be included in the model. This paper describes accelerated life testing of metal foil tapes used in thin-film PV modules, and how tape joint degradation, a possible failure mode, can be incorporated into the model.
NASA Astrophysics Data System (ADS)
Walton, Karl; Blunt, Liam; Fleming, Leigh
2015-09-01
Mass finishing is amongst the most widely used finishing processes in modern manufacturing, in applications from deburring to edge radiusing and polishing. Processing objectives are varied, ranging from the cosmetic to the functionally critical. One such critical application is the hydraulically smooth polishing of aero engine component gas-washed surfaces. In this, and many other applications, the drive to improve process control and finish tolerance is ever present. Considering its widespread use, mass finishing has seen limited research activity, particularly with respect to surface characterization. The objectives of the current paper are to: characterise the mass finished stratified surface and its development process using areal surface parameters; provide guidance on the optimal parameters and sampling method to characterise this surface type for a given application; and detail the spatial variation in surface topography due to coupon edge shadowing. Blasted and peened square plate coupons in titanium alloy are wet (vibro) mass finished iteratively with increasing duration. Measurement fields are precisely relocated between iterations by fixturing and an image superimposition alignment technique. Surface topography development is detailed with ‘log of process duration’ plots of the ‘areal parameters for scale-limited stratified functional surfaces’ (the Sk family). Characteristic features of the Smr2 plot are seen to map out the processing of peak, core and dale regions in turn. These surface process regions also become apparent in the ‘log of process duration’ plot for Sq, where the lower core and dale regions are well modelled by logarithmic functions. Surface finish (Ra or Sa) as a function of mass finishing duration is currently predicted with an exponential model. This model is shown to be limited for the current surface type over a critical range of surface finishes.
Statistical analysis identifies a group of areal parameters, including Vvc, Sq, and Sdq, showing optimal discrimination for a specific range of surface finish outcomes. As a consequence of edge shadowing, surface segregation is suggested for characterisation purposes.
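The exponential prediction model mentioned in the abstract, and the logarithmic form reported for the core and dale regions, can be sketched as follows. The functional forms are our reading of the standard models and all parameter values are hypothetical, not the authors' fitted results.

```python
import math

def sa_exponential(t, sa0, sa_inf, k):
    """Exponential surface-finish model: Sa decays from the initial
    roughness sa0 toward the asymptote sa_inf at rate k."""
    return sa_inf + (sa0 - sa_inf) * math.exp(-k * t)

def sq_logarithmic(t, a, b):
    """Logarithmic model for Sq in the core/dale processing regions
    (t in the same duration units used for fitting, t > 0)."""
    return a - b * math.log(t)

# Hypothetical values: 5 um initial roughness decaying toward 1 um.
print(sa_exponential(0.0, 5.0, 1.0, 0.05))  # -> 5.0 (unprocessed surface)
```

Per the abstract, the exponential form breaks down over a critical range of finishes for this surface type, which is why the logarithmic fits to the core and dale regions are of interest.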
Ranking of Business Process Simulation Software Tools with DEX/QQ Hierarchical Decision Model.
Damij, Nadja; Boškoski, Pavle; Bohanec, Marko; Mileva Boshkoska, Biljana
2016-01-01
The omnipresent need for optimisation requires constant improvement of companies' business processes (BPs). Minimising the risk of an inappropriate BP being implemented is usually done by simulating the newly developed BP under various initial conditions and "what-if" scenarios. Effective business process simulation software (BPSS) is a prerequisite for accurate analysis of a BP. Characterisation of a BPSS tool is a challenging task due to the complex selection criteria, which include the quality of visual aspects, simulation capabilities, statistical facilities, quality of reporting, etc. Under such circumstances, making an optimal decision is challenging, and various decision support models are therefore employed to aid BPSS tool selection. The currently established decision support models are either proprietary or comprise only a limited subset of criteria, which affects their accuracy. Addressing this issue, this paper proposes a new hierarchical decision support model for ranking BPSS tools by their technical characteristics, employing DEX and the qualitative-to-quantitative (QQ) methodology. Consequently, the decision expert feeds in the required information in a systematic and user-friendly manner. There are three significant contributions of the proposed approach. Firstly, the proposed hierarchical model is easily extendible for adding new criteria to the hierarchical structure. Secondly, a fully operational decision support system (DSS) tool that implements the proposed hierarchical model is presented. Finally, the effectiveness of the proposed hierarchical model is assessed by comparing the resulting rankings of BPSS tools with currently available results.
An adaptive ARX model to estimate the RUL of aluminum plates based on its crack growth
NASA Astrophysics Data System (ADS)
Barraza-Barraza, Diana; Tercero-Gómez, Víctor G.; Beruvides, Mario G.; Limón-Robles, Jorge
2017-01-01
A wide variety of Condition-Based Maintenance (CBM) techniques deal with the problem of predicting the time of an asset fault. Most statistical approaches rely on historical failure data, which might not be available in many practical situations. To address this issue, practitioners might require self-starting approaches that consider only the available knowledge about the current degradation process and the asset operating context to update the prognostic model. Some authors use autoregressive (AR) models for this purpose, which are adequate when the asset operating context is constant; however, if it is variable, the accuracy of the models can be affected. In this paper, three autoregressive models with exogenous variables (ARX) were constructed, and their capability to estimate the remaining useful life (RUL) of a process was evaluated for the case of aluminum crack growth. An existing stochastic model of aluminum crack growth was implemented and used to assess the RUL estimation performance of the proposed ARX models through extensive Monte Carlo simulations. Point and interval estimations were made based only on individual history, behavior, operating conditions and failure thresholds. Both analytic and bootstrapping techniques were used in the estimation process. Finally, by including recursive parameter estimation and a forgetting factor, the ARX methodology adapts to changing operating conditions and maintains focus on the current degradation level of an asset.
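A minimal sketch of recursive parameter estimation with a forgetting factor, the adaptation mechanism the abstract describes, might look as follows. The model order, forgetting factor, and test system below are our own assumptions, not the authors' specification.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive least-squares step with forgetting factor lam for an
    ARX model y_t = phi_t . theta + e_t; older samples are down-weighted
    so the estimate can track changing operating conditions."""
    phi = phi.reshape(-1, 1)
    gain = P @ phi / (lam + (phi.T @ P @ phi).item())
    error = y - (phi.T @ theta).item()          # one-step prediction error
    theta = theta + gain * error
    P = (P - gain @ phi.T @ P) / lam
    return theta, P

# Identify a hypothetical first-order system y_t = 0.8 y_{t-1} + 0.5 u_{t-1}.
rng = np.random.default_rng(1)
theta, P = np.zeros((2, 1)), 100.0 * np.eye(2)
y_prev, u_prev = 0.0, 0.0
for _ in range(300):
    y = 0.8 * y_prev + 0.5 * u_prev
    theta, P = rls_update(theta, P, np.array([y_prev, u_prev]), y)
    y_prev, u_prev = y, rng.standard_normal()
print(theta.ravel())  # converges near [0.8, 0.5]
```

Setting `lam` below 1 is what lets the estimate forget an outdated operating context; `lam = 1` recovers ordinary recursive least squares.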
NASA Technical Reports Server (NTRS)
Rosenfeld, David; Bahir, Gad
1992-01-01
This paper presents a theoretical model for the trap-assisted tunneling process in diffused n-on-p and implanted n(+)-on-p HgCdTe photodiodes. The model describes the connection between the leakage current associated with the traps and the trap characteristics: concentration, energy level, and capture cross sections. It is observed that the two types of diodes differ in the voltage dependence of the trap-assisted tunneling current and dynamic resistance. The model takes this difference into account and offers an explanation of the phenomenon. The good fit between measured and calculated dc characteristics of the photodiodes supports the validity of the model.
[Problems of work world and its impact on health. Current financial crisis].
Tomasina, Fernando
2012-06-01
Health and work are complex, multifaceted processes that are linked to and influence each other. It follows that the world of work is extremely complex and heterogeneous. In this world, "old" or traditional risks coexist with "modern" risks derived from new models of work organization and the incorporation of new technologies. Unemployment, the precariousness of work relationships and the outsourcing of work risks are results of neoliberal strategies. Some negative outcomes of the health-sickness process derived from transformations in the world of work and the current global economic crisis can be observed in current working conditions. Finally, the need to rebuild policies that address this situation arising from the world of work is suggested.
The (Mathematical) Modeling Process in Biosciences.
Torres, Nestor V; Santos, Guido
2015-01-01
In this communication, we introduce a general framework and discussion on the role of models and the modeling process in the field of biosciences. The objective is to sum up the common procedures during the formalization and analysis of a biological problem from the perspective of Systems Biology, which approaches the study of biological systems as a whole. We begin by presenting the definitions of (biological) system and model. Particular attention is given to the meaning of mathematical model within the context of biology. Then, we present the process of modeling and analysis of biological systems. Three stages are described in detail: conceptualization of the biological system into a model, mathematical formalization of the previous conceptual model, and optimization and system management derived from the analysis of the mathematical model. Throughout this work the main features and shortcomings of the process are analyzed, and a set of rules that could help in the task of modeling any biological system is presented. Special regard is given to the formative requirements and the interdisciplinary nature of this approach. We conclude with some general considerations on the challenges that modeling is posing to current biology.
Linking Goal-Oriented Requirements and Model-Driven Development
NASA Astrophysics Data System (ADS)
Pastor, Oscar; Giachetti, Giovanni
In the context of Goal-Oriented Requirement Engineering (GORE) there are interesting modeling approaches for the analysis of complex scenarios that are oriented to obtaining and representing the relevant requirements for the development of software products. However, the way to use these GORE models in an automated Model-Driven Development (MDD) process is not clear, and, in general terms, the translation of these models into the final software products is still performed manually. Therefore, in this chapter, we show an approach to automatically link GORE models and MDD processes, which has been elaborated by considering the experience obtained from linking the i* framework with an industrially applied MDD approach. The proposed linking approach is formulated by means of a generic process that is based on current modeling standards and technologies in order to facilitate its application to different MDD and GORE approaches. Special attention is paid to how this process generates appropriate model transformation mechanisms to automatically obtain MDD conceptual models from GORE models, and how it can be used to specify validation mechanisms to assure correct model transformations.
Subic-Wrana, Claudia; Beutel, Manfred E; Garfield, David A S; Lane, Richard D
2011-04-01
The need to establish the efficacy of psychoanalytic long-term treatments has promoted efforts to operationalize psychic structure and structural change as key elements of psychoanalytic treatments and their outcomes. Current, promising measures of structural change, however, require extensive interviews and rater training. The purpose of this paper is to present the theory and measurement of Levels of Emotional Awareness (LEA) and to illustrate its use based on clinical case vignettes. The LEA model lays out a developmental trajectory of affective processing, akin to Piaget's theory of sensory-cognitive development, from implicit to explicit processing. Unlike other current assessments of psychic structure (Scales of Psychological Capacities, Reflective Functioning, Operationalized Psychodynamic Diagnostics) requiring intensive rater and interviewer training, it is easily assessed based on a self-report performance test. The LEA model conceptualizes a basic psychological capacity, affect processing. As we will illustrate using two case vignettes, by operationalizing implicit and explicit modes of affect processing, it provides a clinical measure of emotional awareness that is highly pertinent to the ongoing psychoanalytic debate on the nature and mechanisms of structural change. Copyright © 2011 Institute of Psychoanalysis.
Estimation of steady-state leakage current in polycrystalline PZT thin films
NASA Astrophysics Data System (ADS)
Podgorny, Yury; Vorotilov, Konstantin; Sigov, Alexander
2016-09-01
Estimation of the steady-state (or "true") leakage current Js in polycrystalline ferroelectric PZT films using the voltage-step technique is discussed. The Curie-von Schweidler (CvS) and sum-of-exponents (Σexp) models are studied for fitting the current-time J(t) data. The Σexp model (a sum of two or three exponents) gives better fitting characteristics and provides good accuracy of Js estimation at reduced measurement time, thus making it possible to avoid film degradation, whereas the CvS model is very sensitive to both the start and finish time points and in many cases gives incorrect results. The results suggest the existence of low-frequency relaxation processes in PZT films with characteristic durations of tens to hundreds of seconds.
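The two candidate models have simple closed forms, sketched below. The amplitudes and time constants are arbitrary illustrative values; the sketch only shows the functional forms and the long-time approach of the transient to the steady-state current Js, not the paper's fitting procedure.

```python
import numpy as np

def curie_von_schweidler(t, j_s, a, n):
    """CvS relaxation model: J(t) = J_s + A * t**(-n)."""
    return j_s + a * t ** (-n)

def sum_of_exponents(t, j_s, amps, taus):
    """Sum-of-exponents model: J(t) = J_s + sum_i A_i * exp(-t / tau_i)."""
    t = np.asarray(t, dtype=float)
    return j_s + sum(a * np.exp(-t / tau) for a, tau in zip(amps, taus))

t = np.logspace(0, 3, 200)  # 1 s .. 1000 s, log-spaced like a step response
j = sum_of_exponents(t, 1e-9, [5e-8, 1e-8], [30.0, 200.0])
print(j[-1])  # by t = 1000 s the transient has decayed close to J_s = 1e-9 A
```

The time constants of tens to hundreds of seconds chosen here mirror the low-frequency relaxation durations the abstract reports.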
Scientific Overview of Temporal Experiment for Storms and Tropical Systems (TEMPEST) Program
NASA Astrophysics Data System (ADS)
Chandra, C. V.; Reising, S. C.; Kummerow, C. D.; van den Heever, S. C.; Todd, G.; Padmanabhan, S.; Brown, S. T.; Lim, B.; Haddad, Z. S.; Koch, T.; Berg, G.; L'Ecuyer, T.; Munchak, S. J.; Luo, Z. J.; Boukabara, S. A.; Ruf, C. S.
2014-12-01
Over the past decade and a half, focused Earth science observational satellite missions have given us a better understanding of the role of clouds and precipitation in Earth's water cycle, energy budget and climate. However, these missions provide only a snapshot at one point in time of a cloud's development. Processes that govern cloud system development occur primarily on time scales of the order of 5-30 minutes, which are generally not observable from low Earth orbiting satellites. Geostationary satellites, in contrast, have higher temporal resolution but at present are limited to visible and infrared wavelengths that observe only the tops of clouds. This observing gap was noted by the National Research Council's Earth Science Decadal Survey in 2007. Uncertainties in global climate models are significantly affected by the processes that govern the formation and dissipation of clouds, which largely control the global water and energy budgets. Current uncertainties in cloud parameterization within climate models lead to drastically different climate outcomes. With all evidence suggesting that precipitation onset may be governed by factors such as atmospheric stability, it becomes critical to have at least first-order observations globally in diverse climate regimes. Similar arguments are valid for ice processes, where more efficient ice formation and precipitation tend to leave behind fewer ice clouds, which have different but equally important impacts on the Earth's energy budget and resulting temperature trends. TEMPEST is a unique program that will provide a small constellation of inexpensive CubeSats with millimeter-wave radiometers to address key science needs related to cloud and precipitation processes.
Because these processes are most critical in the development of climate models that will soon run at scales that explicitly resolve clouds, the TEMPEST program will directly focus on examining, validating and improving the parameterizations currently used in cloud scale models. The time evolution of cloud and precipitation microphysics is dependent upon parameterized process rates. The outcome of TEMPEST will provide a first-order understanding of how individual assumptions in current cloud model parameterizations behave in diverse climate regimes.
Graded effects in hierarchical figure-ground organization: reply to Peterson (1999).
Vecera, S P; O'Reilly, R C
2000-06-01
An important issue in vision research concerns the order of visual processing. S. P. Vecera and R. C. O'Reilly (1998) presented an interactive, hierarchical model that placed figure-ground segregation prior to object recognition. M. A. Peterson (1999) critiqued this model, arguing that because it used ambiguous stimulus displays, figure-ground processing did not precede object processing. In the current article, the authors respond to Peterson's (1999) interpretation of ambiguity in the model and her interpretation of what it means for figure-ground processing to come before object recognition. The authors argue that complete stimulus ambiguity is not critical to the model and that figure-ground precedes object recognition architecturally in the model. The arguments are supported with additional simulation results and an experiment, demonstrating that top-down inputs can influence figure-ground organization in displays that contain stimulus cues.
Frisch, Simon; Dshemuchadse, Maja; Görner, Max; Goschke, Thomas; Scherbaum, Stefan
2015-11-01
Selective attention biases information processing toward stimuli that are relevant for achieving our goals. However, the nature of this bias is under debate: Does it solely rely on the amplification of goal-relevant information or is there a need for additional inhibitory processes that selectively suppress currently distracting information? Here, we explored the processes underlying selective attention with a dynamic, modeling-based approach that focuses on the continuous evolution of behavior over time. We present two dynamic neural field models incorporating the diverging theoretical assumptions. Simulations with both models showed that they make similar predictions with regard to response times but differ markedly with regard to their continuous behavior. Human data observed via mouse tracking as a continuous measure of performance revealed evidence for the model solely based on amplification but no indication of persisting selective distracter inhibition.
Mobile agent location in distributed environments
NASA Astrophysics Data System (ADS)
Fountoukis, S. G.; Argyropoulos, I. P.
2012-12-01
An agent is a small program acting on behalf of a user or an application which plays the role of a user. Artificial intelligence can be encapsulated in agents so that they can be capable of both behaving autonomously and showing an elementary decision ability regarding movement and some specific actions. Therefore they are often called autonomous mobile agents. In a distributed system, they can move themselves from one processing node to another through the interconnecting network infrastructure. Their purpose is to collect useful information and to carry it back to their user. Also, agents are used to start, monitor and stop processes running on the individual interconnected processing nodes of computer cluster systems. An agent has a unique id to discriminate itself from other agents and a current position. The position can be expressed as the address of the processing node which currently hosts the agent. Very often, it is necessary for a user, a processing node or another agent to know the current position of an agent in a distributed system. Several procedures and algorithms have been proposed for the purpose of position location of mobile agents. The most basic of all employs a fixed computing node, which acts as agent position repository, receiving messages from all the moving agents and keeping records of their current positions. The fixed node, responds to position queries and informs users, other nodes and other agents about the position of an agent. Herein, a model is proposed that considers pairs and triples of agents instead of single ones. A location method, which is investigated in this paper, attempts to exploit this model.
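The baseline scheme the text describes, a fixed node keeping records of agent positions and answering position queries, can be sketched in a few lines. The class and names are hypothetical illustrations; the pair/triple model the paper proposes is not reproduced here.

```python
class PositionRepository:
    """Fixed-node registry mapping agent ids to their current host node."""

    def __init__(self):
        self._positions = {}

    def report(self, agent_id, node_address):
        """Called by an agent after it migrates to a new processing node."""
        self._positions[agent_id] = node_address

    def locate(self, agent_id):
        """Answer a position query; None if the agent is unknown."""
        return self._positions.get(agent_id)

repo = PositionRepository()
repo.report("agent-42", "node-7.cluster.local")
print(repo.locate("agent-42"))  # -> node-7.cluster.local
```

The well-known drawback of this baseline is that every migration generates a message to the fixed node, which becomes a bottleneck and a single point of failure, which is the motivation for distributed schemes such as the pair/triple model investigated in the paper.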
Scanning SQUID Microscope and its Application in Detecting Weak Currents
NASA Astrophysics Data System (ADS)
Zhong, Chaorong; Li, Fei; Zhang, Fenghui; Ding, Hongsheng; Luo, Sheng; Lin, Dehua; He, Yusheng
A scanning SQUID microscope based on a HTS dc SQUID has been developed. One of the applications of this microscope is to detect weak currents inside a sample. Considering that what is being detected by the SQUID is the vertical component of the magnetic field on the plane where the SQUID lies, whereas the current which produces the magnetic field is actually located in a plane below the SQUID, a TWO-PLANE model has been established. In this model the Biot-Savart law and Fourier transformation were used to invert the detected magnetic field into the underlying weak current. It has been shown that the distance between the current and the SQUID and the noise level of the experimental data have significant effects on the quality of the inversion process.
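Why the current-to-SQUID distance matters can be illustrated with a textbook Biot-Savart result (not the paper's two-plane inversion): the vertical field measured at standoff height h above an infinite straight wire has its |Bz| peaks at lateral offset x = ±h with amplitude μ0·I/(4πh), so a larger standoff both smears and weakens the signature that must be inverted, amplifying the effect of measurement noise.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def bz_infinite_wire(x, h, current):
    """Magnitude of the vertical field component at lateral offset x and
    height h above an infinite straight wire (sign depends on the current
    direction): |B_z(x)| = mu0 * I * |x| / (2 * pi * (x**2 + h**2))."""
    x = np.asarray(x, dtype=float)
    return MU0 * current * np.abs(x) / (2.0 * np.pi * (x ** 2 + h ** 2))

# Peak value at x = h for 1 A measured 1 m above the wire: ~1e-7 T.
print(bz_infinite_wire(1.0, 1.0, 1.0))
```

Doubling h halves the peak amplitude and doubles the peak separation, which is why the two-plane inversion degrades rapidly with standoff distance.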
A THC Simulator for Modeling Fluid-Rock Interactions
NASA Astrophysics Data System (ADS)
Hamidi, Sahar; Galvan, Boris; Heinze, Thomas; Miller, Stephen
2014-05-01
Fluid-rock interactions play an essential role in many earth processes, from a likely influence on earthquake nucleation and aftershocks to enhanced geothermal systems, carbon capture and storage (CCS), and underground nuclear waste repositories. In THC models, two-way interactions between the different processes (thermal, hydraulic and chemical) are present. Fluid flow influences the permeability of the rock, especially if chemical reactions are taken into account. On one hand, solute concentration influences fluid properties while, on the other hand, heat can affect further chemical reactions. Estimating heat production from naturally fractured geothermal systems remains a complex problem. Previous works are typically based on a local thermal equilibrium assumption and rarely consider salinity. The salt dissolved in the fluid affects the hydrodynamic and thermodynamic behavior of the system by changing the hydraulic properties of the circulating fluid. Coupled thermal-hydraulic-chemical (THC) models are important for investigating these processes, but what is needed is a coupling to mechanics, resulting in THMC models. Although similar models currently exist (e.g. PFLOTRAN), our objective here is to develop algorithms for implementation on the Graphics Processing Unit (GPU) architecture, to be run on GPU clusters. To that end, we present a two-dimensional numerical simulation of a fully coupled non-isothermal non-reactive solute flow. The thermal part of the simulation models heat transfer processes for either local thermal equilibrium or nonequilibrium cases, coupled to a non-reactive mass transfer described by a non-linear diffusion/dispersion model. The flow process of the model includes non-linear Darcian flow for either saturated or unsaturated scenarios. For the unsaturated case, we use the Richards approximation for a mixture of liquid and gas phases. Relative permeability and capillary pressure are determined by the van Genuchten relations.
Permeability of rock is controlled by porosity, which is itself related to effective stress. The theoretical model is solved using explicit finite differences, and runs in parallel mode with OpenMP. The code is fully modular so that any combination of current THC processes, one- and two-phase, can be chosen. Future developments will include dissolution and precipitation of chemical components in addition to chemical erosion.
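The van Genuchten relations mentioned above have a standard closed form, sketched below with the Mualem form for relative permeability (one common pairing, not necessarily the authors' choice) and illustrative parameter values for a sandy soil.

```python
def van_genuchten(psi, alpha, n, theta_r, theta_s):
    """Van Genuchten retention curve with Mualem relative permeability.
    psi: suction head (> 0 when unsaturated); alpha, n: shape parameters;
    theta_r, theta_s: residual and saturated water contents."""
    m = 1.0 - 1.0 / n
    # Effective saturation Se in [0, 1]; fully saturated when psi <= 0.
    se = (1.0 + (alpha * psi) ** n) ** (-m) if psi > 0 else 1.0
    theta = theta_r + se * (theta_s - theta_r)            # water content
    kr = se ** 0.5 * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2  # Mualem
    return theta, kr

# Illustrative sand-like parameters: alpha = 1.45 1/m, n = 2.68.
theta_sat, kr_sat = van_genuchten(0.0, 1.45, 2.68, 0.045, 0.43)  # saturated
theta_u, kr_u = van_genuchten(1.0, 1.45, 2.68, 0.045, 0.43)      # unsaturated
```

In the saturated limit (psi <= 0) the relations reduce to theta = theta_s and kr = 1, which is the consistency check a THC flow module of this kind must satisfy at the saturated/unsaturated transition.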
Studying the Brain in a Dish: 3D Cell Culture Models of Human Brain Development and Disease.
Brown, Juliana; Quadrato, Giorgia; Arlotta, Paola
2018-01-01
The study of the cellular and molecular processes of the developing human brain has been hindered by access to suitable models of living human brain tissue. Recently developed 3D cell culture models offer the promise of studying fundamental brain processes in the context of human genetic background and species-specific developmental mechanisms. Here, we review the current state of 3D human brain organoid models and consider their potential to enable investigation of complex aspects of human brain development and the underpinning of human neurological disease. © 2018 Elsevier Inc. All rights reserved.
Neural-Net Processing of Characteristic Patterns From Electronic Holograms of Vibrating Blades
NASA Technical Reports Server (NTRS)
Decker, Arthur J.
1999-01-01
Finite-element-model-trained artificial neural networks can be used to process efficiently the characteristic patterns or mode shapes from electronic holograms of vibrating blades. The models used for routine design may not yet be sufficiently accurate for this application. This document discusses the creation of characteristic patterns; compares model-generated and experimental characteristic patterns; and discusses the neural networks that transform the characteristic patterns into strain or damage information. The current potential to adapt electronic holography to spin rigs, wind tunnels and engines provides an incentive to have accurate finite element models for training neural networks.
NASA Astrophysics Data System (ADS)
Holzmann, Hubert; Massmann, Carolina
2015-04-01
Many hydrological model types have been developed during the past decades. Most use a fixed design to describe the variable hydrological processes, assumed to be representative across the whole range of spatial and temporal scales. This assumption is questionable, as it is evident that runoff formation is driven by dominant processes which can vary among basins. Furthermore, model application and the interpretation of results are limited by the data available to identify the particular sub-processes, since most models are calibrated and validated only against discharge data. It can therefore be hypothesized that simpler model designs, focusing only on the dominant processes, can achieve comparable results with the benefit of fewer parameters. In the current contribution a modular model concept is introduced, which allows hydrological sub-processes to be included or omitted depending on catchment characteristics and data availability. Key elements of the process modules refer to (1) storage effects (interception, soil), (2) transfer processes (routing), (3) threshold processes (percolation, saturation overland flow) and (4) split processes (rainfall excess). Based on hydro-meteorological observations in an experimental catchment in the Slovak region of the Carpathian mountains, a comparison of several model realizations with different degrees of complexity is discussed. A special focus is given to model parameter sensitivity, estimated by a Markov chain Monte Carlo approach. Furthermore, the identification of dominant processes by means of Sobol's method is introduced. It could be shown that a flexible model design - and even the simple concept - can reach performance comparable to the standard model type (HBV-type). The main benefit of the modular concept is the individual adaptation of the model structure with respect to data and process availability, and the option for a parsimonious model design.
NASA Astrophysics Data System (ADS)
Hararuk, Oleksandra; Zwart, Jacob A.; Jones, Stuart E.; Prairie, Yves; Solomon, Christopher T.
2018-03-01
Formal integration of models and data to test hypotheses about the processes controlling carbon dynamics in lakes is rare, despite the importance of lakes in the carbon cycle. We built a suite of models (n = 102) representing different hypotheses about lake carbon processing, fit these models to data from a north-temperate lake using data assimilation, and identified which processes were essential for adequately describing the observations. The hypotheses that we tested concerned organic matter lability and its variability through time, temperature dependence of biological decay, photooxidation, microbial dynamics, and vertical transport of water via hypolimnetic entrainment and inflowing density currents. The data included epilimnetic and hypolimnetic CO2 and dissolved organic carbon, hydrologic fluxes, carbon loads, gross primary production, temperature, and light conditions at high frequency for one calibration and one validation year. The best models explained 76-81% and 64-67% of the variability in observed epilimnetic CO2 and dissolved organic carbon content in the validation data. Accurately describing C dynamics required accounting for hypolimnetic entrainment and inflowing density currents, in addition to accounting for biological transformations. In contrast, neither photooxidation nor variable organic matter lability improved model performance. The temperature dependence of biological decay (Q10) was estimated at 1.45, significantly lower than the commonly assumed Q10 of 2. By confronting multiple models of lake C dynamics with observations, we identified processes essential for describing C dynamics in a temperate lake at daily to annual scales, while also providing a methodological roadmap for using data assimilation to further improve understanding of lake C cycling.
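The fitted Q10 of 1.45 translates into a simple scaling rule for the biological decay rate. A one-line sketch (the reference temperature here is an arbitrary choice for illustration):

```python
def decay_rate(k_ref, temp_c, t_ref=20.0, q10=1.45):
    # Q10 scaling: the rate is multiplied by q10 for every 10 deg C above
    # the reference temperature (q10 = 1.45 per the study's estimate;
    # the commonly assumed value is 2).
    return k_ref * q10 ** ((temp_c - t_ref) / 10.0)
```

At Q10 = 1.45, warming from 20 to 30 deg C raises decay by 45%, versus a doubling under the conventional Q10 = 2 assumption.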
A Generalized Decision Framework Using Multi-objective Optimization for Water Resources Planning
NASA Astrophysics Data System (ADS)
Basdekas, L.; Stewart, N.; Triana, E.
2013-12-01
Colorado Springs Utilities (CSU) is currently engaged in an Integrated Water Resource Plan (IWRP) to address the complex planning scenarios, across multiple time scales, currently faced by CSU. The modeling framework developed for the IWRP uses a flexible data-centered Decision Support System (DSS) with a MODSIM-based modeling system to represent the operation of the current CSU raw water system coupled with a state-of-the-art multi-objective optimization algorithm. Three basic components are required for the framework, which can be implemented for planning horizons ranging from seasonal to interdecadal. First, a water resources system model is required that is capable of reasonable system simulation to resolve performance metrics at the appropriate temporal and spatial scales of interest. The system model should be an existing simulation model, or one developed during the planning process with stakeholders, so that 'buy-in' has already been achieved. Second, a hydrologic scenario tool(s) capable of generating a range of plausible inflows for the planning period of interest is required. This may include paleo informed or climate change informed sequences. Third, a multi-objective optimization model that can be wrapped around the system simulation model is required. The new generation of multi-objective optimization models do not require parameterization which greatly reduces problem complexity. Bridging the gap between research and practice will be evident as we use a case study from CSU's planning process to demonstrate this framework with specific competing water management objectives. Careful formulation of objective functions, choice of decision variables, and system constraints will be discussed. Rather than treating results as theoretically Pareto optimal in a planning process, we use the powerful multi-objective optimization models as tools to more efficiently and effectively move out of the inferior decision space. 
The use of this framework will help CSU evaluate tradeoffs in a continually changing world.
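The notion of moving "out of the inferior decision space" rests on Pareto dominance. A minimal non-dominated filter (objective vectors assumed to be minimized; this is the textbook definition, not CSU's optimization algorithm) sketches the idea:

```python
def pareto_front(solutions):
    # Return the non-dominated subset of a list of objective vectors,
    # where lower is better in every objective. A solution is dominated
    # if some other solution is at least as good everywhere and differs.
    front = []
    for s in solutions:
        dominated = any(
            all(o <= v for o, v in zip(other, s)) and other != s
            for other in solutions
        )
        if not dominated:
            front.append(s)
    return front
```

In a planning context the surviving vectors are the trade-off candidates presented to stakeholders, e.g. cost versus shortage risk.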
Queueing models for token and slotted ring networks. Thesis
NASA Technical Reports Server (NTRS)
Peden, Jeffery H.
1990-01-01
Currently the end-to-end delay characteristics of very high speed local area networks are not well understood. The transmission speed of computer networks is increasing, and local area networks especially are finding increasing use in real time systems. Ring network operation is generally well understood for both token rings and slotted rings. There is, however, a severe lack of queueing models for higher-layer operation. Several factors contribute to the processing delay of a packet, as opposed to the transmission delay, e.g., packet priority, its length, the user load, the processor load, the use of priority preemption, the use of preemption at packet reception, the number of processors, the number of protocol processing layers, the speed of each processor, and queue length limitations. Currently existing medium access queueing models are extended by adding modeling techniques which handle exhaustive limited service both with and without priority traffic, and modeling capabilities are extended into the upper layers of the OSI model. Some of the models are expressed as solution methods rather than parameterized solutions, since it is shown that certain models do not exist as parameterized solutions, but only as solution methods.
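The abstract does not reproduce the thesis's queueing formulas; as background, the simplest building block for such delay models is the standard M/M/1 result, sketched below (this is the classical expression, not the thesis's extended priority model).

```python
def mm1_mean_delay(arrival_rate, service_rate):
    # Mean time in system for an M/M/1 queue: W = 1 / (mu - lambda),
    # valid only while utilization rho = lambda / mu is below 1.
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: utilization >= 1")
    return 1.0 / (service_rate - arrival_rate)

def mm1_mean_queue_length(arrival_rate, service_rate):
    # Little's law: L = lambda * W = rho / (1 - rho).
    return arrival_rate * mm1_mean_delay(arrival_rate, service_rate)
```

Priority classes, preemption, and multi-layer protocol processing - the factors listed above - all complicate this baseline, which is why parameterized closed forms do not always exist.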
Prospects for improving the representation of coastal and shelf seas in global ocean models
NASA Astrophysics Data System (ADS)
Holt, Jason; Hyder, Patrick; Ashworth, Mike; Harle, James; Hewitt, Helene T.; Liu, Hedong; New, Adrian L.; Pickles, Stephen; Porter, Andrew; Popova, Ekaterina; Icarus Allen, J.; Siddorn, John; Wood, Richard
2017-02-01
Accurately representing coastal and shelf seas in global ocean models represents one of the grand challenges of Earth system science. They are regions of immense societal importance through the goods and services they provide, hazards they pose and their role in global-scale processes and cycles, e.g. carbon fluxes and dense water formation. However, they are poorly represented in the current generation of global ocean models. In this contribution, we aim to briefly characterise the problem, and then to identify the important physical processes, and their scales, needed to address this issue in the context of the options available to resolve these scales globally and the evolving computational landscape. We find barotropic and topographic scales are well resolved by the current state-of-the-art model resolutions, e.g. nominal 1/12°, and still reasonably well resolved at 1/4°; here, the focus is on process representation. We identify tides, vertical coordinates, river inflows and mixing schemes as four areas where modelling approaches can readily be transferred from regional to global modelling with substantial benefit. In terms of finer-scale processes, we find that a 1/12° global model resolves the first baroclinic Rossby radius for only ˜ 8 % of regions < 500 m deep, but this increases to ˜ 70 % for a 1/72° model, so resolving scales globally requires substantially finer resolution than the current state of the art. We quantify the benefit of improved resolution and process representation using 1/12° global- and basin-scale northern North Atlantic nucleus for a European model of the ocean (NEMO) simulations; the latter includes tides and a k-ɛ vertical mixing scheme. These are compared with global stratification observations and 19 models from CMIP5. In terms of correlation and basin-wide rms error, the high-resolution models outperform all these CMIP5 models. The model with tides shows improved seasonal cycles compared to the high-resolution model without tides. 
The benefits of resolution are particularly apparent in eastern boundary upwelling zones. To explore the balance between the size of a globally refined model and that of multiscale modelling options (e.g. finite element, finite volume or a two-way nesting approach), we consider a simple scale analysis and a conceptual grid refining approach. We put this analysis in the context of evolving computer systems, discussing model turnaround time, scalability and resource costs. Using a simple cost model compared to a reference configuration (taken to be a 1/4° global model in 2011) and the increasing performance of the UK Research Councils' computer facility, we estimate an unstructured mesh multiscale approach, resolving process scales down to 1.5 km, would use a comparable share of the computer resource by 2021, the two-way nested multiscale approach by 2022, and a 1/72° global model by 2026. However, we also note that a 1/12° global model would not have a comparable computational cost to a 1° global model in 2017 until 2027. Hence, we conclude that for computationally expensive models (e.g. for oceanographic research or operational oceanography), resolving scales to ˜ 1.5 km would be routinely practical in about a decade given substantial effort on numerical and computational development. For complex Earth system models, this extends to about 2 decades, suggesting the focus here needs to be on improved process parameterisation to meet these challenges.
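The resolution argument can be made concrete: the first baroclinic Rossby radius scales roughly as NH/(pi f), and a grid "resolves" it only if its spacing is a few times finer. The sketch below uses illustrative stratification values and a crude two-points-per-radius criterion, not the paper's diagnostics.

```python
import math

def first_baroclinic_rossby_radius(n_buoyancy, depth, lat_deg):
    # L_R = N * H / (pi * |f|), with f the Coriolis parameter
    # at the given latitude (standard deformation-radius estimate).
    omega = 7.2921e-5  # Earth's rotation rate, rad/s
    f = 2 * omega * math.sin(math.radians(lat_deg))
    return n_buoyancy * depth / (math.pi * abs(f))

def grid_resolves(radius_m, grid_deg, lat_deg, points_needed=2):
    # A deformation-scale feature is "resolved" if the zonal grid
    # spacing is at least `points_needed` times finer than the radius.
    dx = grid_deg * 111e3 * math.cos(math.radians(lat_deg))
    return dx * points_needed <= radius_m
```

For a 200 m deep shelf at mid-latitudes the radius comes out near 3 km, which a 1/12° grid misses but a 1/72° grid captures, consistent with the 8% versus 70% figures quoted above.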
A Cognitive Model for Problem Solving in Computer Science
ERIC Educational Resources Information Center
Parham, Jennifer R.
2009-01-01
According to industry representatives, computer science education needs to emphasize the processes involved in solving computing problems rather than their solutions. Most of the current assessment tools used by universities and computer science departments analyze student answers to problems rather than investigating the processes involved in…
Error characterization of microwave satellite soil moisture data sets using fourier analysis
USDA-ARS?s Scientific Manuscript database
Soil moisture is a key geophysical variable in hydrological and meteorological processes. Accurate and current observations of soil moisture over meso to global scales used as inputs to hydrological, weather and climate modelling will benefit the predictability and understanding of these processes. ...
Memory, Cognitive Processing, and the Process of "Listening": A Reply to Thomas and Levine.
ERIC Educational Resources Information Center
Bostrom, Robert N.
1996-01-01
Describes several "inaccurate" statements made in L. Thomas' and T. Levine's article in this journal (volume 21, page 103) regarding the current author's research and positions on the listening construct. Suggests that Thomas' and Levine's model has serious methodological flaws. (RS)
Jadi, Monika P; Behabadi, Bardia F; Poleg-Polsky, Alon; Schiller, Jackie; Mel, Bartlett W
2014-05-01
In pursuit of the goal to understand and eventually reproduce the diverse functions of the brain, a key challenge lies in reverse engineering the peculiar biology-based "technology" that underlies the brain's remarkable ability to process and store information. The basic building block of the nervous system is the nerve cell, or "neuron," yet after more than 100 years of neurophysiological study and 60 years of modeling, the information processing functions of individual neurons, and the parameters that allow them to engage in so many different types of computation (sensory, motor, mnemonic, executive, etc.) remain poorly understood. In this paper, we review both historical and recent findings that have led to our current understanding of the analog spatial processing capabilities of dendrites, the major input structures of neurons, with a focus on the principal cell type of the neocortex and hippocampus, the pyramidal neuron (PN). We encapsulate our current understanding of PN dendritic integration in an abstract layered model whose spatially sensitive branch-subunits compute multidimensional sigmoidal functions. Unlike the 1-D sigmoids found in conventional neural network models, multidimensional sigmoids allow the cell to implement a rich spectrum of nonlinear modulation effects directly within their dendritic trees.
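One way to see how a multidimensional sigmoid differs from the 1-D sigmoids of conventional neural networks is a toy two-input branch subunit in which a "distal" input shifts the threshold of the "proximal" response rather than simply adding to it. The functional form and constants here are illustrative assumptions, not the authors' fitted model.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def branch_subunit(proximal, distal, gain=4.0, modulation=2.0):
    # A 2-D sigmoidal branch subunit (assumed form): distal drive lowers
    # the activation threshold seen by the proximal input, producing a
    # multiplicative-like modulation that a 1-D sigmoid of the summed
    # inputs cannot reproduce.
    threshold = 1.0 - 0.5 * sigmoid(modulation * distal)
    return sigmoid(gain * (proximal - threshold))
```

The same proximal input thus yields a larger response when distal input is present, the kind of location-dependent nonlinear modulation the review attributes to pyramidal-neuron dendrites.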
NASA Astrophysics Data System (ADS)
Heimann, M.; Prentice, I. C.; Foley, J.; Hickler, T.; Kicklighter, D. W.; McGuire, A. D.; Melillo, J. M.; Ramankutty, N.; Sitch, S.
2001-12-01
Models of biophysical and biogeochemical processes are being used - either offline or in coupled climate-carbon cycle (C4) models - to assess climate- and CO2-induced feedbacks on atmospheric CO2. Observations of atmospheric CO2 concentration, and supplementary tracers including O2 concentrations and isotopes, offer unique opportunities to evaluate the large-scale behaviour of models. Global patterns, temporal trends, and interannual variability of the atmospheric CO2 concentration and its seasonal cycle provide crucial benchmarks for simulations of regionally-integrated net ecosystem exchange; flux measurements by eddy correlation allow a far more demanding model test at the ecosystem scale than conventional indicators, such as measurements of annual net primary production; and large-scale manipulations, such as the Duke Forest Free Air Carbon Enrichment (FACE) experiment, give a standard to evaluate modelled phenomena such as ecosystem-level CO2 fertilization. Model runs including historical changes of CO2, climate and land use allow comparison with regional-scale monthly CO2 balances as inferred from atmospheric measurements. Such comparisons are providing grounds for some confidence in current models, while pointing to processes that may still be inadequately treated. Current plans focus on (1) continued benchmarking of land process models against flux measurements across ecosystems and experimental findings on the ecosystem-level effects of enhanced CO2, reactive N inputs and temperature; (2) improved representation of land use, forest management and crop metabolism in models; and (3) a strategy for the evaluation of C4 models in a historical observational context.
A KPI framework for process-based benchmarking of hospital information systems.
Jahn, Franziska; Winter, Alfred
2011-01-01
Benchmarking is a major topic for monitoring, directing and elucidating the performance of hospital information systems (HIS). Current approaches neglect the outcome of the processes that are supported by the HIS and their contribution to the hospital's strategic goals. We suggest to benchmark HIS based on clinical documentation processes and their outcome. A framework consisting of a general process model and outcome criteria for clinical documentation processes is introduced.
Lattice Gauge Theories Within and Beyond the Standard Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gelzer, Zechariah John
The Standard Model of particle physics has been very successful in describing fundamental interactions up to the highest energies currently probed in particle accelerator experiments. However, the Standard Model is incomplete and currently exhibits tension with experimental data for interactions involving $B$~mesons. Consequently, $B$-meson physics is of great interest to both experimentalists and theorists. Experimentalists worldwide are studying the decay and mixing processes of $B$~mesons in particle accelerators. Theorists are working to understand the data by employing lattice gauge theories within and beyond the Standard Model. This work addresses the theoretical effort and is divided into two main parts. In the first part, I present a lattice-QCD calculation of form factors for exclusive semileptonic decays of $B$~mesons that are mediated by charged currents, such as $B \to \pi \ell \nu$.
Trends in Mediation Analysis in Nursing Research: Improving Current Practice.
Hertzog, Melody
2018-06-01
The purpose of this study was to describe common approaches used by nursing researchers to test mediation models and evaluate them within the context of current methodological advances. MEDLINE was used to locate studies testing a mediation model and published from 2004 to 2015 in nursing journals. Design (experimental/correlation, cross-sectional/longitudinal, model complexity) and analysis (method, inclusion of test of mediated effect, violations/discussion of assumptions, sample size/power) characteristics were coded for 456 studies. General trends were identified using descriptive statistics. Consistent with findings of reviews in other disciplines, evidence was found that nursing researchers may not be aware of the strong assumptions and serious limitations of their analyses. Suggestions for strengthening the rigor of such studies and an overview of current methods for testing more complex models, including longitudinal mediation processes, are presented.
Predictive and Prognostic Models: Implications for Healthcare Decision-Making in a Modern Recession
Vogenberg, F. Randy
2009-01-01
Various modeling tools have been developed to address the lack of standardized processes that incorporate the perspectives of all healthcare stakeholders. Such models can assist in the decision-making process aimed at achieving specific clinical outcomes, as well as guide the allocation of healthcare resources and reduce costs. The current efforts in Congress to change the way healthcare is financed, reimbursed, and delivered have rendered the incorporation of modeling tools into the clinical decision-making all the more important. Prognostic and predictive models are particularly relevant to healthcare, particularly in the clinical decision-making, with implications for payers, patients, and providers. The use of these models is likely to increase, as providers and patients seek to improve their clinical decision process to achieve better outcomes, while reducing overall healthcare costs. PMID:25126292
NASA Astrophysics Data System (ADS)
Ding, Y.; Yu, J.; Bao, X.; Yao, Z.
2016-02-01
The characteristics and dynamical mechanism of the summer-time coastal current over the northwestern South China Sea (NSCS) shelf have been investigated based on a high-resolution unstructured-grid finite volume community ocean model (FVCOM). Model-data comparison demonstrates that the model resolves the coastal dynamics over the NSCS shelf well. The coastal current on the NSCS shelf is strongly influenced by the monsoon and by freshwater discharge from the Pearl River. Strong southwesterly winds drive the coastal current northeastward. However, under a weak southwest monsoon, the coastal current west of the Pearl River estuary (PRE) advects toward the southwest and splits into two parts when reaching the east of the Qiongzhou Strait: one branch enters the Gulf of Tonkin through the Qiongzhou Strait, transporting low-salinity water into the Gulf of Tonkin, while the other flows cyclonically and interacts with the northeastward current southeast of Hainan Island, forming a cyclonic eddy east of the Qiongzhou Strait. A variety of model experiments focused on freshwater discharge, wind forcing, tidal rectification, and stratification are performed to study the physical mechanism of the southwestward coastal current, which usually runs against the summer wind. Process-oriented experiment results indicate that the southwest monsoon and freshwater discharge are important factors in the formation of the southwestward coastal current during summer. Momentum balance analysis suggests that the along-shelf barotropic pressure gradient due to the Pearl River discharge and wind forcing provides the main driving force for the southwestward coastal current.
Preface: Current perspectives in modelling, monitoring, and predicting geophysical fluid dynamics
NASA Astrophysics Data System (ADS)
Mancho, Ana M.; Hernández-García, Emilio; López, Cristóbal; Turiel, Antonio; Wiggins, Stephen; Pérez-Muñuzuri, Vicente
2018-02-01
The third edition of the international workshop Nonlinear Processes in Oceanic and Atmospheric Flows was held at the Institute of Mathematical Sciences (ICMAT) in Madrid from 6 to 8 July 2016. The event gathered oceanographers, atmospheric scientists, physicists, and applied mathematicians sharing a common interest in the nonlinear dynamics of geophysical fluid flows. The philosophy of this meeting was to bring together researchers from a variety of backgrounds into an environment that favoured a vigorous discussion of concepts across different disciplines. The present Special Issue on Current perspectives in modelling, monitoring, and predicting geophysical fluid dynamics contains selected contributions, mainly from attendants of the workshop, providing an updated perspective on modelling aspects of geophysical flows as well as issues on prediction and assimilation of observational data and novel tools for describing transport and mixing processes in these contexts. More details on these aspects are discussed in this preface.
Autophagy in Drosophila: From Historical Studies to Current Knowledge
Mulakkal, Nitha C.; Nagy, Peter; Takats, Szabolcs; Tusco, Radu; Juhász, Gábor; Nezis, Ioannis P.
2014-01-01
The discovery of evolutionarily conserved Atg genes required for autophagy in yeast truly revolutionized this research field and made it possible to carry out functional studies on model organisms. Insects including Drosophila are classical and still popular models to study autophagy, starting from the 1960s. This review aims to summarize past achievements and our current knowledge about the role and regulation of autophagy in Drosophila, with an outlook to yeast and mammals. The basic mechanisms of autophagy in fruit fly cells appear to be quite similar to other eukaryotes, and the role that this lysosomal self-degradation process plays in Drosophila models of various diseases already made it possible to recognize certain aspects of human pathologies. Future studies in this complete animal hold great promise for the better understanding of such processes and may also help finding new research avenues for the treatment of disorders with misregulated autophagy. PMID:24949430
Applications of the International Space Station Probabilistic Risk Assessment Model
NASA Technical Reports Server (NTRS)
Grant, Warren; Lutomski, Michael G.
2011-01-01
Recently the International Space Station (ISS) has incorporated more Probabilistic Risk Assessments (PRAs) in the decision making process for significant issues. Future PRAs will have major impact to ISS and future spacecraft development and operations. These PRAs will have their foundation in the current complete ISS PRA model and the current PRA trade studies that are being analyzed as requested by ISS Program stakeholders. ISS PRAs have recently helped in the decision making process for determining reliability requirements for future NASA spacecraft and commercial spacecraft, making crew rescue decisions, as well as making operational requirements for ISS orbital orientation, planning Extravehicular activities (EVAs) and robotic operations. This paper will describe some applications of the ISS PRA model and how they impacted the final decision. This paper will discuss future analysis topics such as life extension, requirements of new commercial vehicles visiting ISS.
Wenchi Jin; Hong S. He; Frank R. Thompson; Wen J. Wang; Jacob S. Fraser; Stephen R. Shifley; Brice B. Hanberry; William D. Dijak
2017-01-01
The Central Hardwood Forest (CHF) in the United States is currently a major carbon sink, but there are uncertainties in how long the current carbon sink will persist and whether the CHF will eventually become a carbon source. We used a multi-model ensemble to investigate aboveground carbon density of the CHF from 2010 to 2300 under current climate. Simulations were done using...
A review of natural lightning - Experimental data and modeling
NASA Technical Reports Server (NTRS)
Uman, M. A.; Krider, E. P.
1982-01-01
A critical review is presented of the currents and the electric and magnetic fields characteristic of each of the salient discharge processes which make up cloud-to-ground and intracloud lightning. Emphasis is placed on the more recent work in which measured waveform variation is in the microsecond and submicrosecond range, since it is this time-scale that is of primary importance in lightning/aircraft interactions. The state-of-the-art of the modeling of lightning currents and fields is discussed in detail. A comprehensive bibliography is given of all literature relating to both lightning measurements and models.
Fuzzy model-based fault detection and diagnosis for a pilot heat exchanger
NASA Astrophysics Data System (ADS)
Habbi, Hacene; Kidouche, Madjid; Kinnaert, Michel; Zelmat, Mimoun
2011-04-01
This article addresses the design and real-time implementation of a fuzzy model-based fault detection and diagnosis (FDD) system for a pilot co-current heat exchanger. The design method is based on a three-step procedure which involves the identification of data-driven fuzzy rule-based models, the design of a fuzzy residual generator and the evaluation of the residuals for fault diagnosis using statistical tests. The fuzzy FDD mechanism has been implemented and validated on the real co-current heat exchanger, and has been proven to be efficient in detecting and isolating process, sensor and actuator faults.
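The third step of the procedure - evaluating residuals with statistical tests - can be sketched with a simple windowed z-test. The threshold and noise level below are hypothetical, and in the actual FDD system the residuals come from the fuzzy residual generator rather than an analytic model.

```python
import math

def residual_alarm(residuals, sigma0, z_threshold=3.0):
    # Flag a fault when the windowed mean residual exceeds `z_threshold`
    # standard errors of the fault-free noise level sigma0. In fault-free
    # operation the mean stays near zero; a process, sensor, or actuator
    # fault biases the residual and drives |z| above the threshold.
    n = len(residuals)
    mean = sum(residuals) / n
    z = mean / (sigma0 / math.sqrt(n))
    return abs(z) > z_threshold
```

Isolation then follows from which residual (process, sensor, or actuator channel) raises the alarm, as in the structured-residual schemes the article builds on.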
Chemical kinetic and photochemical data for use in stratospheric modelling
NASA Technical Reports Server (NTRS)
Demore, W. B.; Stief, L. J.; Kaufman, F.; Golden, D. M.; Hampton, R. F.; Kurylo, M. J.; Margitan, J. J.; Molina, M. J.; Watson, R. T.
1979-01-01
An evaluated set of rate constants and photochemical cross sections was compiled for use in modelling stratospheric processes. The data are primarily relevant to the ozone layer and its possible perturbation by anthropogenic activities. The evaluation is current to approximately January 1979.
AQMEII: A New International Initiative on Air Quality Model Evaluation
We provide a conceptual view of the process of evaluating regional-scale three-dimensional numerical photochemical air quality modeling systems, based on an examination of existing approaches to the evaluation of such systems as they are currently used in a variety of applications.
EFFECTS OF CLIMATE CHANGE ON WEATHER AND WATER
Information regarding weather and hydrological processes and how they may change in the future is available from a variety of dynamically downscaled climate models. Current studies are helping to improve the use of such models for regional climate impact studies by testing the s...
Challenges in soil erosion research and prediction model development
USDA-ARS?s Scientific Manuscript database
Quantification of soil erosion has been traditionally considered as a surface hydrologic process with equations for soil detachment and sediment transport derived from the mechanics and hydraulics of the rainfall and surface flow. Under the current erosion modeling framework, the soil has a constant...
Modernizing Selection and Promotion Procedures in the State Employment Security Service Agency.
ERIC Educational Resources Information Center
Derryck, Dennis A.; Leyes, Richard
The purpose of this feasibility study was to discover the types of selection and promotion models, strategies, and processes that must be employed if current State Employment Security Service Agency selection practices are to be made more directly relevant to the various populations currently being served. Specifically, the study sought to…
ERIC Educational Resources Information Center
Bye, Jayne
Current research into youth transitions in Australia documents an increasingly individualized process in which significant numbers of youths are deemed at risk of not making a successful transition from school to work. Many theorists are questioning the applicability of the linear model of transition to current conditions. Other theorists are…
Current and Future Effects of Mexican Immigration in California. Executive Summary. R-3365/1-CR.
ERIC Educational Resources Information Center
McCarthy, Kevin F.; Valdez, R. Burciaga
This study to assess the current situation of Mexican immigrants in California and project future possibilities constructs a demographic profile of the immigrants, examines their economic effects on the state, and describes their socioeconomic integration into California society. Models of immigration/integration processes are developed and used…
Approximation of epidemic models by diffusion processes and their statistical inference.
Guy, Romain; Larédo, Catherine; Vergu, Elisabeta
2015-02-01
Multidimensional continuous-time Markov jump processes [Formula: see text] on [Formula: see text] form a usual set-up for modeling [Formula: see text]-like epidemics. However, when facing incomplete epidemic data, inference based on [Formula: see text] is not easily achieved. Here, we start building a new framework for the estimation of key parameters of epidemic models based on statistics of diffusion processes approximating [Formula: see text]. First, previous results on the approximation of density-dependent [Formula: see text]-like models by diffusion processes with small diffusion coefficient [Formula: see text], where [Formula: see text] is the population size, are generalized to non-autonomous systems. Second, our previous inference results on discretely observed diffusion processes with small diffusion coefficient are extended to time-dependent diffusions. Consistent and asymptotically Gaussian estimates are obtained for a fixed number [Formula: see text] of observations, which corresponds to the epidemic context, and for [Formula: see text]. A correction term, which yields better estimates non-asymptotically, is also included. Finally, the performance and robustness of our estimators with respect to various parameters such as [Formula: see text] (the basic reproduction number), [Formula: see text], [Formula: see text] are investigated in simulations. Two models, [Formula: see text] and [Formula: see text], corresponding to single and recurrent outbreaks, respectively, are used to simulate data. The findings indicate that our estimators have good asymptotic properties and behave noticeably well for realistic numbers of observations and population sizes. This study lays the foundations of a generic inference method currently under extension to incompletely observed epidemic data.
Indeed, contrary to the majority of current inference techniques for partially observed processes, which necessitate computer-intensive simulations, our method, being mostly analytical, requires only the classical optimization steps.
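The regimes this approximation connects can be illustrated with a small sketch (not from the paper; the parameter names beta, gamma and the Gillespie/Euler routines are our own): an exact SIR Markov jump process simulated event by event, next to its deterministic limit, which the diffusion approximation interpolates toward as the population size N grows.

```python
import random

def gillespie_sir(n_pop, beta, gamma, i0, t_end, seed=1):
    """Exact simulation of the SIR Markov jump process (Gillespie algorithm)."""
    rng = random.Random(seed)
    s, i, t = n_pop - i0, i0, 0.0
    while i > 0 and t < t_end:
        rate_inf = beta * s * i / n_pop   # S + I -> 2I
        rate_rec = gamma * i              # I -> R
        total = rate_inf + rate_rec
        t += rng.expovariate(total)       # exponential waiting time
        if rng.random() < rate_inf / total:
            s, i = s - 1, i + 1
        else:
            i -= 1
    return s, i

def ode_sir(n_pop, beta, gamma, i0, t_end, dt=1e-3):
    """Deterministic limit (diffusion coefficient 1/sqrt(N) -> 0), Euler step."""
    s, i, t = (n_pop - i0) / n_pop, i0 / n_pop, 0.0
    while t < t_end:
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s, i, t = s + dt * ds, i + dt * di, t + dt
    return s * n_pop, i * n_pop
```

For large N the stochastic trajectories fluctuate around the deterministic one with amplitude of order 1/sqrt(N), which is what makes the small-diffusion inference framework applicable.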
Li, Yue-Song; Chen, Xin-Jun; Yang, Hong
2012-06-01
Using the FVCOM-simulated 3-D physical field and the biological processes of chub mackerel (Scomber japonicus) in its early life history from an individual-based biological model, an individual-based ecological model for S. japonicus at its early growth stages in the East China Sea was constructed by coupling the March-July physical field with the biological model via Lagrangian particle tracking. The constructed model simulated the transport process and abundance distribution of S. japonicus eggs and larvae well. The Taiwan Warm Current, Kuroshio, and Tsushima Strait Warm Current directly affected the transport and distribution of the eggs and larvae, and indirectly affected their growth and survival through transport to nursery grounds with different water temperatures and food supplies. The spawning grounds in the southern East China Sea contributed more to recruitment to the fishing grounds in the northeastern East China Sea, and less to the Yangtze estuary and Zhoushan Island. The northwestern and southwestern parts of the spawning grounds had strong connectivity with the nursery grounds of Cheju and the Tsushima Strait, whereas the northeastern and southeastern parts had strong connectivity with the nursery grounds of Kyushu and the Pacific Ocean.
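Lagrangian particle tracking of the kind used to couple the physical and biological fields can be sketched as follows; the solid-body `rotation` velocity field is a hypothetical stand-in for the FVCOM currents, and the midpoint (RK2) step is one common choice among several.

```python
def advect_particles(particles, velocity, dt, n_steps):
    """Track 2-D particles through a flow field with a midpoint (RK2) step,
    the basic operation of Lagrangian particle tracking."""
    paths = [list(particles)]
    for _ in range(n_steps):
        new = []
        for (x, y) in paths[-1]:
            u1, v1 = velocity(x, y)                       # velocity at start
            xm, ym = x + 0.5 * dt * u1, y + 0.5 * dt * v1  # midpoint position
            u2, v2 = velocity(xm, ym)                     # velocity at midpoint
            new.append((x + dt * u2, y + dt * v2))
        paths.append(new)
    return paths

def rotation(x, y):
    """Illustrative solid-body rotation standing in for a model current field."""
    return -y, x
```

A particle seeded at (1, 0) in this field stays on the unit circle, a quick sanity check that the integrator conserves what the flow conserves.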
Trapé, Thiago Lavras; Campos, Rosana Onocko
2017-01-01
OBJECTIVE This study aims to analyze the current status of the mental health care model of the Brazilian Unified Health System, according to its funding, governance processes, and mechanisms of assessment. METHODS We carried out a documentary analysis of the ordinances, technical reports, conference reports, normative resolutions, and decrees from 2009 to 2014. RESULTS This is a time of consolidation of the psychosocial model, with expansion of the health care network and a shift of funding toward community services, with a strong emphasis on the area of crack cocaine and other drugs. Mental health is an underfunded area within the chronically underfunded Brazilian Unified Health System. The governance model constrains the progress of essential services, creating a need to incorporate a process of regionalization of management. The mechanisms of assessment are not incorporated into health policy in the bureaucratic field. CONCLUSIONS There is a need to expand the global funding of health, and specifically of mental health, which has been shown to be a successful policy. The current focus of the policy seems archaic in relation to the precepts of the psychosocial model. Mechanisms of assessment need to be expanded. PMID:28355335
BRAIN MYELINATION IN PREVALENT NEUROPSYCHIATRIC DEVELOPMENTAL DISORDERS
BARTZOKIS, GEORGE
2008-01-01
Current concepts of addiction focus on neuronal neurocircuitry and neurotransmitters and are largely based on animal model data, but the human brain is unique in its high myelin content and extended developmental (myelination) phase that continues until middle age. The biology of our exceptional myelination process and factors that influence it have been synthesized into a recently published myelin model of human brain evolution and normal development that cuts across the current symptom-based classification of neuropsychiatric disorders. The developmental perspective of the model suggests that dysregulations in the myelination process contribute to prevalent early-life neuropsychiatric disorders, as well as to addictions. These disorders share deficits in inhibitory control functions that likely contribute to their high rates of comorbidity with addiction and other impulsive behaviors. The model posits that substances such as alcohol and psychostimulants are toxic to the extremely vulnerable myelination process and contribute to the poor outcomes of primary and comorbid addictive disorders in susceptible individuals. By increasing the scientific focus on myelination, the model provides a rational biological framework for the development of novel, myelin-centered treatments that may have widespread efficacy across multiple disease states and could potentially be used in treating, delaying, or even preventing some of the most prevalent and devastating neuropsychiatric disorders. PMID:18668184
Barnett, Tony; Fournié, Guillaume; Gupta, Sunetra; Seeley, Janet
2015-01-01
Incorporation of 'social' variables into epidemiological models remains a challenge. Too much detail and models cease to be useful; too little and the very notion of infection - a highly social process in human populations - may be considered with little reference to the social. The French sociologist Émile Durkheim proposed that the scientific study of society required identification and study of 'social currents'. Such 'currents' are what we might today describe as 'emergent properties', specifiable variables appertaining to individuals and groups, which represent the perspectives of social actors as they experience the environment in which they live their lives. Here we review the ways in which one particular emergent property, hope, relevant to a range of epidemiological situations, might be used in epidemiological modelling of infectious diseases in human populations. We also indicate how such an approach might be extended to include a range of other potential emergent properties to represent complex social and economic processes bearing on infectious disease transmission.
1992 NASA Life Support Systems Analysis workshop
NASA Technical Reports Server (NTRS)
Evanich, Peggy L.; Crabb, Thomas M.; Gartrell, Charles F.
1992-01-01
The 1992 Life Support Systems Analysis Workshop was sponsored by NASA's Office of Aeronautics and Space Technology (OAST) to integrate the inputs from, disseminate information to, and foster communication among NASA, industry, and academic specialists. The workshop continued discussion and definition of key issues identified in the 1991 workshop, including: (1) modeling and experimental validation; (2) definition of systems analysis evaluation criteria; (3) integration of modeling at multiple levels; and (4) assessment of process control modeling approaches. Through both the 1991 and 1992 workshops, NASA has continued to seek input from industry and university chemical process modeling and analysis experts, and to introduce and apply new systems analysis approaches to life support systems. The workshop included technical presentations, discussions, and interactive planning, with sufficient time allocated for discussion of both technology status and technology development recommendations. Key personnel currently involved with life support technology developments from NASA, industry, and academia provided input to the status and priorities of current and future systems analysis methods and requirements.
NASA Astrophysics Data System (ADS)
Lee, Jong-Chul; Lee, Won-Ho; Kim, Woun-Jea
2015-09-01
The design and development of SF6 gas circuit breakers are still largely based on trial and error through testing, even though development costs rise every year. Computation cannot yet replace testing satisfactorily, because not all the real processes are taken into account. Nevertheless, knowledge of arc behavior and prediction of the thermal flow inside interrupters by numerical simulation have become more useful than experiment alone, owing to the difficulty of obtaining physical quantities experimentally and the reduction of computational costs in recent years. In this paper, in order to gain further insight into the interruption process of an SF6 self-blast interrupter, which is based on a combination of thermal expansion and the arc-rotation principle, gas flow simulations with CFD-arc modeling are performed over the whole switching process: the high-current period, the pre-current-zero period, and the current-zero period. Throughout this work, the pressure rise and the ramp of the pressure inside the chamber before current zero, as well as the post-arc current after current zero, prove to be good criteria for predicting the short-line fault interruption performance of interrupters.
Exploring the Processes of Generating LOD (0-2) Citygml Models in Greater Municipality of Istanbul
NASA Astrophysics Data System (ADS)
Buyuksalih, I.; Isikdag, U.; Zlatanova, S.
2013-08-01
3D models of cities, visualised and explored in 3D virtual environments, have been available for several years, and a large number of impressive, realistic 3D models are now regularly presented at scientific, professional, and commercial events. One of the most promising developments is the OGC standard CityGML. CityGML is an object-oriented model that supports 3D geometry and thematic semantics, attributes, and relationships, and offers advanced options for realistic visualization. One of the very attractive characteristics of the model is its support of 5 levels of detail (LOD), starting from a less accurate 2.5D model (LOD0) and ending with a very detailed indoor model (LOD4). Different local government offices and municipalities have different needs when utilizing CityGML models, and the process of model generation depends on local and domain-specific needs. Although the processes (i.e. the tasks and activities) for generating the models differ depending on the utilization purpose, there are also some common tasks (i.e. common denominator processes) in the generation of CityGML models. This paper focuses on defining the common tasks in the generation of LOD (0-2) CityGML models and representing them in a formal way with process modeling diagrams.
Ghany, Ahmad; Vassanji, Karim; Kuziemsky, Craig; Keshavjee, Karim
2013-01-01
Electronic prescribing (e-prescribing) is expected to bring many benefits to Canadian healthcare, such as a reduction in errors and adverse drug reactions. As there currently is no functioning e-prescribing system in Canada that is completely electronic, we are unable to evaluate the performance of a live system. An alternative approach is to use simulation modeling for evaluation. We developed two discrete-event simulation models, one of the current handwritten prescribing system and one of a proposed e-prescribing system, to compare the performance of these two systems. We were able to compare the number of processes in each model, workflow efficiency, and the distribution of patients or prescriptions. Although we were able to compare these models to each other, using discrete-event simulation software was challenging. We were limited in the number of variables we could measure. We discovered non-linear processes and feedback loops in both models that could not be adequately represented using discrete-event simulation software. Finally, interactions between entities in both models could not be modeled using this type of software. We have come to the conclusion that a more appropriate approach to modeling both the handwritten and electronic prescribing systems would be to use a complex adaptive systems approach using agent-based modeling or systems-based modeling.
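The discrete-event approach the authors used can be sketched with a plain event heap; the single-server setup, event names, and prescription semantics below are illustrative assumptions, not a reproduction of their models.

```python
import heapq

def simulate_queue(arrival_times, service_time):
    """Tiny discrete-event simulation of a single-server prescription queue.
    Events are (time, kind) tuples popped from a heap in time order;
    returns each patient's waiting time before service begins."""
    events = [(t, "arrive") for t in arrival_times]
    heapq.heapify(events)
    queue, busy, waits = [], False, []
    while events:
        t, kind = heapq.heappop(events)
        if kind == "arrive":
            if not busy:
                busy = True
                waits.append(0.0)                       # served immediately
                heapq.heappush(events, (t + service_time, "depart"))
            else:
                queue.append(t)                         # wait in line
        else:  # departure frees the server
            if queue:
                arrived = queue.pop(0)
                waits.append(t - arrived)
                heapq.heappush(events, (t + service_time, "depart"))
            else:
                busy = False
    return waits
```

Even this toy version shows the limitation the authors ran into: feedback loops (e.g. a pharmacist phoning the prescriber back into the queue) do not fit naturally into a linear event list.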
The Climate Variability & Predictability (CVP) Program at NOAA - Recent Program Advancements
NASA Astrophysics Data System (ADS)
Lucas, S. E.; Todd, J. F.
2015-12-01
The Climate Variability & Predictability (CVP) Program supports research aimed at providing process-level understanding of the climate system through observation, modeling, analysis, and field studies. This vital knowledge is needed to improve climate models and predictions so that scientists can better anticipate the impacts of future climate variability and change. To achieve its mission, the CVP Program supports research carried out at NOAA and other federal laboratories, NOAA Cooperative Institutes, and academic institutions. The Program also coordinates its sponsored projects with major national and international scientific bodies including the World Climate Research Programme (WCRP), the International and U.S. Climate Variability and Predictability (CLIVAR/US CLIVAR) Program, and the U.S. Global Change Research Program (USGCRP). The CVP program sits within NOAA's Climate Program Office (http://cpo.noaa.gov/CVP). The CVP Program currently supports multiple projects aimed at improved representation of physical processes in global models. Some of the topics currently funded include: i) Improved Understanding of Intraseasonal Tropical Variability - DYNAMO field campaign and post-field projects, and the new climate model improvement teams focused on MJO processes; ii) Climate Process Teams (CPTs, co-funded with NSF) with projects focused on cloud macrophysical parameterization and its application to aerosol indirect effects, and internal-wave driven mixing in global ocean models; iii) Improved Understanding of Tropical Pacific Processes, Biases, and Climatology; iv) Understanding Arctic Sea Ice Mechanisms and Predictability; v) AMOC Mechanisms and Decadal Predictability. Recent results from CVP-funded projects will be summarized. Additional information can be found at http://cpo.noaa.gov/CVP.
Microscopic models for bridging electrostatics and currents
NASA Astrophysics Data System (ADS)
Borghi, L.; DeAmbrosis, A.; Mascheretti, P.
2007-03-01
A teaching sequence based on the use of microscopic models to link electrostatic phenomena with direct currents is presented. The sequence, devised for high school students, was designed after initial work carried out with student teachers attending a school of specialization for teaching physics at high school, at the University of Pavia. The results obtained with them are briefly presented, because they directed our steps for the development of the teaching sequence. For both the design of the experiments and their interpretation, we drew inspiration from the original works of Alessandro Volta; in addition, a structural model based on the particular role of electrons as elementary charges both in electrostatic phenomena and in currents was proposed. The teaching sequence starts from experiments on charging objects by rubbing and by induction, and engages students in constructing microscopic models to interpret their observations. By using these models and by closely examining the ideas of tension and capacitance, the students acknowledge that a charging (or discharging) process is due to the motion of electrons that, albeit for short time intervals, represent a current. Finally, they are made to see that the same happens in transients of direct current circuits.
Liu, Hengyuan; Chen, Nan; Feng, Chuanping; Tong, Shuang; Li, Rui
2017-05-01
This study aimed to investigate the effect of electro-stimulation on denitrifying bacterial growth in a bio-electrochemical reactor; growth was modeled using a modified Gompertz model under different current densities at three C/N ratios. A similar optimum current density of 250 mA/m² was found at C/N = 0.75, 1.00, and 1.25, at which the maximum nitrate removal efficiencies were 98.0%, 99.2%, and 99.9%, respectively. Moreover, the ATP content and cell membrane permeability of the denitrifying bacteria were significantly increased at the optimum current density. Furthermore, the modified Gompertz model fitted the microbial growth curves well, and the highest maximum growth rates (µmax) and shortest lag times were obtained at the optimum current density for all C/N ratios. This study demonstrated that the modified Gompertz model can describe microbial growth under different current densities and C/N ratios in a bio-electrochemical denitrification reactor, providing an alternative for improving the performance of the denitrification process. Copyright © 2017 Elsevier Ltd. All rights reserved.
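The modified Gompertz growth curve is commonly written in the reparameterized (Zwietering) form with asymptote A, maximum specific growth rate µmax, and lag time λ; a minimal sketch, with parameter names and values that are ours, not the study's fitted ones:

```python
import math

def modified_gompertz(t, a, mu_max, lag):
    """Modified Gompertz curve (Zwietering reparameterization):
    y(t) = A * exp(-exp(mu_max * e / A * (lag - t) + 1)).
    A is the asymptote, mu_max the maximum growth rate, lag the lag time."""
    return a * math.exp(-math.exp(mu_max * math.e / a * (lag - t) + 1.0))
```

By construction the tangent at the inflection point has slope µmax and crosses the time axis at t = λ, which is why the fitted µmax and lag values are directly interpretable.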
Low cost solar silicon production
NASA Astrophysics Data System (ADS)
Mede, Matt
2009-08-01
The worldwide demand for solar-grade silicon reached an all-time high between 2007 and 2008. Although growth in the solar industry is slowing due to the current economic downturn, demand is expected to rebound in 2011 based on current cost models. However, demand will increase even more than currently anticipated if costs are reduced. This situation creates an opportunity for new and innovative approaches to the production of photovoltaic-grade silicon, especially methods that can demonstrate cost reductions over currently utilized processes.
An empirically based model for knowledge management in health care organizations.
Sibbald, Shannon L; Wathen, C Nadine; Kothari, Anita
2016-01-01
Knowledge management (KM) encompasses strategies, processes, and practices that allow an organization to capture, share, store, access, and use knowledge. Ideal KM combines different sources of knowledge to support innovation and improve performance. Despite the importance of KM in health care organizations (HCOs), there has been very little empirical research to describe KM in this context. This study explores KM in HCOs, focusing on the status of current intraorganizational KM. The intention is to provide insight for future studies and model development for effective KM implementation in HCOs. A qualitative methods approach was used to create an empirically based model of KM in HCOs. Methods included (a) qualitative interviews (n = 24) with senior leadership to identify types of knowledge important in these roles plus current information-seeking behaviors/needs and (b) in-depth case study with leaders in new executive positions (n = 2). The data were collected from 10 HCOs. Our empirically based model for KM was assessed for face and content validity. The findings highlight the paucity of formal KM in our sample HCOs. Organizational culture, leadership, and resources are instrumental in supporting KM processes. An executive's knowledge needs are extensive, but knowledge assets are often limited or difficult to acquire as much of the available information is not in a usable format. We propose an empirically based model for KM to highlight the importance of context (internal and external), and knowledge seeking, synthesis, sharing, and organization. Participants who reviewed the model supported its basic components and processes, and potential for incorporating KM into organizational processes. Our results articulate ways to improve KM, increase organizational learning, and support evidence-informed decision-making. 
This research has implications for how to better integrate evidence and knowledge into organizations while considering context and the role of organizational processes.
Particle transport model sensitivity on wave-induced processes
NASA Astrophysics Data System (ADS)
Staneva, Joanna; Ricker, Marcel; Krüger, Oliver; Breivik, Oyvind; Stanev, Emil; Schrum, Corinna
2017-04-01
Different effects of wind waves on the hydrodynamics in the North Sea are investigated using a coupled wave (WAM) and circulation (NEMO) model system. The terms accounting for the wave-current interaction are the Stokes-Coriolis force and the sea-state-dependent momentum and energy fluxes. The role of the different Stokes drift parameterizations is investigated using a particle-drift model, in which the particles can be considered simple representations of either oil fractions or fish larvae. In ocean circulation models the momentum flux from the atmosphere, which is related to the wind speed, is passed directly to the ocean, controlled by the drag coefficient. In the real ocean, however, the waves also act as a reservoir of momentum and energy, because differing amounts of the momentum flux from the atmosphere are taken up by the waves. In the coupled model system the momentum transferred into the ocean model is estimated as the fraction of the total flux that goes directly to the currents plus the momentum lost through wave dissipation. Additionally, we demonstrate that the wave-induced Stokes-Coriolis force leads to a deflection of the current. During extreme events the Stokes velocity is comparable in magnitude to the current velocity, and the resulting wave-induced drift is crucial for the transport of particles in the upper ocean. The sensitivity analyses performed demonstrate that the model skill depends on the chosen processes. The results are validated using surface drifters, ADCP, HF radar data, and other in-situ measurements in different regions of the North Sea, with a focus on the coastal areas. The use of a coupled model system reveals that the newly introduced wave effects are important for drift-model performance, especially during extremes. Such effects cannot be neglected in search-and-rescue, oil-spill, biological-transport, or larval-drift modelling.
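For deep-water waves the Stokes drift decays exponentially with depth, which is why it matters most for near-surface particle transport; a minimal sketch of the textbook monochromatic profile (the amplitude and wavelength values in the test are illustrative, not the study's sea states):

```python
import math

def stokes_drift(z, amplitude, wavelength):
    """Deep-water Stokes drift profile u_s(z) = a^2 * omega * k * exp(2 k z),
    with z <= 0 measured downward from the surface; omega follows the
    deep-water dispersion relation omega^2 = g k."""
    g = 9.81
    k = 2.0 * math.pi / wavelength
    omega = math.sqrt(g * k)
    return amplitude**2 * omega * k * math.exp(2.0 * k * z)
```

The e-folding depth is 1/(2k), i.e. about wavelength/(4π), so the drift is concentrated in a thin surface layer, consistent with its outsized role in drifter and oil-spill trajectories.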
NASA Astrophysics Data System (ADS)
Saylor, Rick D.; Hicks, Bruce B.
2016-03-01
Just as the exchange of heat, moisture and momentum between the Earth's surface and the atmosphere are critical components of meteorological and climate models, the surface-atmosphere exchange of many trace gases and aerosol particles is a vitally important process in air quality (AQ) models. Current state-of-the-art AQ models treat the emission and deposition of most gases and particles as separate model parameterizations, even though evidence has accumulated over time that the emission and deposition processes of many constituents are often two sides of the same coin, with the upward (emission) or downward (deposition) flux over a landscape depending on a range of environmental, seasonal and biological variables. In this note we argue that the time has come to integrate the treatment of these processes in AQ models to provide biological, physical and chemical consistency and improved predictions of trace gases and particles.
A Single-Boundary Accumulator Model of Response Times in an Addition Verification Task
Faulkenberry, Thomas J.
2017-01-01
Current theories of mathematical cognition offer competing accounts of the interplay between encoding and calculation in mental arithmetic. Additive models propose that manipulations of problem format do not interact with the cognitive processes used in calculation. Alternatively, interactive models suppose that format manipulations have a direct effect on calculation processes. In the present study, we tested these competing models by fitting participants' RT distributions in an arithmetic verification task with a single-boundary accumulator model (the shifted Wald distribution). We found that in addition to providing a more complete description of RT distributions, the accumulator model afforded a potentially more sensitive test of format effects. Specifically, we found that format affected drift rate, which implies that problem format has a direct impact on calculation processes. These data give further support for an interactive model of mental arithmetic. PMID:28769853
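The shifted Wald distribution used to fit the RT data has a closed-form density with three parameters, conventionally the drift rate, boundary, and nondecision shift; a sketch of that density (our parameterization of the standard form, not code from the study):

```python
import math

def shifted_wald_pdf(t, drift, boundary, shift):
    """Density of the shifted Wald distribution: the first-passage time of a
    single-boundary diffusion with the given drift rate and boundary,
    shifted by a nondecision time."""
    x = t - shift
    if x <= 0:
        return 0.0
    return (boundary / math.sqrt(2.0 * math.pi * x**3)
            * math.exp(-(boundary - drift * x)**2 / (2.0 * x)))
```

The mean decision time is boundary/drift, so a format effect on drift rate (as reported) shifts the whole RT distribution's shape, not just its location, which is what makes the accumulator fit a sharper test than mean RTs.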
A Bayesian hierarchical diffusion model decomposition of performance in Approach–Avoidance Tasks
Krypotos, Angelos-Miltiadis; Beckers, Tom; Kindt, Merel; Wagenmakers, Eric-Jan
2015-01-01
Common methods for analysing response time (RT) tasks, frequently used across different disciplines of psychology, suffer from a number of limitations such as the failure to directly measure the underlying latent processes of interest and the inability to take into account the uncertainty associated with each individual's point estimate of performance. Here, we discuss a Bayesian hierarchical diffusion model and apply it to RT data. This model allows researchers to decompose performance into meaningful psychological processes and to account optimally for individual differences and commonalities, even with relatively sparse data. We highlight the advantages of the Bayesian hierarchical diffusion model decomposition by applying it to performance on Approach–Avoidance Tasks, widely used in the emotion and psychopathology literature. Model fits for two experimental data-sets demonstrate that the model performs well. The Bayesian hierarchical diffusion model overcomes important limitations of current analysis procedures and provides deeper insight in latent psychological processes of interest. PMID:25491372
A feedback model of visual attention.
Spratling, M W; Johnson, M H
2004-03-01
Feedback connections are a prominent feature of cortical anatomy and are likely to have a significant functional role in neural information processing. We present a neural network model of cortical feedback that successfully simulates neurophysiological data associated with attention. In this domain, our model can be considered a more detailed, and biologically plausible, implementation of the biased competition model of attention. However, our model is more general as it can also explain a variety of other top-down processes in vision, such as figure/ground segmentation and contextual cueing. This model thus suggests that a common mechanism, involving cortical feedback pathways, is responsible for a range of phenomena and provides a unified account of currently disparate areas of research.
Hybrid codes with finite electron mass
NASA Astrophysics Data System (ADS)
Lipatov, A. S.
This report is devoted to the current status of the hybrid multiscale simulation technique. Different aspects of the modeling are discussed. In particular, we consider the different levels of description of the plasma model, although the main attention is paid to conventional hybrid models. We discuss the main steps of the time integration of the Vlasov/Maxwell system of equations, with particular attention to models with finite electron mass. Such a model may allow us to explore plasma systems with multiscale phenomena ranging from ion to electron scales. As applications of the hybrid modeling technique, we consider the simulation of plasma processes at collisionless shocks and, briefly, magnetic field reconnection processes.
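The particle velocity update at the heart of hybrid (and PIC) time integration is typically a Boris-type pusher; the sketch below is the standard scheme in normalized units, offered as a generic illustration rather than the specific integrator of this report.

```python
def boris_push(v, e_field, b_field, qm, dt):
    """One Boris step for a particle velocity v (3-vector): half electric
    kick, rotation about B, half electric kick. qm is the charge-to-mass
    ratio; fields are sampled at the particle position."""
    def add(a, b): return [a[i] + b[i] for i in range(3)]
    def scale(a, s): return [a[i] * s for i in range(3)]
    def cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]
    half_e = scale(e_field, 0.5 * qm * dt)
    v_minus = add(v, half_e)                     # first half electric kick
    t = scale(b_field, 0.5 * qm * dt)            # rotation vector
    s = scale(t, 2.0 / (1.0 + sum(c * c for c in t)))
    v_prime = add(v_minus, cross(v_minus, t))
    v_plus = add(v_minus, cross(v_prime, s))     # completed rotation
    return add(v_plus, half_e)                   # second half electric kick
```

The rotation step conserves kinetic energy exactly in a pure magnetic field, a property that makes the scheme robust over the long gyration-resolving runs hybrid codes require.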
Energy models and national energy policy
NASA Astrophysics Data System (ADS)
Bloyd, Cary N.; Streets, David G.; Fisher, Ronald E.
1990-01-01
As work begins on the development of a new National Energy Strategy (NES), the role of energy models is becoming increasingly important. Such models are needed to determine and assess both the short and long term effects of new policy initiatives on U.S. energy supply and demand. A central purpose of the model is to translate overall energy strategy goals into policy options while identifying potential costs and environmental benefits. Three models currently being utilized in the NES process are described, followed by a detailed listing of the publicly identified NES goals. These goals are then viewed in light of the basic modeling scenarios that were proposed as part of the NES development process.
Yang, Muer; Fry, Michael J; Raikhelkar, Jayashree; Chin, Cynthia; Anyanwu, Anelechi; Brand, Jordan; Scurlock, Corey
2013-02-01
To develop queuing and simulation-based models to understand the relationship between ICU bed availability and the operating room schedule, in order to maximize the use of critical care resources and minimize case cancellation while providing equity to patients and surgeons. Retrospective analysis of 6 months of unit admission data from a cohort of cardiothoracic surgical patients, used to create queuing and simulation-based models of ICU bed flow. Three different admission policies (the current admission policy, a shortest-processing-time policy, and a dynamic policy) were then analyzed using simulation models representing 10 years' worth of potential admissions. Important output data consisted of the "average waiting time," a proxy for unit efficiency, and the "maximum waiting time," a surrogate for patient equity. A cardiothoracic surgical ICU in a tertiary center in New York, NY. Six hundred thirty consecutive cardiothoracic surgical patients admitted to the cardiothoracic surgical ICU. None. Although the shortest-processing-time admission policy performed best in terms of unit efficiency (0.4612 days), it did so at the expense of patient equity, prolonging surgical waiting time by as much as 21 days. The current policy gives the greatest equity but causes inefficiency in unit bed flow (0.5033 days). The dynamic policy's average waiting time (0.4997 days) is 8.3% worse than that of the shortest-processing-time policy; however, it balances this with greater patient equity (the maximum waiting time could be shortened by 4 days compared with the current policy). Queuing theory and computer simulation can be used to model case flow through a cardiothoracic operating room and ICU. A dynamic admission policy that considers current waiting time and expected ICU length of stay allows for increased equity between patients with only minimal losses of efficiency. This dynamic admission policy would seem superior in maximizing case flow. These results may be generalized to other surgical ICUs.
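The efficiency-equity tension between first-come-first-served and shortest-processing-time scheduling shows up even in a back-to-back toy model (a deliberate simplification of the authors' simulation; the durations in the test are illustrative):

```python
def average_wait(durations, order):
    """Average waiting time when jobs run back-to-back in the given order."""
    t, total = 0.0, 0.0
    for idx in order:
        total += t            # this job waited until all earlier jobs finished
        t += durations[idx]
    return total / len(durations)

def compare_policies(durations):
    """Contrast first-come-first-served with shortest-processing-time (SPT):
    SPT provably minimizes mean wait, but can push long cases to the back
    of the queue indefinitely, which is the equity concern."""
    fcfs = list(range(len(durations)))
    spt = sorted(fcfs, key=lambda i: durations[i])
    return average_wait(durations, fcfs), average_wait(durations, spt)
```

A dynamic policy of the kind the authors propose would sit between the two orderings, re-ranking by both expected stay and accumulated waiting time.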
The Use of Regulatory Air Quality Models to Develop Successful Ozone Attainment Strategies
NASA Astrophysics Data System (ADS)
Canty, T. P.; Salawitch, R. J.; Dickerson, R. R.; Ring, A.; Goldberg, D. L.; He, H.; Anderson, D. C.; Vinciguerra, T.
2015-12-01
The Environmental Protection Agency (EPA) recently proposed lowering the 8-hr ozone standard to between 65 and 70 ppb. Not all regions of the U.S. are in attainment of the current 75 ppb standard, and it is expected that many regions currently in attainment will not meet the future, lower surface ozone standard. Ozone production is a nonlinear function of emissions, biological processes, and weather. Federal and state agencies rely on regulatory air quality models such as the Community Multi-Scale Air Quality (CMAQ) model and the Comprehensive Air Quality Model with Extensions (CAMx) to test ozone precursor emission reduction strategies that will bring states into compliance with the National Ambient Air Quality Standards (NAAQS). We will describe various model scenarios that simulate how future limits on emission of ozone precursors (i.e. NOx and VOCs) from sources such as power plants and vehicles will affect air quality. These scenarios are currently being developed by states required to submit a State Implementation Plan to the EPA. Projections from these future case scenarios suggest that strategies intended to control local ozone may also bring upwind states into attainment of the new NAAQS. Ground-based, aircraft, and satellite observations are used to ensure that air quality models accurately represent photochemical processes within the troposphere. We will highlight some of the improvements made to the CMAQ and CAMx model framework based on our analysis of NASA observations obtained by the OMI instrument on the Aura satellite and by the DISCOVER-AQ field campaign.
Is Word Shape Still in Poor Shape for the Race to the Lexicon?
ERIC Educational Resources Information Center
Hill, Jessica C.
2010-01-01
Current models of normal reading behavior emphasize not only the recognition and processing of the word being fixated (n) but also processing of the upcoming parafoveal word (n + 1). Gaze contingent displays employing the boundary paradigm often mask words in order to understand how much and what type of processing is completed on the parafoveal…
Dendritic excitability modulates dendritic information processing in a Purkinje cell model.
Coop, Allan D; Cornelis, Hugo; Santamaria, Fidel
2010-01-01
Using an electrophysiological compartmental model of a Purkinje cell we quantified the contribution of individual active dendritic currents to processing of synaptic activity from granule cells. We used mutual information as a measure to quantify the information from the total excitatory input current (I(Glu)) encoded in each dendritic current. In this context, each active current was considered an information channel. Our analyses showed that most of the information was encoded by the calcium (I(CaP)) and calcium activated potassium (I(Kc)) currents. Mutual information between I(Glu) and I(CaP) and I(Kc) was sensitive to different levels of excitatory and inhibitory synaptic activity that, at the same time, resulted in the same firing rate at the soma. Since dendritic excitability could be a mechanism to regulate information processing in neurons we quantified the changes in mutual information between I(Glu) and all Purkinje cell currents as a function of the density of dendritic Ca (g(CaP)) and Kca (g(Kc)) conductances. We extended our analysis to determine the window of temporal integration of I(Glu) by I(CaP) and I(Kc) as a function of channel density and synaptic activity. The window of information integration has a stronger dependence on increasing values of g(Kc) than on g(CaP), but at high levels of synaptic stimulation information integration is reduced to a few milliseconds. Overall, our results show that different dendritic conductances differentially encode synaptic activity and that dendritic excitability and the level of synaptic activity regulate the flow of information in dendrites.
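Mutual information, the measure used to quantify how much of I(Glu) each dendritic current encodes, can be estimated with a simple plug-in histogram estimator; the binning scheme below is our own illustrative choice, not the authors' method.

```python
import math

def mutual_information(x, y, bins=8):
    """Plug-in estimate of mutual information (in bits) between two
    equal-length signals, via a joint 2-D histogram."""
    def bin_index(v, lo, hi):
        if hi == lo:
            return 0                      # constant signal: one bin
        return min(int((v - lo) / (hi - lo) * bins), bins - 1)
    n = len(x)
    lox, hix, loy, hiy = min(x), max(x), min(y), max(y)
    joint = {}
    for xv, yv in zip(x, y):
        key = (bin_index(xv, lox, hix), bin_index(yv, loy, hiy))
        joint[key] = joint.get(key, 0) + 1
    px, py = {}, {}
    for (i, j), c in joint.items():       # marginals from the joint counts
        px[i] = px.get(i, 0) + c
        py[j] = py.get(j, 0) + c
    mi = 0.0
    for (i, j), c in joint.items():
        pij = c / n
        mi += pij * math.log2(pij / (px[i] / n * py[j] / n))
    return mi
```

Treating each dendritic current as an information channel then amounts to computing this quantity between the total excitatory input and each current's time series.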
An analysis of electrical conductivity model in saturated porous media
NASA Astrophysics Data System (ADS)
Cai, J.; Wei, W.; Qin, X.; Hu, X.
2017-12-01
Electrical conductivity of saturated porous media has numerous applications in many fields. In recent years, the number of theoretical methods for modeling the electrical conductivity of complex porous media has increased dramatically. Nevertheless, modeling the spatial conductivity distribution function continues to present challenges when these models are applied to reservoirs, particularly in porous media with strongly heterogeneous pore-space distributions. Many experiments show a more complex distribution of electrical conductivity data than the predictions derived from empirical models. Studies have observed anomalously high electrical conductivity in some low-porosity (tight) formations compared to more porous reservoir rocks, which indicates that current flow in porous media is complex and difficult to predict. Moreover, the change in electrical conductivity depends not only on the pore volume fraction but also on several geometric properties of the wider pore network, including pore interconnection and tortuosity. We study the applicability of several well-known methods and theories to the electrical characteristics of porous rocks as a function of pore volume, tortuosity, and interconnection, in order to estimate electrical conductivity from the micro-geometrical properties of rocks. We analyze the state of the art of scientific knowledge and practice for modeling porous structural systems, with the purpose of identifying current limitations and defining a blueprint for future modeling advances. We compare conceptual descriptions of electrical current flow in pore space across several distinct modeling approaches, and discuss routes to more physically reasonable electrical conductivity models. Experiments suggest more complex relationships between electrical conductivity and porosity than empirical models predict, particularly in low-porosity formations. However, the available theoretical models, combined with simulations, do provide insight into how microscale physics affects macroscale electrical conductivity in porous media.
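One of the well-known empirical relations in this area is Archie's law for a fully saturated rock, sigma = sigma_w * phi^m / a. The sketch below uses illustrative parameter values (not from this study) to show the point made above: a low-porosity rock with a small cementation exponent can out-conduct a more porous rock with a large one.

```python
def archie_conductivity(sigma_w, porosity, m=2.0, a=1.0):
    """Archie's law for a fully saturated rock: sigma = sigma_w * phi**m / a.

    sigma_w  -- brine conductivity (S/m)
    porosity -- pore volume fraction phi
    m        -- cementation exponent (higher = less connected pore network)
    a        -- tortuosity factor
    """
    return sigma_w * porosity**m / a

# Illustrative only: a "tight" rock with well-connected pores (low m) versus
# a more porous rock with a poorly connected network (high m).
tight = archie_conductivity(5.0, 0.05, m=1.3)
porous = archie_conductivity(5.0, 0.20, m=2.5)
print(tight, porous)   # the tight rock can be the better conductor
```

This is exactly why porosity alone cannot predict conductivity: the exponent m encodes pore connectivity and tortuosity, which vary strongly between formations.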
Improving operational anodising process performance using simulation approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liong, Choong-Yeun, E-mail: lg@ukm.edu.my; Ghazali, Syarah Syahidah, E-mail: syarah@gapps.kptm.edu.my
The use of aluminium is very widespread, especially in the transportation, electrical and electronics, architectural, automotive and engineering sectors. The anodizing process is therefore important for making aluminium durable, attractive and weather resistant. This research focuses on anodizing operations in the manufacture and supply of aluminium extrusions. The data required for developing the model were collected through observations and interviews conducted in the study. To study the current system, the anodizing line was modeled using Arena 14.5 simulation software. The line consists of five main processes, namely degreasing, etching, desmut, anodizing and sealing, together with 16 other processes. The results were analyzed to identify the bottlenecks that occurred and to propose improvement methods that could be applied to the original model. Based on comparisons between the improvement methods, productivity could be increased by reallocating workers and reducing loading time.
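The serial-stage structure of such a line can be sketched with the classic flow-shop recurrence C[i][s] = max(C[i-1][s], C[i][s-1]) + t[s]: a job cannot start at a stage until both the stage is free and the job has cleared the previous stage. The stage times below and the "improvement" are purely illustrative assumptions, not figures from the Arena model, but they show how relieving the bottleneck stage shortens the makespan.

```python
def flow_shop_completion(n_jobs, stage_times):
    """Makespan of n identical jobs through serial stages, one server per stage,
    via the flow-shop recurrence C[i][s] = max(C[i-1][s], C[i][s-1]) + t[s]."""
    done = [0.0] * len(stage_times)   # completion time of the latest job at each stage
    for _ in range(n_jobs):
        prev = 0.0                    # when this job leaves the previous stage
        for s, t in enumerate(stage_times):
            done[s] = max(done[s], prev) + t
            prev = done[s]
    return done[-1]

# Hypothetical cycle times in minutes for the five main stages (not from the study):
# degrease, etch, desmut, anodise, seal
base = flow_shop_completion(20, [8, 12, 5, 25, 10])
improved = flow_shop_completion(20, [8, 12, 5, 15, 10])   # e.g. an extra anodising tank
print(base, improved)   # the 25-min anodising bath dominates the baseline makespan
```

For constant stage times the makespan is sum(t) + (n-1) * max(t), so throughput is set almost entirely by the slowest stage, which is the logic behind reallocating workers to the bottleneck.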
Suppression of Literal Meanings in L2 Idiom Processing: Does Context Help?
ERIC Educational Resources Information Center
Cieslicka, Anna B.
2011-01-01
Most current idiom processing models acknowledge, after Gernsbacher and Robertson (1999), that deriving an idiomatic meaning entails suppression of contextually inappropriate, literal meanings of idiom constituent words. While embedding idioms in rich disambiguating context can promote earlier suppression of incompatible literal meanings,…
USDA-ARS?s Scientific Manuscript database
Critical to the use of modeling tools for the hydraulic analysis of surface irrigation systems is characterizing the infiltration and hydraulic resistance process. Since those processes are still not well understood, various formulations are currently used to represent them. A software component h...
Recognition Memory: Adding a Response Deadline Eliminates Recollection but Spares Familiarity
ERIC Educational Resources Information Center
Sauvage, Magdalena M.; Beer, Zachery; Eichenbaum, Howard
2010-01-01
A current controversy in memory research concerns whether recognition is supported by distinct processes of familiarity and recollection, or instead by a single process wherein familiarity and recollection reflect weak and strong memories, respectively. Recent studies using receiver operating characteristic (ROC) analyses in an animal model have…
Applying AI to the Writer's Learning Environment.
ERIC Educational Resources Information Center
Houlette, Forrest
1991-01-01
Discussion of current applications of artificial intelligence (AI) to writing focuses on how to represent knowledge of the writing process in a way that links procedural knowledge to other types of knowledge. A model is proposed that integrates the subtasks of writing into the process of writing itself. (15 references) (LRW)
Detailed Characterization of Nearshore Processes During NCEX
NASA Astrophysics Data System (ADS)
Holland, K.; Kaihatu, J. M.; Plant, N.
2004-12-01
Recent technology advances have allowed the coupling of remote sensing methods with advanced wave and circulation models to yield detailed characterizations of nearshore processes. This methodology was demonstrated as part of the Nearshore Canyon EXperiment (NCEX) in La Jolla, CA during Fall 2003. An array of high-resolution, color digital cameras was installed to monitor an alongshore distance of nearly 2 km out to depths of 25 m. This digital imagery was analyzed over the three-month period through an automated process to produce hourly estimates of wave period, wave direction, breaker height, shoreline position, sandbar location, and bathymetry at numerous locations during daylight hours. Interesting wave propagation patterns in the vicinity of the canyons were observed. In addition, directional wave spectra and swash/surf flow velocities were estimated using more computationally intensive methods. These measurements were used to provide forcing and boundary conditions for the Delft3D wave and circulation model, giving additional estimates of nearshore processes such as dissipation and rip currents. An optimal approach for coupling these remotely sensed observations to the numerical model was selected to yield accurate, but also timely, characterizations. This involved assimilation of directional spectral estimates near the offshore boundary to mimic forcing conditions achieved under traditional approaches involving nested domains. Measurements of breaker heights and flow speeds were also used to adaptively tune model parameters to provide enhanced accuracy. Comparisons of model predictions and video observations show significant correlation. As compared to nesting within larger-scale and coarser-resolution models, the advantage of providing boundary condition data from remote sensing is much improved resolution and fidelity. For example, rip current development was both modeled and observed.
These results indicate that this approach to data-model coupling is tenable and may be useful in near-real-time characterizations required by many applied scenarios.
Tree injury and mortality in fires: developing process-based models
Bret W. Butler; Matthew B. Dickinson
2010-01-01
Wildland fire managers are often required to predict tree injury and mortality when planning a prescribed burn or when considering wildfire management options; and, currently, statistical models based on post-fire observations are the only tools available for this purpose. Implicit in the derivation of statistical models is the assumption that they are strictly...
Kathleen L. Kavanaugh; Matthew B. Dickinson; Anthony S. Bova
2010-01-01
Current operational methods for predicting tree mortality from fire injury are regression-based models that only indirectly consider underlying causes and, thus, have limited generality. A better understanding of the physiological consequences of tree heating and injury are needed to develop biophysical process models that can make predictions under changing or novel...
Techniques for Down-Sampling a Measured Surface Height Map for Model Validation
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2012-01-01
This software allows one to down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also while suppressing existing measurement noise and measurement errors. The software tool implementing the two new techniques can be used in any optical model validation process involving large space optical surfaces.
A Communication Model for Teaching a Course in Mass Media and Society.
ERIC Educational Resources Information Center
Crumley, Wilma; Stricklin, Michael
Many professors of mass media and society courses have relied on a teaching model implying that students are sponges soaking up information. A more appropriate model invites concern with an active audience, transaction, the interpersonal mass media mix, a general systems approach, and process and change--in other words, utilization of current and…
Implementation of channel-routing routines in the Water Erosion Prediction Project (WEPP) model
Li Wang; Joan Q. Wu; William J. Elliott; Shuhui Dun; Sergey Lapin; Fritz R. Fiedler; Dennis C. Flanagan
2010-01-01
The Water Erosion Prediction Project (WEPP) model is a process-based, continuous-simulation, watershed hydrology and erosion model. It is an important tool for water erosion simulation owing to its unique functionality in representing diverse landuse and management conditions. Its applicability is limited to relatively small watersheds since its current version does...
ERIC Educational Resources Information Center
Myers, Steve
2007-01-01
This article critically analyses the AIM Assessment Model for children who have sexually harmful behaviour, exploring the underpinning knowledge and the processes involved. The model reflects current trends in the assessment of children, in child welfare and criminal justice services, producing categories of risk that lead to levels of…
D. Todd Jones-Farrand; Todd M. Fearer; Wayne E. Thogmartin; Frank R. Thompson; Mark D. Nelson; John M. Tirpak
2011-01-01
Selection of a modeling approach is an important step in the conservation planning process, but little guidance is available. We compared two statistical and three theoretical habitat modeling approaches representing those currently being used for avian conservation planning at landscape and regional scales: hierarchical spatial count (HSC), classification and...
Modeling of heat transfer in compacted machining chips during friction consolidation process
NASA Astrophysics Data System (ADS)
Abbas, Naseer; Deng, Xiaomin; Li, Xiao; Reynolds, Anthony
2018-04-01
The current study aims to provide an understanding of the heat transfer process in compacted aluminum alloy AA6061 machining chips during the friction consolidation process (FCP) through experimental investigation, mathematical modelling, and numerical simulation. Compaction and friction consolidation of machining chips is the first stage of the Friction Extrusion Process (FEP), which is a novel method for recycling machining chips to produce useful products such as wires. In this study, compacted machining chips are modelled as a continuum whose material properties vary with density during friction consolidation. Based on density- and temperature-dependent thermal properties, the temperature field in the chip material and process chamber caused by frictional heating during the friction consolidation process is predicted. The predicted temperature field is found to compare well with temperature measurements at select points where such measurements can be made using thermocouples.
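A minimal sketch of the kind of model described above is one explicit finite-difference step of 1-D heat conduction with spatially varying conductivity and a frictional heat-flux boundary. The material numbers are rough AA6061 figures and the density-dependent conductivity profile is hypothetical; a real FCP model would be multi-dimensional and couple density evolution to the thermal field.

```python
def step_heat_1d(T, k, rho_c, dx, dt, q_in):
    """One explicit finite-difference step of 1-D conduction with spatially
    varying conductivity k; frictional heat flux q_in enters at node 0 and
    the far end is held at its initial temperature."""
    Tn = T[:]
    for i in range(1, len(T) - 1):
        k_r = 0.5 * (k[i] + k[i + 1])     # conductivity at the right cell face
        k_l = 0.5 * (k[i] + k[i - 1])     # conductivity at the left cell face
        Tn[i] = T[i] + dt / (rho_c[i] * dx * dx) * (
            k_r * (T[i + 1] - T[i]) - k_l * (T[i] - T[i - 1]))
    Tn[0] = Tn[1] + q_in * dx / k[0]      # flux boundary: -k dT/dx = q_in
    return Tn

n, dx, dt = 50, 1e-3, 0.005               # 50 mm domain, 1 mm cells, stable step
k = [170.0 * (1.0 - 0.4 * i / n) for i in range(n)]  # W/(m K): higher where chips are denser (hypothetical)
rho_c = [2.43e6] * n                       # J/(m^3 K), rough value for AA6061
T = [25.0] * n
for _ in range(200):                       # 1 s of frictional heating at 100 kW/m^2
    T = step_heat_1d(T, k, rho_c, dx, dt, q_in=1e5)
print(T[0], T[-1])                         # temperature rises at the friction face
```

The explicit step is only stable for dt below rho_c*dx^2/(2*max(k)), about 7 ms here, which is why dt = 5 ms is used.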
Zolfaghari, Mehdi; Drogui, Patrick; Blais, Jean François
2018-03-01
An electro-oxidation process using a niobium boron-doped diamond (Nb/BDD) electrode was used to treat non-biodegradable oily wastewater derived from soil leachate contaminated by hydrocarbons. First, the diffusion current limit and the mass transfer coefficient were measured experimentally (7.1 mA cm-2 and 14.7 um s-1, respectively) in order to establish the minimum applied current density. The oxidation kinetics of each pollutant were then investigated at current densities ranging between 3.8 and 61.5 mA cm-2. Direct oxidation was observed to be the main removal mechanism for organic and inorganic carbon, while indirect oxidation at higher current densities was responsible for nitrogen oxidation. Hydrocarbons in the form of colloidal particles could be removed by electro-flotation. Electro-deposition on the cathode surface and precipitation by hydroxyl ions were the principal removal pathways for metals. Based on these initial experiments, the operating conditions were further optimized with a central composite design over current density, treatment time, and electrolyte addition, using specific energy consumption (SEC), chemical oxygen demand (COD), and total organic carbon (TOC) removal efficiency as responses. Under optimum operating conditions (current density = 23.1 mA cm-2, time = 120 min, Ti/Pt cathode, and Nb/BDD anode), electro-oxidation achieved the following removal efficiencies: COD (84.6%), TOC (68.2%), oil and grease (99%), color (87.9%), total alkalinity (92%), total N (18%), NH4+ (31%), Ca (66.4%), Fe (71.1%), Mg (41.4%), Mn (78.1%), total P (75%), S (67.1%), and Si (19.1%). Soil treatment facilities are growing rapidly throughout the world, especially in North America with its intense industrialization. High-water-content soils in humid regions such as Canada produce significant amounts of leachate that are difficult to treat by physical and biological processes. The current treatment facility was modified by applying the electrochemical oxidation process. Kinetic models for each macro-pollutant, including carbon, nitrogen, phosphorus, and metals, were developed to investigate their oxidation mechanisms (see graphical abstract). The efficiency of treatment was monitored in order to optimize the decisive operating parameters of the electro-oxidation process. The results of this article could pave the way for future investigations into the efficient treatment of a variety of oily wastewaters.
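Assuming a first-order rate law, C(t) = C0 * exp(-k t), which is a common form for such oxidation kinetics (the abstract does not state the exact model used), the reported 84.6% COD removal in 120 min pins down the effective rate constant:

```python
import math

def first_order_removal(c0, k, t):
    """Concentration remaining after time t under first-order decay C(t) = C0 * exp(-k t)."""
    return c0 * math.exp(-k * t)

def rate_from_removal(removal_frac, t):
    """Back out the first-order rate constant k from an observed removal fraction at time t."""
    return -math.log(1.0 - removal_frac) / t

k_cod = rate_from_removal(0.846, 120.0)     # 84.6 % COD removal in 120 min
print(round(k_cod, 4))                       # ~0.0156 per minute
half_life = math.log(2) / k_cod              # time to halve the COD under this model
print(round(half_life, 1))
```

Under this assumed model, half the COD is oxidized in roughly 45 minutes; a sub-first-order or mass-transfer-limited regime would change these numbers.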
Automatic Processing of Reactive Polymers
NASA Technical Reports Server (NTRS)
Roylance, D.
1985-01-01
A series of process modeling computer codes was examined. The codes use finite element techniques to determine the time-dependent process parameters operative during nonisothermal reactive flows, such as occur in reaction injection molding or composites fabrication. The use of these analytical codes to perform experimental control functions is examined; since the models can determine the state of all variables everywhere in the system, they can be used in a manner similar to currently available experimental probes. A small but well-instrumented reaction vessel, in which fiber-reinforced plaques are cured under computer control and data acquisition, was used. The finite element codes were also extended to treat this particular process.
NASA Technical Reports Server (NTRS)
Castruccio, P. A.; Loats, H. L., Jr.
1975-01-01
An analysis of current computer usage by major water resources users was made to determine the trends of usage and costs for the principal hydrologic users/models. The laws and empirical relationships governing the growth of the data processing loads were described and applied to project future data loads. Data loads for ERTS CCT image processing were computed and projected through the 1985 era. The analysis shows a significant impact due to the utilization and processing of ERTS CCT data.
New approach to the design of Schottky barrier diodes for THz mixers
NASA Technical Reports Server (NTRS)
Jelenski, A.; Grueb, A.; Krozer, V.; Hartnagel, H. L.
1992-01-01
Near-ideal GaAs Schottky barrier diodes especially designed for mixing applications in the THz frequency range are presented. A diode fabrication process for submicron diodes with near-ideal electrical and noise characteristics is described. This process is based on the electrolytic pulse etching of GaAs in combination with an in-situ platinum plating for the formation of the Schottky contacts. Schottky barrier diodes with a diameter of 1 micron fabricated by the process have already shown excellent results in a 650 GHz waveguide mixer at room temperature. A conversion loss of 7.5 dB and a mixer noise temperature of less than 2000 K have been obtained at an intermediate frequency of 4 GHz. The optimization of the diode structure and the technology was possible due to the development of a generalized Schottky barrier diode model which is valid also at high current densities. Conventional diode design and optimization are discussed on the basis of the classical theory. However, the conventional formulas are valid only in a limited forward-bias range, corresponding to currents much smaller than the operating currents under submillimeter mixing conditions. The generalized new model takes into account not only the phenomena occurring at the junction, such as current-dependent recombination and drift/diffusion velocities, but also mobility and electron temperature variations in the undepleted epi-layer. Calculated diode I/V and noise characteristics are in excellent agreement with the measured values. Thus, the model offers the possibility of optimizing the diode structure and predicting the diode performance under mixing conditions at THz frequencies.
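The classical thermionic-emission I-V relation that the generalized model extends can be written directly. The saturation current and ideality factor below are typical assumed values, not parameters of the fabricated diodes; the abstract's point is that this formula breaks down at the high current densities reached under submillimeter mixing.

```python
import math

Q_E = 1.602176634e-19    # elementary charge, C
K_B = 1.380649e-23       # Boltzmann constant, J/K

def schottky_iv(v, i_s=1e-14, n=1.1, t=295.0):
    """Classical thermionic-emission I-V: I = Is * (exp(q V / (n kB T)) - 1).
    Is (saturation current) and n (ideality factor) are illustrative assumptions."""
    return i_s * (math.exp(Q_E * v / (n * K_B * t)) - 1.0)

# Exponential turn-on: each ~64 mV of forward bias (n * kT/q * ln 10 at 295 K)
# multiplies the current by roughly 10x in this low-bias regime.
for v in (0.5, 0.6, 0.7):
    print(f"{v:.1f} V -> {schottky_iv(v):.3e} A")
```

At operating currents, series resistance, current-dependent recombination, and electron heating in the epi-layer all bend this ideal exponential, which is what the generalized model accounts for.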
Using Terrain Analysis and Remote Sensing to Improve Snow Mass Balance and Runoff Prediction
NASA Astrophysics Data System (ADS)
Venteris, E. R.; Coleman, A. M.; Wigmosta, M. S.
2010-12-01
Approximately 70-80% of the water in the international Columbia River basin is sourced from snowmelt. The demand for this water has competing needs, as it is used for agricultural irrigation, municipal, hydro and nuclear power generation, and environmental in-stream flow requirements. Accurate forecasting of water supply is essential for planning current needs and prediction of future demands due to growth and climate change. A significant limitation on current forecasting is spatial and temporal uncertainty in snowpack characteristics, particularly snow water equivalent. Currently, point measurements of snow mass balance are provided by the NRCS SNOTEL network. Each site consists of a snow mass sensor and meteorology station that monitors snow water equivalent, snow depth, precipitation, and temperature. There are currently 152 sites in the mountains of Oregon and Washington. An important step in improving forecasts is determining how representative each SNOTEL site is of the total mass balance of the watershed through a full accounting of the spatiotemporal variability in snowpack processes. This variation is driven by the interaction between meteorological processes, land cover, and landform. Statistical and geostatistical spatial models relate the state of the snowpack (characterized through SNOTEL, snow course measurements, and multispectral remote sensing) to terrain attributes derived from digital elevation models (elevation, aspect, slope, compound topographic index, topographic shading, etc.) and land cover. Time steps representing the progression of the snow season for several meteorologically distinct water years are investigated to identify and quantify dominant physical processes. The spatially distributed snow balance data can be used directly as model inputs to improve short- and long-range hydrologic forecasts.
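At its simplest, a statistical model relating snowpack state to terrain attributes is an ordinary least-squares regression of snow water equivalent on a predictor such as elevation. The data pairs below are hypothetical SNOTEL-like values, not measurements from the Oregon/Washington network; a real model would add aspect, slope, topographic indices, and land cover as further predictors.

```python
def ols_fit(x, y):
    """Ordinary least-squares intercept and slope for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Hypothetical elevation (m) vs snow water equivalent (mm) pairs
elev = [900, 1100, 1300, 1500, 1700]
swe = [120, 180, 260, 310, 390]
a, b = ols_fit(elev, swe)
print(a, b)   # positive slope: SWE increases with elevation
```

The fitted slope is the kind of terrain sensitivity that lets point SNOTEL measurements be extrapolated over a watershed; geostatistical models then add spatial correlation structure on top of this trend.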
NASA Astrophysics Data System (ADS)
Tian, Yingtao; Robson, Joseph D.; Riekehr, Stefan; Kashaev, Nikolai; Wang, Li; Lowe, Tristan; Karanika, Alexandra
2016-07-01
Laser welding of advanced Al-Li alloys has been developed to meet the increasing demand for light-weight and high-strength aerospace structures. However, welding of high-strength Al-Li alloys can be problematic due to the tendency for hot cracking. Finding suitable welding parameters and filler material for this combination currently requires extensive and costly trial and error experimentation. The present work describes a novel coupled model to predict hot crack susceptibility (HCS) in Al-Li welds. Such a model can be used to shortcut the weld development process. The coupled model combines finite element process simulation with a two-level HCS model. The finite element process model predicts thermal field data for the subsequent HCS hot cracking prediction. The model can be used to predict the influences of filler wire composition and welding parameters on HCS. The modeling results have been validated by comparing predictions with results from fully instrumented laser welds performed under a range of process parameters and analyzed using high-resolution X-ray tomography to identify weld defects. It is shown that the model is capable of accurately predicting the thermal field around the weld and the trend of HCS as a function of process parameters.
Nonrational processes in ethical decision making.
Rogerson, Mark D; Gottlieb, Michael C; Handelsman, Mitchell M; Knapp, Samuel; Younggren, Jeffrey
2011-10-01
Most current ethical decision-making models provide a logical and reasoned process for making ethical judgments, but these models are empirically unproven and rely upon assumptions of rational, conscious, and quasilegal reasoning. Such models predominate despite the fact that many nonrational factors influence ethical thought and behavior, including context, perceptions, relationships, emotions, and heuristics. For example, a large body of behavioral research has demonstrated the importance of automatic intuitive and affective processes in decision making and judgment. These processes profoundly affect human behavior and lead to systematic biases and departures from normative theories of rationality. Their influence represents an important but largely unrecognized component of ethical decision making. We selectively review this work; provide various illustrations; and make recommendations for scientists, trainers, and practitioners to aid them in integrating the understanding of nonrational processes with ethical decision making.
Human Modeling for Ground Processing Human Factors Engineering Analysis
NASA Technical Reports Server (NTRS)
Stambolian, Damon B.; Lawrence, Brad A.; Stelges, Katrine S.; Steady, Marie-Jeanne O.; Ridgwell, Lora C.; Mills, Robert E.; Henderson, Gena; Tran, Donald; Barth, Tim
2011-01-01
There have been many advancements and accomplishments over the last few years in using human modeling for human factors engineering analysis in spacecraft design. The key methods used are motion capture and computer-generated human models. The focus of this paper is to explain the human modeling currently used at Kennedy Space Center (KSC) and to outline plans for human modeling in future spacecraft designs.
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Sovers, O. J.
1994-01-01
The standard tropospheric calibration model implemented in the operational Orbit Determination Program is the seasonal model developed by C. C. Chao in the early 1970s. The seasonal model has seen only slight modification since its release, particularly in the format and content of the zenith delay calibrations. Chao's most recent standard mapping tables, which are used to project the zenith delay calibrations along the station-to-spacecraft line of sight, have not been modified since they were first published in late 1972. This report focuses principally on proposed upgrades to the zenith delay mapping process, although modeling improvements to the zenith delay calibration process are also discussed. A number of candidate approximation models for the tropospheric mapping are evaluated, including the semi-analytic mapping function of Lanyi, and the semi-empirical mapping functions of Davis et al. ('CfA-2.2'), of Ifadis (global solution model), of Herring ('MTT'), and of Niell ('NMF'). All of the candidate mapping functions are superior to the Chao standard mapping tables and approximation formulas when evaluated against the current Deep Space Network Mark 3 intercontinental very long baseline interferometry database.
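A mapping function projects the zenith delay onto the station-to-spacecraft line of sight. A Chao-style continued-fraction form can be sketched as follows; the constants a and b are commonly quoted dry-component values and the 2.3 m zenith delay is a typical figure, both used here purely for illustration rather than taken from this report.

```python
import math

def chao_dry_mapping(elev_deg, a=0.00143, b=0.0445):
    """Chao-style continued-fraction mapping function:
    m(E) = 1 / (sin E + a / (tan E + b)).
    a, b are commonly quoted dry-component constants (illustrative)."""
    e = math.radians(elev_deg)
    return 1.0 / (math.sin(e) + a / (math.tan(e) + b))

zenith_dry = 2.3                                  # metres, a typical dry zenith delay (assumption)
for elev in (90, 30, 10, 5):
    slant = zenith_dry * chao_dry_mapping(elev)   # line-of-sight delay grows toward the horizon
    print(f"{elev:2d} deg -> {slant:.2f} m")
```

At zenith the mapping is essentially 1, while near 5 degrees elevation the slant delay is an order of magnitude larger, which is why the low-elevation behavior of the mapping function dominates comparisons like the VLBI evaluation described above.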
NASA Astrophysics Data System (ADS)
Martinec, Zdeněk; Velímský, Jakub; Haagmans, Roger; Šachl, Libor
2018-02-01
This study deals with the analysis of Swarm vector magnetic field measurements in order to estimate the magnetic field of the magnetospheric ring current. For a single Swarm satellite, the magnetic measurements are processed by along-track spectral analysis on a track-by-track basis. The main and lithospheric magnetic fields are modelled by the CHAOS-6 field model and subtracted from the along-track Swarm magnetic data. The mid-latitude residual signal is then spectrally analysed and extrapolated to the polar regions. The resulting model of the magnetosphere (model MME) is compared to the existing Swarm Level 2 magnetospheric field model (MMA_SHA_2C). Differences of up to 10 nT are found on the nightside in Swarm data from 2014 April 8 to May 10, which are due to the different processing schemes used to construct the two magnetospheric magnetic field models. The forward-simulated magnetospheric magnetic field generated by the external part of model MME then demonstrates the consistency of the separation of the Swarm along-track signal into external and internal parts by the two-step along-track spectral analysis.
Observations & modeling of solar-wind/magnetospheric interactions
NASA Astrophysics Data System (ADS)
Hoilijoki, Sanni; Von Alfthan, Sebastian; Pfau-Kempf, Yann; Palmroth, Minna; Ganse, Urs
2016-07-01
The majority of the global magnetospheric dynamics is driven by magnetic reconnection, indicating the need to understand and predict reconnection processes and their global consequences. So far, global magnetospheric dynamics has been simulated using mainly magnetohydrodynamic (MHD) models, which are approximate but fast enough to be executed in real time or near-real time. Due to their fast computation times, MHD models are currently the only possible frameworks for space weather predictions. However, in MHD models reconnection is not treated kinetically. In this presentation we will compare the results from global kinetic (hybrid-Vlasov) and global MHD simulations. Both simulations are compared with in-situ measurements. We will show that the kinetic processes at the bow shock, in the magnetosheath and at the magnetopause affect global dynamics even during steady solar wind conditions. Foreshock processes cause an asymmetry in the magnetosheath plasma, indicating that the plasma entering the magnetosphere is not symmetrical on different sides of the magnetosphere. Behind the bow shock, in the magnetosheath, kinetic wave modes appear. Some of these waves propagate to the magnetopause and affect magnetopause reconnection. Therefore we find that kinetic phenomena have a significant role in the interaction between the solar wind and the magnetosphere. While kinetic models cannot currently be executed in real time, they could be used to extract heuristics to be added to the faster MHD models.
NASA Astrophysics Data System (ADS)
Vanclooster, Marnik
2010-05-01
The current societal demand for sustainable soil and water management is very large. The drivers of global and climate change exert many pressures on soil and water ecosystems, endangering appropriate ecosystem functioning. Unsaturated soil transport processes play a key role in soil-water system functioning, as they control the fluxes of water and nutrients from the soil to plants (the pedo-biosphere link), the infiltration flux of precipitated water to groundwater, and the evaporative flux, and hence the feedback from the soil to the climate system. Yet unsaturated soil transport processes are difficult to quantify, since they are affected by huge variability of the governing properties at different space-time scales and by the intrinsic non-linearity of the transport processes. The incompatibility between the scale at which processes can reasonably be characterized, the scale at which the theoretical process can correctly be described, and the scale at which the soil and water system needs to be managed calls for further development of scaling procedures in unsaturated zone science. It also calls for a better integration of theoretical and modelling approaches to elucidate transport processes at the appropriate scales, compatible with the sustainable soil and water management objective. Moditoring science, i.e. the interdisciplinary research domain where modelling and monitoring science are linked, is currently evolving significantly in unsaturated zone hydrology. In this presentation, a review of current moditoring strategies and techniques will be given and illustrated for solving large-scale soil and water management problems. This will also allow research needs in the interdisciplinary domain of modelling and monitoring to be identified, and the integration of unsaturated zone science into solving soil and water management issues to be improved. A focus will be given to examples of large-scale soil and water management problems in Europe.
Working memory, situation models, and synesthesia
Radvansky, Gabriel A.; Gibson, Bradley S.; McNerney, M. Windy
2013-03-04
Research on language comprehension suggests a strong relationship between working memory span measures and language comprehension. However, there is also evidence that this relationship weakens at higher levels of comprehension, such as the situation model level. The current study explored this relationship by comparing 10 grapheme-color synesthetes, who have additional color experiences when they read words that begin with different letters, and 48 normal controls on a number of tests of complex working memory capacity and processing at the situation model level. On all tests of working memory capacity, the synesthetes outperformed the controls. Importantly, there was no carryover benefit for the synesthetes for processing at the situation model level. This reinforces the idea that although some aspects of language comprehension are related to working memory span scores, this applies less directly to situation model levels. As a result, this suggests that theories of working memory must take into account this limitation, and the working memory processes that are involved in situation model construction and processing must be derived.
Bottom currents and sediment transport in Long Island Sound: A modeling study
Signell, R.P.; List, J.H.; Farris, A.S.
2000-01-01
A high-resolution (300-400 m grid spacing), process-oriented modeling study was undertaken to elucidate the physical processes affecting the characteristics and distribution of sea-floor sedimentary environments in Long Island Sound. Simulations using idealized forcing and high-resolution bathymetry were performed using a three-dimensional circulation model ECOM (Blumberg and Mellor, 1987) and a stationary shallow water wave model HISWA (Holthuijsen et al., 1989). The relative contributions of tide-, density-, wind- and wave-driven bottom currents are assessed and related to observed characteristics of the sea-floor environments, and simple bedload sediment transport simulations are performed. The fine grid spacing allows features with scales of several kilometers to be resolved. The simulations clearly show physical processes that affect the observed sea-floor characteristics at both regional and local scales. Simulations of near-bottom tidal currents reveal a strong gradient in the funnel-shaped eastern part of the Sound, which parallels an observed gradient in sedimentary environments from erosion or nondeposition, through bedload transport and sediment sorting, to fine-grained deposition. A simulation of estuarine flow driven by the along-axis gradient in salinity shows generally westward bottom currents of 2-4 cm/s that are locally enhanced to 6-8 cm/s along the axial depression of the Sound. Bottom wind-driven currents flow downwind along the shallow margins of the basin, but flow against the wind in the deeper regions. These bottom flows (in opposition to the wind) are strongest in the axial depression and add to the estuarine flow when winds are from the west. The combination of enhanced bottom currents due to both estuarine circulation and the prevailing westerly winds provides an explanation for the relatively coarse sediments found along parts of the axial depression.
Climatological simulations of wave-driven bottom currents show that frequent high-energy events occur along the shallow margins of the Sound, explaining the occurrence of relatively coarse sediments in these regions. Bedload sediment transport calculations show that the estuarine circulation coupled with the oscillatory tidal currents results in a net westward transport of sand in much of the eastern Sound. Local departures from this regional westward trend occur around topographic and shoreline irregularities, and there is strong predicted convergence of bedload transport over most of the large, linear sand ridges in the eastern Sound, providing a mechanism which prevents their decay. The strong correlation between the near-bottom current intensity based on the model results and the sediment response, as indicated by the distribution of sedimentary environments, provides a framework for predicting the long-term effects of anthropogenic activities.
USING CMAQ FOR EXPOSURE MODELING AND CHARACTERIZING THE SUB-GRID VARIABILITY FOR EXPOSURE ESTIMATES
Atmospheric processes and the associated transport and dispersion of atmospheric pollutants are known to be highly variable in time and space. Current air quality models that characterize atmospheric chemistry effects, e.g. the Community Multi-scale Air Quality (CMAQ), provide vo...
Current advances in mathematical modeling of anti-cancer drug penetration into tumor tissues.
Kim, Munju; Gillies, Robert J; Rejniak, Katarzyna A
2013-11-18
Delivery of anti-cancer drugs to tumor tissues, including their interstitial transport and cellular uptake, is a complex process involving various biochemical, mechanical, and biophysical factors. Mathematical modeling provides a means through which to understand this complexity better, as well as to examine interactions between contributing components in a systematic way via computational simulations and quantitative analyses. In this review, we present the current state of mathematical modeling approaches that address phenomena related to drug delivery. We describe how various types of models were used to predict spatio-temporal distributions of drugs within the tumor tissue, to simulate different ways to overcome barriers to drug transport, or to optimize treatment schedules. Finally, we discuss how integration of mathematical modeling with experimental or clinical data can provide better tools to understand the drug delivery process, in particular to examine the specific tissue- or compound-related factors that limit drug penetration through tumors. Such tools will be important in designing new chemotherapy targets and optimal treatment strategies, as well as in developing non-invasive diagnosis to monitor treatment response and detect tumor recurrence.
The drift diffusion model as the choice rule in reinforcement learning.
Pedersen, Mads Lund; Frank, Michael J; Biele, Guido
2017-08-01
Current reinforcement-learning models often assume simplified decision processes that do not fully reflect the dynamic complexities of choice processes. Conversely, sequential-sampling models of decision making account for both choice accuracy and response time, but assume that decisions are based on static decision values. To combine these two computational models of decision making and learning, we implemented reinforcement-learning models in which the drift diffusion model describes the choice process, thereby capturing both within- and across-trial dynamics. To exemplify the utility of this approach, we quantitatively fit data from a common reinforcement-learning paradigm using hierarchical Bayesian parameter estimation, and compared model variants to determine whether they could capture the effects of stimulant medication in adult patients with attention-deficit hyperactivity disorder (ADHD). The model with the best relative fit provided a good description of the learning process, choices, and response times. A parameter recovery experiment showed that the hierarchical Bayesian modeling approach enabled accurate estimation of the model parameters. The model approach described here, using simultaneous estimation of reinforcement-learning and drift diffusion model parameters, shows promise for revealing new insights into the cognitive and neural mechanisms of learning and decision making, as well as the alteration of such processes in clinical groups.
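The combination described above, reinforcement learning with a drift diffusion choice rule, can be illustrated with a minimal sketch. This is not the authors' hierarchical Bayesian implementation; it assumes a simple two-armed bandit with hypothetical reward probabilities, a hypothetical learning rate and drift scaling, and simulates the diffusion with Euler steps:

```python
import numpy as np

rng = np.random.default_rng(0)

def ddm_choice(drift, threshold=1.0, dt=0.001, noise=1.0):
    """Simulate one drift-diffusion decision; returns (choice, response time).
    Evidence starts at 0 and accumulates until it hits +/- threshold."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return (0 if x > 0 else 1), t

# Two-armed bandit with Q-learning; the drift rate is scaled by the current
# value difference (assumed linkage, following the idea in the abstract).
q = np.zeros(2)
alpha, scaling = 0.3, 2.0          # hypothetical learning rate and drift scaling
p_reward = [0.8, 0.2]              # hypothetical reward probabilities

for trial in range(200):
    choice, rt = ddm_choice(scaling * (q[0] - q[1]))
    reward = float(rng.random() < p_reward[choice])
    q[choice] += alpha * (reward - q[choice])   # prediction-error update

print("Learned values:", q)  # with enough trials, option 0 typically ends up valued higher
```

Hierarchical Bayesian estimation of the joint parameters, as in the study, would sit on top of a likelihood built from this generative process.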
The drift diffusion model as the choice rule in reinforcement learning
Frank, Michael J.
2017-01-01
Current reinforcement-learning models often assume simplified decision processes that do not fully reflect the dynamic complexities of choice processes. Conversely, sequential-sampling models of decision making account for both choice accuracy and response time, but assume that decisions are based on static decision values. To combine these two computational models of decision making and learning, we implemented reinforcement-learning models in which the drift diffusion model describes the choice process, thereby capturing both within- and across-trial dynamics. To exemplify the utility of this approach, we quantitatively fit data from a common reinforcement-learning paradigm using hierarchical Bayesian parameter estimation, and compared model variants to determine whether they could capture the effects of stimulant medication in adult patients with attention-deficit hyper-activity disorder (ADHD). The model with the best relative fit provided a good description of the learning process, choices, and response times. A parameter recovery experiment showed that the hierarchical Bayesian modeling approach enabled accurate estimation of the model parameters. The model approach described here, using simultaneous estimation of reinforcement-learning and drift diffusion model parameters, shows promise for revealing new insights into the cognitive and neural mechanisms of learning and decision making, as well as the alteration of such processes in clinical groups. PMID:27966103
Effects of clutter on information processing deficits in individuals with hoarding disorder.
Raines, Amanda M; Timpano, Kiara R; Schmidt, Norman B
2014-09-01
Current cognitive behavioral models of hoarding view hoarding as a multifaceted problem stemming from various information processing deficits. However, there is also reason to suspect that the consequences of hoarding may in turn impact or modulate deficits in information processing. The current study sought to expand upon the existing literature by manipulating clutter to examine whether the presence of a cluttered environment affects information processing. Participants included 34 individuals with hoarding disorder. Participants were randomized into a clutter or non-clutter condition and asked to complete various neuropsychological tasks of memory and attention. Results revealed that hoarding severity was associated with difficulties in sustained attention. However, individuals in the clutter condition relative to the non-clutter condition did not experience greater deficits in information processing. Limitations include the cross-sectional design and small sample size. The current findings add considerably to a growing body of literature on the relationships between information processing deficits and hoarding behaviors. Research of this type is integral to understanding the etiology and maintenance of hoarding. Copyright © 2014 Elsevier B.V. All rights reserved.
Shukla, Nagesh; Keast, John E; Ceglarek, Darek
2014-10-01
The modelling of complex workflows is an important problem-solving technique within healthcare settings. However, currently most of the workflow models use a simplified flow chart of patient flow obtained using on-site observations, group-based debates and brainstorming sessions, together with historic patient data. This paper presents a systematic and semi-automatic methodology for knowledge acquisition with detailed process representation using sequential interviews of people in the key roles involved in the service delivery process. The proposed methodology allows the modelling of roles, interactions, actions, and decisions involved in the service delivery process. This approach is based on protocol generation and analysis techniques such as: (i) initial protocol generation based on qualitative interviews of radiology staff, (ii) extraction of key features of the service delivery process, (iii) discovering the relationships among the key features extracted, and, (iv) a graphical representation of the final structured model of the service delivery process. The methodology is demonstrated through a case study of a magnetic resonance (MR) scanning service-delivery process in the radiology department of a large hospital. A set of guidelines is also presented in this paper to visually analyze the resulting process model for identifying process vulnerabilities. A comparative analysis of different workflow models is also conducted. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Strain-induced shear instability in Liverpool Bay
NASA Astrophysics Data System (ADS)
Wihsgott, Juliane; Palmer, Matthew R.
2013-04-01
Liverpool Bay is a shallow subsection of the eastern Irish Sea with large tides (10 m), which drive strong tidal currents (1 m s-1). The Bay is heavily influenced by large freshwater inputs from several Welsh and English rivers that maintain a strong and persistent horizontal density gradient. This gradient interacts with the sheared tidal currents to strain freshwater over denser pelagic water on a semi-diurnal frequency. This Strain-Induced-Periodic-Stratification (SIPS) has important implications on vertical and horizontal mixing. The subtle interaction between stratification and turbulence in this complex environment is shown to be of critical importance to freshwater transport, and subsequently the fate of associated biogeochemical and pollutant pathways. Recent work identified an asymmetry of current ellipses due to SIPS that increases shear instability in the halocline with the potential to enhance diapycnal mixing. Here, we use data from a short, high intensity process study which reveals that this mid-water mechanism maintains prolonged periods of sub-critical gradient Richardson number (Ri ≤ ¼), suggesting that shear instability is likely. A time series of measurements from a microstructure profiler identifies that the associated increase in turbulence is short-lived and 'patchy' but sufficient to promote diapycnal mixing. The significance of this mixing process is further investigated by comparing our findings with long-term observations from the Liverpool Bay Coastal Observatory. We identify that the conditions for shear instability during SIPS are regularly met and suggest that this process contributes to the current underestimates of near-coastal mixing observed in regional models. To assist our understanding of the observed processes and to test the current capability of turbulence 'closure schemes' we employ a one-dimensional numerical model to investigate the physical mechanisms driving diapycnal mixing in Liverpool Bay.
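The instability criterion mentioned above (gradient Richardson number Ri ≤ 1/4) can be computed directly from density and velocity profiles as Ri = N²/S². A minimal sketch with hypothetical profile values (not the Liverpool Bay observations):

```python
import numpy as np

g, rho0 = 9.81, 1025.0   # gravity (m s^-2), reference density (kg m^-3)

# Hypothetical profiles: depth z (m, positive downward), density (kg m^-3),
# and along-axis current speed (m/s)
z   = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
rho = np.array([1020.0, 1021.0, 1023.5, 1024.5, 1025.0])
u   = np.array([1.0, 0.5, 0.2, 0.1, 0.05])

drho_dz = np.gradient(rho, z)
du_dz   = np.gradient(u, z)

# Buoyancy frequency squared; with z positive downward and density increasing
# downward, N^2 = (g/rho0) * d(rho)/dz is positive for stable stratification.
N2 = (g / rho0) * drho_dz
S2 = du_dz ** 2                     # squared vertical shear

Ri = N2 / S2                        # gradient Richardson number
print("Ri profile:", Ri)
print("Shear instability possible where Ri <= 0.25:", Ri <= 0.25)
```

In this made-up profile the strongly sheared near-surface layer drops below the critical value while the deeper levels stay stable.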
A systematic petri net approach for multiple-scale modeling and simulation of biochemical processes.
Chen, Ming; Hu, Minjie; Hofestädt, Ralf
2011-06-01
A method to exploit hybrid Petri nets for modeling and simulating biochemical processes in a systematic way was introduced. Both molecular biology and biochemical engineering aspects are manipulated. With discrete and continuous elements, the hybrid Petri nets can easily handle biochemical factors such as metabolites concentration and kinetic behaviors. It is possible to translate both molecular biological behavior and biochemical processes workflow into hybrid Petri nets in a natural manner. As an example, penicillin production bioprocess is modeled to illustrate the concepts of the methodology. Results of the dynamic of production parameters in the bioprocess were simulated and observed diagrammatically. Current problems and post-genomic perspectives were also discussed.
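The discrete half of a hybrid Petri net can be sketched as a token-firing loop: a transition is enabled when all its input places hold tokens, and firing consumes inputs and produces outputs. This toy example uses hypothetical places and a single enzyme-like transition, not the penicillin bioprocess model of the paper:

```python
# Minimal discrete Petri net: places hold token counts; a transition fires
# when every input place has at least one token.
places = {"substrate": 3, "enzyme": 1, "product": 0}
transitions = [
    {"inputs": ["substrate", "enzyme"], "outputs": ["product", "enzyme"]},
]

def fire_enabled(places, transitions):
    """Fire each enabled transition once; return True if anything fired."""
    fired = False
    for t in transitions:
        if all(places[p] > 0 for p in t["inputs"]):
            for p in t["inputs"]:
                places[p] -= 1
            for p in t["outputs"]:
                places[p] += 1
            fired = True
    return fired

# Run to completion: the enzyme token is recycled, substrate is converted.
while fire_enabled(places, transitions):
    pass
print(places)   # -> {'substrate': 0, 'enzyme': 1, 'product': 3}
```

A hybrid net, as in the paper, would add continuous places (e.g., metabolite concentrations) whose marking evolves by rate equations between discrete firings.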
Physical processes associated with current collection by plasma contactors
NASA Technical Reports Server (NTRS)
Katz, Ira; Davis, Victoria A.
1990-01-01
Recent flight data confirms laboratory observations that the release of neutral gas increases plasma sheath currents. Plasma contactors are devices which release a partially ionized gas in order to enhance the current flow between a spacecraft and the space plasma. Ionization of the expellant gas and the formation of a double layer between the anode plasma and the space plasma are the dominant physical processes. A theory is presented of the interaction between the contactor plasma and the background plasma. The conditions for formation of a double layer between the two plasmas are derived. Double layer formation is shown to be a consequence of the nonlinear response of the plasmas to changes in potential. Numerical calculations based upon this model are compared with laboratory measurements of current collection by hollow cathode-based plasma contactors.
ERIC Educational Resources Information Center
Schweppe, Judith; Rummer, Ralf
2014-01-01
Cognitive models of multimedia learning such as the Cognitive Theory of Multimedia Learning (Mayer 2009) or the Cognitive Load Theory (Sweller 1999) are based on different cognitive models of working memory (e.g., Baddeley 1986) and long-term memory. The current paper describes a working memory model that has recently gained popularity in basic…
“The Birthing From Within Holistic Sphere”: A Conceptual Model for Childbirth Education
England, Pam; Horowitz, Rob
2000-01-01
An expanded conceptual model of childbirth education is offered, proposing the benefits of balancing informative teaching processes with creative, experiential, introspective learning processes for parents. The application of these two teaching dimensions to exploring four different perspectives of birth (the mother's, the father's, the baby's, and the culture's) is discussed, along with examples from “Birthing From Within” classes. Implications for current practice and the evolving role of childbirth educator are noted. PMID:17273200
Markovian prediction of future values for food grains in the economic survey
NASA Astrophysics Data System (ADS)
Sathish, S.; Khadar Babu, S. K.
2017-11-01
Nowadays, prediction and forecasting play a vital role in research. For prediction, regression is useful for estimating future and current values in a production process. In this paper, we assume that food grain production exhibits Markov chain dependency and time homogeneity. The economic performance of balancing the timing of artificial fertilization at different levels of estrus detection is evaluated using a daily Markov chain model. Finally, Markov process prediction gives better performance compared with the regression model.
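The assumed Markov chain dependency and time homogeneity can be illustrated by estimating a transition matrix from a production series and using it for one-step prediction. A minimal sketch with hypothetical production states, not the economic survey data:

```python
import numpy as np

# Hypothetical yearly food-grain production states: 0=low, 1=medium, 2=high.
observed = [0, 1, 1, 2, 1, 2, 2, 1, 0, 1, 2, 2]

# Estimate the (time-homogeneous) transition matrix by counting transitions.
n_states = 3
counts = np.zeros((n_states, n_states))
for a, b in zip(observed[:-1], observed[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)

# One-step prediction: distribution over next year's state given the last state.
last = observed[-1]
next_dist = P[last]
print("Transition matrix:\n", P)
print("P(next state | current state):", next_dist)
```

Each row of P sums to one, and repeated multiplication by P gives multi-step forecasts under the time-homogeneity assumption.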
Bhansali, Archita H; Sangani, Darshan S; Mhatre, Shivani K; Sansgiry, Sujit S
2018-01-01
To compare three over-the-counter (OTC) Drug Facts panel versions for information processing optimization among college students. University of Houston students (N = 210) participated in a cross-sectional survey from January to May 2010. A current FDA label was compared to two experimental labels developed using the theory of CHREST to test information processing by re-positioning the warning information within the Drug Facts panel. Congruency was defined as placing like information together. Information processing was evaluated using the OTC medication Label Evaluation Process Model (LEPM): label comprehension, ease-of-use, attitude toward the product, product evaluation, and purchase intention. Experimental label with chunked congruent information (uses-directions-other information-warnings) was rated significantly higher than the current FDA label and had the best average scores among the LEPM information processing variables. If replications uphold these findings, the FDA label design might be revised to improve information processing.
Are there two processes in reasoning? The dimensionality of inductive and deductive inferences.
Stephens, Rachel G; Dunn, John C; Hayes, Brett K
2018-03-01
Single-process accounts of reasoning propose that the same cognitive mechanisms underlie inductive and deductive inferences. In contrast, dual-process accounts propose that these inferences depend upon 2 qualitatively different mechanisms. To distinguish between these accounts, we derived a set of single-process and dual-process models based on an overarching signal detection framework. We then used signed difference analysis to test each model against data from an argument evaluation task, in which induction and deduction judgments are elicited for sets of valid and invalid arguments. Three data sets were analyzed: data from Singmann and Klauer (2011), a database of argument evaluation studies, and the results of an experiment designed to test model predictions. Of the large set of testable models, we found that almost all could be rejected, including all 2-dimensional models. The only testable model able to account for all 3 data sets was a model with 1 dimension of argument strength and independent decision criteria for induction and deduction judgments. We conclude that despite the popularity of dual-process accounts, current results from the argument evaluation task are best explained by a single-process account that incorporates separate decision thresholds for inductive and deductive inferences. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
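The winning model above, a single dimension of argument strength with independent decision criteria for induction and deduction, can be sketched in a few lines of signal detection simulation. The Gaussian assumption, means, and criteria below are illustrative placeholders, not the fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# One latent "argument strength" dimension; valid arguments are stronger on
# average. Hypothetical parameters for illustration only.
mu_valid, mu_invalid, sigma = 1.5, 0.0, 1.0
c_induction, c_deduction = 0.4, 1.1   # deduction assumed to use a stricter criterion

def endorsement_rate(mu, criterion, n=20000):
    """Proportion of arguments judged 'strong'/'valid' at a given criterion."""
    strength = rng.normal(mu, sigma, n)
    return float(np.mean(strength > criterion))

p_ind_valid   = endorsement_rate(mu_valid,   c_induction)
p_ind_invalid = endorsement_rate(mu_invalid, c_induction)
p_ded_valid   = endorsement_rate(mu_valid,   c_deduction)
p_ded_invalid = endorsement_rate(mu_invalid, c_deduction)

print("induction endorsement (valid, invalid):", p_ind_valid, p_ind_invalid)
print("deduction endorsement (valid, invalid):", p_ded_valid, p_ded_invalid)
```

Because both judgment types read the same strength axis, the model predicts that shifting only the criterion (not the evidence) separates induction from deduction responses.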
NASA Astrophysics Data System (ADS)
Vašina, P; Hytková, T; Eliáš, M
2009-05-01
The majority of current models of reactive magnetron sputtering assume a uniform shape of the discharge current density and the same temperature near the target and the substrate. However, in the real experimental set-up, the presence of the magnetic field causes high density plasma to form in front of the cathode in the shape of a toroid. Consequently, the discharge current density is laterally non-uniform. In addition to this, the heating of the background gas by sputtered particles, which is usually referred to as gas rarefaction, plays an important role. This paper presents an extended model of reactive magnetron sputtering that assumes a non-uniform discharge current density and which accommodates the gas rarefaction effect. It is devoted mainly to the study of the behaviour of reactive sputtering rather than to the prediction of the coating properties. Outputs of this model are compared with those that assume uniform discharge current density and a uniform temperature profile in the deposition chamber. Particular attention is paid to the modelling of the radial variation of the target composition near transitions from the metallic to the compound mode and vice versa. A study of the target utilization in the metallic and compound modes is performed for two different discharge current density profiles corresponding to typical two-pole and multipole magnetics available on the market now. Different shapes of the discharge current density were tested. Finally, hysteresis curves are plotted for various temperature conditions in the reactor.
Assessment of the importance of the current-wave coupling in the shelf ocean forecasts
NASA Astrophysics Data System (ADS)
Jordà, G.; Bolaños, R.; Espino, M.; Sánchez-Arcilla, A.
2006-10-01
The effects of wave-current interactions on shelf ocean forecasts are investigated in the framework of the MFSTEP (Mediterranean Forecasting System Project Towards Environmental Predictions) project. A one-way sequential coupling approach is adopted to link the wave model (WAM) to the circulation model (SYMPHONIE). The coupling of waves and currents has been done considering four main processes: wave refraction due to currents, surface wind drag and bottom drag modifications due to waves, and the wave-induced mass flux. The coupled modelling system is implemented in the southern Catalan shelf (NW Mediterranean), a region with characteristics similar to most of the Mediterranean shelves. The sensitivity experiments are run in a typical operational configuration. The wave refraction by currents appears to be of limited relevance in a microtidal context such as the western Mediterranean. The main effect of waves on current forecasts is through the modification of the wind drag. The Stokes drift also plays a significant role due to its spatial and temporal characteristics. Finally, the enhanced bottom friction is just noticeable in the inner shelf.
Extending the surrogacy analogy: applying the advance directive model to biobanks.
Solomon, Stephanie; Mongoven, Ann
2015-01-01
Biobank donors and biobank governance face a conceptual challenge akin to clinical patients and their designated surrogate decision-makers, the necessity of making decisions and policies now that must be implemented under future unknown circumstances. We propose that biobanks take advantage of this parallel to learn lessons from the historical trajectory of advance directives and develop models analogous to current 'best practice' advance directives such as Values Histories and The Five Wishes. We suggest how such models could improve biobanks' engagement both with communities and with individual donors by being more honest about the limits of current disclosure and eliciting information to ensure the protection of donor interests more robustly through time than current 'informed consent' processes in biobanking. © 2014 S. Karger AG, Basel.
Future warming patterns linked to today’s climate variability
Dai, Aiguo
2016-01-11
The reliability of model projections of greenhouse gas (GHG)-induced future climate change is often assessed based on models’ ability to simulate the current climate, but there has been little evidence that connects the two. In fact, this practice has been questioned because the GHG-induced future climate change may involve additional physical processes that are not important for the current climate. Here I show that the spatial patterns of the GHG-induced future warming in the 21st century are highly correlated with the patterns of the year-to-year variations of surface air temperature for today’s climate, with areas of larger variations during 1950–1979 having more GHG-induced warming in the 21st century in all climate models. Such a relationship also exists in other climate fields such as atmospheric water vapor, and it is evident in observed temperatures from 1950–2010. The results suggest that many physical processes may work similarly in producing the year-to-year climate variations in the current climate and the GHG-induced long-term changes in the 21st century in models and in the real world. Furthermore, they support the notion that models that simulate present-day climate variability better are likely to make more reliable predictions of future climate change.
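The reported relationship, a spatial (pattern) correlation between interannual variability and GHG-induced warming, can be illustrated on synthetic fields. The linkage between variability and warming below is assumed purely for illustration; it is not derived from actual model output:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic illustration: a grid where regions with larger year-to-year
# variability are also assigned more warming, mimicking the study's finding.
ny, nx, nyears = 20, 30, 30
variability = 0.3 + rng.random((ny, nx))     # true interannual std (K), hypothetical
warming_pattern = 2.0 * variability          # assumed linkage, for illustration only
annual_T = rng.normal(0.0, variability, (nyears, ny, nx))  # simulated annual anomalies

# Estimate year-to-year variability at each grid point from the sample.
interannual_std = annual_T.std(axis=0)

# Pattern correlation between estimated variability and the warming field.
r = np.corrcoef(interannual_std.ravel(), warming_pattern.ravel())[0, 1]
print(f"pattern correlation: {r:.2f}")
```

With only 30 years of data the estimated standard deviation is noisy, so the recovered correlation is high but below one, which is qualitatively the situation faced when relating observed variability to projected warming.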
Neumann, Verena
2016-01-01
A biophysical model of the excitation-contraction pathway, which has previously been validated for slow-twitch and fast-twitch skeletal muscles, is employed to investigate key biophysical processes leading to peripheral muscle fatigue. Special emphasis is placed on investigating how the model's original parameter sets can be interpolated such that realistic behaviour with respect to contraction time and fatigue progression can be obtained for a continuous distribution of the model's parameters across the muscle units, as found for the functional properties of muscles. The parameters are divided into 5 groups describing (i) the sarcoplasmatic reticulum calcium pump rate, (ii) the cross-bridge dynamics rates, (iii) the ryanodine receptor calcium current, (iv) the rates of binding of magnesium and calcium ions to parvalbumin and corresponding dissociations, and (v) the remaining processes. The simulations reveal that the first two parameter groups are sensitive to contraction time but not fatigue, the third parameter group affects both considered properties, and the fourth parameter group is only sensitive to fatigue progression. Hence, within the scope of the underlying model, further experimental studies should investigate parvalbumin dynamics and the ryanodine receptor calcium current to enhance the understanding of peripheral muscle fatigue. PMID:27980606
Uribe-Rivera, David E; Soto-Azat, Claudio; Valenzuela-Sánchez, Andrés; Bizama, Gustavo; Simonetti, Javier A; Pliscoff, Patricio
2017-07-01
Climate change is a major threat to biodiversity; the development of models that reliably predict its effects on species distributions is a priority for conservation biogeography. Two of the main issues for accurate temporal predictions from Species Distribution Models (SDM) are model extrapolation and unrealistic dispersal scenarios. We assessed the consequences of these issues on the accuracy of climate-driven SDM predictions for the dispersal-limited Darwin's frog Rhinoderma darwinii in South America. We calibrated models using historical data (1950-1975) and projected them across 40 yr to predict distribution under current climatic conditions, assessing predictive accuracy through the area under the ROC curve (AUC) and the True Skill Statistic (TSS), contrasting binary model predictions against a temporally independent validation data set (i.e., current presences/absences). To assess the effects of incorporating dispersal processes we compared the predictive accuracy of dispersal-constrained models with that of SDMs without dispersal limits; and to assess the effects of model extrapolation on the predictive accuracy of SDMs, we compared accuracy between extrapolated and non-extrapolated areas. The incorporation of dispersal processes enhanced predictive accuracy, mainly due to a decrease in the false presence rate of model predictions, which is consistent with discrimination of suitable but inaccessible habitat. This also had consequences on range size changes over time, which is the most used proxy for extinction risk from climate change. The area of current climatic conditions that was absent in the baseline conditions (i.e., extrapolated areas) represents 39% of the study area, leading to a significant decrease in predictive accuracy of model predictions for those areas.
Our results highlight that (1) incorporating dispersal processes can improve the predictive accuracy of temporal transference of SDMs and reduce uncertainties of extinction risk assessments from global change; and (2) as geographical areas subject to novel climates are expected to arise, they must be reported, as they show less accurate predictions under future climate scenarios. Consequently, environmental extrapolation and dispersal processes should be explicitly incorporated to report and reduce uncertainties in temporal predictions of SDMs, respectively. In doing so, we expect to improve the reliability of the information we provide for conservation decision makers under future climate change scenarios. © 2017 by the Ecological Society of America.
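The True Skill Statistic used above to score binary predictions against the temporally independent validation set is straightforward to compute as sensitivity + specificity − 1. A minimal sketch with hypothetical presences/absences, not the R. darwinii data:

```python
import numpy as np

# Hypothetical validation data: observed presences/absences and binary SDM predictions
observed  = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
predicted = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 0])

tp = np.sum((predicted == 1) & (observed == 1))   # true positives
fn = np.sum((predicted == 0) & (observed == 1))   # false negatives
tn = np.sum((predicted == 0) & (observed == 0))   # true negatives
fp = np.sum((predicted == 1) & (observed == 0))   # false positives

sensitivity = tp / (tp + fn)            # true positive rate
specificity = tn / (tn + fp)            # true negative rate
tss = sensitivity + specificity - 1.0   # True Skill Statistic

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} TSS={tss:.2f}")
```

A TSS of 0 means no skill beyond chance and 1 means perfect discrimination; unlike overall accuracy, it is insensitive to the prevalence of presences in the validation set.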
NASA Astrophysics Data System (ADS)
Okawa, Tsutomu; Kaminishi, Tsukasa; Hirabayashi, Syuichi; Suzuki, Ryo; Mitsui, Hiroyasu; Koizumi, Hisao
Business activities in the enterprise are so closely tied to the information system that they are difficult to carry out without it. A system design technique is therefore required that takes the business process properly into account and enables quick system development. In addition, demands on development cost are more severe than before. To cope with this situation, the modeling technology named BPM (Business Process Management/Modeling) is drawing attention and becoming important as a key technology. BPM is a technology to model business activities as business processes and visualize them in order to improve business efficiency. However, no general methodology exists for developing an information system from the analysis results of BPM, and few development cases have been reported. This paper proposes an information system development method that combines business process modeling with executable modeling. We describe a guideline to support consistency and efficiency of development, and a framework that enables the information system to be developed from the model. We have prototyped an information system with the proposed method, and our experience has shown that the methodology is valuable.
Animal Models of Atherosclerosis
Getz, Godfrey S.; Reardon, Catherine A.
2012-01-01
Atherosclerosis is a chronic inflammatory disorder that is the underlying cause of most cardiovascular disease. Both cells of the vessel wall and cells of the immune system participate in atherogenesis. This process is heavily influenced by plasma lipoproteins, genetics and the hemodynamics of the blood flow in the artery. A variety of small and large animal models have been used to study the atherogenic process. No model is ideal as each has its own advantages and limitations with respect to manipulation of the atherogenic process and modeling human atherosclerosis or lipoprotein profile. Useful large animal models include pigs, rabbits and non-human primates. Due in large part to the relative ease of genetic manipulation and the relatively short time frame for the development of atherosclerosis, murine models are currently the most extensively used. While not all aspects of murine atherosclerosis are identical to humans, studies using murine models have suggested potential biological processes and interactions that underlie this process. As it becomes clear that different factors may influence different stages of lesion development, the use of mouse models with the ability to turn on or delete proteins or cells in tissue specific and temporal manner will be very valuable. PMID:22383700
NASA Astrophysics Data System (ADS)
D'Onofrio, Donatella; von Hardenberg, Jost; Baudena, Mara
2017-04-01
Many current Dynamic Global Vegetation Models (DGVMs), including those incorporated into Earth System Models (ESMs), are able to realistically reproduce the distribution of most of the world's major biomes. However, they display high uncertainty in predicting the forest, savanna and grassland distributions, and the transitions between them, in tropical areas. These biomes are the most productive terrestrial ecosystems, and owing to their different biogeophysical and biogeochemical characteristics, future changes in their distributions could also have impacts on climate. In particular, expected increases in temperature and CO2, modified precipitation regimes, and increasing land-use intensity could have large impacts on global biogeochemical cycles and precipitation, affecting land-climate interactions. The difficulty DGVMs have in simulating tropical vegetation, especially savanna structure and occurrence, has been associated with the way they represent ecological processes and the feedbacks between biotic and abiotic conditions. The inclusion of appropriate ecological mechanisms under present climatic conditions is essential for obtaining reliable future projections of vegetation and climate states. In this work we analyse observed relationships of tree and grass cover with climate and fire, and the current ecological understanding of the mechanisms driving the forest-savanna-grassland transition in Africa, to evaluate the outcomes of a current state-of-the-art DGVM and to assess which ecological processes need to be included or improved within the model. Specifically, we analyse patterns of woody and herbaceous cover and fire return times from MODIS satellite observations, rainfall annual averages and seasonality from TRMM satellite measurements, and tree phenology information from the ESA global land cover map, comparing them with the outcomes of the LPJ-GUESS DGVM, also used by the EC-Earth global climate model.
The comparison analysis with the LPJ-GUESS simulations suggests possible improvements in the model representations of tree-grass competition for water and in the vegetation-fire interaction. The proposed method could be useful for evaluating DGVMs in tropical areas, especially in the phase of model setting-up, before the coupling with Earth System Models. This could help in improving the simulations of ecological processes and consequently of land-climate interactions.
NASA Astrophysics Data System (ADS)
de Paor, A. M.
Hide (Nonlinear Processes in Geophysics, 1998) has produced a new mathematical model of a self-exciting homopolar dynamo driving a series-wound motor, as a continuing contribution to the theory of the geomagnetic field. By a process of exact perturbation analysis, followed by combination and partial solution of differential equations, the complete nonlinear quenching of current fluctuations reported by Hide in the case where the parameter ɛ has the value 1 is proved via the Popov theorem from feedback system stability theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hajji, S.; HadjSalah, S.; Benhalima, A.
2016-06-15
This paper deals with the modelling of convection processes in metal-halide lamp discharges (HgDyI3). For this, we developed a 3D model, in both steady-state (direct-current powered) and time-dependent forms, for the solution of the conservation equations for mass, momentum, and energy. After validation, this model was applied to study the effect of several parameters on the major mass and energy transport phenomena in the lamp. In particular, the electric current, the atomic ratio (Hg/Dy), and the effect of convective transport have been studied.
NASA Astrophysics Data System (ADS)
Hajji, S.; HadjSalah, S.; Benhalima, A.; Charrada, K.; Zissis, G.
2016-06-01
This paper deals with the modelling of convection processes in metal-halide lamp discharges (HgDyI3). For this, we developed a 3D model, in both steady-state (direct-current powered) and time-dependent forms, for the solution of the conservation equations for mass, momentum, and energy. After validation, this model was applied to study the effect of several parameters on the major mass and energy transport phenomena in the lamp. In particular, the electric current, the atomic ratio (Hg/Dy), and the effect of convective transport have been studied.
[Investigation of team processes that enhance team performance in business organization].
Nawata, Kengo; Yamaguchi, Hiroyuki; Hatano, Toru; Aoshima, Mika
2015-02-01
Many researchers have suggested team processes that enhance team performance. However, past team process models were based on crew teams, in which all team members perform an indivisible temporary task. These models may be inapplicable to business teams, in which individual members perform medium- and long-term tasks assigned to them. This study modified the teamwork model of Dickinson and McIntyre (1997) and aimed to demonstrate a whole team process that enhances the performance of business teams. We surveyed five companies (member N = 1,400, team N = 161) and investigated team-level processes. Results showed that team processes have two aspects: "communication" and "collaboration to achieve a goal." Team processes in which communication enhanced collaboration improved team performance on all measures: the quantitative objective index (e.g., current income and number of sales), supervisor ratings, and self-ratings. On the basis of these results, we discuss the entire process by which teamwork enhances team performance in business organizations.
Asteroid families: Current situation
NASA Astrophysics Data System (ADS)
Cellino, A.; Dell'Oro, A.; Tedesco, E. F.
2009-02-01
Being the products of energetic collisional events, asteroid families provide a fundamental body of evidence to test the predictions of theoretical and numerical models of catastrophic disruption phenomena. The goal is to obtain, from current physical and dynamical data, reliable inferences on the original disruption events that produced the observed families. The main problem in doing this is recognizing, and quantitatively assessing, the importance of evolutionary phenomena that have progressively changed the observable properties of families, due to physical processes unrelated to the original disruption events. Since the early 1990s, there has been a significant evolution in our interpretation of family properties. New ideas have been conceived, primarily as a consequence of the development of refined models of catastrophic disruption processes, and of the discovery of evolutionary processes that had not been accounted for in previous studies. The latter include primarily the Yarkovsky and Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effects - radiation phenomena that can secularly change the semi-major axis and the rotation state. We present a brief review of the current state of the art in our understanding of asteroid families, point out some open problems, and discuss a few likely directions for future developments.
Modelling the impacts of pests and diseases on agricultural systems.
Donatelli, M; Magarey, R D; Bregaglio, S; Willocquet, L; Whish, J P M; Savary, S
2017-07-01
The improvement and application of pest and disease models to analyse and predict yield losses, including those due to climate change, is still a challenge for the scientific community. Applied modelling of crop diseases and pests has mostly targeted the development of support capabilities to schedule scouting or pesticide applications. There is a need for research both to broaden the scope and to evaluate the capabilities of pest and disease models. Key research questions involve not only the assessment of the potential effects of climate change on known pathosystems, but also new pathogens which could alter the (still incompletely documented) impacts of pests and diseases on agricultural systems. Yield loss data collected in various current environments may no longer represent an adequate reference for developing tactical, decision-oriented models of plant diseases and pests and their impacts, because of the ongoing changes in climate patterns. Process-based agricultural simulation modelling, on the other hand, appears to be a viable methodology for estimating these potential effects. A new generation of tools based on state-of-the-art knowledge and technologies is needed to allow systems analysis including key processes and their dynamics over an appropriate range of environmental variables. This paper offers a brief overview of the current state of development in coupling pest and disease models to crop models, and discusses technical and scientific challenges. We propose a five-stage roadmap to improve the simulation of the impacts caused by plant diseases and pests: i) improve the quality and availability of data for model inputs; ii) improve the quality and availability of data for model evaluation; iii) improve the integration with crop models; iv) improve the processes for model evaluation; and v) develop a community of plant pest and disease modelers.
NASA Technical Reports Server (NTRS)
Mocko, David M.; Sud, Y. C.; Einaudi, Franco (Technical Monitor)
2000-01-01
Present-day climate models produce large climate drifts that interfere with the climate signals simulated in modelling studies. The simplifying assumptions in the physical parameterization of snow and ice processes lead to large biases in the annual cycles of surface temperature, evapotranspiration, and the water budget, which in turn cause erroneous land-atmosphere interactions. Since land processes are vital for climate prediction, and snow and snowmelt processes have been shown to affect Indian monsoons and North American rainfall and hydrology, special attention is now being given to cold land processes and their influence on the simulated annual cycle in GCMs. The snow model of the SSiB land-surface model in use at Goddard has evolved from a unified single snow-soil layer interacting with a deep soil layer through a force-restore procedure, to a two-layer snow model atop a ground layer separated by a snow-ground interface. When the snow cover is deep, force-restore occurs within the snow layers. However, several other simplifying assumptions, such as homogeneous snow cover, an empirical depth-related surface albedo, snowmelt and melt-freeze in the diurnal cycle, and neglect of the latent heat of soil freezing and thawing, still remain as nagging problems. Several important influences of these assumptions will be discussed with the goal of improving them to better simulate snowmelt and meltwater hydrology. Nevertheless, the current snow model (Mocko and Sud, 2000, submitted) simulates cold land processes better than the original SSiB. This was confirmed against observations of soil moisture, runoff, and snow cover in global GSWP (Sud and Mocko, 1999) and point-scale Valdai simulations over seasonal snow regions. New results from the current SSiB snow model from the 10-year PILPS 2e intercomparison in northern Scandinavia will be presented.
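The force-restore procedure mentioned above can be caricatured in a few lines. This is a generic textbook-style sketch of the idea (a forcing flux drives the surface temperature while a restore term relaxes it toward a deep-layer value on the diurnal time scale); the equation form, heat capacity, and all numbers are illustrative assumptions, not SSiB's actual formulation:

```python
import numpy as np

def force_restore_step(T_s, T_d, G, dt, c_s=8.0e4, tau=86400.0):
    """One explicit force-restore step: the surface heat flux G 'forces'
    the surface temperature T_s, while a restore term relaxes it toward
    the slowly varying deep temperature T_d on the diurnal time scale tau."""
    omega = 2.0 * np.pi / tau
    return T_s + dt * (G / c_s - omega * (T_s - T_d))

# Drive the surface layer with a sinusoidal diurnal heat flux for two days
dt, tau = 600.0, 86400.0          # 10-minute step, 1-day period
T_s, T_d = 270.0, 268.0           # surface and deep temperatures (K)
for k in range(2 * int(tau / dt)):
    G = 100.0 * np.sin(2.0 * np.pi * k * dt / tau)   # W m^-2, illustrative
    T_s = force_restore_step(T_s, T_d, G, dt)
print(T_s)
```

The surface temperature oscillates diurnally about the deep value instead of drifting away, which is the behavior the force-restore closure is meant to capture with a single prognostic layer.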
Principles of a multistack electrochemical wastewater treatment design
NASA Astrophysics Data System (ADS)
Elsahwi, Essam S.; Dawson, Francis P.; Ruda, Harry E.
2018-02-01
Electrolyzer stacks in a bipolar architecture (cells connected in series) are desirable since power can be transferred to a stack at high voltage and low current, reducing the losses in the power bus. The anode electrodes (active electrodes) considered as part of this study are single-sided, but there are manufacturing cost advantages to implementing double-sided anodes in the future. One of the main concerns with a bipolar stack implementation is the existence of leakage currents (bypass currents). The leakage current is associated with current paths that are not between adjacent anode and cathode pairs. This leads to non-uniform current density distributions, which compromise the electrochemical conversion efficiency of the stack and can also lead to unwanted side reactions. The objective of this paper is to develop modelling tools for a bipolar architecture consisting of two single-sided cells that use single-sided anodes. It is assumed that chemical reactions are single-electron-transfer rate limited and that diffusion and convection effects can be ignored. The design process consists of the following two steps: development of a large-signal model for the stack, and then extraction of a small-signal model from the large-signal model. The small-signal model facilitates the design of a controller that satisfies current or voltage regulation requirements. A model has been developed for a single cell and for two cells in series, but it can be generalized to more than two cells in series and, in the future, to double-sided anode configurations. The developed model is able to determine the leakage current and thus provide a quantitative assessment of the performance of the cell.
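The qualitative effect described above, a parasitic path carrying a growing share of the current as more cells are stacked in series, can be illustrated with a deliberately crude current-divider model. All resistances here are made-up numbers for illustration, not values derived from the paper's large-signal model:

```python
def leakage_fraction(r_cell, n_cells, r_leak):
    """Fraction of the total stack current flowing through the parasitic
    (bypass) path rather than through the series-connected cells."""
    g_series = 1.0 / (n_cells * r_cell)   # conductance of the cell path
    g_leak = 1.0 / r_leak                 # conductance of the bypass path
    return g_leak / (g_series + g_leak)

# Stacking more cells in series raises the series-path resistance (and
# stack voltage), so a fixed parasitic path carries a growing current share.
f_2 = leakage_fraction(r_cell=0.1, n_cells=2, r_leak=10.0)
f_20 = leakage_fraction(r_cell=0.1, n_cells=20, r_leak=10.0)
print(f_2, f_20)
```

The real problem is distributed (leakage between non-adjacent electrode pairs through shared electrolyte), which is why the paper develops a full large-signal model rather than a two-resistor divider; the sketch only conveys why bipolar stacks make leakage worse.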
The Effect of Improved Sub-Daily Earth Rotation Models on Global GPS Data Processing
NASA Astrophysics Data System (ADS)
Yoon, S.; Choi, K. K.
2017-12-01
Throughout the various International GNSS Service (IGS) products, strong periodic signals have been observed around the 14-day period. This signal is clearly visible in all IGS time series, such as those related to orbit ephemerides, Earth rotation parameters (ERP), and ground station coordinates. Recent studies show that errors in the sub-daily Earth rotation models are the main factors inducing such noise. Current IGS orbit processing standards adopt the IERS 2010 convention and its sub-daily Earth rotation model. Since the IERS convention was published, advances in VLBI analysis have contributed updates to the sub-daily Earth rotation models. We have compared several proposed sub-daily Earth rotation models and show the effect of using them on the orbit ephemerides, Earth rotation parameters, and ground station coordinates generated by the NGS global GPS data processing strategy.
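As an illustration of how such a roughly 14-day signal shows up in a coordinate time series, a simple FFT periodogram on synthetic daily data recovers the dominant period. The series below is entirely synthetic (sine plus white noise), not actual IGS products:

```python
import numpy as np

# Synthetic daily coordinate time series: a ~14-day periodic error
# superimposed on white noise, mimicking the signal seen in IGS series.
rng = np.random.default_rng(1)
days = np.arange(512)
series = 2.0 * np.sin(2.0 * np.pi * days / 14.0) + rng.normal(size=days.size)

# Periodogram via the FFT; pick the dominant (non-DC) period
spec = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(days.size, d=1.0)        # cycles per day
dominant_period = 1.0 / freqs[np.argmax(spec[1:]) + 1]
print(dominant_period)
```

With 512 daily samples the peak falls in the FFT bin nearest 14 days; finer frequency resolution (longer series or a least-squares spectrum) would be needed to separate closely spaced tidal aliases in real ERP series.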
VAMPnets for deep learning of molecular kinetics.
Mardt, Andreas; Pasquali, Luca; Wu, Hao; Noé, Frank
2018-01-02
There is an increasing demand for computing the relevant structures, equilibria, and long-timescale kinetics of biomolecular processes, such as protein-drug binding, from high-throughput molecular dynamics simulations. Current methods employ transformation of simulated coordinates into structural features, dimension reduction, clustering the dimension-reduced data, and estimation of a Markov state model or related model of the interconversion rates between molecular structures. This handcrafted approach demands a substantial amount of modeling expertise, as poor decisions at any step will lead to large modeling errors. Here we employ the variational approach for Markov processes (VAMP) to develop a deep learning framework for molecular kinetics using neural networks, dubbed VAMPnets. A VAMPnet encodes the entire mapping from molecular coordinates to Markov states, thus combining the whole data processing pipeline in a single end-to-end framework. Our method performs as well as or better than state-of-the-art Markov modeling methods and provides easily interpretable few-state kinetic models.
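The final step of the handcrafted pipeline that the abstract contrasts with VAMPnets, estimating a Markov state model from an already-discretized trajectory, can be sketched as a simple count-and-normalize procedure. This is a toy maximum-likelihood illustration, not the authors' code, and it omits refinements such as reversibility constraints:

```python
import numpy as np

def estimate_msm(dtraj, n_states, lag=1):
    """Maximum-likelihood Markov state model from a discrete trajectory:
    count transitions at the given lag time, then row-normalize the counts."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):
        counts[a, b] += 1.0
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0.0] = 1.0          # avoid 0/0 for unvisited states
    return counts / rows

# Toy discrete trajectory hopping between two metastable states
dtraj = np.array([0, 0, 0, 1, 1, 1, 0, 0, 1, 1])
T = estimate_msm(dtraj, n_states=2)
print(T)
```

A VAMPnet replaces everything upstream of this step (featurization, dimension reduction, clustering) with a learned soft assignment of coordinates to states, so errors in those hand-tuned stages cannot accumulate.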
The Electrical Structure of Discharges Modified by Electron Beams
NASA Astrophysics Data System (ADS)
Haas, F. A.; Braithwaite, N. St. J.
1997-10-01
Injection of an electron beam into a low pressure plasma modifies both the electrical structure and the distributions of charged particle energies. The electrical structure is investigated here in a one-dimensional model by representing the discharge as two collisionless sheaths with a monoenergetic electron beam, linked by a quasi-neutral collisional region. The latter is modelled by fluid equations in which the beam current decreases with position. Since the electrodes are connected by an external conductor, Kirchhoff's laws imply that the thermal electron current must correspondingly increase with position. Given the boundary conditions and the beam input at the first electrode, the rest of the system is uniquely described. The model reveals the dependence of the sheath potentials at the emitting and absorbing surfaces on the beam current. The model is relevant to externally injected beams and to electron beams originating from secondary processes on surfaces exposed to the plasma.
Examining depletion theories under conditions of within-task transfer.
Brewer, Gene A; Lau, Kevin K H; Wingert, Kimberly M; Ball, B Hunter; Blais, Chris
2017-07-01
In everyday life, mental fatigue can be detrimental across many domains, including driving, learning, and working. Given the importance of understanding and accounting for the deleterious effects of mental fatigue on behavior, a growing body of literature has studied the role of motivational and executive control processes in mental fatigue. In typical laboratory paradigms, participants complete a task that places demands on these self-control processes and are later given a subsequent task. Generally speaking, decrements in subsequent task performance are taken as evidence that the initial task created mental fatigue through the continued engagement of motivational and executive functions. Several models have been developed to account for negative transfer resulting from this "ego depletion." In the current study, we provide a brief literature review, specify current theoretical approaches to ego depletion, and report an empirical test of current models of depletion. Across 4 experiments we found minimal evidence for executive control depletion, along with strong evidence for motivation-mediated ego depletion.
Can we (control) Engineer the degree learning process?
NASA Astrophysics Data System (ADS)
White, A. S.; Censlive, M.; Neilsen, D.
2014-07-01
This paper investigates how control theory could be applied to learning processes in engineering education. The starting point for the analysis is White's double-loop learning model of human automation control, modified for the education process, in which a set of governing principles is chosen, probably by the course designer. After initial training, the student unknowingly settles on a mental map or model. After observing how the real world behaves, a strategy to achieve the governing variables is chosen, along with a set of actions. This may not be a conscious operation; it may be completely instinctive. These actions will cause some consequences, but only after a certain time delay. The current model is compared with the work of Hollenbeck on goal setting, Nelson's model of self-regulation, and that of Abdulwahed, Nagy and Blanchard at Loughborough, who investigated control methods applied to the learning process.
Anatomically constrained neural network models for the categorization of facial expression
NASA Astrophysics Data System (ADS)
McMenamin, Brenton W.; Assadi, Amir H.
2004-12-01
The ability to recognize facial expressions in humans is performed by the amygdala, which uses parallel processing streams to identify expressions quickly and accurately. Additionally, it is possible that a feedback mechanism plays a role in this process as well. Implementing a model with a similar parallel structure and feedback mechanisms could improve current facial recognition algorithms, for which varied expressions are a source of error. An anatomically constrained artificial neural-network model was created that uses this parallel processing architecture and feedback to categorize facial expressions. The presence of a feedback mechanism was not found to significantly improve performance for models with parallel architecture. However, the use of parallel processing streams significantly improved accuracy over a similar network without parallel architecture. Further investigation is necessary to determine the benefits of using parallel streams and feedback mechanisms in more advanced object recognition tasks.
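The parallel-stream idea can be sketched as a forward pass through two streams whose features are merged before a softmax over expression categories. This is an architectural illustration only, with arbitrary layer sizes and untrained random weights; it is not the authors' anatomically constrained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def stream(x, w1, w2):
    """One processing stream: input -> hidden (tanh) -> feature vector."""
    return np.tanh(np.tanh(x @ w1) @ w2)

# Two parallel streams (e.g. a coarse/fast route and a detailed/slow
# route) whose feature vectors are merged before classification.
x = rng.normal(size=(1, 16))                 # stand-in for face features
w_fast = [rng.normal(size=(16, 4)), rng.normal(size=(4, 8))]
w_slow = [rng.normal(size=(16, 32)), rng.normal(size=(32, 8))]
w_out = rng.normal(size=(16, 6))             # 6 expression categories

merged = np.concatenate([stream(x, *w_fast), stream(x, *w_slow)], axis=1)
logits = merged @ w_out
probs = np.exp(logits) / np.exp(logits).sum()    # softmax over categories
print(probs.shape)
```

A feedback mechanism of the kind the abstract mentions would feed the merged (or output) activations back to modulate an earlier layer on a second pass; in the study, that addition did not significantly help once the parallel streams were present.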
Anatomically constrained neural network models for the categorization of facial expression
NASA Astrophysics Data System (ADS)
McMenamin, Brenton W.; Assadi, Amir H.
2005-01-01
The ability to recognize facial expressions in humans is performed by the amygdala, which uses parallel processing streams to identify expressions quickly and accurately. Additionally, it is possible that a feedback mechanism plays a role in this process as well. Implementing a model with a similar parallel structure and feedback mechanisms could improve current facial recognition algorithms, for which varied expressions are a source of error. An anatomically constrained artificial neural-network model was created that uses this parallel processing architecture and feedback to categorize facial expressions. The presence of a feedback mechanism was not found to significantly improve performance for models with parallel architecture. However, the use of parallel processing streams significantly improved accuracy over a similar network without parallel architecture. Further investigation is necessary to determine the benefits of using parallel streams and feedback mechanisms in more advanced object recognition tasks.
Man's impact on the troposphere: Lectures in tropospheric chemistry
NASA Technical Reports Server (NTRS)
Levine, J. S. (Editor); Schryer, D. R. (Editor)
1978-01-01
Lectures covering a broad spectrum of current research in tropospheric chemistry with particular emphasis on the interaction of measurements, modeling, and understanding of fundamental processes are presented.
Variational estimation of process parameters in a simplified atmospheric general circulation model
NASA Astrophysics Data System (ADS)
Lv, Guokun; Koehl, Armin; Stammer, Detlef
2016-04-01
Parameterizations are used to simulate the effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually adjusted manually to reduce the difference between the model mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated by automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about one day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical-twin experiments, we found that the feasible assimilation window could be extended to over one year and accurate parameters could be retrieved. Although the nudging terms translate into a damping of the adjoint variables and therefore tend to erase the information from the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.
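The identical-twin procedure described above can be caricatured with a one-variable model: generate "observations" from a truth run, add a nudging term during assimilation, and descend the misfit cost over the parameter. Everything here is a toy assumption (a scalar decay model, finite differences standing in for the adjoint gradient, illustrative step sizes), not PlaSim:

```python
import numpy as np

def run_model(p, x0, obs, dt=0.1, gamma=0.5):
    """Integrate the toy model dx/dt = -p*x; a nudging term of strength
    gamma relaxes the state toward the observations, playing the role of
    the stabilizing terms added to the model's equations."""
    x, traj = x0, []
    for y in obs:
        x = x + dt * (-p * x + gamma * (y - x))
        traj.append(x)
    return np.array(traj)

def cost(p, x0, obs):
    """Quadratic model-data misfit over the assimilation window."""
    return 0.5 * np.sum((run_model(p, x0, obs) - obs) ** 2)

# Identical-twin experiment: "observations" come from a truth run with
# the nudging switched off; the parameter is then recovered by descent.
p_true, x0 = 1.5, 1.0
obs = run_model(p_true, x0, np.zeros(200), gamma=0.0)

p = 0.5                          # deliberately poor first guess
for _ in range(200):
    eps = 1e-6                   # finite differences stand in for the adjoint
    grad = (cost(p + eps, x0, obs) - cost(p - eps, x0, obs)) / (2.0 * eps)
    p -= 0.5 * grad
print(p)
```

The nudging both stabilizes the forward run and damps the gradient information, which is the trade-off the abstract discusses; in this toy the damped gradient is still sufficient to recover the parameter approximately.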
Probing leptophilic dark sectors with hadronic processes
NASA Astrophysics Data System (ADS)
D'Eramo, Francesco; Kavanagh, Bradley J.; Panci, Paolo
2017-08-01
We study vector portal dark matter models where the mediator couples only to leptons. In spite of the lack of tree-level couplings to colored states, radiative effects generate interactions with quark fields that could give rise to a signal in current and future experiments. We identify such experimental signatures: scattering of nuclei in dark matter direct detection; resonant production of lepton-antilepton pairs at the Large Hadron Collider; and hadronic final states in dark matter indirect searches. Furthermore, radiative effects also generate an irreducible mass mixing between the vector mediator and the Z boson, severely bounded by ElectroWeak Precision Tests. We use current experimental results to put bounds on this class of models, accounting for both radiatively induced and tree-level processes. Remarkably, the former often overwhelm the latter.
Probing leptophilic dark sectors with hadronic processes
D'Eramo, Francesco; Kavanagh, Bradley J.; Panci, Paolo
2017-05-29
We study vector portal dark matter models where the mediator couples only to leptons. In spite of the lack of tree-level couplings to colored states, radiative effects generate interactions with quark fields that could give rise to a signal in current and future experiments. We identify such experimental signatures: scattering of nuclei in dark matter direct detection; resonant production of lepton–antilepton pairs at the Large Hadron Collider; and hadronic final states in dark matter indirect searches. Furthermore, radiative effects also generate an irreducible mass mixing between the vector mediator and the Z boson, severely bounded by ElectroWeak Precision Tests. We use current experimental results to put bounds on this class of models, accounting for both radiatively induced and tree-level processes. Remarkably, the former often overwhelm the latter.
Spin Current Noise of the Spin Seebeck Effect and Spin Pumping
NASA Astrophysics Data System (ADS)
Matsuo, M.; Ohnuma, Y.; Kato, T.; Maekawa, S.
2018-01-01
We theoretically investigate the fluctuation of a pure spin current induced by the spin Seebeck effect and spin pumping in a normal-metal (NM)/ferromagnet (FM) bilayer system. Starting with a simple ferromagnetic-insulator (FI)/NM interface model with both spin-conserving and non-spin-conserving processes, we derive general expressions for the spin current and the spin-current noise at the interface within second-order perturbation theory in the FI-NM coupling strength, and estimate them for a yttrium-iron-garnet/platinum interface. We show that the spin-current noise can be used to determine the effective spin carried by a magnon as modified by the non-spin-conserving processes at the interface. In addition, we show that it provides information on the effective spin of a magnon, the heating at the interface under spin pumping, and the spin Hall angle of the NM.
Li, Shasha; Nie, Hongchao; Lu, Xudong; Duan, Huilong
2015-02-01
Integration of heterogeneous systems is the key to hospital information construction, owing to the complexity of the healthcare environment. Currently, during the process of healthcare information system integration, the people participating in an integration project usually communicate via free-format documents, which impairs the efficiency and adaptability of integration. This paper proposes a method that utilizes Business Process Model and Notation (BPMN) to model integration requirements and automatically transform them into executable integration configurations. Based on the method, a tool was developed to model integration requirements and transform them into integration configurations. In addition, an integration case in a radiology scenario was used to verify the method.
NASA Astrophysics Data System (ADS)
Placko, Dominique; Bore, Thierry; Rivollet, Alain; Joubert, Pierre-Yves
2015-10-01
This paper deals with the problem of imaging defects in metallic structures through eddy current (EC) inspections, and proposes an original process for a possible tomographic crack evaluation. The process is based on a semi-analytical model, called the "distributed point source method" (DPSM), which is used to describe and equate the interactions between the implemented EC probes and the structure under test. Several steps are described in turn, illustrating the feasibility of this new imaging process dedicated to the quantitative evaluation of defects. The imaging process first creates a 3D grid by meshing the volume potentially inspected by the sensor, yielding a number of elemental volumes (voxels). Second, DPSM modeling is used to compute an image for each configuration in which a single voxel has a conductivity different from all the others. The assumption is that a real defect can be represented by a superposition of elemental voxels; the resulting accuracy naturally depends on the density of the spatial sampling. On the other hand, the excitation device of the EC imager can be oriented in several directions and driven by an excitation current of variable frequency, so the simulation is performed for several frequencies and directions of the eddy currents induced in the structure, which increases the signal entropy. All these results are merged into a so-called "observation matrix" containing all the probe/structure interaction configurations. This matrix is then used in an inversion scheme to evaluate the defect location and geometry. The modeled EC data provided by the DPSM are compared with experimental images provided by an eddy current imager (ECI) applied to aluminum plates containing buried defects.
To validate the proposed inversion process, we feed it with computed images from various acquisition configurations. Additive noise was applied to the images to make them more representative of actual EC data. In the case of simple notch-type defects, for which the relative conductivity may take only two extreme values (1 or 0), a threshold was introduced on the inverted images in a post-processing step, taking advantage of a priori knowledge of the statistical properties of the restored images. This threshold enhanced the image contrast and helped eliminate both the residual noise and pixels with non-physical values.
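The observation-matrix inversion followed by a post-processing threshold can be sketched as a regularized linear least-squares problem over voxel conductivity contrasts. This is a toy with a random forward matrix and made-up noise level, not the DPSM operator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "observation matrix": column j holds the modeled sensor response
# to a lone voxel j whose conductivity differs from the background.
n_meas, n_vox = 60, 20
A = rng.normal(size=(n_meas, n_vox))

# True defect: a superposition of a few elemental voxels (values 0 or 1)
x_true = np.zeros(n_vox)
x_true[[3, 4, 5]] = 1.0

# Synthetic measurements with additive noise
y = A @ x_true + 0.05 * rng.normal(size=n_meas)

# Tikhonov-regularized least-squares inversion, then a threshold that
# snaps each voxel back to the two admissible values 0 or 1
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_vox), A.T @ y)
x_bin = (x_hat > 0.5).astype(float)
print(x_bin)
```

With more measurements than voxels and modest noise, the thresholded solution recovers the defect support exactly, which mirrors the contrast-enhancement role of the threshold in the paper's post-processing step.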
Han, Y J; Li, L H; Grier, A; Chen, L; Valavanis, A; Zhu, J; Freeman, J R; Isac, N; Colombelli, R; Dean, P; Davies, A G; Linfield, E H
2016-12-12
We report an extraction-controlled terahertz (THz)-frequency quantum cascade laser design in which a diagonal LO-phonon scattering process is used to achieve efficient current injection into the upper laser level of each period and simultaneously extract electrons from the adjacent period. The effects of the diagonality of the radiative transition are investigated, and a design with a scaled oscillator strength of 0.45 is shown experimentally to provide the highest temperature performance. A 3.3 THz device processed into a double-metal waveguide configuration operated up to 123 K in pulsed mode, with a threshold current density of 1.3 kA/cm² at 10 K. The QCL structures are modeled using an extended density matrix approach, and the large threshold current is attributed to parasitic current paths associated with the upper laser levels. The simplicity of this design makes it an ideal platform to investigate the scattering injection process.
An Evaluation System for the Online Training Programs in Meteorology and Hydrology
ERIC Educational Resources Information Center
Wang, Yong; Zhi, Xiefei
2009-01-01
This paper studies the current evaluation system for online training programs in meteorology and hydrology. The CIPP model, which comprises context evaluation, input evaluation, process evaluation and product evaluation, differs from the Kirkpatrick model, which comprises reactions evaluation, learning evaluation, transfer evaluation and results evaluation, in…
Designing a Pedagogical Model for Web Engineering Education: An Evolutionary Perspective
ERIC Educational Resources Information Center
Hadjerrouit, Said
2005-01-01
In contrast to software engineering, which relies on relatively well established development approaches, there is a lack of a proven methodology that guides Web engineers in building reliable and effective Web-based systems. Currently, Web engineering lacks process models, architectures, suitable techniques and methods, quality assurance, and a…