Event-Driven Process Chains (EPC)
NASA Astrophysics Data System (ADS)
Mendling, Jan
This chapter provides a comprehensive overview of Event-driven Process Chains (EPCs) and introduces a novel definition of EPC semantics. EPCs became popular in the 1990s as a conceptual business process modeling language in the context of reference modeling. Reference modeling refers to the documentation of generic business operations in a model, for example service processes in the telecommunications sector. It is claimed that reference models can be reused and adapted as best-practice recommendations in individual companies (see [230, 168, 229, 131, 400, 401, 446, 127, 362, 126]). The roots of reference modeling can be traced back to the Kölner Integrationsmodell (KIM) [146, 147], which was developed in the 1960s and 1970s. In the 1990s, the Institute of Information Systems (IWi) in Saarbrücken worked on a project with SAP to define a suitable business process modeling language to document the processes of the SAP R/3 enterprise resource planning system. There were two results from this joint effort: the definition of EPCs [210] and the documentation of the SAP system in the SAP Reference Model (see [92, 211]). The extensive database of this reference model contains almost 10,000 sub-models, 604 of them non-trivial EPC business process models. The SAP Reference Model had a huge impact, with several researchers referring to it in their publications (see [473, 235, 127, 362, 281, 427, 415]), and it motivated the creation of EPC reference models in further domains including computer integrated manufacturing [377, 379], logistics [229], and retail [52]. The widespread application of EPCs in business process modeling theory and practice is supported by their coverage in seminal textbooks for business process management and information systems in general (see [378, 380, 49, 384, 167, 240]). EPCs are frequently used in practice due to high user acceptance [376] and extensive tool support. Some examples of tools that support EPCs are ARIS Toolset by IDS Scheer AG, AENEIS by ATOSS Software AG, ADONIS by BOC GmbH, Visio by Microsoft Corp., Nautilus by Gedilan Consulting GmbH, and Bonapart by Pikos GmbH. To facilitate the interchange of EPC business process models between these tools, there is a tool-neutral interchange format called EPC Markup Language (EPML) [283, 285, 286, 287, 289, 290, 291].
Digital data registration and differencing compression system
NASA Technical Reports Server (NTRS)
Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)
1990-01-01
A process is disclosed for x-ray registration and differencing which results in more efficient compression. Differencing of a registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x-ray digital images.
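As a rough illustration of the differencing idea described in this patent abstract, the sketch below assumes the subject and reference images are already registered, subtracts the reference from the subject, compresses only the residual, and reconstructs the subject by adding the reference back after decompression. The image size, the zlib codec, and the synthetic data are assumptions for the sake of a runnable example, not the patented method.

```python
import zlib
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for a modeled, registered reference image and a subject image; in the patent these
# would come from the 3-D-model-based registration step.
reference = rng.integers(0, 256, (512, 512)).astype(np.int16)
subject = reference.copy()
subject[200:240, 200:240] += 20                      # hypothetical localized difference

# Difference the registered images and compress only the residual.
difference = subject - reference                     # mostly zeros, so it compresses well
compressed = zlib.compress(difference.astype(np.int16).tobytes())
print("effective compression ratio: %.1f:1" % (subject.nbytes / len(compressed)))

# Reconstruction at the receiver: decompress the residual and add back the shared reference.
restored = np.frombuffer(zlib.decompress(compressed), dtype=np.int16).reshape(reference.shape)
assert np.array_equal(reference + restored, subject)
```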
Digital Data Registration and Differencing Compression System
NASA Technical Reports Server (NTRS)
Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)
1996-01-01
A process for X-ray registration and differencing results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.
Digital data registration and differencing compression system
NASA Technical Reports Server (NTRS)
Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)
1992-01-01
A process for x-ray registration and differencing that results in more efficient compression is discussed. Differencing of a registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x-ray digital images.
Guide to solar reference spectra and irradiance models
NASA Astrophysics Data System (ADS)
Tobiska, W. Kent
The international standard for determining solar irradiances was published by the International Standards Organization (ISO) in May 2007. The document, ISO 21348 Space Environment (natural and artificial) - Process for determining solar irradiances, describes the process for representing solar irradiances. We report on the next progression of standards work, i.e., the development of a guide that identifies solar reference spectra and irradiance models for use in engineering design or scientific research. This document will be produced as an AIAA Guideline and ISO Technical Report. It will describe the content of the reference spectra and models, their uncertainties and limitations, their technical basis, the databases from which the reference spectra and models are formed, publication references, and sources of computer code for reference spectra and solar irradiance models, including those which provide spectrally resolved lines as well as solar indices and proxies and which are generally recognized in the solar sciences. The document is intended to assist aircraft and space vehicle designers and developers, heliophysicists, geophysicists, aeronomers, meteorologists, and climatologists in understanding available models, comparing sources of data, and interpreting engineering and scientific results based on different solar reference spectra and irradiance models.
The Reference Encounter Model.
ERIC Educational Resources Information Center
White, Marilyn Domas
1983-01-01
Develops model of the reference interview which explicitly incorporates human information processing, particularly schema ideas presented by Marvin Minsky and other theorists in cognitive processing and artificial intelligence. Questions are raised concerning use of content analysis of transcribed verbal protocols as methodology for studying…
LinkEHR-Ed: a multi-reference model archetype editor based on formal semantics.
Maldonado, José A; Moner, David; Boscá, Diego; Fernández-Breis, Jesualdo T; Angulo, Carlos; Robles, Montserrat
2009-08-01
To develop a powerful archetype editing framework capable of handling multiple reference models and oriented towards the semantic description and standardization of legacy data. The main prerequisite for implementing tools providing enhanced support for archetypes is the clear specification of archetype semantics. We propose a formalization of the definition section of archetypes based on types over tree-structured data. It covers the specialization of archetypes, the relationship between reference models and archetypes, and the conformance of data instances to archetypes. LinkEHR-Ed, a visual archetype editor with advanced processing capabilities based on this formalization, is developed; it supports multiple reference models, the editing and semantic validation of archetypes, the specification of mappings to data sources, and the automatic generation of data transformation scripts. LinkEHR-Ed is a useful tool for building, processing and validating archetypes based on any reference model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
E. Gaffiney
2004-11-23
This report presents and documents the model components and analyses that represent potential processes associated with propagation of a magma-filled crack (dike) migrating upward toward the surface, intersection of the dike with repository drifts, flow of magma in the drifts, and post-magma emplacement effects on repository performance. The processes that describe upward migration of a dike and magma flow down the drift are referred to as the dike intrusion submodel. The post-magma emplacement processes are referred to as the post-intrusion submodel. Collectively, these submodels are referred to as a conceptual model for dike/drift interaction. The model components and analyses of the dike/drift interaction conceptual model provide the technical basis for assessing the potential impacts of an igneous intrusion on repository performance, including those features, events, and processes (FEPs) related to dike/drift interaction (Section 6.1).
A standard satellite control reference model
NASA Technical Reports Server (NTRS)
Golden, Constance
1994-01-01
This paper describes a Satellite Control Reference Model that provides the basis for an approach to identify where standards would be beneficial in supporting space operations functions. The background and context for the development of the model and the approach are described. A process for using this reference model to trace top level interoperability directives to specific sets of engineering interface standards that must be implemented to meet these directives is discussed. Issues in developing a 'universal' reference model are also identified.
Graphical Technique to Support the Teaching/Learning Process of Software Process Reference Models
NASA Astrophysics Data System (ADS)
Espinosa-Curiel, Ismael Edrein; Rodríguez-Jacobo, Josefina; Fernández-Zepeda, José Alberto
In this paper, we propose a set of diagrams to visualize software process reference models (PRM). The diagrams, called dimods, are the combination of some visual and process modeling techniques such as rich pictures, mind maps, IDEF and RAD diagrams. We show the use of this technique by designing a set of dimods for the Mexican Software Industry Process Model (MoProSoft). Additionally, we perform an evaluation of the usefulness of dimods. The result of the evaluation shows that dimods may be a support tool that facilitates the understanding, memorization, and learning of software PRMs in both software development organizations and universities. The results also show that dimods may have advantages over the traditional description methods for these types of models.
Probabilistic modeling of discourse-aware sentence processing.
Dubey, Amit; Keller, Frank; Sturt, Patrick
2013-07-01
Probabilistic models of sentence comprehension are increasingly relevant to questions concerning human language processing. However, such models are often limited to syntactic factors. This restriction is unrealistic in light of experimental results suggesting interactions between syntax and other forms of linguistic information in human sentence processing. To address this limitation, this article introduces two sentence processing models that augment a syntactic component with information about discourse co-reference. The novel combination of probabilistic syntactic components with co-reference classifiers permits them to more closely mimic human behavior than existing models. The first model uses a deep model of linguistics, based in part on probabilistic logic, allowing it to make qualitative predictions on experimental data; the second model uses shallow processing to make quantitative predictions on a broad-coverage reading-time corpus. Copyright © 2013 Cognitive Science Society, Inc.
Improvement of radiology services based on the process management approach.
Amaral, Creusa Sayuri Tahara; Rozenfeld, Henrique; Costa, Janaina Mascarenhas Hornos; Magon, Maria de Fátima de Andrade; Mascarenhas, Yvone Maria
2011-06-01
The health sector requires continuous investments to ensure the improvement of products and services from a technological standpoint, the use of new materials, equipment and tools, and the application of process management methods. Methods associated with the process management approach, such as the development of reference models of business processes, can provide significant innovations in the health sector and respond to the current market trend for modern management in this sector (Gunderman et al. (2008) [4]). This article proposes a process model for diagnostic medical X-ray imaging, from which it derives a primary reference model and describes how this information leads to gains in quality and improvements. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Burggraeve, A; Van den Kerkhof, T; Hellings, M; Remon, J P; Vervaet, C; De Beer, T
2011-04-18
Fluid bed granulation is a batch process, which is characterized by the processing of raw materials for a predefined period of time, consisting of a fixed spraying phase and a subsequent drying period. The present study shows the multivariate statistical modeling and control of a fluid bed granulation process based on in-line particle size distribution (PSD) measurements (using spatial filter velocimetry) combined with continuous product temperature registration, using a partial least squares (PLS) approach. Via the continuous in-line monitoring of the PSD and product temperature during granulation of various reference batches, a statistical batch model was developed allowing the real-time evaluation and acceptance or rejection of future batches. Continuously monitored PSD and product temperature process data of 10 reference batches (X-data) were used to develop a reference batch PLS model, regressing the X-data versus the batch process time (Y-data). Two PLS components captured 98.8% of the variation in the X-data block. Score control charts, in which the average batch trajectory and upper and lower control limits are displayed, were developed. Next, these control charts were used to monitor 4 new test batches in real time and to immediately detect any deviations from the expected batch trajectory. By real-time evaluation of new batches using the developed control charts and by computation of contribution plots of deviating process behavior at a certain time point, batch losses or reprocessing can be prevented. Immediately after batch completion, all PSD and product temperature information (i.e., a batch progress fingerprint) was used to estimate some granule properties (density and flowability) at an early stage, which can improve batch release time. Individual PLS models relating the computed scores (X) of the reference PLS model (based on the 10 reference batches) and the density and flowability, respectively, as Y-matrix were developed. The scores of the 4 test batches were used to examine the predictive ability of the model. Copyright © 2011 Elsevier B.V. All rights reserved.
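A minimal sketch of the batch-monitoring idea described above, on synthetic stand-in data: an observation-level PLS model regresses process measurements (PSD fractions plus temperature) against batch time over reference batches, the first-component score trajectories define a control chart, and a new batch is flagged wherever its scores leave the limits. The variable counts, the 3-sigma limits, and the simplified (non-unfolded) PLS formulation are illustrative assumptions, not the study's exact procedure.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_batches, n_times, n_vars = 10, 100, 8        # 10 reference batches; PSD fractions + temperature
t = np.arange(n_times, dtype=float)

# Synthetic reference-batch data: smooth time trends plus noise (stand-in for real PSD/temperature logs).
X_ref = np.stack([np.outer(t, rng.normal(1.0, 0.05, n_vars)) + rng.normal(0, 1.0, (n_times, n_vars))
                  for _ in range(n_batches)])

# Observation-level PLS model: regress the process measurements (X) against batch time (Y).
pls = PLSRegression(n_components=2).fit(X_ref.reshape(-1, n_vars), np.tile(t, n_batches))

# Score trajectories of the reference batches define the control chart (average +/- 3 sigma).
scores_ref = np.stack([pls.transform(X_ref[b])[:, 0] for b in range(n_batches)])
center, spread = scores_ref.mean(axis=0), scores_ref.std(axis=0)
upper, lower = center + 3 * spread, center - 3 * spread

# Monitor a new batch "in real time": flag every time point whose score leaves the limits.
X_new = np.outer(t, rng.normal(1.0, 0.05, n_vars)) + rng.normal(0, 1.0, (n_times, n_vars))
X_new[60:, :2] *= 1.5                          # inject a deviation in two variables late in the batch
s_new = pls.transform(X_new)[:, 0]
print("deviating time points:", np.where((s_new > upper) | (s_new < lower))[0])
```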
Black, Stephanie Winkeljohn; Pössel, Patrick
2013-08-01
Adolescents who develop depression have worse interpersonal and affective experiences and are more likely to develop substance problems and/or suicidal ideation compared to adolescents who do not develop depression. This study examined the combined effects of negative self-referent information processing and rumination (i.e., brooding and reflection) on adolescent depressive symptoms. It was hypothesized that the interaction of negative self-referent information processing and brooding would significantly predict depressive symptoms, while the interaction of negative self-referent information processing and reflection would not predict depressive symptoms. Adolescents (n = 92; 13-15 years; 34.7% female) participated in a 6-month longitudinal study. Self-report instruments measured depressive symptoms and rumination; a cognitive task measured information processing. Path modelling in Amos 19.0 analyzed the data. The interaction of negative information processing and brooding significantly predicted an increase in depressive symptoms 6 months later. The interaction of negative information processing and reflection did not significantly predict depression; however, the model did not meet a priori standards to accept the null hypothesis. Results suggest clinicians working with adolescents at risk for depression should consider focusing on the reduction of brooding and negative information processing to reduce long-term depressive symptoms.
McMurray, Bob; Horst, Jessica S.; Samuelson, Larissa K.
2013-01-01
Classic approaches to word learning emphasize the problem of referential ambiguity: in any naming situation the referent of a novel word must be selected from many possible objects, properties, actions, etc. To solve this problem, researchers have posited numerous constraints, and inference strategies, but assume that determining the referent of a novel word is isomorphic to learning. We present an alternative model in which referent selection is an online process that is independent of long-term learning. This two timescale approach creates significant power in the developing system. We illustrate this with a dynamic associative model in which referent selection is simulated as dynamic competition between competing referents, and learning is simulated using associative (Hebbian) learning. This model can account for a range of findings including the delay in expressive vocabulary relative to receptive vocabulary, learning under high degrees of referential ambiguity using cross-situational statistics, accelerating (vocabulary explosion) and decelerating (power-law) learning rates, fast-mapping by mutual exclusivity (and differences in bilinguals), improvements in familiar word recognition with development, and correlations between individual differences in speed of processing and learning. Five theoretical points are illustrated. 1) Word learning does not require specialized processes – general association learning buttressed by dynamic competition can account for much of the literature. 2) The processes of recognizing familiar words are not different than those that support novel words (e.g., fast-mapping). 3) Online competition may allow the network (or child) to leverage information available in the task to augment performance or behavior despite what might be relatively slow learning or poor representations. 4) Even associative learning is more complex than previously thought – a major contributor to performance is the pruning of incorrect associations between words and referents. 5) Finally, the model illustrates that learning and referent selection/word recognition, though logically distinct, can be deeply and subtly related as phenomena like speed of processing and mutual exclusivity may derive in part from the way learning shapes the system. As a whole, this suggests more sophisticated ways of describing the interaction between situation- and developmental-time processes and points to the need for considering such interactions as a primary determinant of development and processing in children. PMID:23088341
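The core mechanism described here, slow Hebbian association combined with fast online competition among the referents present in a scene, can be sketched in a few lines. The toy below is an assumption-laden illustration rather than the authors' model: the weights, decay, winner-take-all dynamics, and trial structure are invented for readability.

```python
import numpy as np

rng = np.random.default_rng(2)
n_words = n_objects = 10
W = np.full((n_words, n_objects), 0.1)              # slow associative (Hebbian) weights

def select_referent(word, present, steps=30, inhibition=0.05, noise=0.01):
    """Fast online competition among the objects present in the scene (not itself learned)."""
    act = (W[word] + noise * rng.random(n_objects)) * present
    for _ in range(steps):
        act = np.clip(act + 0.1 * W[word] * present - inhibition * (act.sum() - act), 0.0, None)
    return int(np.argmax(act))

true_referent = np.arange(n_words)                  # word i names object i (used only for scoring)
for trial in range(2000):
    word = rng.integers(n_words)
    present = np.zeros(n_objects)
    present[true_referent[word]] = 1.0              # the named object is in the scene...
    present[rng.choice(n_objects, size=2, replace=False)] = 1.0   # ...along with two distractors
    chosen = select_referent(word, present)
    W[word, chosen] += 0.02                         # slow Hebbian strengthening of the selected pair
    W[word] *= 0.999                                # mild decay: gradual pruning of spurious links

print("correct word-object mappings:", int((W.argmax(axis=1) == true_referent).sum()), "of", n_words)
```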
Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian
2018-02-01
This paper proposes a combined Virtual Reference Feedback Tuning-Q-learning model-free control approach, which tunes nonlinear static state feedback controllers to achieve output model reference tracking in an optimal control framework. The novel iterative Batch Fitted Q-learning strategy uses two neural networks to represent the value function (critic) and the controller (actor), and it is referred to as a mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach. Learning convergence of the Q-learning schemes generally depends, among other settings, on the efficient exploration of the state-action space. Handcrafting test signals for efficient exploration is difficult even for input-output stable unknown processes. Virtual Reference Feedback Tuning can ensure an initial stabilizing controller to be learned from few input-output data, and it can next be used to collect substantially more input-state data in a controlled mode, in a constrained environment, by compensating the process dynamics. This data is used to learn significantly superior nonlinear state feedback neural network controllers for model reference tracking, using the proposed Batch Fitted Q-learning iterative tuning strategy, motivating the original combination of the two techniques. The mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach is experimentally validated for water level control of a multi-input multi-output nonlinear constrained coupled two-tank system. Discussions on the observed control behavior are offered. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
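A hedged sketch of the Batch Fitted Q-learning half of the approach, reduced to a toy single-tank level-control problem (an assumption; the paper uses a constrained two-tank system, and an initial VRFT-learned controller rather than random exploration would collect the data). It shows the batch fitted Q-iteration loop with a neural-network critic and a greedy policy read-out.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

def step(level, u, dt=1.0, k=0.05, c=0.02):
    """Toy single-tank level dynamics (a stand-in for the constrained two-tank process)."""
    return float(np.clip(level + dt * (k * u - c * np.sqrt(max(level, 0.0))), 0.0, 10.0))

actions = np.linspace(0.0, 1.0, 5)          # discretized pump command
ref = 5.0                                   # model-reference level to be tracked

# Data collection; in the paper an initial VRFT controller gathers this data,
# here plain random exploration with occasional resets stands in for it.
S, A, R, S2, level = [], [], [], [], 1.0
for _ in range(2000):
    u = rng.choice(actions)
    nxt = step(level, u)
    S.append([level, ref]); A.append([u]); R.append(-(nxt - ref) ** 2); S2.append([nxt, ref])
    level = nxt if rng.random() > 0.02 else rng.uniform(0.0, 10.0)
S, A, R, S2 = map(np.array, (S, A, R, S2))

# Batch fitted Q-iteration with a neural-network critic.
gamma, Q = 0.95, None
for _ in range(15):
    if Q is None:
        targets = R
    else:
        q_next = np.column_stack([Q.predict(np.column_stack([S2, np.full((len(S2), 1), a)]))
                                  for a in actions])
        targets = R + gamma * q_next.max(axis=1)
    Q = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500).fit(np.column_stack([S, A]), targets)

# Greedy policy extracted from the critic.
greedy = lambda lvl: actions[int(np.argmax([Q.predict([[lvl, ref, a]])[0] for a in actions]))]
print("greedy action at level 2.0:", greedy(2.0), " at level 8.0:", greedy(8.0))
```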
Chapter 4 - The LANDFIRE Prototype Project reference database
John F. Caratti
2006-01-01
This chapter describes the data compilation process for the Landscape Fire and Resource Management Planning Tools Prototype Project (LANDFIRE Prototype Project) reference database (LFRDB) and explains the reference data applications for LANDFIRE Prototype maps and models. The reference database formed the foundation for all LANDFIRE tasks. All products generated by the...
2017-06-01
This research expands the modeling and simulation (M and S) body of knowledge through the development of an Implicit Model Development Process (IMDP...When augmented to traditional Model Development Processes (MDP), the IMDP enables the development of models that can address a broader array of...where a broader, more holistic approach of defining a model's referent is achieved. Next, the IMDP codifies the process for implementing the improved model
ERIC Educational Resources Information Center
Parker, Philip D.; Marsh, Herbert W.; Ludtke, Oliver; Trautwein, Ulrich
2013-01-01
The internal/external frame of reference and the big-fish-little-pond effect are two major models of academic self-concept formation which have considerable theoretical and empirical support. Integrating the domain specific and compensatory processes of the internal/external frame of reference model with the big-fish-little-pond effect suggests a…
ERIC Educational Resources Information Center
Brocher, Andreas; Chiriacescu, Sofiana Iulia; von Heusinger, Klaus
2018-01-01
In discourse processing, speakers collaborate toward a shared mental model by establishing and recruiting prominence relations between different discourse referents. In this article we investigate to what extent the possibility to infer a referent's existence from preceding context (as indicated by the referent's information status as inferred or…
A Practical Approach to Governance and Optimization of Structured Data Elements.
Collins, Sarah A; Gesner, Emily; Morgan, Steven; Mar, Perry; Maviglia, Saverio; Colburn, Doreen; Tierney, Diana; Rocha, Roberto
2015-01-01
Definition and configuration of clinical content in an enterprise-wide electronic health record (EHR) implementation is highly complex. Sharing of data definitions across applications within an EHR implementation project may be constrained by practical limitations, including time, tools, and expertise. However, maintaining rigor in an approach to data governance is important for sustainability and consistency. With this understanding, we have defined a practical approach for governance of structured data elements to optimize data definitions given limited resources. This approach includes a 10-step process: 1) identification of clinical topics, 2) creation of draft reference models for clinical topics, 3) scoring of downstream data needs for clinical topics, 4) prioritization of clinical topics, 5) validation of reference models for clinical topics, 6) calculation of gap analyses of the EHR compared against the reference model, 7) communication of validated reference models across project members, 8) requested revisions to the EHR based on gap analysis, 9) evaluation of usage of reference models across the project, and 10) monitoring for new evidence requiring revisions to the reference model.
Requirements Management for Net-Centric Enterprises. Phase 1
2011-04-28
These include Business Process Modeling Notation (BPMN) (White and Miers 2008) and Business Process Execution Language (BPEL) (Sarang, Juric et al...UML: Modeling, Analysis, Design, Morgan Kaufmann/The OMG Press. White, S. A. and D. Miers (2008). BPMN Modeling and Reference Guide, Future
10 CFR 434.517 - HVAC systems and equipment.
Code of Federal Regulations, 2010 CFR
2010-01-01
... simulation, except that excess capacity provided to meet process loads need not be modeled unless the process... Reference Buildings. The zones in the simulation shall correspond to the zones provided by the controls in... simulation. Table 517.4.1—HVAC System Description for Prototype and Reference Buildings 1,2 HVAC component...
Validating EHR documents: automatic schematron generation using archetypes.
Pfeiffer, Klaus; Duftschmid, Georg; Rinner, Christoph
2014-01-01
The goal of this study was to examine whether Schematron schemas can be generated from archetypes. The openEHR Java reference API was used to transform an archetype into an object model, which was then extended with context elements. The model was processed and the constraints were transformed into corresponding Schematron assertions. A prototype of the generator for the reference model HL7 v3 CDA R2 was developed and successfully tested. Preconditions for its reusability with other reference models were set. Our results indicate that an automated generation of Schematron schemas is possible with some limitations.
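A minimal sketch of the generation step, assuming the archetype constraints have already been flattened into (path, operator, value) triples by some archetype API: each triple becomes a Schematron assert built with the Python standard library's ElementTree. The paths, namespaces, operators, and constraint values shown are hypothetical, not taken from the study.

```python
import xml.etree.ElementTree as ET

# Hypothetical flattened constraints, assumed to have been extracted from an archetype /
# object model beforehand: (XPath, operator, value).
constraints = [
    ("hl7:observation/hl7:value/@value", "max", "220"),
    ("hl7:observation/hl7:code/@code", "equals", "8480-6"),
]

SCH = "http://purl.oclc.org/dsdl/schematron"
schema = ET.Element(f"{{{SCH}}}schema")
ET.SubElement(schema, f"{{{SCH}}}ns", prefix="hl7", uri="urn:hl7-org:v3")
rule = ET.SubElement(ET.SubElement(schema, f"{{{SCH}}}pattern", id="archetype-constraints"),
                     f"{{{SCH}}}rule", context="hl7:ClinicalDocument")

for path, op, value in constraints:
    test = f"number({path}) <= {value}" if op == "max" else f"{path} = '{value}'"
    assertion = ET.SubElement(rule, f"{{{SCH}}}assert", test=test)
    assertion.text = f"Constraint violated: {path} {op} {value}"

print(ET.tostring(schema, encoding="unicode"))   # generated Schematron, ready for an XSLT-based validator
```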
ERIC Educational Resources Information Center
Medwetsky, Larry
2011-01-01
Purpose: This article outlines the author's conceptualization of the key mechanisms that are engaged in the processing of spoken language, referred to as the spoken language processing model. The act of processing what is heard is very complex and involves the successful intertwining of auditory, cognitive, and language mechanisms. Spoken language…
Making It Work for Everyone: An Evolving Reference Service.
Feldman, Jonquil D; Lopez, Emme; Gaspard, Christine S; Barton, Karen D; Barcenes, Luis F
2018-01-01
At an academic health science center, librarians identified problems, weaknesses, and strengths in reference services. The on-call reference schedule was discontinued and a question flowchart was developed for circulation staff. Only research questions were referred to librarians, who would respond if available. Circulation staff perceived the unscheduled, voluntary model was not working well for the patrons or the staff. After two months, the schedule was reinstated with a hybrid version of the previous on-call format. In the process of changing the service model, the library staff also underwent a cultural change.
Model reference adaptive control of flexible robots in the presence of sudden load changes
NASA Technical Reports Server (NTRS)
Steinvorth, Rodrigo; Kaufman, Howard; Neat, Gregory
1991-01-01
Direct command generator tracker based model reference adaptive control (MRAC) algorithms are applied to the dynamics of a flexible-joint arm in the presence of sudden load changes. Because of the need to satisfy a positive real condition, such MRAC procedures are designed so that a feedforward augmented output follows the reference model output, thus resulting in an ultimately bounded rather than zero output error. Therefore, modifications are suggested and tested that: (1) incorporate feedforward into the reference model's output as well as the plant's output, and (2) incorporate a derivative term into only the process feedforward loop. The results of these simulations give a response with zero steady-state model-following error, and thus encourage further use of MRAC for more complex flexible robotic systems.
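To make the MRAC idea concrete, here is a hedged scalar illustration using the classic MIT-rule adaptation for a first-order plant subjected to a sudden load disturbance; it is not the paper's command generator tracker formulation, and all gains and dynamics are assumptions chosen for readability.

```python
import numpy as np

# Scalar MRAC with the MIT rule (assumption: the flexible-joint dynamics are reduced to a
# first-order plant for illustration). A sudden load disturbance d appears halfway through the run.
dt, T = 0.01, 40.0
a, b = 1.0, 0.5                 # unknown plant:      dy/dt  = -a*y + b*u + d
am, gamma = 2.0, 0.5            # reference model:    dym/dt = -am*ym + am*r ; gamma = adaptation gain
y = ym = theta1 = theta2 = 0.0  # theta1/theta2: adaptive feedforward/feedback gains
r, log = 1.0, []
for k in range(int(T / dt)):
    t = k * dt
    d = 0.0 if t < 20.0 else 1.5                 # sudden load change
    u = theta1 * r - theta2 * y
    y += dt * (-a * y + b * u + d)
    ym += dt * (-am * ym + am * r)
    e = y - ym
    theta1 += dt * (-gamma * e * ym)             # MIT-rule gradient updates, using the reference-model
    theta2 += dt * (gamma * e * y)               # output and plant output as sensitivity surrogates
    log.append(e)
print("model-following error before load change: %+.3f" % log[int(19.9 / dt)])
print("model-following error at end of run:      %+.3f" % log[-1])
```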
RT-25: Requirements Management for Net-Centric Enterprises. Phase 1
2011-04-28
software systems. These include Business Process Modeling Notation (BPMN) (White and Miers 2008) and Business Process Execution Language (BPEL) (Sarang...Engineering with SysML/UML: Modeling, Analysis, Design, Morgan Kaufmann/The OMG Press. White, S. A. and D. Miers (2008). BPMN Modeling and Reference
The Extended HANDS Characterization and Analysis of Metric Biases
NASA Astrophysics Data System (ADS)
Kelecy, T.; Knox, R.; Cognion, R.
The Extended High Accuracy Network Determination System (Extended HANDS) consists of a network of low cost, high accuracy optical telescopes designed to support space surveillance and development of space object characterization technologies. Comprising off-the-shelf components, the telescopes are designed to provide sub arc-second astrometric accuracy. The design and analysis team are in the process of characterizing the system through development of an error allocation tree whose assessment is supported by simulation, data analysis, and calibration tests. The metric calibration process has revealed 1-2 arc-second biases in the right ascension and declination measurements of reference satellite position, and these have been observed to have fairly distinct characteristics that appear to have some dependence on orbit geometry and tracking rates. The work presented here outlines error models developed to aid in development of the system error budget, and examines characteristic errors (biases, time dependence, etc.) that might be present in each of the relevant system elements used in the data collection and processing, including the metric calibration processing. The relevant reference frames are identified, and include the sensor (CCD camera) reference frame, Earth-fixed topocentric frame, topocentric inertial reference frame, and the geocentric inertial reference frame. The errors modeled in each of these reference frames, when mapped into the topocentric inertial measurement frame, reveal how errors might manifest themselves through the calibration process. The error analysis results that are presented use satellite-sensor geometries taken from periods where actual measurements were collected, and reveal how modeled errors manifest themselves over those specific time periods. These results are compared to the real calibration metric data (right ascension and declination residuals), and sources of the bias are hypothesized. In turn, the actual right ascension and declination calibration residuals are also mapped to other relevant reference frames in an attempt to validate the source of the bias errors. These results will serve as the basis for more focused investigation into specific components embedded in the system and system processes that might contain the source of the observed biases.
Syntax "and" Semantics: A Teaching Model.
ERIC Educational Resources Information Center
Wolfe, Frank
In translating perception into written language, a child must learn an encoding process which is a continuation of the process of improving sensing of the world around him or her. To verbalize an object (a perception) we use frames which name a referent, locate the referent in space and time, identify its appearance and behavior, and define terms…
Cognitive Modeling of Individual Variation in Reference Production and Comprehension
Hendriks, Petra
2016-01-01
A challenge for most theoretical and computational accounts of linguistic reference is the observation that language users vary considerably in their referential choices. Part of the variation observed among and within language users and across tasks may be explained from variation in the cognitive resources available to speakers and listeners. This paper presents a computational model of reference production and comprehension developed within the cognitive architecture ACT-R. Through simulations with this ACT-R model, it is investigated how cognitive constraints interact with linguistic constraints and features of the linguistic discourse in speakers’ production and listeners’ comprehension of referring expressions in specific tasks, and how this interaction may give rise to variation in referential choice. The ACT-R model of reference explains and predicts variation among language users in their referential choices as a result of individual and task-related differences in processing speed and working memory capacity. Because of limitations in their cognitive capacities, speakers sometimes underspecify or overspecify their referring expressions, and listeners sometimes choose incorrect referents or are overly liberal in their interpretation of referring expressions. PMID:27092101
NASA Technical Reports Server (NTRS)
Kopasakis, George
1997-01-01
Performance Seeking Control (PSC) attempts to find and control the process at the operating condition that will generate maximum performance. In this paper a nonlinear multivariable PSC methodology will be developed, utilizing the Fuzzy Model Reference Learning Control (FMRLC) and the method of Steepest Descent or Gradient (SDG). This PSC control methodology employs the SDG method to find the operating condition that will generate maximum performance. This operating condition is in turn passed to the FMRLC controller as a set point for the control of the process. The conventional SDG algorithm is modified in this paper in order for convergence to occur monotonically. For the FMRLC control, the conventional fuzzy model reference learning control methodology is utilized, with guidelines generated here for effective tuning of the FMRLC controller.
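A hedged sketch of the two-layer structure described above: an outer steepest-descent (here, ascent on performance) loop searches for the best operating point and hands it as a set-point to an inner tracking loop. The inner loop is a plain first-order servo standing in for the FMRLC controller, the performance map is invented, and the step-halving safeguard only gestures at the paper's monotone-convergence modification of the SDG algorithm.

```python
import numpy as np

def performance(x):
    """Unknown-to-the-controller performance map; its peak (x = 3.2) is a made-up number."""
    return -(x - 3.2) ** 2 + 5.0

def inner_loop_track(y, setpoint, steps=200, dt=0.01, tau=0.2):
    """Crude first-order servo standing in for the FMRLC tracking controller."""
    for _ in range(steps):
        y += dt * (setpoint - y) / tau
    return y

y, setpoint, step = 0.0, 1.0, 0.5
for _ in range(25):
    y = inner_loop_track(y, setpoint)                                  # inner loop settles on the set-point
    grad = (performance(y + 0.05) - performance(y - 0.05)) / 0.1       # local gradient probe
    candidate = setpoint + step * grad                                 # steepest ascent on performance
    if performance(inner_loop_track(y, candidate)) < performance(y):
        step *= 0.5        # shrink the step when it would not improve (monotone-convergence safeguard)
    else:
        setpoint = candidate
print("estimated optimal operating point: %.2f (true optimum 3.20)" % setpoint)
```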
Optimal Estimation with Two Process Models and No Measurements
2015-08-01
...models will be lost if either of the models includes deterministic modeling errors. ...independent process models when no measurements are present. The observer follows a derivation similar to that of the discrete time Kalman filter. A simulation example is provided in which a process model based on the dynamics of a ballistic projectile is blended with an...
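A small illustration of blending two process models with no measurements: each model propagates its own state and covariance, and the blended estimate weights the two predictions by their inverse covariances, in the spirit of a Kalman-filter-style derivation. The ballistic numbers, the drag term in the second model, and the fusion rule are assumptions for demonstration only, not the report's observer.

```python
import numpy as np

dt, g, c = 0.1, 9.81, 0.05
u = np.array([-0.5 * g * dt**2, -g * dt])            # gravity input; state = [height, vertical velocity]
F1 = np.array([[1.0, dt], [0.0, 1.0]])               # model 1: drag-free ballistic propagation
F2 = np.array([[1.0, dt], [0.0, 1.0 - c * dt]])      # model 2: includes a linear drag term
x1, P1, Q1 = np.array([100.0, 0.0]), np.diag([1.0, 0.5]), np.diag([0.01, 0.05])
x2, P2, Q2 = np.array([100.0, 0.0]), np.diag([4.0, 2.0]), np.diag([0.10, 0.40])

for _ in range(20):
    x1, P1 = F1 @ x1 + u, F1 @ P1 @ F1.T + Q1        # time update of model 1 (no measurement update)
    x2, P2 = F2 @ x2 + u, F2 @ P2 @ F2.T + Q2        # time update of model 2
    # Information-weighted blending of the two model predictions.
    P = np.linalg.inv(np.linalg.inv(P1) + np.linalg.inv(P2))
    x = P @ (np.linalg.inv(P1) @ x1 + np.linalg.inv(P2) @ x2)

print("blended state after 2 s: height %.2f m, velocity %.2f m/s" % (x[0], x[1]))
print("blended covariance diagonal:", np.round(np.diag(P), 3))
```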
Quality assessment for color reproduction using a blind metric
NASA Astrophysics Data System (ADS)
Bringier, B.; Quintard, L.; Larabi, M.-C.
2007-01-01
This paper deals with image quality assessment. This field nowadays plays an important role in various image processing applications. A number of objective image quality metrics, which may or may not correlate with subjective quality, have been developed during the last decade. Two categories of metrics can be distinguished: the first with full-reference and the second with no-reference. A full-reference metric tries to evaluate the distortion introduced to an image with regard to the reference. A no-reference approach attempts to model the judgment of image quality in a blind way. Unfortunately, the universal image quality model is not on the horizon, and empirical models established on psychophysical experimentation are generally used. In this paper, we focus only on the second category to evaluate the quality of color reproduction, where a blind metric based on human visual system modeling is introduced. The objective results are validated by single-media and cross-media subjective tests.
On-line identification of fermentation processes for ethanol production.
Câmara, M M; Soares, R M; Feital, T; Naomi, P; Oki, S; Thevelein, J M; Amaral, M; Pinto, J C
2017-07-01
A strategy for monitoring fermentation processes, specifically simultaneous saccharification and fermentation (SSF) of corn mash, was developed. The strategy covered the development and use of a first-principles, semimechanistic, unstructured process model based on major kinetic phenomena, along with mass and energy balances. The model was then used as a reference model within an identification procedure capable of running on-line. The on-line identification procedure consists of updating the reference model through the estimation of corrective parameters for certain reaction rates using the most recent process measurements. The strategy makes use of standard laboratory measurements for sugar quantification and in situ temperature and liquid level data. The model, along with the on-line identification procedure, has been tested against real industrial data and has been able to accurately predict the main variables of operational interest, i.e., state variables and their dynamics, and key process indicators. The results demonstrate that the strategy is capable of monitoring, in real time, this complex industrial biomass fermentation. This new tool provides great support for decision-making and opens a new range of opportunities for industrial optimization.
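A toy version of the on-line identification idea, with a made-up Monod-type sugar model standing in for the semimechanistic SSF model: a multiplicative correction on the uptake rate is re-estimated from the most recent laboratory sugar measurements so that the reference model tracks the plant. All kinetics, parameter values, and sampling times are assumptions.

```python
import numpy as np

def simulate(alpha, s0=150.0, x0=2.0, hours=24.0, dt=0.1, mu_max=0.4, Ks=5.0, Yxs=0.1):
    """Toy Monod-type sugar/biomass model; alpha is a multiplicative correction on the uptake rate."""
    s, x, traj = s0, x0, []
    for _ in range(int(hours / dt)):
        mu = alpha * mu_max * s / (Ks + s)
        x += dt * mu * x
        s = max(s - dt * mu * x / Yxs, 0.0)
        traj.append(s)
    return np.array(traj)

def predict_sugar(alpha, times, dt=0.1):
    traj = simulate(alpha)
    return np.array([traj[int(t / dt)] for t in times])

rng = np.random.default_rng(4)
sample_t = np.array([4.0, 8.0, 12.0])                               # sparse off-line sugar measurements
measured = predict_sugar(0.8, sample_t) + rng.normal(0, 1.0, 3)     # "plant" runs slower than nominal

# On-line correction step: choose the rate correction that best reproduces the measurements so far.
alphas = np.linspace(0.5, 1.5, 101)
alpha_hat = alphas[int(np.argmin([np.sum((predict_sugar(a, sample_t) - measured) ** 2)
                                  for a in alphas]))]
print("estimated rate correction: %.2f (synthetic truth 0.80)" % alpha_hat)
```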
McMurray, Bob; Horst, Jessica S; Samuelson, Larissa K
2012-10-01
Classic approaches to word learning emphasize referential ambiguity: In naming situations, a novel word could refer to many possible objects, properties, actions, and so forth. To solve this, researchers have posited constraints, and inference strategies, but assume that determining the referent of a novel word is isomorphic to learning. We present an alternative in which referent selection is an online process and independent of long-term learning. We illustrate this theoretical approach with a dynamic associative model in which referent selection emerges from real-time competition between referents and learning is associative (Hebbian). This model accounts for a range of findings including the differences in expressive and receptive vocabulary, cross-situational learning under high degrees of ambiguity, accelerating (vocabulary explosion) and decelerating (power law) learning, fast mapping by mutual exclusivity (and differences in bilinguals), improvements in familiar word recognition with development, and correlations between speed of processing and learning. Together it suggests that (a) association learning buttressed by dynamic competition can account for much of the literature; (b) familiar word recognition is subserved by the same processes that identify the referents of novel words (fast mapping); (c) online competition may allow the children to leverage information available in the task to augment performance despite slow learning; (d) in complex systems, associative learning is highly multifaceted; and (e) learning and referent selection, though logically distinct, can be subtly related. It suggests more sophisticated ways of describing the interaction between situation- and developmental-time processes and points to the need for considering such interactions as a primary determinant of development. PsycINFO Database Record (c) 2012 APA, all rights reserved.
The On-Line Investigation of Reading a Text: Methods and a Model.
ERIC Educational Resources Information Center
Hyona, Jukka
Five methods for studying the process of reading a text are presented, and a model for discourse processing is outlined. Discourse processing refers to comprehension of the meaning underlying the verbal message. The methods discussed here investigate the reading process as it occurs, and focus on the amount of time taken to complete a task or…
Toward Modeling the Learner's Personality Using Educational Games
ERIC Educational Resources Information Center
Essalmi, Fathi; Tlili, Ahmed; Ben Ayed, Leila Jemni; Jemmi, Mohamed
2017-01-01
Learner modeling is a crucial step in the learning personalization process. It allows taking into consideration the learner's profile to make the learning process more efficient. Most studies refer to an explicit method, namely questionnaire, to model learners. Questionnaires are time consuming and may not be motivating for learners. Thus, this…
Definition and Proposed Realization of the International Height Reference System (IHRS)
NASA Astrophysics Data System (ADS)
Ihde, Johannes; Sánchez, Laura; Barzaghi, Riccardo; Drewes, Hermann; Foerste, Christoph; Gruber, Thomas; Liebsch, Gunter; Marti, Urs; Pail, Roland; Sideris, Michael
2017-05-01
Studying, understanding and modelling global change require geodetic reference frames with an order of accuracy higher than the magnitude of the effects to be actually studied and with high consistency and reliability worldwide. The International Association of Geodesy, taking care of providing a precise geodetic infrastructure for monitoring the Earth system, promotes the implementation of an integrated global geodetic reference frame that provides a reliable frame for consistent analysis and modelling of global phenomena and processes affecting the Earth's gravity field, the Earth's surface geometry and the Earth's rotation. The definition, realization, maintenance and wide utilization of the International Terrestrial Reference System guarantee a globally unified geometric reference frame with an accuracy at the millimetre level. An equivalent high-precision global physical reference frame that supports the reliable description of changes in the Earth's gravity field (such as sea level variations, mass displacements, processes associated with geophysical fluids) is missing. This paper addresses the theoretical foundations supporting the implementation of such a physical reference surface in terms of an International Height Reference System and provides guidance for the coming activities required for the practical and sustainable realization of this system. Based on conceptual approaches of physical geodesy, the requirements for a unified global height reference system are derived. In accordance with the practice, its realization as the International Height Reference Frame is designed. Further steps for the implementation are also proposed.
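For orientation, the conventional quantities such a physical height reference is built on can be written compactly. These are standard physical-geodesy relations added here as an illustration; the numerical W0 value quoted is the one conventionally adopted for the IHRS and should be treated as indicative in this context.

```latex
% Geopotential number C_P and normal height H^*_P of a point P with potential W_P.
\begin{align}
  C_P &= W_0 - W_P, \qquad W_0 \approx 62\,636\,853.4\ \mathrm{m^2\,s^{-2}} \ \text{(conventional IHRS value)}\\
  H^{*}_P &= \frac{C_P}{\bar{\gamma}} \qquad \text{with mean normal gravity } \bar{\gamma} \text{ along the normal plumb line}
\end{align}
```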
A Neurobehavioral Model of Flexible Spatial Language Behaviors
Lipinski, John; Schneegans, Sebastian; Sandamirskaya, Yulia; Spencer, John P.; Schöner, Gregor
2012-01-01
We propose a neural dynamic model that specifies how low-level visual processes can be integrated with higher level cognition to achieve flexible spatial language behaviors. This model uses real-world visual input that is linked to relational spatial descriptions through a neural mechanism for reference frame transformations. We demonstrate that the system can extract spatial relations from visual scenes, select items based on relational spatial descriptions, and perform reference object selection in a single unified architecture. We further show that the performance of the system is consistent with behavioral data in humans by simulating results from 2 independent empirical studies, 1 spatial term rating task and 1 study of reference object selection behavior. The architecture we present thereby achieves a high degree of task flexibility under realistic stimulus conditions. At the same time, it also provides a detailed neural grounding for complex behavioral and cognitive processes. PMID:21517224
CTF (Subchannel) Calculations and Validation L3:VVI.H2L.P15.01
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordon, Natalie
The goal of the Verification and Validation Implementation (VVI) High to Low (Hi2Lo) process is utilizing a validated model in a high resolution code to generate synthetic data for improvement of the same model in a lower resolution code. This process is useful in circumstances where experimental data does not exist or it is not sufficient in quantity or resolution. Data from the high-fidelity code is treated as calibration data (with appropriate uncertainties and error bounds) which can be used to train parameters that affect solution accuracy in the lower-fidelity code model, thereby reducing uncertainty. This milestone presents a demonstration of the Hi2Lo process derived in the VVI focus area. The majority of the work performed herein describes the steps of the low-fidelity code used in the process with references to the work detailed in the companion high-fidelity code milestone (Reference 1). The CASL low-fidelity code used to perform this work was Cobra Thermal Fluid (CTF) and the high-fidelity code was STAR-CCM+ (STAR). The master branch version of CTF (pulled May 5, 2017 – Reference 2) was utilized for all CTF analyses performed as part of this milestone. The statistical and VVUQ components of the Hi2Lo framework were performed using Dakota version 6.6 (release date May 15, 2017 – Reference 3). Experimental data from Westinghouse Electric Company (WEC – Reference 4) was used throughout the demonstrated process to compare with the high-fidelity STAR results. A CTF parameter called Beta was chosen as the calibration parameter for this work. By default, Beta is defined as a constant mixing coefficient in CTF and is essentially a tuning parameter for mixing between subchannels. Since CTF does not have turbulence models like STAR, Beta is the parameter that performs the most similar function to the turbulence models in STAR. The purpose of the work performed in this milestone is to tune Beta to an optimal value that brings the CTF results closer to those measured in the WEC experiments.
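As a schematic of the Hi2Lo calibration step, the sketch below tunes a single mixing coefficient Beta so that a cheap surrogate mixing model reproduces synthetic "high-fidelity" data. The exponential-decay surrogate, the data, and the grid search are assumptions for illustration; the actual milestone uses CTF and Dakota, not this toy.

```python
import numpy as np

def exit_temp_spread(beta, inlet_spread=30.0, length=4.0):
    """Surrogate mixing model: hot/cold subchannel temperature spread decays with the mixing coefficient."""
    return inlet_spread * np.exp(-beta * length)

rng = np.random.default_rng(7)
beta_truth = 0.35
hi_fi_samples = exit_temp_spread(beta_truth) + rng.normal(0.0, 0.3, 20)   # synthetic "high-fidelity" data

betas = np.linspace(0.05, 1.0, 400)
sse = [np.sum((exit_temp_spread(b) - hi_fi_samples) ** 2) for b in betas]
beta_hat = betas[int(np.argmin(sse))]
print("calibrated mixing coefficient: %.3f (synthetic-data truth %.3f)" % (beta_hat, beta_truth))
```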
Building a Unified Information Network.
ERIC Educational Resources Information Center
Avram, Henriette D.
1988-01-01
Discusses cooperative efforts between research organizations and libraries to create a national information network. Topics discussed include the Linked System Project (LSP); technical processing versus reference and research functions; Open Systems Interconnection (OSI) Reference Model; the National Science Foundation Network (NSFNET); and…
Selection of reference standard during method development using the analytical hierarchy process.
Sun, Wan-yang; Tong, Ling; Li, Dong-xiang; Huang, Jing-yi; Zhou, Shui-ping; Sun, Henry; Bi, Kai-shun
2015-03-25
A reference standard is critical for ensuring reliable and accurate method performance. One important issue is how to select the ideal one from the alternatives. Unlike the optimization of parameters, the criteria of the reference standard are always immeasurable. The aim of this paper is to recommend a quantitative approach for the selection of reference standard during method development based on the analytical hierarchy process (AHP) as a decision-making tool. Six alternative single reference standards were assessed in quantitative analysis of six phenolic acids from Salvia miltiorrhiza and its preparations by using ultra-performance liquid chromatography. The AHP model simultaneously considered six criteria related to reference standard characteristics and method performance: feasibility to obtain, abundance in samples, chemical stability, accuracy, precision and robustness. The priority of each alternative was calculated using the standard AHP analysis method. The results showed that protocatechuic aldehyde is the ideal reference standard, and rosmarinic acid, with about 79.8% of that suitability, is the second choice. The determination results successfully verified the evaluation ability of this model. The AHP allowed us to comprehensively consider the benefits and risks of the alternatives. It was an effective and practical tool for optimization of reference standards during method development. Copyright © 2015 Elsevier B.V. All rights reserved.
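A small sketch of the AHP machinery referenced here: a pairwise-comparison matrix over the six criteria is reduced to weights via its principal eigenvector, a consistency ratio is checked, and criterion-level scores of the candidates are combined into overall priorities. The comparison matrix and candidate scores below are illustrative placeholders, not the paper's judgments.

```python
import numpy as np

criteria = ["obtainability", "abundance", "stability", "accuracy", "precision", "robustness"]
A = np.array([                       # reciprocal pairwise comparisons (Saaty 1-9 scale), hypothetical
    [1,   2,   1/2, 1/3, 1/3, 1  ],
    [1/2, 1,   1/3, 1/4, 1/4, 1/2],
    [2,   3,   1,   1/2, 1/2, 2  ],
    [3,   4,   2,   1,   1,   3  ],
    [3,   4,   2,   1,   1,   3  ],
    [1,   2,   1/2, 1/3, 1/3, 1  ],
])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                                  # criterion weights = principal eigenvector
CI = (eigvals.real[k] - len(A)) / (len(A) - 1)            # consistency index; RI(6) = 1.24
print("weights:", dict(zip(criteria, np.round(weights, 3))), "CR = %.3f" % (CI / 1.24))

scores = np.array([                  # local scores of the candidates on each criterion, also hypothetical
    [0.5, 0.4, 0.5, 0.4, 0.4, 0.4],  # candidate 1
    [0.3, 0.4, 0.3, 0.4, 0.4, 0.4],  # candidate 2
    [0.2, 0.2, 0.2, 0.2, 0.2, 0.2],  # candidate 3
])
print("overall priorities:", np.round(scores @ weights, 3))
```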
NASA Astrophysics Data System (ADS)
Wang, Jian; Meng, Xiaohong; Zheng, Wanqiu
2017-10-01
The elastic-wave reverse-time migration of inhomogeneous anisotropic media is becoming a hotspot of research today. In order to ensure the accuracy of the migration, it is necessary to separate the wave mode into P-wave and S-wave before migration. For inhomogeneous media, the Kelvin-Christoffel equation can be solved in the wave-number domain by using the anisotropic parameters of the mesh nodes, and the polarization vector of the P-wave and S-wave at each node can be calculated and transformed into the space domain to obtain the quasi-differential operators. However, this method is computationally expensive, especially for the computation of the quasi-differential operators. In order to reduce the computational complexity, the wave-mode separation can be realized in the mixed domain on the basis of a reference model in the wave-number domain. However, conventional interpolation methods and reference model selection methods reduce the separation accuracy. In order to further improve the separation effect, this paper introduces an inverse-distance interpolation method involving position shading and uses a reference model selection method based on a random-points scheme. This method adds to the conventional IDW algorithm a spatial weight coefficient K, which reflects the orientation of the reference points, and the interpolation process takes into account the combined effects of the distance and azimuth of the reference points. Numerical simulation shows that the proposed method can separate the wave mode more accurately using fewer reference models and has better practical value.
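The azimuth-aware inverse-distance idea can be illustrated as follows; the specific weighting formula (nearest-bearing angular separation used as the coefficient K) is an assumption for demonstration, not the paper's exact scheme.

```python
import numpy as np

def idw_with_azimuth(x, y, pts, vals, power=2.0):
    """IDW where each reference point also gets an azimuth weight K: isolated bearings count more."""
    d = np.hypot(pts[:, 0] - x, pts[:, 1] - y) + 1e-9
    az = np.arctan2(pts[:, 1] - y, pts[:, 0] - x)
    K = np.empty(len(pts))
    for i in range(len(pts)):
        sep = np.abs(np.angle(np.exp(1j * (az[i] - np.delete(az, i)))))   # wrapped angular separations
        K[i] = sep.min() + 0.1          # points alone in their sector are up-weighted
    w = K / d ** power
    return float(np.sum(w * vals) / np.sum(w))

rng = np.random.default_rng(5)
pts = rng.uniform(0.0, 10.0, (8, 2))                   # reference-model node positions (hypothetical)
vals = 2.0 * pts[:, 0] + 0.5 * pts[:, 1]               # a smooth parameter field to recover
print("interpolated %.2f vs true %.2f" % (idw_with_azimuth(5.0, 5.0, pts, vals), 2.0 * 5 + 0.5 * 5))
```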
Evolution of an Implementation-Ready Interprofessional Pain Assessment Reference Model
Collins, Sarah A; Bavuso, Karen; Swenson, Mary; Suchecki, Christine; Mar, Perry; Rocha, Roberto A.
2017-01-01
Standards to increase consistency of comprehensive pain assessments are important for safety, quality, and analytics activities, including meeting Joint Commission requirements and learning the best management strategies and interventions for the current prescription opioid epidemic. In this study we describe the development and validation of a Pain Assessment Reference Model ready for implementation on EHR forms and flowsheets. Our process resulted in 5 successive revisions of the reference model, which more than doubled the number of data elements to 47. The organization of the model evolved during validation sessions with panels totaling 48 subject matter experts (SMEs) to include 9 sets of data elements, with one set recommended as a minimal data set. The reference model also evolved when implemented into EHR forms and flowsheets, indicating specifications such as cascading logic that are important to inform secondary use of data. PMID:29854125
Almahayni, T
2014-12-01
The BIOMASS methodology was developed with the objective of constructing defensible assessment biospheres for assessing potential radiological impacts of radioactive waste repositories. To this end, a set of Example Reference Biospheres were developed to demonstrate the use of the methodology and to provide an international point of reference. In this paper, the performance of the Example Reference Biosphere model ERB 2B associated with the natural release scenario, discharge of contaminated groundwater to the surface environment, was evaluated by comparing its long-term projections of radionuclide dynamics and distribution in a soil-plant system to those of a process-based, transient advection-dispersion model (AD). The models were parametrised with data characteristic of a typical rainfed winter wheat crop grown on a sandy loam soil under temperate climate conditions. Three safety-relevant radionuclides, (99)Tc, (129)I and (237)Np with different degree of sorption were selected for the study. Although the models were driven by the same hydraulic (soil moisture content and water fluxes) and radiological (Kds) input data, their projections were remarkably different. On one hand, both models were able to capture short and long-term variation in activity concentration in the subsoil compartment. On the other hand, the Reference Biosphere model did not project any radionuclide accumulation in the topsoil and crop compartments. This behaviour would underestimate the radiological exposure under natural release scenarios. The results highlight the potential role deep roots play in soil-to-plant transfer under a natural release scenario where radionuclides are released into the subsoil. When considering the relative activity and root depth profiles within the soil column, much of the radioactivity was taken up into the crop from the subsoil compartment. Further improvements were suggested to address the limitations of the Reference Biosphere model presented in this paper. Copyright © 2014 Elsevier Ltd. All rights reserved.
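A deliberately crude compartment sketch of the point being made here (deep-root uptake from the subsoil matters when the release enters the subsoil): annual transfer coefficients move activity between subsoil, topsoil, and crop. All coefficients are invented for illustration and bear no relation to the BIOMASS or advection-dispersion model parameters.

```python
# All rates are per-year fractions, invented for illustration only.
years = 100
release = 1.0                      # annual release of activity into the subsoil (arbitrary units)
leach_top, leach_sub = 0.05, 0.02  # losses out of each soil layer
up_top, up_sub = 0.03, 0.01        # root uptake from topsoil and (via deep roots) from subsoil
rise = 0.005                       # upward transfer subsoil -> topsoil (capillary rise / bioturbation)

top = sub = crop = 0.0
for _ in range(years):
    crop = up_top * top + up_sub * sub                       # annual crop activity (harvested each year)
    top = top * (1.0 - leach_top - up_top) + rise * sub
    sub = sub * (1.0 - leach_sub - up_sub - rise) + release

print("after %d years: topsoil %.2f, subsoil %.2f, crop %.2f" % (years, top, sub, crop))
```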
Application of time-variable process noise in terrestrial reference frames determined from VLBI data
NASA Astrophysics Data System (ADS)
Soja, Benedikt; Gross, Richard S.; Abbondanza, Claudio; Chin, Toshio M.; Heflin, Michael B.; Parker, Jay W.; Wu, Xiaoping; Balidakis, Kyriakos; Nilsson, Tobias; Glaser, Susanne; Karbon, Maria; Heinkelmann, Robert; Schuh, Harald
2018-05-01
In recent years, Kalman filtering has emerged as a suitable technique to determine terrestrial reference frames (TRFs), a prime example being JTRF2014. The time series approach makes it possible to take into account variations of station coordinates that are neither reduced by observational corrections nor considered in the functional model. These variations are primarily due to non-tidal geophysical loading effects that are not reduced according to the current IERS Conventions (2010). It is standard practice that the process noise models applied in Kalman filter TRF solutions are derived from time series of loading displacements and account for station-dependent differences. So far, it has been assumed that the parameters of these process noise models are constant over time. However, due to the presence of seasonal and irregular variations, this assumption does not truly reflect reality. In this study, we derive a station coordinate process noise model allowing for such temporal variations. This process noise model and one that is a parameterized version of the former are applied in the computation of TRF solutions based on very long baseline interferometry data. In comparison with a solution based on a constant process noise model, we find that the station coordinates are affected at the millimeter level.
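A minimal scalar illustration of the key idea, with all numbers assumed: a random-walk Kalman filter for one station coordinate whose process-noise variance follows an annual cycle instead of staying constant.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 365
t = np.arange(n) / 365.25
q = (1.0 + 0.8 * np.cos(2.0 * np.pi * t)) * 0.5      # process-noise variance with an annual cycle (mm^2)
r = 4.0                                              # measurement variance (mm^2)

truth = np.cumsum(rng.normal(0.0, np.sqrt(q)))       # "true" coordinate driven by the seasonal noise
obs = truth + rng.normal(0.0, np.sqrt(r), n)         # daily coordinate estimates

x, P, est = 0.0, 100.0, np.empty(n)
for k in range(n):
    P = P + q[k]                                     # prediction with the time-variable process noise
    K = P / (P + r)
    x = x + K * (obs[k] - x)                         # measurement update
    P = (1.0 - K) * P
    est[k] = x

print("filter RMS error: %.2f mm" % np.sqrt(np.mean((est - truth) ** 2)))
```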
A multi-site cognitive task analysis for biomedical query mediation.
Hruby, Gregory W; Rasmussen, Luke V; Hanauer, David; Patel, Vimla L; Cimino, James J; Weng, Chunhua
2016-09-01
To apply cognitive task analyses of the Biomedical query mediation (BQM) processes for EHR data retrieval at multiple sites towards the development of a generic BQM process model. We conducted semi-structured interviews with eleven data analysts from five academic institutions and one government agency, and performed cognitive task analyses on their BQM processes. A coding schema was developed through iterative refinement and used to annotate the interview transcripts. The annotated dataset was used to reconstruct and verify each BQM process and to develop a harmonized BQM process model. A survey was conducted to evaluate the face and content validity of this harmonized model. The harmonized process model is hierarchical, encompassing tasks, activities, and steps. The face validity evaluation concluded the model to be representative of the BQM process. In the content validity evaluation, out of the 27 tasks for BQM, 19 meet the threshold for semi-valid, including 3 fully valid: "Identify potential index phenotype," "If needed, request EHR database access rights," and "Perform query and present output to medical researcher", and 8 are invalid. We aligned the goals of the tasks within the BQM model with the five components of the reference interview. The similarity between the process of BQM and the reference interview is promising and suggests the BQM tasks are powerful for eliciting implicit information needs. We contribute a BQM process model based on a multi-site study. This model promises to inform the standardization of the BQM process towards improved communication efficiency and accuracy. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Generic Argillite/Shale Disposal Reference Case
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Liange; Colon, Carlos Jové; Bianchi, Marco
Radioactive waste disposal in a deep subsurface repository hosted in clay/shale/argillite is a subject of widespread interest given the desirable isolation properties, geochemically reduced conditions, and widespread geologic occurrence of this rock type (Hansen 2010; Bianchi et al. 2013). Bianchi et al. (2013) provides a description of diffusion in a clay-hosted repository based on single-phase flow and full saturation using parametric data from documented studies in Europe (e.g., ANDRA 2005). The predominance of diffusive transport and sorption phenomena in these clay media is a key attribute impeding radionuclide mobility, making clay rock formations target sites for disposal of high-level radioactive waste. The reports by Hansen et al. (2010) and those from numerous studies in clay-hosted underground research laboratories (URLs) in Belgium, France and Switzerland outline the extensive scientific knowledge obtained to assess long-term clay/shale/argillite repository isolation performance of nuclear waste. In the past several years under the UFDC, various kinds of models have been developed for argillite repositories to demonstrate model capabilities, understand the spatial and temporal alteration of the repository, and evaluate different scenarios. These models include the coupled Thermal-Hydrological-Mechanical (THM) and Thermal-Hydrological-Mechanical-Chemical (THMC) models (e.g. Liu et al. 2013; Rutqvist et al. 2014a; Zheng et al. 2014a) that focus on THMC processes in the Engineered Barrier System (EBS) bentonite and argillite host rock, the large-scale hydrogeologic model (Bianchi et al. 2014) that investigates the hydraulic connection between an emplacement drift and surrounding hydrogeological units, and Disposal Systems Evaluation Framework (DSEF) models (Greenberg et al. 2013) that evaluate thermal evolution in the host rock, approximated as a thermal conduction process, to facilitate the analysis of design options. However, the assumptions and the properties (parameters) used in these models differ, which not only makes inter-model comparisons difficult but also compromises the applicability of the lessons learned from one model to another. The establishment of a reference case would therefore be helpful to set up a baseline for model development. A generic salt repository reference case was developed in Freeze et al. (2013), and the generic argillite repository reference case is presented in this report. The definition of a reference case requires the characterization of the waste inventory, waste form, waste package, repository layout, EBS backfill, host rock, and biosphere. This report mainly documents the processes in EBS bentonite and host rock that are potentially important for performance assessment and the properties needed to describe these processes, with a brief description of other components such as waste inventory, waste form, waste package, repository layout, aquifer, and biosphere. A thorough description of the generic argillite repository reference case will be given in Jové Colon et al. (2014).
The Arrhenius equation revisited.
Peleg, Micha; Normand, Mark D; Corradini, Maria G
2012-01-01
The Arrhenius equation has been widely used as a model of the temperature effect on the rate of chemical reactions and biological processes in foods. Since the model requires that the rate increase monotonically with temperature, its applicability to enzymatic reactions and microbial growth, which have an optimal temperature, is obviously limited. This is also true for microbial inactivation and chemical reactions that only start at an elevated temperature, and for complex processes and reactions that do not follow fixed-order kinetics, that is, where the isothermal rate constant, however defined, is a function of both temperature and time. The linearity of the Arrhenius plot, that is, Ln[k(T)] vs. 1/T where T is in K, has traditionally been considered evidence of the model's validity. Consequently, the slope of the plot has been used to calculate the reaction or process's "energy of activation," usually without independent verification. Many experimental and simulated rate constant vs. temperature relationships that yield linear Arrhenius plots can also be described by the simpler exponential model Ln[k(T)/k(T(reference))] = c(T - T(reference)). The use of the exponential model or a similar empirical alternative would eliminate the confusing temperature axis inversion, the unnecessary compression of the temperature scale, and the need for kinetic assumptions that are hard to affirm in food systems. It would also eliminate the reference to the universal gas constant in systems where a "mole" cannot be clearly identified. Unless proven otherwise by independent experiments, one cannot dismiss the notion that the apparent linearity of the Arrhenius plot in many food systems is due to a mathematical property of the model's equation rather than to the existence of a temperature-independent "energy of activation." If T + 273.16 in the Arrhenius model's equation is replaced by T + b, where the numerical value of the arbitrary constant b is substantially larger than T and T(reference), the plot of Ln[k(T)] vs. 1/(T + b) will always appear almost perfectly linear. Both the modified Arrhenius model version having the arbitrary constant b, Ln[k(T)/k(T(reference))] = a[1/(T(reference) + b) - 1/(T + b)], and the exponential model can faithfully describe temperature dependencies traditionally described by the Arrhenius equation without the assumption of a temperature-independent "energy of activation." This is demonstrated mathematically, with computer simulations, and with reprocessed classical kinetic data and published food results.
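To illustrate the last point numerically, the following sketch (illustrative rate values only, not fitted food data) generates rate constants from the simple exponential model and checks how linear the corresponding "Arrhenius-type" plot of Ln k(T) vs. 1/(T + b) looks for increasingly large arbitrary b:

```python
import numpy as np

# If b is much larger than the temperature range, Ln k(T) plotted against
# 1/(T + b) looks almost perfectly linear regardless of the underlying kinetics.
T = np.linspace(5, 95, 19)                 # temperature in deg C
T_ref = 25.0
c = 0.08
k = 1.0 * np.exp(c * (T - T_ref))          # simple exponential model for k(T)

for b in (273.16, 1000.0, 5000.0):
    x = 1.0 / (T + b)
    r = np.corrcoef(x, np.log(k))[0, 1]
    print(f"b = {b:7.1f}  |r| of 'Arrhenius-type' plot = {abs(r):.5f}")
```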
Dainer-Best, Justin; Disner, Seth G; McGeary, John E; Hamilton, Bethany J; Beevers, Christopher G
2018-01-01
The current research examined whether carriers of the short 5-HTTLPR allele (in SLC6A4), who have been shown to selectively attend to negative information, exhibit a bias towards negative self-referent processing. The self-referent encoding task (SRET) was used to measure self-referential processing of positive and negative adjectives. Ratcliff's diffusion model isolated and extracted decision-making components from SRET responses and reaction times. Across the initial (N = 183) and replication (N = 137) studies, results indicated that short 5-HTTLPR allele carriers more easily categorized negative adjectives as self-referential (i.e., higher drift rate). Further, drift rate was associated with recall of negative self-referential stimuli. Findings across both studies provide further evidence that genetic variation may contribute to the etiology of negatively biased processing of self-referent information. Large scale studies examining the genetic contributions to negative self-referent processing may be warranted.
Plasma Processes for Semiconductor Fabrication
NASA Astrophysics Data System (ADS)
Hitchon, W. N. G.
1999-01-01
Plasma processing is a central technique in the fabrication of semiconductor devices. This self-contained book provides an up-to-date description of plasma etching and deposition in semiconductor fabrication. It presents the basic physics and chemistry of these processes, and shows how they can be accurately modeled. The author begins with an overview of plasma reactors and discusses the various models for understanding plasma processes. He then covers plasma chemistry, addressing the effects of different chemicals on the features being etched. Having presented the relevant background material, he then describes in detail the modeling of complex plasma systems, with reference to experimental results. The book closes with a useful glossary of technical terms. No prior knowledge of plasma physics is assumed in the book. It contains many homework exercises and serves as an ideal introduction to plasma processing and technology for graduate students of electrical engineering and materials science. It will also be a useful reference for practicing engineers in the semiconductor industry.
Abstraction Techniques for Parameterized Verification
2006-11-01
approach for applying model checking to unbounded systems is to extract finite state models from them using conservative abstraction techniques. (Contents fragments: 2.5.1 Multiple Reference Processes; 2.5.2 Adding Monitor Processes.) Applying model checking to complex pieces of code like device drivers depends on the use of abstraction methods. An abstraction method extracts a small finite…
Reference analysis of the signal + background model in counting experiments
NASA Astrophysics Data System (ADS)
Casadei, D.
2012-01-01
The model representing two independent Poisson processes, labelled as ``signal'' and ``background'' and both contributing additively to the total number of counted events, is considered from a Bayesian point of view. This is a widely used model for searches of rare or exotic events in the presence of a background source, as for example in the searches performed by high-energy physics experiments. In the assumption of prior knowledge about the background yield, a reference prior is obtained for the signal alone and its properties are studied. Finally, the properties of the full solution, the marginal reference posterior, are illustrated with a few examples.
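A minimal numerical sketch of Bayesian inference in such a counting experiment is given below; it uses a flat prior on the signal purely as a stand-in (the paper derives a dedicated reference prior), with hypothetical counts and background:

```python
import numpy as np
from scipy.stats import poisson

n_obs = 12          # observed total counts (hypothetical)
b = 5.0             # known expected background
s_grid = np.linspace(0, 30, 3001)

likelihood = poisson.pmf(n_obs, s_grid + b)            # P(n | s, b)
posterior = likelihood / np.trapz(likelihood, s_grid)  # normalize over the grid

mean_s = np.trapz(s_grid * posterior, s_grid)
print(f"posterior mean signal ~ {mean_s:.2f}")
```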
Li, Kaiyue; Wang, Weiying; Liu, Yanping; Jiang, Su; Huang, Guo; Ye, Liming
2017-01-01
The active ingredients, and thus the pharmacological efficacy, of traditional Chinese medicine (TCM) vary greatly with the degree of the parching process. Near-infrared spectroscopy (NIR) was used to develop a new method for rapid online analysis of the TCM parching process, using two kinds of chemical indicators (5-(hydroxymethyl) furfural [5-HMF] content and 420 nm absorbance) as reference values, which change noticeably during the parching of most TCMs. Three representative TCMs, Areca (Areca catechu L.), Malt (Hordeum vulgare L.), and Hawthorn (Crataegus pinnatifida Bge.), were used in this study. With partial least squares regression, NIR calibration models were generated based on two kinds of reference values, i.e. 5-HMF contents measured by high-performance liquid chromatography (HPLC) and 420 nm absorbance measured by ultraviolet-visible spectroscopy (UV/Vis), respectively. In the optimized models for 5-HMF, the root mean square errors of prediction (RMSEP) for Areca, Malt, and Hawthorn were 0.0192, 0.0301, and 0.2600, and the correlation coefficients (Rcal) were 99.86%, 99.88%, and 99.88%, respectively. Moreover, in the optimized models using 420 nm absorbance as reference values, the RMSEP for Areca, Malt, and Hawthorn were 0.0229, 0.0096, and 0.0409, and Rcal were 99.69%, 99.81%, and 99.62%, respectively. NIR models with 5-HMF content and 420 nm absorbance as reference values can rapidly and effectively identify the three TCMs at different stages of the parching process. This method shows great promise to replace the current subjective color judgment and time-consuming HPLC or UV/Vis methods and is suitable for rapid online analysis and quality control in the TCM industrial manufacturing process. Summary: NIR was used to develop a new method for online analysis of the TCM parching process. Calibration and validation models of Areca, Malt, and Hawthorn were generated by partial least squares regression using 5-HMF contents and 420 nm absorbance as reference values, respectively, which are the main indicator components during the parching process of most TCMs. The established NIR models of the three TCMs had low root mean square errors of prediction and high correlation coefficients. The NIR method has great promise for use in TCM industrial manufacturing processes for rapid online analysis and quality control. Abbreviations used: NIR: near-infrared spectroscopy; TCM: traditional Chinese medicine; Areca: Areca catechu L.; Hawthorn: Crataegus pinnatifida Bge.; Malt: Hordeum vulgare L.; 5-HMF: 5-(hydroxymethyl) furfural; PLS: partial least squares; D: dimension faction; SLS: straight line subtraction; MSC: multiplicative scatter correction; VN: vector normalization; RMSECV: root mean square errors of cross-validation; RMSEP: root mean square errors of prediction; Rcal: correlation coefficients; RPD: residual predictive deviation; PAT: process analytical technology; FDA: Food and Drug Administration; ICH: International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use.
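As a rough illustration of the calibration step described above, the sketch below fits a partial least squares model to synthetic "spectra" against a synthetic reference value; the data, the number of latent variables and the train/test split are placeholders, not the study's measurements:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic NIR-like spectra whose intensity scales with a reference value
# (standing in for 5-HMF content determined by HPLC).
rng = np.random.default_rng(0)
n_samples, n_wavelengths = 60, 200
concentration = rng.uniform(0.0, 1.0, n_samples)        # reference values
pure_band = np.exp(-0.5 * ((np.arange(n_wavelengths) - 120) / 10) ** 2)
spectra = (np.outer(concentration, pure_band)
           + 0.01 * rng.normal(size=(n_samples, n_wavelengths)))

train, test = slice(0, 45), slice(45, None)
pls = PLSRegression(n_components=3).fit(spectra[train], concentration[train])
pred = pls.predict(spectra[test]).ravel()
rmsep = np.sqrt(np.mean((pred - concentration[test]) ** 2))
print(f"RMSEP on held-out samples: {rmsep:.4f}")
```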
A modified Galam’s model for word-of-mouth information exchange
NASA Astrophysics Data System (ADS)
Ellero, Andrea; Fasano, Giovanni; Sorato, Annamaria
2009-09-01
In this paper we analyze the stochastic model proposed by Galam in [S. Galam, Modelling rumors: The no plane Pentagon French hoax case, Physica A 320 (2003), 571-580] for information spreading in a ‘word-of-mouth’ process among agents, based on a majority rule. Using the communication rules among agents defined in the above reference, we first perform simulations of the ‘word-of-mouth’ process and compare the results with the theoretical values predicted by Galam’s model. Some dissimilarities arise, in particular when a small number of agents is considered. We find motivations for these dissimilarities and suggest some enhancements by introducing a new parameter-dependent model. We propose a modified Galam scheme which is asymptotically coincident with the original model in the above reference. Furthermore, for relatively small values of the parameter, we provide numerical experiments showing that the modified model often outperforms the original one.
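The sketch below implements a Galam-style majority-rule update among agents holding one of two opinions; the group size, tie-breaking rule and initial opinion shares are illustrative choices, not the exact communication rules of the cited reference:

```python
import numpy as np

rng = np.random.default_rng(1)

def step(opinions, group_size=4):
    """One 'word-of-mouth' cycle: random groups adopt their local majority."""
    idx = rng.permutation(opinions.size)
    new = opinions.copy()
    for g in idx.reshape(-1, group_size):
        s = opinions[g].sum()
        new[g] = np.sign(s) if s != 0 else -1   # tie broken in favour of -1 (the rumour)
    return new

agents = rng.choice([1, -1], size=400, p=[0.6, 0.4])  # initial opinion shares
for t in range(10):
    agents = step(agents)
    print(t, (agents == 1).mean())                    # fraction holding opinion +1
```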
Dealing with the archetypes development process for a regional EHR system.
Santos, M R; Bax, M P; Kalra, D
2012-01-01
This paper aims to present the archetype modelling process used by the Health Department of Minas Gerais State, Brazil (SES/MG) to support building its regional EHR system, and the lessons learned during this process. This study was undertaken within the Minas Gerais project. The EHR system architecture was built assuming the reference model from the ISO 13606 norm. The whole archetype development process took about ten months and was carried out by a clinical team co-ordinated by three health professionals and one systems analyst from the SES/MG. They were supported by around 30 health professionals from the internal SES/MG areas and 5 systems analysts from the PRODEMGE. Based on a bottom-up approach, the project team used technical interviews and brainstorming sessions to conduct the modelling process. The main steps of the archetype modelling process were identified and described, and 20 archetypes were created. Lessons learned: the set of principles established during the selection of PCS elements helped the clinical team to keep the focus on their objectives; the initial focus on the archetype structural organization aspects was important; the data elements identified were subjected to a rigorous analysis aimed at determining the most suitable clinical domain; levelling the concepts to accommodate them within the hierarchical levels of the reference model was definitely no easy task, and the use of a mind mapping tool facilitated the modelling process; part of the difficulty experienced by the clinical team was related to a view focused on the original forms previously used; the use of worksheets facilitated the modelling process by health professionals; it was important to have a health professional who knew the domain tables and health classifications from the Brazilian Federal Government as a member of the clinical team. The archetypes (referencing terminology, domain tables and term lists) provided a favorable condition for the use of a controlled vocabulary between the central repository and the EMR systems and will probably increase the chances of preserving the semantics of the knowledge domain. Finally, the reference model from the ISO 13606 norm, along with the archetypes, proved sufficient to meet the specificities for the creation of an EHR system for basic healthcare in a Brazilian state.
ERIC Educational Resources Information Center
Allen, Deborah; Tanner, Kimberly
2007-01-01
This article discusses a systematic approach to designing significant learning experiences, often referred to as the "backward design process," which has been popularized by Wiggins and McTighe (1998) and is included as a central feature of L. Dee Fink's model for integrated course design (Fink, 2003). The process is referred to as backward…
Assessing Performance Tradeoffs in Undersea Distributed Sensor Networks
2006-09-01
time. We refer to this process as track-before-detect (see [5] for a description), since the final determination of a target presence is not made until…expressions for probability of successful search and probability of false search for modeling the track-before-detect process. We then describe a numerical…random manner (randomly sampled from a uniform distribution). II. SENSOR NETWORK PERFORMANCE MODELS. We model the process of track-before-detect by…
NASA Astrophysics Data System (ADS)
Tian, D.; Medina, H.
2017-12-01
Post-processing of medium-range reference evapotranspiration (ETo) forecasts based on numerical weather prediction (NWP) models has the potential to improve the quality and utility of these forecasts. This work compares the performance of several post-processing methods for correcting ETo forecasts over the continental U.S. generated from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database using data from Europe (EC), the United Kingdom (MO), and the United States (NCEP). The post-processing techniques considered are: simple bias correction, the use of multimodels, Ensemble Model Output Statistics (EMOS, Gneiting et al., 2005) and Bayesian Model Averaging (BMA, Raftery et al., 2005). ETo estimates based on quality-controlled U.S. Regional Climate Reference Network measurements, computed with the FAO 56 Penman-Monteith equation, are adopted as the baseline. EMOS and BMA are generally the most efficient post-processing techniques for the ETo forecasts. Nevertheless, a simple bias correction of the best model is often much more rewarding than using multimodel raw forecasts. Our results demonstrate the potential of different forecasting and post-processing frameworks for operational evapotranspiration and irrigation advisory systems at the national scale.
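The simple bias correction mentioned above can be sketched as follows, with hypothetical forecast and reference arrays standing in for the TIGGE forecasts and the Penman-Monteith baseline; the bias is estimated per lead time over a training period and removed from subsequent forecasts:

```python
import numpy as np

rng = np.random.default_rng(2)
n_days, n_leads = 200, 7
eto_ref = 4.0 + rng.normal(0, 0.5, (n_days, n_leads))              # reference ETo (mm/day)
eto_fcst = eto_ref + 0.6 + rng.normal(0, 0.8, (n_days, n_leads))   # biased raw forecast

train, test = slice(0, 150), slice(150, None)
bias = (eto_fcst[train] - eto_ref[train]).mean(axis=0)   # one bias estimate per lead time
eto_corr = eto_fcst[test] - bias

rmse_raw = np.sqrt(((eto_fcst[test] - eto_ref[test]) ** 2).mean(axis=0))
rmse_cor = np.sqrt(((eto_corr - eto_ref[test]) ** 2).mean(axis=0))
print("raw RMSE by lead:      ", np.round(rmse_raw, 2))
print("corrected RMSE by lead:", np.round(rmse_cor, 2))
```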
2002-07-29
suggestions, and guidance concerning the technology assessment process. References: 1. Using ACEIT for Total Ownership Cost Modeling and Analysis…; 2001 World Population Data Sheet, Population Reference Bureau, Washington, DC. List of Acronyms: ACEIT – Automated Cost Estimating Integrated…
Read, Jessica; Pincus, Tamar
2004-12-01
Depressive symptoms are common in chronic pain. Previous research has found differences in information-processing biases in depressed pain patients and depressed people without pain. The schema enmeshment model of pain (SEMP) has been proposed to explain chronic pain patients' information-processing biases. Negative future thinking is common in depression but has not been explored in relation to chronic pain and information-processing models. The study aimed to test the SEMP with reference to future thinking. An information-processing paradigm compared endorsement and recall bias between depressed and non-depressed chronic low back pain patients and control participants. Twenty-five depressed and 35 non-depressed chronic low back pain patients and 25 control participants (student osteopaths) were recruited from an osteopathy practice. Participants were asked to endorse positive and negative ill-health, depression-related, and neutral (control) adjectives, encoded in reference to either current or future time-frame. Incidental recall of the adjectives was then tested. While the expected hypothesis of a recall bias by depressed pain patients towards ill-health stimuli in the current condition was confirmed, the recall bias was not present in the future condition. Additionally, patterns of endorsement and recall bias differed. Results extend understanding of future thinking in chronic pain within the context of the SEMP.
Self-Reference Acts as a Golden Thread in Binding.
Sui, Jie
2016-07-01
In a recent article in this journal, Glyn Humphreys and I proposed a model of how self-reference enhances binding in perception and cognition [1]. We showed that self-reference changes particular functional processes; notably, self-reference increases binding between the features of stimuli and between different stages of processing. Lane and colleagues [2] provide an interesting comment on our article that suggests our theory of self-reference is compatible with Dennett's philosophical perspective on the narrative nature of the self. Although the nature of the self has attracted the attention of both philosophers and scientists, the two disciplines have generated different perspectives on the functions of the self, largely due to their different methodologies. For example, Dennett argues that the self is constituted through human narration on experience [3]. By contrast, work from psychologists and cognitive neuroscientists focuses on the functional and neural mechanisms of self-reference. Copyright © 2016 Elsevier Ltd. All rights reserved.
Air Force Systems Engineering Assessment Model (AF SEAM) Management Guide, Version 2
2010-09-21
gleaned from experienced professionals who assisted with the model's development. Examples of the references used include the following: • ISO/IEC…Defense Acquisition Guidebook, Chapter 4 • AFI 63-1201, Life Cycle Systems Engineering • IEEE/EIA 12207, Software Life Cycle Processes • Air…Selection criteria. Reference Material: IEEE/EIA 12207, MIL-HDBK-514. Other Considerations: Modeling, simulation and analysis techniques can be…
An approach for formalising the supply chain operations
NASA Astrophysics Data System (ADS)
Zdravković, Milan; Panetto, Hervé; Trajanović, Miroslav; Aubry, Alexis
2011-11-01
Reference models play an important role in the knowledge management of various complex collaboration domains (such as supply chain networks). However, they often show a lack of semantic precision and are sometimes incomplete. In this article, we present an approach to overcome the semantic inconsistencies and incompleteness of the Supply Chain Operations Reference (SCOR) model and hence improve its usefulness and expand its application domain. First, we describe a literal web ontology language (OWL) specification of SCOR concepts (and related tools), built with the intention of preserving the original approach to the classification of process reference model entities and hence enabling effective usage in the original contexts. Next, we demonstrate the system for its exploitation, specifically tools for SCOR framework browsing and rapid supply chain process configuration. Then, we describe the SCOR-Full ontology and its relations with relevant domain ontologies, and show how it can be exploited to improve the competence of the SCOR ontological framework. Finally, we elaborate on the potential impact of the presented approach on the interoperability of systems in supply chain networks.
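A minimal sketch of expressing a SCOR-like process hierarchy in OWL is shown below using rdflib; the namespace and class names are illustrative and do not reproduce the SCOR-Full ontology described in the article:

```python
from rdflib import Graph, Namespace, RDF, RDFS, Literal
from rdflib.namespace import OWL

# Illustrative namespace; not the actual SCOR ontology IRI.
SCOR = Namespace("http://example.org/scor#")
g = Graph()
g.bind("scor", SCOR)

# A top-level Process class with the five familiar SCOR process types as subclasses.
g.add((SCOR.Process, RDF.type, OWL.Class))
for name in ("Plan", "Source", "Make", "Deliver", "Return"):
    cls = SCOR[name]
    g.add((cls, RDF.type, OWL.Class))
    g.add((cls, RDFS.subClassOf, SCOR.Process))
    g.add((cls, RDFS.label, Literal(name)))

print(g.serialize(format="turtle"))
```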
Barczi, Jean-François; Rey, Hervé; Caraglio, Yves; de Reffye, Philippe; Barthélémy, Daniel; Dong, Qiao Xue; Fourcaud, Thierry
2008-05-01
AmapSim is a tool that implements a structural plant growth model based on a botanical theory and simulates plant morphogenesis to produce accurate, complex and detailed plant architectures. This software is the result of more than a decade of research and development devoted to plant architecture. New advances in the software development have yielded plug-in external functions that open up the simulator to functional processes. The simulation of plant topology is based on the growth of a set of virtual buds whose activity is modelled using stochastic processes. The geometry of the resulting axes is modelled by simple descriptive functions. The potential growth of each bud is represented by means of a numerical value called physiological age, which controls the value for each parameter in the model. The set of possible values for physiological ages is called the reference axis. In order to mimic morphological and architectural metamorphosis, the value allocated for the physiological age of buds evolves along this reference axis according to an oriented finite state automaton whose occupation and transition law follows a semi-Markovian function. Simulations were performed on tomato plants to demonstrate how the AmapSim simulator can interface external modules, e.g. a GREENLAB growth model and a radiosity model. The algorithmic ability provided by AmapSim, e.g. the reference axis, enables unified control to be exercised over plant development parameter values, depending on the biological process target: how to affect the local pertinent process, i.e. the pertinent parameter(s), while keeping the rest unchanged. This opening up to external functions also offers a broadened field of applications and thus allows feedback between plant growth and the physical environment.
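The bud mechanism described above can be sketched, in strongly simplified form, as a plain Markov chain over physiological-age states along a reference axis; the states, the forward-only transition matrix and the number of growth cycles are illustrative assumptions rather than AmapSim's actual semi-Markovian automaton:

```python
import numpy as np

rng = np.random.default_rng(3)
states = [0, 1, 2, 3]                  # physiological ages along the reference axis
P = np.array([[0.7, 0.3, 0.0, 0.0],    # oriented: only forward transitions allowed
              [0.0, 0.6, 0.4, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.0, 1.0]])

def simulate_bud(n_growth_cycles=15, start=0):
    """Return the sequence of physiological ages visited by one virtual bud."""
    age, path = start, [start]
    for _ in range(n_growth_cycles):
        age = rng.choice(states, p=P[age])
        path.append(int(age))
    return path

print(simulate_bud())
```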
Validating archetypes for the Multiple Sclerosis Functional Composite.
Braun, Michael; Brandt, Alexander Ulrich; Schulz, Stefan; Boeker, Martin
2014-08-03
Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects are not yet sufficiently addressed. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions. A standard archetype development approach was applied to a case set of three clinical tests for multiple sclerosis assessment: after an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process. Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming tasks, mostly due to difficult selection processes between alternative modelling approaches. The archetype review was a straightforward team process with the goal of validating archetypes pragmatically. The quality of medical information models is crucial to guarantee standardised semantic representation in order to improve interoperability. The validation process is a practical way to better harmonise models that diverge due to the necessary flexibility left open by the underlying formal reference model definitions. This case study provides evidence that both community- and tool-enabled review processes, structured in the Clinical Knowledge Manager, ensure archetype quality. It offers a pragmatic but feasible way to reduce variation in the representation of clinical information models towards a more unified and interoperable model.
DOT National Transportation Integrated Search
2013-08-01
The Texas Department of Transportation : (TxDOT) created a standardized trip-based : modeling approach for travel demand modeling : called the Texas Package Suite of Travel Demand : Models (referred to as the Texas Package) to : oversee the travel de...
Gaussian process models for reference ET estimation from alternative meteorological data sources
USDA-ARS's Scientific Manuscript database
Accurate estimates of daily crop evapotranspiration (ET) are needed for efficient irrigation management, especially in arid and semi-arid regions where crop water demand exceeds rainfall. Daily grass or alfalfa reference ET values and crop coefficients are widely used to estimate crop water demand. ...
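A minimal sketch of a Gaussian process model mapping routinely available weather variables to daily reference ET is given below; the synthetic inputs, the linear "truth" and the kernel choice are placeholders for the alternative meteorological data sources considered in the manuscript:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
n = 300
tmax = rng.uniform(15, 40, n)            # daily max temperature (deg C)
rh = rng.uniform(20, 90, n)              # relative humidity (%)
wind = rng.uniform(0.5, 6.0, n)          # wind speed (m/s)
X = np.column_stack([tmax, rh, wind])
eto = 0.2 * tmax - 0.03 * rh + 0.4 * wind + rng.normal(0, 0.3, n)  # synthetic ETo (mm/day)

kernel = RBF(length_scale=[10.0, 20.0, 2.0]) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[:250], eto[:250])
pred, std = gpr.predict(X[250:], return_std=True)
rmse = np.sqrt(np.mean((pred - eto[250:]) ** 2))
print(f"hold-out RMSE: {rmse:.2f} mm/day")
```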
Background | Office of Cancer Clinical Proteomics Research
The term "proteomics" refers to a large-scale comprehensive study of a specific proteome resulting from its genome, including abundances of proteins, their variations and modifications, and interacting partners and networks in order to understand cellular processes involved. Similarly, “Cancer proteomics” refers to comprehensive analyses of proteins and their derivatives translated from a specific cancer genome using a human biospecimen or a preclinical model (e.g., cultured cell or animal model).
2009-10-07
SECTION A. BUSINESS ENVIRONMENT. 1 INTRODUCTION. The Strategic Mobility 21 (SM21) program is currently in the process of developing the Joint…Platform (BPP) which enables the ability to rapidly compose new business processes and expand the core TMS feature-set to adapt to the challenges…Reference: Strategic Mobility 21 Contract N00014-06-C-0060. Dear Paul, in accordance with the requirements of the referenced contract, we are pleased to…
Perceptual Integration and Differentiation of Directions in Moving Patterns
1981-08-01
ceBssay and identify by block numnbe,) o ~ b 20 ABSTRACT (Continue oil rel’erse side II necosary aid idonlty, by block number) F . A- 1981. (-7 ATTACHED...process, are discussed. REFERENCES Mather, G. and Moulden, B . A simultaneous shift in apparent direction: Further evidence for a "distribution- shift" model...summing process, are discussed. REFERENCES Mather, G. and Moulden, B . A simultaneous shift in apparent direction: Further evidence for a "distribution
ERIC Educational Resources Information Center
King, Roger
2010-01-01
This article analyzes policy convergence and the adoption of globalizing models by higher education states, a process we describe, following Thatcher (2007), as policy internationalization. This refers to processes found in many policy domains and which increasingly are exemplified in tertiary education systems too. The focus is on governmental…
Hybrid pregnant reference phantom series based on adult female ICRP reference phantom
NASA Astrophysics Data System (ADS)
Rafat-Motavalli, Laleh; Miri-Hakimabad, Hashem; Hoseinian-Azghadi, Elie
2018-03-01
This paper presents boundary representation (BREP) models of a pregnant female and her fetus at the end of each trimester. The International Commission on Radiological Protection (ICRP) female reference voxel phantom was used as a base template in the development process of the pregnant hybrid phantom series. The differences in shape and location of the displaced maternal organs caused by the enlarging uterus were also taken into account. CT and MR images of fetus specimens and pregnant patients of various ages were used to replace the maternal abdominal-pelvic organs of the template phantom and to insert the fetus inside the gravid uterus. Each fetal model contains 21 different organs and tissues. The skeletal model of the fetus also includes age-dependent cartilaginous and ossified skeletal components. The replaced maternal organ models were converted to NURBS surfaces and then modified to conform to the reference values of ICRP Publication 89. A particular feature of the current series, compared to previously developed pregnant phantoms, is that it is constructed on the basis of the ICRP reference phantom. The maternal replaced organ models are NURBS surfaces; as such, they can feasibly be converted to high-quality polygon mesh phantoms.
CD-SEM real time bias correction using reference metrology based modeling
NASA Astrophysics Data System (ADS)
Ukraintsev, V.; Banke, W.; Zagorodnev, G.; Archie, C.; Rana, N.; Pavlovsky, V.; Smirnov, V.; Briginas, I.; Katnani, A.; Vaid, A.
2018-03-01
Accuracy of patterning impacts yield, IC performance and technology time to market. Accuracy of patterning relies on optical proximity correction (OPC) models built using CD-SEM inputs and on intra-die critical dimension (CD) control based on CD-SEM. Sub-nanometer measurement uncertainty (MU) of CD-SEM is required for current technologies. Reported design- and process-related bias variation of CD-SEM is in the range of several nanometers. Reference metrology and numerical modeling are used to correct SEM. Both methods are too slow to be used for real-time bias correction. We report on real-time CD-SEM bias correction using empirical models based on reference metrology (RM) data. A significant amount of currently untapped information (sidewall angle, corner rounding, etc.) is obtainable from SEM waveforms. Using additional RM information provided for a specific technology (design rules, materials, processes), CD extraction algorithms can be pre-built and then used in real time for accurate CD extraction from regular CD-SEM images. The art and challenge of SEM modeling is in finding a robust correlation between SEM waveform features and the bias of CD-SEM, as well as in minimizing the RM inputs needed to create an accurate (within the design and process space) model. The new approach was applied to improve the CD-SEM accuracy of 45 nm GATE and 32 nm MET1 OPC 1D models. In both cases the MU of the state-of-the-art CD-SEM has been improved by 3x and reduced to a nanometer level. A similar approach can be applied to 2D (end of line, contours, etc.) and 3D (sidewall angle, corner rounding, etc.) cases.
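A minimal sketch of such an empirical, reference-metrology-trained bias model is given below: a regression from the raw SEM CD plus a hypothetical waveform-derived feature to the reference CD, with synthetic data standing in for real RM and waveform measurements:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 120
true_cd = rng.uniform(30, 50, n)                     # reference CD (nm), e.g. from RM
edge_slope = rng.uniform(0.5, 2.0, n)                # hypothetical waveform feature
sem_cd = true_cd + 1.5 * (edge_slope - 1.2) + rng.normal(0, 0.3, n)  # biased SEM CD

X = np.column_stack([sem_cd, edge_slope])
model = LinearRegression().fit(X[:80], true_cd[:80])  # train on the RM-calibrated subset
corrected = model.predict(X[80:])

raw_err = np.std(sem_cd[80:] - true_cd[80:])
cor_err = np.std(corrected - true_cd[80:])
print(f"raw 1-sigma error {raw_err:.2f} nm -> corrected {cor_err:.2f} nm")
```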
NASA Astrophysics Data System (ADS)
Zou, X.; Deng, Z.; Ge, M.; Dick, G.; Jiang, W.; Liu, J.
2010-07-01
In order to obtain crustal deformations of higher spatial resolution, existing GPS networks must be densified. This densification can be carried out using single-frequency receivers at moderate cost. However, ionospheric delay handling is required in the data processing. We adapt the Satellite-specific Epoch-differenced Ionospheric Delay model (SEID) for GPS networks with mixed single- and dual-frequency receivers. The SEID model is modified to utilize the observations from the three nearest dual-frequency reference stations in order to avoid contamination from more remote stations. As data from only three stations are used, an efficient missing-data reconstruction approach based on polynomial fitting is implemented to minimize data losses. Data from large-scale reference networks extended with single-frequency receivers can now be processed based on the adapted SEID model. A new data processing scheme is developed in order to make use of existing GPS data processing software packages without any modifications. This processing scheme is evaluated using a sub-network of the German SAPOS network. The results verify that the new scheme provides an efficient way to densify existing GPS networks with single-frequency receivers.
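The gap-filling idea can be sketched as follows: fit a low-order polynomial to epochs surrounding a short data gap and evaluate it at the missing epochs; the synthetic delay series, window and polynomial degree are illustrative assumptions, not the SEID implementation details:

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(120.0)                                   # epochs
delay = (2.0 + 0.01 * t + 0.5 * np.sin(2 * np.pi * t / 60)
         + rng.normal(0, 0.02, t.size))                # ionospheric-delay-like series
delay[50:55] = np.nan                                  # simulate a short data gap

gap = np.isnan(delay)
window = (t > 35) & (t < 70) & ~gap                    # epochs around the gap
coeffs = np.polyfit(t[window], delay[window], deg=3)   # low-order polynomial fit
delay_filled = delay.copy()
delay_filled[gap] = np.polyval(coeffs, t[gap])         # reconstruct the missing epochs
print(np.round(delay_filled[48:57], 3))
```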
2009-02-01
range of modal analysis and the high frequency region of statistical energy analysis, is referred to as the mid-frequency range. The corresponding…predictions. The averaging process is consistent with the averaging done in statistical energy analysis for stochastic systems. The FEM will always…
An evaluation of light intensity functions for determination of shaded reference stream metabolism.
Zell, Chris; Hubbart, Jason A
2012-04-30
The performance of three single-station whole-stream metabolism models was evaluated within three shaded, seasonally hypoxic Missouri reference streams using high-resolution (15-minute) dissolved oxygen (DO), temperature, and light intensity data collected during the summers (July-September) of 2006-2008. The model incorporating light intensity data consistently achieved a lower root mean square error (median RMSE = 0.20 mg L(-1)) relative to models assuming sinusoidal light intensity functions (median RMSE = 0.28 mg L(-1)) and constant diel temperature (median RMSE = 0.53 mg L(-1)). Incorporation of site-specific light intensity into metabolism models better predicted morning DO concentrations and exposure to hypoxic conditions in shaded study streams. Model choice significantly affected (p < 0.05) rate estimates for daily average photosynthesis. Low reaeration (pooled site mean 1.1 day(-1) at 20 °C) coupled with summer temperatures (pooled site mean = 25.8 °C) and low to moderate community respiration (site median 1.0-3.0 g O(2) m(-2) day(-1)) yielded diel dissolved oxygen concentrations near or below critical aquatic life thresholds in the studied reference streams. Quantifying these process combinations in best-available or least-disturbed (i.e., reference) systems advances our understanding of regional dissolved oxygen expectations and informs environmental management policy. Additional research is warranted to better link landscape processes with distributed sources that contribute to community respiration. Copyright © 2011 Elsevier Ltd. All rights reserved.
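A minimal sketch of a single-station metabolism model driven by a measured light series is given below; the dissolved oxygen balance (photosynthesis scaled by light, constant respiration, reaeration towards saturation) and all parameter values are illustrative, not the study's calibrated models:

```python
import numpy as np

dt_h = 0.25                                               # 15-minute steps (hours)
t = np.arange(0, 24, dt_h)
light = np.clip(np.sin(np.pi * (t - 6) / 12), 0, None)    # stand-in for measured light

gpp = 3.0      # daily gross primary production (g O2 m^-2 d^-1)
er = 2.0       # community respiration (g O2 m^-2 d^-1)
k = 1.1        # reaeration coefficient (d^-1)
z = 0.3        # mean depth (m)
do_sat = 8.0   # saturation DO (mg/L)

do = np.empty_like(t)
do[0] = 7.0
for i in range(1, t.size):
    photo = gpp * light[i] / light.sum() / dt_h           # distribute daily GPP by light
    ddo = (photo - er / 24.0) / z + k / 24.0 * (do_sat - do[i - 1])
    do[i] = do[i - 1] + ddo * dt_h                        # mg/L per 15-minute step
print(f"minimum DO {do.min():.2f} mg/L at hour {t[do.argmin()]:.2f}")
```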
Reference Architecture Model Enabling Standards Interoperability.
Blobel, Bernd
2017-01-01
Advanced health and social services paradigms are supported by a comprehensive set of domains managed by different scientific disciplines. Interoperability has to evolve beyond information and communication technology (ICT) concerns to include the real-world business domains and their processes, as well as the individual context of all actors involved. Thus, the system must properly reflect the environment in front of and around the computer as an essential and even defining part of the health system. This paper introduces an ICT-independent, system-theoretical, ontology-driven reference architecture model allowing the representation and harmonization of all domains involved, including the transformation into an appropriate ICT design and implementation. The entire process is completely formalized and can therefore be fully automated.
Using Dual Process Models to Examine Impulsivity Throughout Neural Maturation.
Leshem, Rotem
2016-01-01
The multivariate construct of impulsivity is examined through neural systems and connections that comprise the executive functioning system. It is proposed that cognitive and behavioral components of impulsivity can be divided into two distinct groups, mediated by (1) the cognitive control system: deficits in top-down cognitive control processes referred to as action/cognitive impulsivity and (2) the socioemotional system: related to bottom-up affective/motivational processes referred to as affective impulsivity. Examination of impulsivity from a developmental viewpoint can guide future research, potentially enabling the selection of more effective interventions for impulsive individuals, based on the cognitive components requiring improvement.
Scatterometry-based metrology for SAQP pitch walking using virtual reference
NASA Astrophysics Data System (ADS)
Kagalwala, Taher; Vaid, Alok; Mahendrakar, Sridhar; Lenahan, Michael; Fang, Fang; Isbester, Paul; Shifrin, Michael; Etzioni, Yoav; Cepler, Aron; Yellai, Naren; Dasari, Prasad; Bozdog, Cornel
2016-03-01
Advanced technology nodes, 10 nm and beyond, employing multi-patterning techniques for pitch reduction pose new process and metrology challenges in maintaining consistent positioning of structural features. The Self-Aligned Quadruple Patterning (SAQP) process is used to create the fins in FinFET devices with pitch values well below optical lithography limits. The SAQP process bears the compounding effects of successive Reactive Ion Etch (RIE) and spacer deposition steps. These processes induce a shift in the pitch value from one fin compared to another neighboring fin. This is known as pitch walking. Pitch walking affects device performance as well as later processes which work on the assumption that there is consistent spacing between fins. In SAQP there are three pitch walking parameters of interest, each linked to specific process steps in the flow. These pitch walking parameters are difficult to discriminate at a specific process step by a singular evaluation technique or even with reference metrology such as Transmission Electron Microscopy (TEM). In this paper we utilize a virtual reference to generate a scatterometry model to measure pitch walk for the SAQP process flow.
MENA 1.1 - An Updated Geophysical Regionalization of the Middle East and North Africa
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walters, B.; Pasyanos, M.E.; Bhattacharyya, J.
2000-03-01
This short report provides an update to the earlier LLNL paper entitled ''Preliminary Definition of Geophysical Regions for the Middle East and North Africa'' (Sweeney and Walter, 1998). This report is designed to be used in combination with that earlier paper. The reader is referred to Sweeney and Walter (1998) for all details, including definitions, references, uses, shortcomings, etc., of the regionalization process. In this report we will discuss only those regions in which we have changed the boundaries or velocity structure from that given by the original paper. The paper by Sweeney and Walter (1998) drew on a variety of sources to estimate a preliminary, first-order regionalization of the Middle East and North Africa (MENA), providing regional boundaries and velocity models within each region. The model attempts to properly account for major structural discontinuities and significant crustal thickness and velocity variations on a gross scale. The model can be used to extrapolate sparse calibration data within a distinct geophysical region. This model can also serve as a background model in the process of forming station calibration maps using intelligent interpolation techniques such as kriging, extending the calibration into aseismic areas. Such station maps can greatly improve the ability to locate and identify seismic events, which in turn improves the ability to seismically monitor for underground nuclear testing. The original model from Sweeney and Walter (1998) was digitized to a 1° resolution; for simplicity we will hereafter refer to this model as MENA 1.0. The new model described here has also been digitized to a 1° resolution and will be referred to as MENA 1.1 throughout this report.
Sun, Yangbo; Chen, Long; Huang, Bisheng; Chen, Keli
2017-07-01
As a mineral, the traditional Chinese medicine calamine has a similar appearance to many other minerals. Investigations of commercially available calamine samples have shown that many fake and inferior calamine goods are sold on the market. The conventional identification method for calamine is complicated; therefore, given the large number of calamine samples, a rapid identification method is needed. To establish a qualitative model using near-infrared (NIR) spectroscopy for rapid identification of various calamine samples, large quantities of calamine samples, including crude products, counterfeits and processed products, were collected and correctly identified using physicochemical and powder X-ray diffraction methods. The NIR spectroscopy method was used to analyze these samples by combining the multi-reference correlation coefficient (MRCC) method and the error back-propagation artificial neural network algorithm (BP-ANN), so as to realize the qualitative identification of calamine samples. The accuracy rate of the model based on the NIR and MRCC methods was 85%; in addition, the model, which took multiple factors into consideration, can be used to identify crude calamine products, counterfeits and processed products. Furthermore, by inputting the correlation coefficients of multiple references as the spectral feature data of the samples into the BP-ANN, a BP-ANN model for qualitative identification was established, whose accuracy rate increased to 95%. The MRCC method can be used as a NIR-based method in the process of BP-ANN modeling.
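The two-step idea, correlation against several reference spectra followed by a back-propagation neural network, can be sketched as follows; the spectra, class labels and network settings are synthetic placeholders:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
n_wl = 150
references = rng.normal(size=(3, n_wl))                 # e.g. crude / processed / counterfeit
labels = rng.integers(0, 3, 90)
spectra = references[labels] + 0.3 * rng.normal(size=(90, n_wl))

def mrcc_features(spec):
    # correlation coefficient of one spectrum against each reference spectrum
    return np.array([np.corrcoef(spec, r)[0, 1] for r in references])

X = np.array([mrcc_features(s) for s in spectra])
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X[:70], labels[:70])
print("hold-out accuracy:", clf.score(X[70:], labels[70:]))
```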
Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M
2015-11-01
This paper proposes a novel model-free trajectory tracking approach for multiple-input multiple-output (MIMO) systems based on the combination of iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is then recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed straightforwardly, without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed into a set of optimization problems assigned to each separate single-input single-output control channel, which ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach, and experimental results are given.
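A minimal sketch of the iterative learning idea, reduced to a P-type ILC update on a toy single-channel plant, is shown below; it omits the primitive library and the virtual reference feedback tuning of the decoupling controller that the paper combines it with:

```python
import numpy as np

def plant(u, a=0.9, b=0.5):
    """Toy first-order plant; its parameters are unknown to the learning scheme."""
    y = np.zeros_like(u)
    for k in range(1, u.size):
        y[k] = a * y[k - 1] + b * u[k - 1]
    return y

t = np.arange(100)
reference = np.sin(2 * np.pi * t / 50)        # trajectory to track
u = np.zeros_like(reference)                  # initial reference input
gain = 0.8                                    # ILC learning gain

for trial in range(30):
    y = plant(u)
    error = reference - y
    u[:-1] += gain * error[1:]                # P-type update, shifted by one sample

print(f"final RMS tracking error: {np.sqrt(np.mean((reference - plant(u)) ** 2)):.4f}")
```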
NKG201xGIA - first results for a new model of glacial isostatic adjustment in Fennoscandia
NASA Astrophysics Data System (ADS)
Steffen, Holger; Barletta, Valentina; Kollo, Karin; Milne, Glenn A.; Nordman, Maaria; Olsson, Per-Anders; Simpson, Matthew J. R.; Tarasov, Lev; Ågren, Jonas
2016-04-01
Glacial isostatic adjustment (GIA) is a dominant process in northern Europe, which is observed with several geodetic and geophysical methods. The observed land uplift due to this process amounts to about 1 cm/year in the northern Gulf of Bothnia. GIA affects the establishment and maintenance of reliable geodetic and gravimetric reference networks in the Nordic countries. To support a high level of accuracy in the determination of position, adequate corrections have to be applied with dedicated models. Currently, there are efforts within a Nordic Geodetic Commission (NKG) activity towards a model of glacial isostatic adjustment for Fennoscandia. The new model, NKG201xGIA, to be developed in the near future will complement the forthcoming empirical NKG land uplift model, which will substitute the currently used empirical land uplift model NKG2005LU (Ågren & Svensson, 2007). Together, the models will be a reference for vertical and horizontal motion, gravity and geoid change and more. NKG201xGIA will also provide uncertainty estimates for each field. Following former investigations, the GIA model is based on a combination of an ice and an earth model. The selected reference ice model, GLAC, for Fennoscandia, the Barents/Kara seas and the British Isles is provided by Lev Tarasov and co-workers. Tests of different ice and earth models will be performed based on the expertise of each involved modeler. This includes studies on high resolution ice sheets, different rheologies, lateral variations in lithosphere and mantle viscosity and more. This will also be done in co-operation with scientists outside NKG who help in the development and testing of the model. References Ågren, J., Svensson, R. (2007): Postglacial Land Uplift Model and System Definition for the New Swedish Height System RH 2000. Reports in Geodesy and Geographical Information Systems Rapportserie, LMV-Rapport 4, Lantmäteriet, Gävle.
NASA Astrophysics Data System (ADS)
Bobojc, Andrzej; Drozyner, Andrzej
2016-04-01
This work contains a comparative study of the performance of twenty geopotential models in the orbit estimation process for the satellite of the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) mission. The tested models include, among others, JYY_GOCE02S, ITG-GOCE02, ULUX_CHAMP2013S, GOGRA02S, ITG-GRACE2010S, EIGEN-51C, EGM2008, EGM96, JGM3, OSU91a and OSU86F. A special software package, called the Orbital Computation System (OCS), based on the classical method of least squares was used. Within OCS, the initial satellite state vector components are corrected in an iterative process, using the given geopotential model and the models describing the remaining gravitational perturbations. An important part of the OCS package is the 8th-order Cowell numerical integration procedure, which enables satellite orbit computation. Different sets of pseudorange simulations along reference GOCE satellite orbital arcs were obtained using real orbits of the Global Positioning System (GPS) satellites. These sets were the basic observation data used in the adjustment. The centimeter-accuracy Precise Science Orbit (PSO) for the GOCE satellite provided by the European Space Agency (ESA) was adopted as the GOCE reference orbit. By comparing various variants of the orbital solutions, the relative accuracy of the geopotential models in an orbital sense is determined. Full geopotential models were used in the adjustment process; solutions were also determined with truncated geopotential models, in which case the accuracy of the estimated orbit was slightly enhanced. The obtained solutions refer to orbital arcs with lengths of 90 minutes and 1 day.
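The iterative correction of the initial state vector described above can be sketched as a simple differential-correction loop. The following is a minimal, illustrative stand-in (not the OCS package): a basic RK4 two-body propagator replaces the 8th-order Cowell integrator, partial derivatives are formed by finite differences, and all function names, step sizes and perturbation sizes are assumptions.

```python
import numpy as np

MU = 3.986004418e14            # Earth's gravitational parameter [m^3/s^2]

def accel(r):
    return -MU * r / np.linalg.norm(r) ** 3

def propagate(x0, times, dt=10.0):
    """Propagate state [rx, ry, rz, vx, vy, vz] with a simple RK4 two-body model
    and return positions at the requested epochs (ascending, in seconds).
    Stand-in for the 8th-order Cowell integrator mentioned in the abstract."""
    out, x, t = [], np.array(x0, float), 0.0
    for t_obs in times:
        while t < t_obs - 1e-9:
            h = min(dt, t_obs - t)
            def f(s):
                return np.hstack((s[3:], accel(s[:3])))
            k1 = f(x); k2 = f(x + 0.5 * h * k1); k3 = f(x + 0.5 * h * k2); k4 = f(x + h * k3)
            x = x + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += h
        out.append(x[:3].copy())
    return np.array(out)

def estimate_initial_state(x_guess, times, ranges, transmitters, iters=5):
    """Iterative least-squares correction of the initial state vector from
    pseudo-range observations; transmitters holds the transmitter positions
    (e.g. GPS satellites) at the observation epochs, shape (N, 3)."""
    x = np.array(x_guess, float)
    for _ in range(iters):
        pos = propagate(x, times)
        computed = np.linalg.norm(pos - transmitters, axis=1)
        resid = ranges - computed
        H = np.zeros((len(times), 6))
        for j in range(6):                       # numerical partials d(range)/d(x0_j)
            dx = np.zeros(6); dx[j] = 1.0 if j < 3 else 1e-3
            pos_p = propagate(x + dx, times)
            H[:, j] = (np.linalg.norm(pos_p - transmitters, axis=1) - computed) / dx[j]
        x += np.linalg.lstsq(H, resid, rcond=None)[0]
    return x
```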
Short Pulse UV-Visible Waveguide Laser.
1980-07-01
Only fragments of the report text are available. The table of contents lists sections on relaxation processes, an equivalent circuit, and kinetic modeling. Figure 6 shows the temporal evolution of the current, various N2+ densities, and the electron density, and Table 1 lists the relaxation reaction rates used in the He-N2 kinetic model, including helium metastable and dissociative processes.
Urschler, Martin; Höller, Johannes; Bornik, Alexander; Paul, Tobias; Giretzlehner, Michael; Bischof, Horst; Yen, Kathrin; Scheurer, Eva
2014-08-01
The increasing use of CT/MR devices in forensic analysis motivates the need to present forensic findings from different sources in an intuitive reference visualization, with the aim of combining 3D volumetric images with digital photographs of external findings into a 3D computer graphics model. This model allows a comprehensive presentation of forensic findings in court and enables comparative evaluation studies correlating data sources. The goal of this work was to investigate different methods to generate anonymous and patient-specific 3D models which may be used as reference visualizations. The issue of registering 3D volumetric as well as 2D photographic data to such 3D models is addressed to provide an intuitive context for injury documentation from arbitrary modalities. We present an image processing and visualization work-flow, discuss the major parts of this work-flow, compare the different investigated reference models, and show a number of case studies that underline the suitability of the proposed work-flow for presenting forensically relevant information in 3D visualizations. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Li, Yihe; Li, Bofeng; Gao, Yang
2015-01-01
With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network. PMID:26633400
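A minimal sketch of the stochastic treatment of interpolated corrections described above, assuming independent reference-station errors and a simple, made-up distance-dependent discrepancy model; the function name and the coefficients c0/c1 are hypothetical, not the paper's model.

```python
import numpy as np

def interp_correction(ref_corr, ref_var, weights, inter_station_dist_km,
                      c0=1e-4, c1=1e-6):
    """Interpolated atmospheric correction and its variance for use as a
    pseudo-observation.  Two error components are modelled:
    (1) propagated random error of the corrections estimated at the reference
        stations (assumed independent here), and
    (2) a distance-dependent discrepancy between reference stations and user
        (a purely illustrative linear model var = c0 + c1 * distance)."""
    weights = np.asarray(weights, float)
    corr = weights @ np.asarray(ref_corr, float)
    var_est = weights ** 2 @ np.asarray(ref_var, float)      # component (1)
    var_dis = c0 + c1 * inter_station_dist_km                # component (2)
    return corr, var_est + var_dis

# Example: three reference stations, user roughly 60 km from the network centre.
corr, var = interp_correction(ref_corr=[0.012, 0.015, 0.010],   # metres
                              ref_var=[4e-6, 5e-6, 4e-6],
                              weights=[0.4, 0.35, 0.25],
                              inter_station_dist_km=60.0)
print(f"pseudo-observation: {corr:.4f} m, std: {np.sqrt(var) * 1000:.2f} mm")
```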
de Lusignan, S; Krause, P; Michalakidis, G; Vicente, M Tristan; Thompson, S; McGilchrist, M; Sullivan, F; van Royen, P; Agreus, L; Desombre, T; Taweel, A; Delaney, B
2012-01-01
To perform a requirements analysis of the barriers to conducting research linking primary care, genetic and cancer data, we extended our initial data-centric approach to include socio-cultural and business requirements. We created reference models of core data requirements common to most studies using unified modelling language (UML), dataflow diagrams (DFD) and business process modelling notation (BPMN). We conducted a stakeholder analysis and constructed DFD and UML diagrams for use cases based on simulated research studies. We used research output as a sensitivity analysis. Differences between the reference model and the use cases identified study-specific data requirements. The stakeholder analysis identified tensions, changes in specification, some indifference from data providers and enthusiastic informaticians urging inclusion of socio-cultural context. We identified requirements to collect information at three levels: micro (data items, which need to be semantically interoperable), meso (the medical record and data extraction), and macro (the health system and socio-cultural issues). BPMN clarified complex business requirements among data providers and vendors, and additional geographical requirements for patients to be represented in both linked datasets. High-quality research output was the norm for most repositories. Reference models provide high-level schemata of the core data requirements; business requirements modelling, however, identifies stakeholder issues and what needs to be addressed to enable participation.
NASA Technical Reports Server (NTRS)
Nashman, Marilyn; Chaconas, Karen J.
1988-01-01
The sensory processing system for the NASA/NBS Standard Reference Model (NASREM) for telerobotic control is described. This control system architecture was adopted by NASA for the Flight Telerobotic Servicer. The control system is hierarchically designed and consists of three parallel systems: task decomposition, world modeling, and sensory processing. The Sensory Processing System is examined, with particular attention to the image processing hardware and software used to extract features at low levels of sensory processing for tasks representative of those envisioned for the Space Station, such as assembly and maintenance.
Evaluation of computing systems using functionals of a stochastic process
NASA Technical Reports Server (NTRS)
Meyer, J. F.; Wu, L. T.
1980-01-01
An intermediate model was used to represent the probabilistic nature of a total system at a level which is higher than the base model and thus closer to the performance variable. A class of intermediate models, generally referred to as functionals of a Markov process, was considered. A closed-form solution of performability was developed for the case where performance is identified with the minimum value of a functional.
The Dairy Greenhouse Gas Emission Model: Reference Manual
USDA-ARS?s Scientific Manuscript database
The Dairy Greenhouse Gas Model (DairyGHG) is a software tool for estimating the greenhouse gas emissions and carbon footprint of dairy production systems. A relatively simple process-based model is used to predict the primary greenhouse gas emissions, which include the net emission of carbon dioxide...
Using Goal Interactions to Guide Planning: The Program Model.
1987-04-01
Only a garbled fragment of the report text is available; the recoverable portion is a reference: [1] Berenji, H. R.; Khoshnevis, B. Use of Artificial Intelligence in Automated Process Planning.
Space Shuttle propulsion performance reconstruction from flight data
NASA Technical Reports Server (NTRS)
Rogers, Robert M.
1989-01-01
The application of extended Kalman filtering to estimating Space Shuttle Solid Rocket Booster (SRB) performance (specific impulse) from flight data in a post-flight processing computer program is described. The flight data used include inertial platform acceleration, SRB head pressure, and ground-based radar tracking data. The key feature in this application is the model used for the SRBs, which represents a reference quasi-static internal ballistics model normalized to the propellant burn depth. Dynamic states of mass overboard and propellant burn depth are included in the filter model to account for real-time deviations from the reference model. Aerodynamic, plume, wind and main engine uncertainties are included.
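For orientation, a generic extended Kalman filter predict/update skeleton of the kind used in such post-flight reconstruction is sketched below; the state and measurement models are placeholders supplied by the caller, not the actual SRB internal ballistics model.

```python
import numpy as np

class ExtendedKalmanFilter:
    """Minimal EKF skeleton.  The state could hold, e.g., specific impulse,
    mass overboard and propellant burn depth; the measurements, platform
    accelerations, head pressure and radar tracking.  The models passed in
    are placeholders, not the reference quasi-static ballistics model."""

    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def predict(self, f, F, dt):
        """f: nonlinear state transition, F: its Jacobian at the current state."""
        Phi = F(self.x, dt)
        self.x = f(self.x, dt)
        self.P = Phi @ self.P @ Phi.T + self.Q

    def update(self, z, h, H):
        """h: nonlinear measurement model, H: its Jacobian at the current state."""
        Hx = H(self.x)
        y = z - h(self.x)                         # innovation
        S = Hx @ self.P @ Hx.T + self.R
        K = self.P @ Hx.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ Hx) @ self.P
```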
A methodology and supply chain management inspired reference ontology for modeling healthcare teams.
Kuziemsky, Craig E; Yazdi, Sara
2011-01-01
Numerous studies and strategic plans are advocating more team-based healthcare delivery that is facilitated by information and communication technologies (ICTs). However, before we can design ICTs to support teams, we need a solid conceptual model of team processes and a methodology for using such a model in healthcare settings. This paper draws upon success in the supply chain management domain to develop a reference ontology of healthcare teams and a methodology for modeling teams to instantiate the ontology in specific settings. This research can help us understand how teams function and how we can design ICTs to support teams.
A reference model for space data system interconnection services
NASA Astrophysics Data System (ADS)
Pietras, John; Theis, Gerhard
1993-03-01
The widespread adoption of standard packet-based data communication protocols and services for spaceflight missions provides the foundation for other standard space data handling services. These space data handling services can be defined as increasingly sophisticated processing of data or information received from lower-level services, using a layering approach made famous in the International Organization for Standardization (ISO) Open System Interconnection Reference Model (OSI-RM). The Space Data System Interconnection Reference Model (SDSI-RM) incorporates the conventions of the OSI-RM to provide a framework within which a complete set of space data handling services can be defined. The use of the SDSI-RM is illustrated through its application to data handling services and protocols that have been defined by, or are under consideration by, the Consultative Committee for Space Data Systems (CCSDS).
DoD Acquisition Workforce Education: An SBA Education Case Study
ERIC Educational Resources Information Center
Davenport, Richard W.
2009-01-01
A Department of Defense (DoD) M&S education task force is in the process of studying the Modeling and Simulation (M&S) education of the acquisition workforce. Historically, DoD acquisition workforce education is not referred to as education, but rather what the Defense Acquisition University (DAU) refers to as "practitioner training, career…
76 FR 21815 - Airworthiness Directives; The Boeing Company Model 737 Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-19
...)(c) of this service bulletin provides instructions to check for obvious differences in thread shape between thread grooves ``as given in CMM 27-41-01.'' Air Tran noted that CMM 27-41-01 does not provide any... have revised this AD to refer to the new service information. We agree that processes referred to by...
Mechanisms of Reference Frame Selection in Spatial Term Use: Computational and Empirical Studies
ERIC Educational Resources Information Center
Schultheis, Holger; Carlson, Laura A.
2017-01-01
Previous studies have shown that multiple reference frames are available and compete for selection during the use of spatial terms such as "above." However, the mechanisms that underlie the selection process are poorly understood. In the current paper we present two experiments and a comparison of three computational models of selection…
ERIC Educational Resources Information Center
Young, I. Phillip; Young, Karen Holsey; Okhremtchouk, Irina; Castaneda, Jose Moreno
2009-01-01
Pay satisfaction was assessed according to different facets (pay level, benefits, pay structure, and pay raises) and potential referent groups (teachers and elementary school principals) for a random sample of male elementary school principals. A structural model approach was used that considers facets of the pay process, potential others as…
The environmental zero-point problem in evolutionary reaction norm modeling.
Ergon, Rolf
2018-04-01
There is a potential problem in present quantitative genetics evolutionary modeling based on reaction norms. Such models are state-space models, where the multivariate breeder's equation in some form is used as the state equation that propagates the population state forward in time. These models use the implicit assumption of a constant reference environment, in many cases set to zero. This zero-point is often the environment a population is adapted to, that is, where the expected geometric mean fitness is maximized. Such environmental reference values follow from the state of the population system, and they are thus population properties. The environment the population is adapted to, is, in other words, an internal population property, independent of the external environment. It is only when the external environment coincides with the internal reference environment, or vice versa, that the population is adapted to the current environment. This is formally a result of state-space modeling theory, which is an important theoretical basis for evolutionary modeling. The potential zero-point problem is present in all types of reaction norm models, parametrized as well as function-valued, and the problem does not disappear when the reference environment is set to zero. As the environmental reference values are population characteristics, they ought to be modeled as such. Whether such characteristics are evolvable is an open question, but considering the complexity of evolutionary processes, such evolvability cannot be excluded without good arguments. As a straightforward solution, I propose to model the reference values as evolvable mean traits in their own right, in addition to other reaction norm traits. However, solutions based on an evolvable G matrix are also possible.
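The proposal to treat the environmental reference value as an evolvable mean trait can be caricatured with a small numerical sketch. The reaction norm, fitness function, G matrix and update rule below are all illustrative assumptions, not Ergon's formulation; the sketch only shows the mechanics of propagating [intercept, slope, reference environment] with a breeder's-equation-style update.

```python
import numpy as np

G = np.diag([0.05, 0.02, 0.10])          # assumed additive genetic covariance matrix

def mean_phenotype(z, env):
    a, b, x0 = z
    return a + b * (env - x0)            # reaction norm relative to the reference env

def log_mean_fitness(z, env, optimum=2.0, width=1.0):
    y = mean_phenotype(z, env)
    return -0.5 * ((y - optimum) / width) ** 2   # Gaussian stabilising selection

def selection_gradient(z, env, eps=1e-5):
    beta = np.zeros_like(z)
    for i in range(len(z)):              # numerical gradient of log mean fitness
        dz = np.zeros_like(z); dz[i] = eps
        beta[i] = (log_mean_fitness(z + dz, env) -
                   log_mean_fitness(z - dz, env)) / (2 * eps)
    return beta

z = np.array([0.0, 1.0, 0.0])            # initial [intercept, slope, reference env]
for gen in range(200):
    env = 1.5                            # constant environment after an assumed shift
    z = z + G @ selection_gradient(z, env)   # multivariate breeder's equation dz = G beta
print("evolved [a, b, x0]:", np.round(z, 3))
```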
A Neurobehavioral Model of Flexible Spatial Language Behaviors
ERIC Educational Resources Information Center
Lipinski, John; Schneegans, Sebastian; Sandamirskaya, Yulia; Spencer, John P.; Schoner, Gregor
2012-01-01
We propose a neural dynamic model that specifies how low-level visual processes can be integrated with higher level cognition to achieve flexible spatial language behaviors. This model uses real-word visual input that is linked to relational spatial descriptions through a neural mechanism for reference frame transformations. We demonstrate that…
Questionable Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.
ERIC Educational Resources Information Center
Gleason, John M.
1993-01-01
This response to an earlier article on a combined log-linear/MDS model for mapping journals by citation analysis discusses the underlying assumptions of the Poisson model with respect to characteristics of the citation process. The importance of empirical data analysis is also addressed. (nine references) (LRW)
Yu, Hui; Qi, Dan; Li, Heng-da; Xu, Ke-xin; Yuan, Wei-jie
2012-03-01
Weak signals, a low instrument signal-to-noise ratio, continuous variation of the human physiological environment, and interference from other blood components make it difficult to extract blood glucose information from near-infrared spectra in noninvasive blood glucose measurement. The floating-reference method, which analyses the effect of glucose concentration variation on the absorption and scattering coefficients, acquires spectra at the reference point, where the light-intensity variations from absorption and scattering cancel each other, and at the measurement point, where they are largest. By using the spectrum from the reference point as a reference, the floating-reference method can reduce the interference from variations of the physiological environment and the experimental circumstances. In the present paper, the effectiveness of the floating-reference method in improving prediction precision and stability was assessed through application experiments. A comparison was made between models whose data were processed with and without the floating-reference method. The results showed that the root mean square error of prediction (RMSEP) decreased by up to 34.7%. The floating-reference method can reduce the influence of changes in the samples' state, instrument noise and drift, and effectively improve the models' prediction precision and stability.
NASA Technical Reports Server (NTRS)
Au, Andrew Y.; Brown, Richard D.; Welker, Jean E.
1991-01-01
Satellite-based altimetric data taken by GEOS-3, SEASAT, and GEOSAT over the Aral Sea, the Black Sea, and the Caspian Sea are analyzed and a least squares collocation technique is used to predict the geoid undulations on a 0.25x0.25 deg. grid and to transform these geoid undulations to free air gravity anomalies. Rapp's 180x180 geopotential model is used as the reference surface for the collocation procedure. The result of geoid to gravity transformation is, however, sensitive to the information content of the reference geopotential model used. For example, considerable detailed surface gravity data were incorporated into the reference model over the Black Sea, resulting in a reference model with significant information content at short wavelengths. Thus, estimation of short wavelength gravity anomalies from gridded geoid heights is generally reliable over regions such as the Black Sea, using the conventional collocation technique with local empirical covariance functions. Over regions such as the Caspian Sea, where detailed surface data are generally not incorporated into the reference model, unconventional techniques are needed to obtain reliable gravity anomalies. Based on the predicted gravity anomalies over these inland seas, speculative tectonic structures are identified and geophysical processes are inferred.
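The collocation step can be illustrated with a generic least-squares collocation sketch. Note the simplification: the example predicts the same quantity at new points from noisy observations, whereas the geoid-to-gravity transformation additionally requires cross-covariances between the two functionals; the covariance model and all numbers are made up.

```python
import numpy as np

def gaussian_cov(d, c0=1.0, corr_len=50.0):
    """Simple isotropic empirical covariance model C(d) = c0 * exp(-(d/L)^2)."""
    return c0 * np.exp(-(d / corr_len) ** 2)

def lsc_predict(obs_xy, obs_val, pred_xy, noise_var=0.01):
    """Least-squares collocation: s_hat = C_pt (C_tt + C_nn)^-1 t."""
    d_tt = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
    d_pt = np.linalg.norm(pred_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
    C_tt = gaussian_cov(d_tt) + noise_var * np.eye(len(obs_xy))
    C_pt = gaussian_cov(d_pt)
    return C_pt @ np.linalg.solve(C_tt, obs_val)

# Toy example: residual geoid heights (after removing the reference model)
# observed at scattered points, predicted on a small regular grid.
rng = np.random.default_rng(0)
obs_xy = rng.uniform(0, 200, size=(30, 2))                 # km
obs_val = np.sin(obs_xy[:, 0] / 40.0) + 0.1 * rng.standard_normal(30)
grid = np.array([[x, y] for x in range(0, 201, 50) for y in range(0, 201, 50)], float)
print(np.round(lsc_predict(obs_xy, obs_val, grid), 2))
```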
ERIC Educational Resources Information Center
Ehm, Jan-Henning; Lindberg, Sven; Hasselhorn, Marcus
2014-01-01
The internal/external (I/E) frame of reference model (Marsh, "Am Educ Res J" 23:129-149, 1986) conceptualizes students' self-concepts as being formed by dimensional as well as social comparison processes. In the present study, the I/E model was tested and extended in a sample of elementary school children. Core academic skills of…
A Kalman filter approach for the determination of celestial reference frames
NASA Astrophysics Data System (ADS)
Soja, Benedikt; Gross, Richard; Jacobs, Christopher; Chin, Toshio; Karbon, Maria; Nilsson, Tobias; Heinkelmann, Robert; Schuh, Harald
2017-04-01
The coordinate model of radio sources in International Celestial Reference Frames (ICRF), such as the ICRF2, has traditionally been a constant offset. While sufficient for a large part of radio sources considering current accuracy requirements, several sources exhibit significant temporal coordinate variations. In particular, the group of the so-called special handling sources is characterized by large fluctuations in the source positions. For these sources and for several from the "others" category of radio sources, a coordinate model that goes beyond a constant offset would be beneficial. However, due to the sheer amount of radio sources in catalogs like the ICRF2, and even more so with the upcoming ICRF3, it is difficult to find the most appropriate coordinate model for every single radio source. For this reason, we have developed a time series approach to the determination of celestial reference frames (CRF). We feed the radio source coordinates derived from single very long baseline interferometry (VLBI) sessions sequentially into a Kalman filter and smoother, retaining their full covariances. The estimation of the source coordinates is carried out with a temporal resolution identical to the input data, i.e. usually 1-4 days. The coordinates are assumed to behave like random walk processes, an assumption which has already successfully been made for the determination of terrestrial reference frames such as the JTRF2014. To be able to apply the most suitable process noise value for every single radio source, their statistical properties are analyzed by computing their Allan standard deviations (ADEV). In addition to the determination of process noise values, the ADEV allows drawing conclusions about whether the variations in certain radio source positions significantly deviate from random walk processes. Our investigations also deal with other means of source characterization, such as the structure index, in order to derive a suitable process noise model. The Kalman filter CRFs resulting from the different approaches are compared with each other, with the original radio source position time series, as well as with a traditional CRF solution, in which the constant source positions are estimated in a global least squares adjustment.
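The use of Allan standard deviations to set per-source random-walk process noise can be sketched as follows; the sampling interval, averaging times and the relation q ≈ 3·ADEV²(τ)/τ for a pure random walk (non-overlapping estimator) are illustrative assumptions applied to a synthetic series, not the actual ICRF processing.

```python
import numpy as np

def allan_deviation(series, dt, taus):
    """Non-overlapping Allan deviation of a coordinate time series sampled at
    interval dt, evaluated at the averaging times in taus (same unit as dt)."""
    adev = []
    for tau in taus:
        m = max(1, int(round(tau / dt)))           # samples per averaging bin
        n_bins = len(series) // m
        if n_bins < 3:
            adev.append(np.nan); continue
        means = series[:n_bins * m].reshape(n_bins, m).mean(axis=1)
        adev.append(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))
    return np.array(adev)

# For a pure random walk the Allan variance grows as q*tau/3, so a simple way to
# set the Kalman filter process noise is q ~ 3 * adev^2 / tau at long tau.
rng = np.random.default_rng(1)
dt = 1.0                                           # one session per day (toy value)
coords = np.cumsum(np.sqrt(0.01) * rng.standard_normal(2000))   # synthetic random walk
taus = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
adev = allan_deviation(coords, dt, taus)
q_est = 3.0 * adev ** 2 / taus
print("ADEV:", np.round(adev, 3))
print("estimated process noise q per tau:", np.round(q_est, 4))
```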
Users manual for a one-dimensional Lagrangian transport model
Schoellhamer, D.H.; Jobson, H.E.
1986-01-01
A Users Manual for the Lagrangian Transport Model (LTM) is presented. The LTM uses Lagrangian calculations that are based on a reference frame moving with the river flow. The Lagrangian reference frame eliminates the need to numerically solve the convective term of the convection-diffusion equation and provides significant numerical advantages over the more commonly used Eulerian reference frame. When properly applied, the LTM can simulate riverine transport and decay processes within the accuracy required by most water quality studies. The LTM is applicable to steady or unsteady one-dimensional unidirectional flows in fixed channels with tributary and lateral inflows. Application of the LTM is relatively simple and optional capabilities improve the model's convenience. Appendices give file formats and three example LTM applications that include the incorporation of the QUAL II water quality model's reaction kinetics into the LTM. (Author's abstract)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mobrand, Lars Erik; Lestelle, Lawrence C.
In the spring of 1994 a technical planning support project was initiated by the Grande Ronde Model Watershed Board of Directors (Board) with funding from the Bonneville Power Administration. The project was motivated by a need for a science based method for prioritizing restoration actions in the basin that would promote effectiveness and accountability. In this section the authors recall the premises for the project. The authors also present a set of recommendations for implementing a watershed planning process that incorporates a science-based framework to help guide decision making. This process is intended to assist the Grande Ronde Model Watershed Board in its effort to plan and implement watershed improvement measures. The process would also assist the Board in coordinating its efforts with other entities in the region. The planning process is based on an approach for developing an ecosystem management strategy referred to as the Ecosystem Diagnosis and Treatment (EDT) method (Lichatowich et al. 1995, Lestelle et al. 1996). The process consists of an on-going planning cycle. Included in this cycle is an assessment of the ability of the watershed to support and sustain natural resources and other economic and societal values. This step in the process, which the authors refer to as the diagnosis, helps guide the development of actions (also referred to as treatments) aimed at improving the conditions of the watershed to achieve long-term objectives. The planning cycle calls for routinely reviewing and updating, as necessary, the basis for the diagnosis and other analyses used by the Board in adopting actions for implementation. The recommendations offered here address this critical need to habitually update the information used in setting priorities for action.
Gambling scores for earthquake predictions and forecasts
NASA Astrophysics Data System (ADS)
Zhuang, Jiancang
2010-04-01
This paper presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular scheme of forecasts and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some points of his reputation. The reference model, which plays the role of the house, determines how many reputation points the forecaster can gain if he succeeds, according to a fair rule, and also takes away the reputation points bet by the forecaster if he loses. The method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and when the reference model is the Poisson model.
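The fair rule can be made concrete with a tiny sketch: if the reference model assigns probability p0 to the predicted event and the forecaster bets r reputation points, a payoff of r(1 - p0)/p0 on success and a loss of r on failure give zero expected gain when the reference model is true. The function below is an illustrative reading of that rule, not the paper's code.

```python
def gambling_gain(bet, reference_prob, success):
    """Reputation change for a single alarm-based prediction under the fair rule:
    expected gain is zero if the reference model (the 'house') is true."""
    if success:
        return bet * (1.0 - reference_prob) / reference_prob
    return -bet

# Example: predicting a rare event (p0 = 0.05 under a Poisson reference model).
points = 100.0
points += gambling_gain(bet=10.0, reference_prob=0.05, success=True)   # +190
points += gambling_gain(bet=10.0, reference_prob=0.05, success=False)  # -10
print(points)   # 280.0
```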
Can the History of Science Contribute to Modelling in Physics Teaching?
NASA Astrophysics Data System (ADS)
Machado, Juliana; Braga, Marco Antônio Barbosa
2016-10-01
A characterization of the modelling process in science is proposed for science education, based on Mario Bunge's ideas about the construction of models in science. Galileo's Dialogues are analysed as a potentially fruitful starting point to implement strategies aimed at modelling in the classroom in the light of that proposal. It is argued that a modelling process for science education can be conceived as the evolution from phenomenological approaches towards more representational ones, emphasizing the role of abstraction and idealization in model construction. The shift of reference of theories—from sensible objects to conceptual objects—and the black-box models construction process, which are both explicitly presented features in Galileo's Dialogues, are indicated as highly relevant aspects for modelling in science education.
Counter Unmanned Aerial System Decision-Aid Logic Process (C-UAS DALP)
... decision-aid or logic process that bridges the middle elements of the kill chain between detection and countermeasure response. This capstone project creates the logic for a decision process that transitions from the ... of use, location, general logic process, and reference mission. This is the framework for the IDEF0 functional architecture diagrams, decision-aid diagrams, logic process, and modeling and simulation.
Sittig, Dean F.; Singh, Hardeep
2011-01-01
Conceptual models have been developed to address challenges inherent in studying health information technology (HIT). This manuscript introduces an 8-dimensional model specifically designed to address the socio-technical challenges involved in design, development, implementation, use, and evaluation of HIT within complex adaptive healthcare systems. The 8 dimensions are not independent, sequential, or hierarchical, but rather are interdependent and interrelated concepts similar to compositions of other complex adaptive systems. Hardware and software computing infrastructure refers to equipment and software used to power, support, and operate clinical applications and devices. Clinical content refers to textual or numeric data and images that constitute the “language” of clinical applications. The human computer interface includes all aspects of the computer that users can see, touch, or hear as they interact with it. People refers to everyone who interacts in some way with the system, from developer to end-user, including potential patient-users. Workflow and communication are the processes or steps involved in assuring that patient care tasks are carried out effectively. Two additional dimensions of the model are internal organizational features (e.g., policies, procedures, and culture) and external rules and regulations, both of which may facilitate or constrain many aspects of the preceding dimensions. The final dimension is measurement and monitoring, which refers to the process of measuring and evaluating both intended and unintended consequences of HIT implementation and use. We illustrate how our model has been successfully applied in real-world complex adaptive settings to understand and improve HIT applications at various stages of development and implementation. PMID:20959322
Verhulst, Sarah; Altoè, Alessandro; Vasilkov, Viacheslav
2018-03-01
Models of the human auditory periphery range from very basic functional descriptions of auditory filtering to detailed computational models of cochlear mechanics, inner-hair cell (IHC), auditory-nerve (AN) and brainstem signal processing. It is challenging to include detailed physiological descriptions of cellular components into human auditory models because single-cell data stems from invasive animal recordings while human reference data only exists in the form of population responses (e.g., otoacoustic emissions, auditory evoked potentials). To embed physiological models within a comprehensive human auditory periphery framework, it is important to capitalize on the success of basic functional models of hearing and render their descriptions more biophysical where possible. At the same time, comprehensive models should capture a variety of key auditory features, rather than fitting their parameters to a single reference dataset. In this study, we review and improve existing models of the IHC-AN complex by updating their equations and expressing their fitting parameters into biophysical quantities. The quality of the model framework for human auditory processing is evaluated using recorded auditory brainstem response (ABR) and envelope-following response (EFR) reference data from normal and hearing-impaired listeners. We present a model with 12 fitting parameters from the cochlea to the brainstem that can be rendered hearing impaired to simulate how cochlear gain loss and synaptopathy affect human population responses. The model description forms a compromise between capturing well-described single-unit IHC and AN properties and human population response features. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Jones, D. W.
1971-01-01
The navigation and guidance process for the Jupiter, Saturn and Uranus planetary encounter phases of the 1977 Grand Tour interior mission was simulated. Reference approach navigation accuracies were defined and the relative information content of the various observation types were evaluated. Reference encounter guidance requirements were defined, sensitivities to assumed simulation model parameters were determined and the adequacy of the linear estimation theory was assessed. A linear sequential estimator was used to provide an estimate of the augmented state vector, consisting of the six state variables of position and velocity plus the three components of a planet position bias. The guidance process was simulated using a nonspherical model of the execution errors. Computation algorithms which simulate the navigation and guidance process were derived from theory and implemented into two research-oriented computer programs, written in FORTRAN.
Validation of Western North America Models based on finite-frequency and ray theory imaging methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larmat, Carene; Maceira, Monica; Porritt, Robert W.
2015-02-02
We validate seismic models developed for western North America with a focus on the effect of the imaging method on data fit. We use the DNA09 models, for which our collaborators provide models built with both the body-wave finite-frequency (FF) approach and the ray-theory (RT) approach, where the data selection, processing and reference models are the same.
Validation of psychoanalytic theories: towards a conceptualization of references.
Zachrisson, Anders; Zachrisson, Henrik Daae
2005-10-01
The authors discuss criteria for the validation of psychoanalytic theories and develop a heuristic and normative model of the references needed for this. Their core question in this paper is: can psychoanalytic theories be validated exclusively from within psychoanalytic theory (internal validation), or are references to sources of knowledge other than psychoanalysis also necessary (external validation)? They discuss aspects of the classic truth criteria correspondence and coherence, both from the point of view of contemporary psychoanalysis and of contemporary philosophy of science. The authors present arguments for both external and internal validation. Internal validation has to deal with the problems of subjectivity of observations and circularity of reasoning, external validation with the problem of relevance. They recommend a critical attitude towards psychoanalytic theories, which, by carefully scrutinizing weak points and invalidating observations in the theories, reduces the risk of wishful thinking. The authors conclude by sketching a heuristic model of validation. This model combines correspondence and coherence with internal and external validation into a four-leaf model for references for the process of validating psychoanalytic theories.
NASA Astrophysics Data System (ADS)
Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian
2017-04-01
This paper proposes the combination of two model-free controller tuning techniques, namely linear virtual reference feedback tuning (VRFT) and nonlinear state-feedback Q-learning, referred to as a new mixed VRFT-Q learning approach. VRFT is first used to find a stabilising feedback controller using input-output experimental data from the process in a model reference tracking setting. Reinforcement Q-learning is next applied in the same setting using input-state experimental data collected under perturbed VRFT to ensure good exploration. The Q-learning controller, learned with a batch fitted Q iteration algorithm, uses two neural networks, one for the Q-function estimator and one for the controller. The VRFT-Q learning approach is validated on position control of a two-degrees-of-motion, open-loop stable, multi-input multi-output (MIMO) aerodynamic system (AS). Extensive simulations for the two independent control channels of the MIMO AS show that the Q-learning controllers clearly improve performance over the VRFT controllers.
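A minimal sketch of batch fitted Q iteration is given below for orientation. It deliberately replaces the paper's two neural networks with a linear regression on quadratic features, a discretised action set and a toy scalar linear process; all constants are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, gamma = 0.9, 0.5, 0.95
actions = np.linspace(-2.0, 2.0, 21)

def features(x, u):
    # Quadratic features so the Q-function can be fitted by linear regression.
    return np.stack([x * x, x * u, u * u, x, u, np.ones_like(x)], axis=-1)

# Collect an exploratory batch of transitions (x, u, cost, x_next) offline.
x = rng.uniform(-3, 3, 2000)
u = rng.choice(actions, 2000)
cost = x ** 2 + 0.1 * u ** 2
x_next = a * x + b * u

theta = np.zeros(6)
for _ in range(60):                                    # fitted Q iterations
    q_next = np.stack([features(x_next, np.full_like(x_next, ua)) @ theta
                       for ua in actions], axis=1)
    targets = cost + gamma * q_next.min(axis=1)        # Bellman targets (cost -> min)
    theta, *_ = np.linalg.lstsq(features(x, u), targets, rcond=None)

def policy(state):
    q = np.array([features(np.array([state]), np.array([ua])) @ theta for ua in actions])
    return actions[int(np.argmin(q))]

print("greedy action at x=1.0:", policy(1.0))
```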
Leveraging the UML Metamodel: Expressing ORM Semantics Using a UML Profile
DOE Office of Scientific and Technical Information (OSTI.GOV)
CUYLER,DAVID S.
2000-11-01
Object Role Modeling (ORM) techniques produce a detailed domain model from the perspective of the business owner/customer. The typical process begins with a set of simple sentences reflecting facts about the business. The output of the process is a single model representing primarily the persistent information needs of the business. This type of model contains little, if any, reference to a targeted computerized implementation. It is a model of business entities, not of software classes. Through well-defined procedures, an ORM model can be transformed into a high-quality object or relational schema.
Model for spectral and chromatographic data
Jarman, Kristin [Richland, WA; Willse, Alan [Richland, WA; Wahl, Karen [Richland, WA; Wahl, Jon [Richland, WA
2002-11-26
A method and apparatus using a spectral analysis technique are disclosed. In one form of the invention, probabilities are selected to characterize the presence (and in another form, also a quantification of a characteristic) of peaks in an indexed data set for samples that match a reference species, and other probabilities are selected for samples that do not match the reference species. An indexed data set is acquired for a sample, and a determination is made according to techniques exemplified herein as to whether the sample matches or does not match the reference species. When quantification of peak characteristics is undertaken, the model is appropriately expanded, and the analysis accounts for the characteristic model and data. Further techniques are provided to apply the methods and apparatuses to process control, cluster analysis, hypothesis testing, analysis of variance, and other procedures involving multiple comparisons of indexed data.
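One simple way to read the peak-presence model described above is as a Bernoulli likelihood-ratio test over the indexed positions; the sketch below uses that reading with made-up probabilities and is not the patented algorithm.

```python
import numpy as np

def log_likelihood_ratio(peaks_present, p_match, p_nonmatch):
    """Bernoulli peak-presence model (illustrative parameterisation): for every
    index position, p_match[i] is the probability of observing a peak when the
    sample matches the reference species and p_nonmatch[i] when it does not.
    A positive log-likelihood ratio favours a match."""
    x = np.asarray(peaks_present, float)
    ll_match = np.sum(x * np.log(p_match) + (1 - x) * np.log(1 - p_match))
    ll_non = np.sum(x * np.log(p_nonmatch) + (1 - x) * np.log(1 - p_nonmatch))
    return ll_match - ll_non

p_match    = np.array([0.95, 0.90, 0.80, 0.10])   # peak probabilities per index
p_nonmatch = np.array([0.20, 0.30, 0.25, 0.40])
observed   = [1, 1, 1, 0]                         # peaks detected in the sample
print(log_likelihood_ratio(observed, p_match, p_nonmatch))   # > 0 -> match
```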
Observation-Oriented Modeling: Going beyond "Is It All a Matter of Chance"?
ERIC Educational Resources Information Center
Grice, James W.; Yepez, Maria; Wilson, Nicole L.; Shoda, Yuichi
2017-01-01
An alternative to null hypothesis significance testing is presented and discussed. This approach, referred to as observation-oriented modeling, is centered on model building in an effort to explicate the structures and processes believed to generate a set of observations. In terms of analysis, this novel approach complements traditional methods…
ERIC Educational Resources Information Center
Rousseau, Ronald
1992-01-01
Proposes a mathematical model to explain the observed concentration or diversity of nominal classes in information retrieval systems. The Lorenz Curve is discussed, Information Production Process (IPP) is explained, and a heuristic explanation of circumstances in which the model might be used is offered. (30 references) (LRW)
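For orientation, the Lorenz curve and an associated concentration measure for items distributed over nominal classes can be computed as in the following sketch; the counts are made up and the paper's own model is not reproduced here.

```python
import numpy as np

def lorenz_curve(counts):
    """Lorenz curve and Gini index for the distribution of items over nominal
    classes in an information production process (sources producing items)."""
    c = np.sort(np.asarray(counts, float))
    cum = np.cumsum(c) / c.sum()
    x = np.arange(1, len(c) + 1) / len(c)
    gini = 1.0 - 2.0 * np.trapz(np.concatenate(([0.0], cum)),
                                np.concatenate(([0.0], x)))
    return x, cum, gini

# Example: number of retrieved documents per index term (nominal classes).
x, cum, gini = lorenz_curve([1, 1, 2, 3, 5, 8, 40])
print("cumulative shares:", np.round(cum, 2), " Gini:", round(gini, 3))
```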
Testing an Instructional Model in a University Educational Setting from the Student's Perspective
ERIC Educational Resources Information Center
Betoret, Fernando Domenech
2006-01-01
We tested a theoretical model that hypothesized relationships between several variables from input, process and product in an educational setting, from the university student's perspective, using structural equation modeling. In order to carry out the analysis, we measured in sequential order the input (referring to students' personal…
A Review of Energy Models with Particular Reference to Employment and Manpower Analysis.
ERIC Educational Resources Information Center
Eckstein, Albert J.; Heien, Dale M.
To analyze the application of quantitative models to energy-employment issues, the energy problem was viewed in three distinct, but related, phases: the post-embargo shock effects, the intermediate-term process of adjustment, and the long-run equilibrium. Against this background eighteen existing energy models (government supported as well as…
Blanchard, P C
2006-01-01
The air transportation of infectious materials is regulated by international air transport associations and based on United Nations Model regulations which have become more practical in addressing animal disease agents. However, individual countries' import and interstate requirements determine what materials can be imported and transported, and this approval process can be long, resulting in delays in organism confirmation, use of international OIE and other reference laboratories, and acquisition of reference materials, proficiency test panels, and reagents for performing necessary testing. Delays can be prevented for permits that are required for the routine work performed by a laboratory through the use of comprehensive and annually renewed permits. This process, however, does not address new and exotic agents where time is critical to an effective emergency response. This paper suggests actions by both the OIE and regulatory authorities which can assist in streamlining and expediting the permit process.
NASA Astrophysics Data System (ADS)
Bobojć, Andrzej
2016-12-01
This work contains a comparative study of the performance of six geopotential models in an orbit estimation process for the satellite of the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) mission. The tested models were ULUX_CHAMP2013S, ITG-GRACE2010S, EIGEN-51C, EIGEN5S, EGM2008 and EGM96. Different sets of pseudo-range simulations along reference GOCE satellite orbital arcs were obtained using real orbits of the Global Positioning System satellites. These sets were the basic observation data used in the adjustment. The centimeter-accuracy Precise Science Orbit (PSO) for the GOCE satellite provided by the European Space Agency (ESA) was adopted as the GOCE reference orbit. By comparing various variants of the orbital solutions, the relative accuracy of the geopotential models in an orbital sense is determined. Full geopotential models were used in the adjustment process; solutions were also determined with truncated geopotential models, in which case the accuracy of the solutions was slightly enhanced. Different arc lengths were taken for the computation.
NASA Astrophysics Data System (ADS)
Shahbari, Juhaina Awawdeh
2018-07-01
The current study examines whether the engagement of mathematics teachers in modelling activities, and subsequent changes in their conceptions about these activities, affect their beliefs about mathematics. The sample comprised 52 mathematics teachers working in small groups on four modelling activities. The data were collected from teachers' reports about features of each activity, interviews and questionnaires on teachers' beliefs about mathematics. The findings indicated changes in teachers' conceptions about the modelling activities. Most teachers referred to the first activity as a mathematical problem but emphasized only the mathematical notions or the mathematical operations in the modelling process; changes in their conceptions were gradual. Most of the teachers referred to the fourth activity as a mathematical problem and emphasized features of the whole modelling process. The results of the interviews indicated that the changes in the teachers' conceptions can be attributed to the structure of the activities, group discussions, solution paths and elicited models. These changes concerning the modelling activities were reflected in the teachers' beliefs about mathematics. The quantitative findings indicated that the teachers developed more constructive beliefs about mathematics after engagement in the modelling activities and that the difference was significant; however, there was no significant difference regarding changes in their traditional beliefs.
An incompressible fluid flow model with mutual information for MR image registration
NASA Astrophysics Data System (ADS)
Tsai, Leo; Chang, Herng-Hua
2013-03-01
Image registration is one of the fundamental and essential tasks within image processing. It is the process of determining the correspondence between structures in two images, called the template image and the reference image, respectively. The challenge of registration is to find an optimal geometric transformation between corresponding image data. This paper develops a new MR image registration algorithm that uses a closed incompressible viscous fluid model associated with mutual information. In our approach, we treat the image pixels as the fluid elements of a viscous fluid flow governed by the nonlinear Navier-Stokes partial differential equation (PDE). We replace the pressure term with a body force that is mainly used to guide the transformation, with a weighting coefficient expressed by the mutual information between the template and reference images. To solve this modified Navier-Stokes PDE, we adopted the fast numerical techniques proposed by Seibold. The registration process of updating the body force, the velocity and the deformation fields is repeated until the mutual information weight reaches a prescribed threshold. We applied our approach to the BrainWeb and real MR images. Consistent with the theory of the proposed fluid model, we found that our method accurately transformed the template images into the reference images based on the intensity flow. Experimental results indicate that our method has potential in a wide variety of medical image registration applications.
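The mutual-information weight mentioned above can be computed from a joint intensity histogram; the following sketch shows one common histogram-based estimate (an assumption about the estimator, since the abstract does not specify this detail).

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two images."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = hist / hist.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

# Example: MI of an image with a noisy copy vs. with an unrelated image.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
template = ref + 0.05 * rng.standard_normal((64, 64))
unrelated = rng.random((64, 64))
print(mutual_information(ref, template), ">", mutual_information(ref, unrelated))
```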
NASA Astrophysics Data System (ADS)
Brachet, N.; Mialle, P.; Brown, D.; Coyne, J.; Drob, D.; Virieux, J.; Garcés, M.
2009-04-01
The International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty (CTBTO) Preparatory Commission in Vienna is pursuing its automatic processing effort for the return of infrasound data processing into operations in 2009. Concurrently, work is also underway to further improve this process by enhancing the modeling of infrasound propagation in the atmosphere and by labeling the phases in order to improve event categorization and location. In 2008, the IDC acquired WASP-3D Sph (Windy Atmospheric Sonic Propagation) (Virieux et al., 2004), a 3-D ray-tracing-based long-range propagation software package that accounts for the heterogeneity of the atmosphere. Once adapted to the IDC environment, WASP-3D Sph has been used to improve the understanding of infrasound wave propagation and has been compared with the 1-D ray-tracing Taupc software (Garcés and Drob, 2007) at the IDC. In addition to performing the infrasound propagation simulation, different atmospheric models are available at the IDC, either real-time, ECMWF (European Centre for Medium-Range Weather Forecasts), or empirical, HWM93 (Horizontal Wind Model) and HWM07 (Drob, 2008), used in their initial format or interpolated into the G2S (Ground to Space) model. The IDC infrasound reference database is used for testing, comparing and validating the various propagation software and atmospheric specifications. Moreover, all the performed simulations give feedback on the quality of the infrasound reference events and provide useful information to improve their location by refining infrasonic wave propagation characteristics. The results of this study are presented for a selection of reference events; they will help the IDC design and define short- and mid-term enhancements of the infrasound automatic and interactive processing to take into account the spatial and temporal heterogeneities of the atmosphere.
First steps of processing VLBI data of space probes with VieVS
NASA Astrophysics Data System (ADS)
Plank, L.; Böhm, J.; Schuh, H.
2011-07-01
Since 2008 the VLBI group at the Institute of Geodesy and Geophysics (IGG) of the Vienna University of Technology has developed the Vienna VLBI Software VieVS, which is capable of processing geodetic VLBI data in NGS format. We are constantly working on upgrading the software, e.g. by developing a scheduling tool or extending it from single-session solutions to a so-called global solution, allowing the joint analysis of many sessions covering several years. In this presentation we report on first steps to enable the processing of space VLBI data with the software. Driven by the recently increasing number of space VLBI applications, our goal is the geodetic usage of such data, primarily concerning frame ties between various reference frames, e.g. by connecting the dynamic reference frame of a space probe with the kinematically defined International Celestial Reference Frame (ICRF). The main parts of the software extension with respect to the existing VieVS are the treatment of fast-moving targets, the implementation of a delay model for radio emitters at finite distances, and the adequate mathematical model and adjustment of the particular unknowns. Actual work has been done for two mission scenarios so far: on the one hand, differential VLBI (D-VLBI) data from the two sub-satellites of the Japanese lunar mission Selene were processed; on the other hand, VLBI observations of GNSS satellites were modelled in VieVS. Besides some general aspects, we give details on the calculation of the theoretical delay (delay model for moving sources at finite distances) and its realization in VieVS. First results with real data and comparisons with best-fit mission orbit data are also presented.
Creating Royal Australian Navy Standard Operating Procedures using Flow Diagrams
2015-08-01
Only fragments of the report (DST-Group-TR-3137) are available. Acronyms defined include 4TQ (4TQ Toolkit), ABR (Australian Book of Reference), ADF (Australian Defence Force) and BPMN (Business Process Model and Notation). ... steps to perform the activity. The Object Management Group's (OMG) Business Process Model and Notation (BPMN) [10] is becoming the standard to use when ... Reference: [10] Object Management Group, Business Process Model and Notation (BPMN), version 2.0, 2011, Object Management Group: http
Trombert-Paviot, B; Rodrigues, J M; Rogers, J E; Baud, R; van der Haring, E; Rassinoux, A M; Abrial, V; Clavel, L; Idir, H
1999-01-01
GALEN has developed a new generation of terminology tools based on a language-independent concept reference model using a compositional formalism that allows computer processing and multiple reuses. During the 4th Framework Programme project Galen-In-Use we applied the modelling and the tools to the development of a new multipurpose coding system for surgical procedures (CCAM) in France. On the one hand, we contributed to a language-independent knowledge repository for multicultural Europe. On the other hand, we support the traditional, very labour-consuming process of creating a new coding system in medicine with artificial intelligence tools that use a medically oriented recursive ontology and natural language processing. We used an integrated software package named CLAW to process French professional medical language rubrics produced by the national colleges of surgeons into intermediate dissections and into the Grail reference ontology model representation. From this language-independent concept model representation we generate, on the one hand, controlled French natural language to support the finalization of the linguistic labels in relation to the meanings of the conceptual system structure. On the other hand, the third-generation classification manager proves to be very powerful for retrieving the initial professional rubrics with different categories of concepts within a semantic network.
Model-based local density sharpening of cryo-EM maps
Jakobi, Arjen J; Wilmanns, Matthias
2017-01-01
Atomic models based on high-resolution density maps are the ultimate result of the cryo-EM structure determination process. Here, we introduce a general procedure for local sharpening of cryo-EM density maps based on prior knowledge of an atomic reference structure. The procedure optimizes contrast of cryo-EM densities by amplitude scaling against the radially averaged local falloff estimated from a windowed reference model. By testing the procedure using six cryo-EM structures of TRPV1, β-galactosidase, γ-secretase, ribosome-EF-Tu complex, 20S proteasome and RNA polymerase III, we illustrate how local sharpening can increase interpretability of density maps in particular in cases of resolution variation and facilitates model building and atomic model refinement. PMID:29058676
DSLM Instructional Approach to Conceptual Change Involving Thermal Expansion.
ERIC Educational Resources Information Center
She, Hsiao-Ching
2003-01-01
Examines the process of student conceptual change regarding thermal expansion using the Dual Situated Learning Model (DSLM) as an instructional approach. Indicates that DSLM promotes conceptual change and holds great potential to facilitate the process through classroom instruction at all levels. (Contains 38 references.) (Author/NB)
Process correlation analysis model for process improvement identification.
Choi, Su-jin; Kim, Dae-Kyoo; Park, Sooyong
2014-01-01
Software process improvement aims at improving the development process of software systems. It is initiated by a process assessment identifying strengths and weaknesses, and improvement plans are developed based on the findings. In general, a process reference model (e.g., CMMI) is used as the basis throughout the software process improvement effort. CMMI defines a set of process areas involved in software development and what is to be carried out in those process areas in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to the significant effort required and a lack of the necessary expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. The model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.
Strategies to Improve the Accuracy of Mars-GRAM Sensitivity Studies at Large Optical Depths
NASA Technical Reports Server (NTRS)
Justh, Hilary L.; Justus, Carl G.; Badger, Andrew M.
2010-01-01
The poster provides an overview of techniques to improve Mars Global Reference Atmospheric Model (Mars-GRAM) sensitivity studies. During the Mars Science Laboratory (MSL) site selection process it was discovered that Mars-GRAM, when used for sensitivity studies with TES MapYear = 0 and large optical depth values such as tau = 3, is less than realistic. A preliminary fix has been made to Mars-GRAM by adding a density factor value that was determined for tau = 0.3, 1 and 3.
Reference Model for Project Support Environments Version 1.0
1993-02-28
Only extraction fragments of this report remain: text on the relationship with the framework's Process Support services and the Lifecycle Process Engineering services; example services such as ORCA (Object-based Requirements Capture and Analysis) and RETRAC (REquirements TRACeability) listed under the Design and Life-Cycle Process services (Section 4.3); and examples of audio and video processing operations (create, modify, and delete sound and video data).
Preferential sampling and Bayesian geostatistics: Statistical modeling and examples.
Cecconi, Lorenzo; Grisotto, Laura; Catelan, Dolores; Lagazio, Corrado; Berrocal, Veronica; Biggeri, Annibale
2016-08-01
Preferential sampling refers to any situation in which the spatial process and the sampling locations are not stochastically independent. In this paper, we present two examples of geostatistical analysis in which the usual assumption of stochastic independence between the point process and the measurement process is violated. To account for preferential sampling, we specify a flexible and general Bayesian geostatistical model that includes a shared spatial random component. We apply the proposed model to two different case studies that allow us to highlight three different modeling and inferential aspects of geostatistical modeling under preferential sampling: (1) continuous or finite spatial sampling frame; (2) underlying causal model and relevant covariates; and (3) inferential goals related to mean prediction surface or prediction uncertainty. © The Author(s) 2016.
Dawn Orbit Determination Team: Trajectory Modeling and Reconstruction Processes at Vesta
NASA Technical Reports Server (NTRS)
Abrahamson, Matthew J.; Ardito, Alessandro; Han, Dongsuk; Haw, Robert; Kennedy, Brian; Mastrodemos, Nick; Nandi, Sumita; Park, Ryan; Rush, Brian; Vaughan, Andrew
2013-01-01
The Dawn spacecraft spent over a year in orbit around Vesta from July 2011 through August 2012. In order to maintain the designated science reference orbits and enable the transfers between those orbits, precise and timely orbit determination was required. Challenges included low-thrust ion propulsion modeling, estimation of relatively unknown Vesta gravity and rotation models, tracking data limitations, incorporation of real-time telemetry into dynamics model updates, and rapid maneuver design cycles during transfers. This paper discusses the dynamics models, filter configuration, and data processing implemented to deliver a rapid orbit determination capability to the Dawn project.
A new scoring method for evaluating the performance of earthquake forecasts and predictions
NASA Astrophysics Data System (ADS)
Zhuang, J.
2009-12-01
This study presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. A fair scoring scheme should reward success in a way that is compatible with the risk taken. Suppose that we have a reference model, usually the Poisson model for ordinary cases or the Omori-Utsu formula for forecasting aftershocks, which gives the probability p0 that at least one event occurs in a given space-time-magnitude window. The forecaster, similar to a gambler who starts with a certain number of reputation points, bets 1 reputation point on "Yes" or "No" according to his forecast, or bets nothing if he makes an NA-prediction. If the forecaster bets 1 reputation point on "Yes" and loses, the number of his reputation points is reduced by 1; if his forecast is successful, he is rewarded (1-p0)/p0 reputation points. The quantity (1-p0)/p0 is the return (reward/bet) ratio for bets on "Yes". In this way, if the reference model is correct, the expected return from this bet is 0. This rule also applies to probability forecasts. Suppose that p is the occurrence probability of an earthquake given by the forecaster. We can regard the forecaster as splitting 1 reputation point by betting p on "Yes" and 1-p on "No". In this way, the forecaster's expected pay-off based on the reference model is still 0. From the viewpoints of both the reference model and the forecaster, the rule for reward and punishment is fair. The method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and the reference model is the Poisson model.
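The pay-off rule can be written down compactly. The sketch below (Python, with illustrative numbers) implements the reward structure described above; the p0/(1-p0) return for bets on "No" is the symmetric counterpart implied by fairness under the reference model, rather than a value quoted in the abstract:

```python
def gambling_payoff(p, p0, event_occurred):
    """Pay-off (in reputation points) for a probability forecast p against a
    reference probability p0; p = 1 or p = 0 reproduces the binary 'Yes'/'No' bets.
    The p0/(1-p0) return for 'No' bets is assumed by symmetry (fairness under the
    reference model), not stated verbatim in the abstract."""
    if event_occurred:
        return p * (1.0 - p0) / p0 - (1.0 - p)
    return (1.0 - p) * p0 / (1.0 - p0) - p

# toy example: three space-time-magnitude windows with Poisson reference probabilities
forecasts = [(1.0, 0.05, True),   # confident 'Yes', rare event occurs: large reward
             (1.0, 0.05, False),  # confident 'Yes', no event: lose 1 point
             (0.3, 0.20, True)]   # probabilistic forecast
score = sum(gambling_payoff(p, p0, hit) for p, p0, hit in forecasts)
print(round(score, 3))
```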
The family living the child recovery process after hospital discharge.
Pinto, Júlia Peres; Mandetta, Myriam Aparecida; Ribeiro, Circéa Amalia
2015-01-01
To understand the meaning attributed by the family to its experience in the recovery process of a child affected by an acute disease after discharge, and to develop a theoretical model of this experience. Symbolic interactionism was adopted as the theoretical reference, and grounded theory as the methodological reference. Data were collected through interviews and participant observation with 11 families, totaling 15 interviews. A theoretical model consisting of two interactive phenomena was formulated from the analysis: Mobilizing to restore functional balance and Suffering from the possibility of a child's readmission. The family remains alert to identify early changes in the child's health, in an attempt to avoid rehospitalization. The effects of the disease and hospitalization continue to manifest in family functioning, causing suffering even after the child's discharge and recovery.
de Lusignan, Simon; Cashman, Josephine; Poh, Norman; Michalakidis, Georgios; Mason, Aaron; Desombre, Terry; Krause, Paul
2012-01-01
Medical research increasingly requires the linkage of data from different sources. Conducting a requirements analysis for a new application is an established part of software engineering, but it is rarely reported in the biomedical literature, and no generic approaches have been published on how to link heterogeneous health data. We carried out a literature review, followed by a consensus process, to define how requirements for research using multiple data sources might be modeled. We have developed a requirements analysis approach, i-ScheDULEs. The first components of the modeling process are indexing and creating a rich picture of the research study. Secondly, we developed a series of reference models of progressive complexity: data flow diagrams (DFD) to define data requirements; unified modeling language (UML) use case diagrams to capture study-specific and governance requirements; and finally, business process models using business process modeling notation (BPMN). These requirements and their associated models should become part of research study protocols.
Boguslawski, Katharina; Tecmer, Paweł
2017-12-12
Wave functions restricted to electron-pair states are promising models to describe static/nondynamic electron correlation effects encountered, for instance, in bond-dissociation processes and transition-metal and actinide chemistry. To reach spectroscopic accuracy, however, the missing dynamic electron correlation effects that cannot be described by electron-pair states need to be included a posteriori. In this Article, we extend the previously presented perturbation theory models with an Antisymmetric Product of 1-reference orbital Geminal (AP1roG) reference function that allows us to describe both static/nondynamic and dynamic electron correlation effects. Specifically, our perturbation theory models combine a diagonal and off-diagonal zero-order Hamiltonian, a single-reference and multireference dual state, and different excitation operators used to construct the projection manifold. We benchmark all proposed models as well as an a posteriori Linearized Coupled Cluster correction on top of AP1roG against CR-CC(2,3) reference data for reaction energies of several closed-shell molecules that are extrapolated to the basis set limit. Moreover, we test the performance of our new methods for multiple bond breaking processes in the homonuclear N2, C2, and F2 dimers as well as the heteronuclear BN, CO, and CN+ dimers against MRCI-SD, MRCI-SD+Q, and CR-CC(2,3) reference data. Our numerical results indicate that the best performance is obtained from a Linearized Coupled Cluster correction as well as second-order perturbation theory corrections employing a diagonal and off-diagonal zero-order Hamiltonian and a single-determinant dual state. These dynamic corrections on top of AP1roG provide substantial improvements for binding energies and spectroscopic properties obtained with the AP1roG approach, while allowing us to approach chemical accuracy for reaction energies involving closed-shell species.
ERIC Educational Resources Information Center
Ayres, Marie-Louise; Kilner, Kerry; Fitch, Kent; Scarvell, Annette
This paper discusses the first major implementation of two significant new cataloging models: IFLA's FRBR (International Federation of Library Associations' Functional Requirements for Bibliographic Records) and event modeling (INDECS and Harmony). The paper refers briefly to the decision making processes leading to the adoption of these models,…
Book Selection, Collection Development, and Bounded Rationality.
ERIC Educational Resources Information Center
Schwartz, Charles A.
1989-01-01
Reviews previously proposed schemes of classical rationality in book selection, describes new approaches to rational choice behavior, and presents a model of book selection based on bounded rationality in a garbage can decision process. The role of tacit knowledge and symbolic content in the selection process are also discussed. (102 references)…
Writing for Professional Publication: Three Road Signs for Writing Success
ERIC Educational Resources Information Center
Buttery, Thomas J.
2010-01-01
In the first edition of Writing for Publication: An Organizational Paradigm (Buttery, 2010), I recommend a model for organizing theoretical articles. The process includes seven components: title, introduction, outline/advanced organizer, headings, transitions, summary and references. This article will focus on the writing process. The strands of…
Applying AI to the Writer's Learning Environment.
ERIC Educational Resources Information Center
Houlette, Forrest
1991-01-01
Discussion of current applications of artificial intelligence (AI) to writing focuses on how to represent knowledge of the writing process in a way that links procedural knowledge to other types of knowledge. A model is proposed that integrates the subtasks of writing into the process of writing itself. (15 references) (LRW)
Products and Processes: Synergistic Relationships
ERIC Educational Resources Information Center
Wallace, Virginia; Husid, Whitney
2013-01-01
Most people agree that products are the culmination of what students have studied. For this article, "product" will refer to students' abilities to create outcomes and design artifacts. Those abilities are guided by four processes: inquiry-based learning, use of a research model, use of Web 2.0 tools, and appropriate assessments.…
The Relationship between Simultaneous-Successive Processing and Academic Achievement.
ERIC Educational Resources Information Center
Merritt, Frank M.; McCallum, Steve
The Luria-Das Information Processing Model of human learning holds that information is analysed and coded within the brain in either a simultaneous or a successive fashion. Simultaneous integration refers to the synthesis of separate elements into groups, often with spatial characteristics; successive integration means that information is…
Attachment and the Processing of Social Information in Adolescence
ERIC Educational Resources Information Center
Dykas, Matthew J.; Cassidy, Jude
2007-01-01
A key proposition of attachment theory is that experience-based cognitive representations of attachment, often referred to as internal working models of attachment, influence the manner in which individuals process attachment-relevant social information (Bowlby, 1969/1982, 1973, 1980; Bretherton & Munholland, 1999; Main, Kaplan, & Cassidy, 1985).…
NASA Astrophysics Data System (ADS)
Zhang, Yufeng; Long, Man; Luo, Sida; Bao, Yu; Shen, Hanxia
2015-12-01
Transit route choice models are a key technology of public transit systems planning and management. Traditional route choice models are mostly based on expected utility theory, which has an evident shortcoming: it cannot accurately portray travelers' subjective route choice behavior because their risk preferences are not taken into consideration. Cumulative prospect theory (CPT) can be used to describe travelers' decision-making process under uncertainty of transit supply and the risk preferences of multiple types of travelers. The method used to calibrate the reference point, a key parameter of a CPT-based transit route choice model, determines the precision of the model to a great extent. In this paper, a new method is put forward to obtain the value of the reference point, combining theoretical calculation and field investigation results. Comparing the proposed method with the traditional method shows that the new method can improve the quality of the CPT-based model by more accurately simulating travelers' route choice behaviors, based on a transit trip investigation from Nanjing City, China. The proposed method is of great significance for sound transit planning and management, and to some extent makes up for the defect that obtaining the reference point was previously based solely on qualitative analysis.
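To illustrate how a reference point enters a CPT-based route evaluation, the sketch below uses the standard Tversky-Kahneman value and weighting functions with commonly cited parameter values; the functional forms, parameters, simplified (non-cumulative) weighting, and route data are illustrative assumptions, not the calibration described in the paper:

```python
import numpy as np

ALPHA, BETA, LAMBDA, GAMMA = 0.88, 0.88, 2.25, 0.61   # commonly cited CPT parameters

def value(x):
    """Value of a deviation x (minutes saved relative to the reference travel time)."""
    return x**ALPHA if x >= 0 else -LAMBDA * (-x)**BETA

def weight(p):
    """Inverse-S probability weighting function."""
    return p**GAMMA / (p**GAMMA + (1 - p)**GAMMA)**(1 / GAMMA)

def route_prospect_value(times, probs, reference):
    """Simplified (non-cumulative) prospect value of a route whose travel time
    takes the listed values with the listed probabilities."""
    return sum(weight(p) * value(reference - t) for t, p in zip(times, probs))

# hypothetical example: reference point of 40 min, two candidate routes
ref = 40.0
bus   = route_prospect_value([35, 50], [0.7, 0.3], ref)   # usually fast, sometimes late
metro = route_prospect_value([42, 43], [0.5, 0.5], ref)   # reliable but slightly slow
print(bus, metro)
```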
Education for Business in Iowa. Curriculum and Reference Guide.
ERIC Educational Resources Information Center
University of Northern Iowa, Cedar Falls.
This business education curriculum model contains elementary, middle/junior high, and high school business education courses for Iowa students in the following areas: accounting, basic business, information processing, marketing, and general topics. A curriculum model provides specific courses for different educational levels. Each area contains…
Event boundaries and anaphoric reference.
Thompson, Alexis N; Radvansky, Gabriel A
2016-06-01
The current study explored the finding that parsing a narrative into separate events impairs anaphor resolution. According to the Event Horizon Model, when a narrative event boundary is encountered, a new event model is created. Information associated with the prior event model is removed from working memory. So long as the event model containing the anaphor referent is currently being processed, this information should still be available when there is no narrative event boundary, even if reading has been disrupted by a working-memory-clearing distractor task. In those cases, readers may reactivate their prior event model, and anaphor resolution would not be affected. Alternatively, comprehension may not be as event oriented as this account suggests. Instead, any disruption of the contents of working memory during comprehension, event related or not, may be sufficient to disrupt anaphor resolution. In this case, reading comprehension would be more strongly guided by other, more basic language processing mechanisms and the event structure of the described events would play a more minor role. In the current experiments, participants were given stories to read in which we included, between the anaphor and its referent, either the presence of a narrative event boundary (Experiment 1) or a narrative event boundary along with a working-memory-clearing distractor task (Experiment 2). The results showed that anaphor resolution was affected by narrative event boundaries but not by a working-memory-clearing distractor task. This is interpreted as being consistent with the Event Horizon Model of event cognition.
Examination of global correlations in ground deformation for terrestrial reference frame estimation
NASA Astrophysics Data System (ADS)
Chin, T. M.; Abbondanza, C.; Argus, D. F.; Gross, R. S.; Heflin, M. B.; Parker, J. W.; Wu, X.
2016-12-01
The KALman filter for REFerence frames (KALREF, Wu et al. 2015) has been developed to produce terrestrial reference frame (TRF) solutions. TRFs consist of precise position coordinates and velocity vectors of terrestrial reference sites (with the geocenter as the origin) along with the Earth orientation parameters, and they are produced by combining decades' worth of space geodetic data using site tie data. To perform the combination, KALREF relies on stochastic models of the geophysical processes that cause the Earth's surface to deform and reference sites to be displaced. We are investigating application of the GRACE data to improve the KALREF stochastic models by determining spatial statistics of the deformation of the Earth's surface caused by mass loading. A potential target of improvement is the non-uniform distribution of the geodetic observation sites, which can introduce bias in TRF estimates of the geocenter. The global and relatively uniform coverage of the GRACE measurements is expected to be free of such bias and to allow us to improve the physical realism of the stochastic model. For this goal, we examine the spatial correlations in ground deformation derived from several GRACE data sets. [Wu et al. 2015: Journal of Geophysical Research (Solid Earth) 120:3775-3802]
Jiao, Yong; Zhang, Yu; Wang, Yu; Wang, Bei; Jin, Jing; Wang, Xingyu
2018-05-01
Multiset canonical correlation analysis (MsetCCA) has been successfully applied to optimize the reference signals by extracting common features from multiple sets of electroencephalogram (EEG) data for steady-state visual evoked potential (SSVEP) recognition in brain-computer interface applications. To avoid extracting possible noise components as common features, this study proposes a sophisticated extension of MsetCCA, called the multilayer correlation maximization (MCM) model, to further improve SSVEP recognition accuracy. MCM combines advantages of both CCA and MsetCCA by carrying out three layers of correlation maximization processes. The first layer extracts the stimulus frequency-related information using CCA between EEG samples and sine-cosine reference signals. The second layer learns reference signals by extracting the common features with MsetCCA. The third layer re-optimizes the reference signal set using CCA with the sine-cosine reference signals again. An experimental study is implemented to validate the effectiveness of the proposed MCM model in comparison with the standard CCA and MsetCCA algorithms. The superior performance of MCM demonstrates its promising potential for the development of an improved SSVEP-based brain-computer interface.
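The first correlation-maximization layer (standard CCA between an EEG trial and sine-cosine reference signals) can be sketched as follows; the MsetCCA layer and the final re-optimization layer are omitted, and the signal parameters are invented for illustration:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def sine_cosine_reference(freq, fs, n_samples, n_harmonics=2):
    """Sine-cosine reference signals for one stimulus frequency (samples x channels)."""
    t = np.arange(n_samples) / fs
    ref = []
    for h in range(1, n_harmonics + 1):
        ref.append(np.sin(2 * np.pi * h * freq * t))
        ref.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(ref)

def cca_correlation(eeg, ref):
    """Largest canonical correlation between an EEG trial (samples x channels)
    and a reference signal set: the first correlation-maximization layer."""
    cca = CCA(n_components=1)
    u, v = cca.fit_transform(eeg, ref)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

# toy usage: pick the stimulus frequency whose references correlate best with the trial
fs, n = 250, 500
rng = np.random.default_rng(1)
trial = np.sin(2 * np.pi * 10 * np.arange(n) / fs)[:, None] + 0.5 * rng.normal(size=(n, 8))
scores = {f: cca_correlation(trial, sine_cosine_reference(f, fs, n)) for f in (8.0, 10.0, 12.0)}
print(max(scores, key=scores.get))   # expected to be 10.0
```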
Quantifying Local, Response Dependence between Two Polytomous Items Using the Rasch Model
ERIC Educational Resources Information Center
Andrich, David; Humphry, Stephen M.; Marais, Ida
2012-01-01
Models of modern test theory imply statistical independence among responses, generally referred to as "local independence." One violation of local independence occurs when the response to one item governs the response to a subsequent item. Expanding on a formulation of this kind of violation as a process in the dichotomous Rasch model,…
Animal movement: Statistical models for telemetry data
Hooten, Mevin B.; Johnson, Devin S.; McClintock, Brett T.; Morales, Juan M.
2017-01-01
The study of animal movement has always been a key element in ecological science, because it is inherently linked to critical processes that scale from individuals to populations and communities to ecosystems. Rapid improvements in biotelemetry data collection and processing technology have given rise to a variety of statistical methods for characterizing animal movement. The book serves as a comprehensive reference for the types of statistical models used to study individual-based animal movement.
NASA Astrophysics Data System (ADS)
Sun, Dongliang; Huang, Guangtuan; Jiang, Juncheng; Zhang, Mingguang; Wang, Zhirong
2013-04-01
Overpressure is an important cause of domino effects in accidents involving chemical process equipment. Previous studies have proposed models for the propagation probability and threshold values of domino effects caused by overpressure. In order to assess the rationality and validity of the models reported in the references, the two boundary values separating the three reported damage degrees were treated as random variables in the interval [0, 100%]. Based on the overpressure data for damage to the equipment and the damage states, and the calculation method reported in the references, the mean square errors of the four categories of overpressure damage probability models were calculated with random boundary values, yielding a relationship between the mean square error and the two boundary values; at its minimum, the mean square error decreases by about 3% compared with the result of the present work. Since this error is within the acceptable range for engineering applications, the reported models can be considered reasonable and valid.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hao; Ren, Shangping; Garzoglio, Gabriele
Cloud bursting is one of the key research topics in the cloud computing communities. A well designed cloud bursting module enables private clouds to automatically launch virtual machines (VMs) to public clouds when more resources are needed. One of the main challenges in developing a cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on system operational data obtained from FermiCloud, a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows, the VM launching overhead is not a constant. It varies with physical resource utilization, such as CPU and I/O device utilizations, at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launching overhead reference model is needed. In this paper, we first develop a VM launching overhead reference model based on operational data we have obtained on FermiCloud. Second, we apply the developed reference model on FermiCloud and compare calculated VM launching overhead values based on the model with measured overhead values on FermiCloud. Our empirical results on FermiCloud indicate that the developed reference model is accurate. We believe, with the guidance of the developed reference model, efficient resource allocation algorithms can be developed for the cloud bursting process to minimize the operational cost and resource waste.
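The abstract does not give the functional form of the overhead reference model, so the sketch below simply assumes a linear regression of launch overhead on host CPU and I/O utilization at launch time, with invented operational samples, to show how such a model could feed a bursting decision:

```python
import numpy as np

# hypothetical operational samples: (cpu_util, io_util, observed_launch_overhead_s)
samples = np.array([
    [0.10, 0.05,  42.0],
    [0.35, 0.20,  55.0],
    [0.60, 0.40,  78.0],
    [0.80, 0.70, 121.0],
    [0.90, 0.85, 160.0],
])

# least-squares fit: overhead ~ b0 + b1*cpu + b2*io  (illustrative model form only)
X = np.column_stack([np.ones(len(samples)), samples[:, 0], samples[:, 1]])
coeffs, *_ = np.linalg.lstsq(X, samples[:, 2], rcond=None)

def predicted_overhead(cpu_util, io_util):
    """Reference-model estimate of VM launch overhead for a candidate host."""
    return coeffs @ np.array([1.0, cpu_util, io_util])

# a bursting decision could prefer the host with the smallest predicted overhead
print(predicted_overhead(0.25, 0.10), predicted_overhead(0.85, 0.80))
```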
Three Tier Unified Process Model for Requirement Negotiations and Stakeholder Collaborations
NASA Astrophysics Data System (ADS)
Niazi, Muhammad Ashraf Khan; Abbas, Muhammad; Shahzad, Muhammad
2012-11-01
This research paper is focused towards carrying out a pragmatic qualitative analysis of various models and approaches to requirements negotiation (a sub-process of the requirements management plan, which is an output of scope management's collect requirements process) and studies stakeholder collaboration methodologies (i.e. from within the communication management knowledge area). The experiential analysis encompasses two tiers: the first tier refers to the weighted scoring model, while the second tier focuses on the development of SWOT matrices on the basis of the findings of the weighted scoring model for selecting an appropriate requirements negotiation model. Finally the results are simulated with the help of statistical pie charts. On the basis of the simulated results for prevalent models and approaches to negotiation, a unified approach for requirements negotiations and stakeholder collaborations is proposed in which the collaboration methodologies are embedded into the selected requirements negotiation model as internal parameters of the proposed process, alongside some external required parameters like MBTI, opportunity analysis etc.
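The first-tier weighted scoring model can be illustrated with a small sketch; the criteria, weights, candidate negotiation models, and scores below are invented for demonstration and are not those used in the paper:

```python
# first-tier weighted scoring: criterion weights sum to 1, candidates scored 1-10
weights = {"stakeholder coverage": 0.30, "tool support": 0.20,
           "negotiation speed": 0.25, "traceability": 0.25}

# hypothetical candidate requirements-negotiation models and their criterion scores
candidates = {
    "WinWin":     {"stakeholder coverage": 8, "tool support": 7, "negotiation speed": 6, "traceability": 7},
    "EasyWinWin": {"stakeholder coverage": 9, "tool support": 8, "negotiation speed": 7, "traceability": 6},
    "MCPA":       {"stakeholder coverage": 7, "tool support": 6, "negotiation speed": 8, "traceability": 8},
}

def weighted_score(scores):
    """Weighted sum of criterion scores for one candidate model."""
    return sum(weights[c] * s for c, s in scores.items())

for name in sorted(candidates, key=lambda m: weighted_score(candidates[m]), reverse=True):
    print(f"{name}: {weighted_score(candidates[name]):.2f}")
```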
[Nursing care systematization in rehabilitation unit, in accordance to Horta's conceptual model].
Neves, Rinaldo de Souza
2006-01-01
The utilization of a conceptual model in the systematization of nursing care allows the development of activities based on theoretical references that can guide the implantation and implementation of nursing procedures in hospitals. In this article we examine the option made for the implementation of Horta's conceptual model in the construction of a nursing care system in the Rehabilitation Unit of a public hospital located in the Federal District of Brazil. Through the utilization of these theoretical references it was possible to make available a data collection tool based on the basic human needs. The identification of these needs made possible the construction of the hierarchically disposed pyramid of the neurological patients' modified basic needs. Through this reference we intend to elaborate the prescription and nursing evolution based on the concepts and standards of Horta's nursing process, making possible the inter-relationship of all phases of this care methodology.
Using explanatory crop models to develop simple tools for Advanced Life Support system studies
NASA Technical Reports Server (NTRS)
Cavazzoni, J.
2004-01-01
System-level analyses for Advanced Life Support require mathematical models for various processes, such as for biomass production and waste management, which would ideally be integrated into overall system models. Explanatory models (also referred to as mechanistic or process models) would provide the basis for a more robust system model, as these would be based on an understanding of specific processes. However, implementing such models at the system level may not always be practicable because of their complexity. For the area of biomass production, explanatory models were used to generate parameters and multivariable polynomial equations for basic models that are suitable for estimating the direction and magnitude of daily changes in canopy gas-exchange, harvest index, and production scheduling for both nominal and off-nominal growing conditions. © 2004 COSPAR. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Acharya, S.; Kaplan, D. A.; Casey, S.; Cohen, M. J.; Jawitz, J. W.
2015-05-01
Self-organized landscape patterning can arise in response to multiple processes. Discriminating among alternative patterning mechanisms, particularly where experimental manipulations are untenable, requires process-based models. Previous modeling studies have attributed patterning in the Everglades (Florida, USA) to sediment redistribution and anisotropic soil hydraulic properties. In this work, we tested an alternate theory, the self-organizing-canal (SOC) hypothesis, by developing a cellular automata model that simulates pattern evolution via local positive feedbacks (i.e., facilitation) coupled with a global negative feedback based on hydrology. The model is forced by global hydroperiod that drives stochastic transitions between two patch types: ridge (higher elevation) and slough (lower elevation). We evaluated model performance using multiple criteria based on six statistical and geostatistical properties observed in reference portions of the Everglades landscape: patch density, patch anisotropy, semivariogram ranges, power-law scaling of ridge areas, perimeter area fractal dimension, and characteristic pattern wavelength. Model results showed strong statistical agreement with reference landscapes, but only when anisotropically acting local facilitation was coupled with hydrologic global feedback, for which several plausible mechanisms exist. Critically, the model correctly generated fractal landscapes that had no characteristic pattern wavelength, supporting the invocation of global rather than scale-specific negative feedbacks.
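A minimal cellular-automata sketch of the SOC idea is given below: local facilitation raises the probability that a cell matches its neighbours, while a global negative feedback (a stand-in for hydroperiod) pushes the landscape toward a target ridge fraction. The transition-probability forms and parameter values are invented, and the anisotropy of facilitation and the geostatistical evaluation criteria are omitted:

```python
import numpy as np

rng = np.random.default_rng(42)
ny, nx = 100, 100
ridge = rng.random((ny, nx)) < 0.3        # True = ridge (higher), False = slough
target_ridge_fraction = 0.35              # proxy for the hydroperiod-driven equilibrium

def neighbor_ridge_fraction(grid):
    """Fraction of ridge cells among the 4 nearest neighbours (local facilitation)."""
    n = np.zeros(grid.shape)
    for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
        n += np.roll(grid, shift, axis=axis)
    return n / 4.0

for step in range(500):
    local = neighbor_ridge_fraction(ridge)
    # global negative feedback: too much ridge raises water levels, favouring slough
    global_bias = target_ridge_fraction - ridge.mean()
    p_to_ridge  = np.clip(0.02 + 0.10 * local + 0.50 * global_bias, 0, 1)
    p_to_slough = np.clip(0.02 + 0.10 * (1 - local) - 0.50 * global_bias, 0, 1)
    u = rng.random(ridge.shape)
    ridge = np.where(ridge, u > p_to_slough, u < p_to_ridge)

print("final ridge fraction:", ridge.mean())
```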
NASA Astrophysics Data System (ADS)
Dünser, Simon; Meyer, Daniel W.
2016-06-01
In most groundwater aquifers, dispersion of tracers is dominated by flow-field inhomogeneities resulting from the underlying heterogeneous conductivity or transmissivity field. This effect is referred to as macrodispersion. Since in practice, besides a few point measurements the complete conductivity field is virtually never available, a probabilistic treatment is needed. To quantify the uncertainty in tracer concentrations from a given geostatistical model for the conductivity, Monte Carlo (MC) simulation is typically used. To avoid the excessive computational costs of MC, the polar Markovian velocity process (PMVP) model was recently introduced delivering predictions at about three orders of magnitude smaller computing times. In artificial test cases, the PMVP model has provided good results in comparison with MC. In this study, we further validate the model in a more challenging and realistic setup. The setup considered is derived from the well-known benchmark macrodispersion experiment (MADE), which is highly heterogeneous and non-stationary with a large number of unevenly scattered conductivity measurements. Validations were done against reference MC and good overall agreement was found. Moreover, simulations of a simplified setup with a single measurement were conducted in order to reassess the model's most fundamental assumptions and to provide guidance for model improvements.
Evaluating Innovation and Navigating Unseen Boundaries: Systems, Processes and People
ERIC Educational Resources Information Center
Fleet, Alma; De Gioia, Katey; Madden, Lorraine; Semann, Anthony
2018-01-01
This paper illustrates an evaluation model emerging from Australian research. With reference to a range of contexts, its usefulness is demonstrated through application to two professional development initiatives designed to improve continuity of learning in the context of the transition to school. The model reconceptualises approaches to…
Using the Gamma-Poisson Model to Predict Library Circulations.
ERIC Educational Resources Information Center
Burrell, Quentin L.
1990-01-01
Argues that the gamma mixture of Poisson processes, for all its perceived defects, can be used to make predictions regarding future library book circulations of a quality adequate for general management requirements. The use of the model is extensively illustrated with data from two academic libraries. (Nine references) (CLB)
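Because a gamma mixture of Poisson processes is equivalent to a negative binomial distribution, predictions of the kind described can be sketched in a few lines; the circulation counts below are invented, and the method-of-moments fit is only one simple way to estimate the gamma parameters:

```python
import numpy as np
from scipy import stats

# hypothetical annual circulation counts for a sample of library titles
circulations = np.array([0, 0, 1, 0, 2, 3, 0, 1, 5, 0, 2, 1, 0, 4, 1, 0, 0, 2, 1, 3])

# method-of-moments fit of the gamma mixing distribution (requires variance > mean)
m, v = circulations.mean(), circulations.var(ddof=1)
shape = m**2 / (v - m)          # gamma shape parameter
scale = (v - m) / m             # gamma scale parameter

# the gamma-Poisson mixture is a negative binomial with n = shape, p = 1/(1 + scale)
next_year = stats.nbinom(shape, 1.0 / (1.0 + scale))

print("P(title not borrowed next year) =", round(next_year.pmf(0), 3))
print("P(borrowed 3+ times)            =", round(1 - next_year.cdf(2), 3))
```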
The Model of Career Anchors as a Tool in the Analysis of Instructional Developers.
ERIC Educational Resources Information Center
Miller, Carol
1981-01-01
Examines the importance of human systems as a relevant aspect of development processes and looks at the career anchor model proposed by Schein as a possible area in the analysis of the instructional developer/client relationships. Fourteen references are listed. (Author/LLS)
The Comprehension and Validation of Social Information.
ERIC Educational Resources Information Center
Wyer, Robert S., Jr.; Radvansky, Gabriel A.
1999-01-01
Proposes a theory of social cognition to account for the comprehension and verification of social information. The theory views comprehension as a process of constructing situation models of new information on the basis of previously formed models about its referents. The comprehension of both single statements and multiple pieces of information…
Use of a Behavioral Contract with Resident Assistants: Prelude to a Health Fair.
ERIC Educational Resources Information Center
Morgan, John D.; Hyner, Gerald C.
1984-01-01
Presents a conceptual model which focuses on goals of student government in residence halls. The model has two fundamental processes. One focuses on short term goals and activities catering to creation of environment. The second has long term effects referred to as lifelong personal development. (JAC)
Implementation of a School-Wide Approach to Critical Thinking Instruction.
ERIC Educational Resources Information Center
Kassem, Cherrie L.
2000-01-01
To improve students' critical-thinking skills, an interdisciplinary team of educators collaborated with a specialist. The result: a new model for infusing thinking-skills instruction. This paper describes the change process, the CRTA model's evolution, derivation of its acronym, and early qualitative results. (Contains 31 references.) (MLH)
ERIC Educational Resources Information Center
Lane, Julia
2012-01-01
This paper suggests a model of embodied environmental education grounded in participant interviews, fieldwork, scholarly literature, and the author's own embodied relationship with the natural world. In this article, embodiment refers to a process that stems from Indigenous Knowledges and theatre. Although Indigenous Knowledges and theatre…
Self-Directed Learning in the Process of Work: Conceptual Considerations--Empirical Evidences.
ERIC Educational Resources Information Center
Straka, Gerald A.; Schaefer, Cornelia
With reference to the literature on adult self-directed learning, a model termed the "Two-Shell Model of Motivated Self-Directed Learning" was formulated that differentiates sociohistorical environmental conditions, internal conditions, and activities related to four concepts (interest, learning strategies, control, and evaluation). The…
Broughton, Heather M; Govender, Danny; Shikwambana, Purvance; Chappell, Patrick; Jolles, Anna
2017-06-01
The International Species Information System has set forth an extensive database of reference intervals for zoologic species, allowing veterinarians and game park officials to distinguish normal health parameters from underlying disease processes in captive wildlife. However, several recent studies comparing reference values from captive and free-ranging animals have found significant variation between populations, necessitating the development of separate reference intervals in free-ranging wildlife to aid in the interpretation of health data. Thus, this study characterizes reference intervals for six biochemical analytes, eleven hematologic or immune parameters, and three hormones using samples from 219 free-ranging African lions ( Panthera leo ) captured in Kruger National Park, South Africa. Using the original sample population, exclusion criteria based on physical examination were applied to yield a final reference population of 52 clinically normal lions. Reference intervals were then generated via 90% confidence intervals on log-transformed data using parametric bootstrapping techniques. In addition to the generation of reference intervals, linear mixed-effect models and generalized linear mixed-effect models were used to model associations of each focal parameter with the following independent variables: age, sex, and body condition score. Age and sex were statistically significant drivers for changes in hepatic enzymes, renal values, hematologic parameters, and leptin, a hormone related to body fat stores. Body condition was positively correlated with changes in monocyte counts. Given the large variation in reference values taken from captive versus free-ranging lions, it is our hope that this study will serve as a baseline for future clinical evaluations and biomedical research targeting free-ranging African lions.
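A generic sketch of the interval-construction step (log-transform, parametric fit, bootstrap of the reference limits) is shown below with simulated analyte values; the data, the log-normal assumption, and the choice of a central 95% interval with 90% confidence limits are illustrative and not necessarily the study's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(7)
# hypothetical analyte values (arbitrary units) from 52 clinically normal lions
values = rng.lognormal(mean=3.0, sigma=0.4, size=52)

log_vals = np.log(values)
mu, sd = log_vals.mean(), log_vals.std(ddof=1)

# reference limits as the central 95% of the fitted log-normal distribution
lower, upper = np.exp(mu - 1.96 * sd), np.exp(mu + 1.96 * sd)

# parametric bootstrap: resample from the fitted distribution, refit, recompute limits
boot_limits = []
for _ in range(5000):
    sim = rng.normal(mu, sd, size=len(values))
    boot_limits.append((np.exp(sim.mean() - 1.96 * sim.std(ddof=1)),
                        np.exp(sim.mean() + 1.96 * sim.std(ddof=1))))
boot_limits = np.array(boot_limits)

# 90% confidence intervals on each reference limit
lo_ci = np.percentile(boot_limits[:, 0], [5, 95])
hi_ci = np.percentile(boot_limits[:, 1], [5, 95])
print(f"reference interval: {lower:.1f}-{upper:.1f}")
print(f"90% CI on lower limit: {lo_ci}, on upper limit: {hi_ci}")
```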
ANSI/ASHRAE/IES Standard 90.1-2010 Performance Rating Method Reference Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goel, Supriya; Rosenberg, Michael I.
This document is intended to be a reference manual for the Appendix G Performance Rating Method (PRM) of ANSI/ASHRAE/IES Standard 90.1-2010 (Standard 90.1-2010). The PRM is used for rating the energy efficiency of commercial and high-rise residential buildings with designs that exceed the requirements of Standard 90.1. The procedures and processes described in this manual are designed to provide consistency and accuracy by filling in gaps and providing additional details needed by users of the PRM. It should be noted that this document is created independently from ASHRAE and SSPC 90.1 and is not sanctioned nor approved by either of those entities. Potential users of this manual include energy modelers, software developers and implementers of "beyond code" energy programs. Energy modelers using ASHRAE Standard 90.1-2010 for beyond code programs can use this document as a reference manual for interpreting requirements of the Performance Rating Method. Software developers, developing tools for automated creation of the baseline model, can use this reference manual as a guideline for developing the rules for the baseline model.
NASA Astrophysics Data System (ADS)
Casadei, D.
2014-10-01
The objective Bayesian treatment of a model representing two independent Poisson processes, labelled as ``signal'' and ``background'' and both contributing additively to the total number of counted events, is considered. It is shown that the reference prior for the parameter of interest (the signal intensity) can be well approximated by the widely (ab)used flat prior only when the expected background is very high. On the other hand, a very simple approximation (the limiting form of the reference prior for perfect prior background knowledge) can be safely used over a large portion of the background parameters space. The resulting approximate reference posterior is a Gamma density whose parameters are related to the observed counts. This limiting form is simpler than the result obtained with a flat prior, with the additional advantage of representing a much closer approximation to the reference posterior in all cases. Hence such limiting prior should be considered a better default or conventional prior than the uniform prior. On the computing side, it is shown that a 2-parameter fitting function is able to reproduce extremely well the reference prior for any background prior. Thus, it can be useful in applications requiring the evaluation of the reference prior for a very large number of times.
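The contrast between a flat-prior posterior and a Gamma-shaped limiting posterior can be examined numerically; in the sketch below the background is assumed perfectly known, and the Gamma shape parameter used for the stand-in is an assumption rather than the paper's exact limiting form:

```python
import numpy as np
from scipy import stats

n_obs, b = 5, 3.2                 # observed counts and (assumed known) expected background
s = np.linspace(0, 20, 2001)      # grid for the signal intensity s
ds = s[1] - s[0]

# posterior under a flat prior on s: p(s | n) proportional to Poisson(n | s + b)
flat_post = stats.poisson.pmf(n_obs, s + b)
flat_post /= flat_post.sum() * ds

# a Gamma-shaped stand-in for the limiting reference posterior (illustrative only:
# the shape parameter n + 1/2 is an assumption, not the paper's exact mapping)
gamma_post = stats.gamma.pdf(s, a=n_obs + 0.5, scale=1.0)

print("flat-prior posterior mean:", (s * flat_post).sum() * ds)
print("Gamma stand-in mean:      ", n_obs + 0.5)
```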
A Neural Network Architecture For Rapid Model Indexing In Computer Vision Systems
NASA Astrophysics Data System (ADS)
Pawlicki, Ted
1988-03-01
Models of objects stored in memory have been shown to be useful for guiding the processing of computer vision systems. A major consideration in such systems, however, is how stored models are initially accessed and indexed by the system. As the number of stored models increases, the time required to search memory for the correct model becomes high. Parallel, distributed, connectionist neural networks have been shown to have appealing content addressable memory properties. This paper discusses an architecture for efficient storage and reference of model memories stored as stable patterns of activity in a parallel, distributed, connectionist neural network. The emergent properties of content addressability and resistance to noise are exploited to perform indexing of the appropriate object centered model from image centered primitives. The system consists of three network modules each of which represent information relative to a different frame of reference. The model memory network is a large state space vector where fields in the vector correspond to ordered component objects and relative, object based spatial relationships between the component objects. The component assertion network represents evidence about the existence of object primitives in the input image. It establishes local frames of reference for object primitives relative to the image based frame of reference. The spatial relationship constraint network is an intermediate representation which enables the association between the object based and the image based frames of reference. This intermediate level represents information about possible object orderings and establishes relative spatial relationships from the image based information in the component assertion network below. It is also constrained by the lawful object orderings in the model memory network above. The system design is consistent with current psychological theories of recognition by components. It also seems to support Marr's notions of hierarchical indexing (i.e. the specificity, adjunct, and parent indices). It supports the notion that multiple canonical views of an object may have to be stored in memory to enable its efficient identification. The use of variable fields in the state space vectors appears to keep the number of required nodes in the network down to a tractable number while imposing a semantic value on different areas of the state space. This semantic imposition supports an interface between the analogical aspects of neural networks and the propositional paradigms of symbolic processing.
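The content-addressable-memory property that the architecture relies on can be illustrated with a minimal Hopfield-style network; this is a generic sketch of pattern completion under Hebbian storage, not the three-module architecture described above:

```python
import numpy as np

rng = np.random.default_rng(3)
n_units, n_models = 64, 4

# each stored "model memory" is a random +/-1 state-space vector
models = rng.choice([-1, 1], size=(n_models, n_units))

# Hebbian outer-product storage with zero self-connections
W = sum(np.outer(m, m) for m in models).astype(float)
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    """Iteratively settle a noisy cue onto the nearest stored pattern."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# corrupt one stored model in 15% of its units and let the network index it
cue = models[2].copy()
cue[rng.random(n_units) < 0.15] *= -1
retrieved = recall(cue)
print("recovered model index:", int(np.argmax(models @ retrieved)))
```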
ARMA models for earthquake ground motions. Seismic safety margins research program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, M. K.; Kwiatkowski, J. W.; Nau, R. F.
1981-02-01
Four major California earthquake records were analyzed by use of a class of discrete linear time-domain processes commonly referred to as ARMA (Autoregressive/Moving-Average) models. It was possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters, and test the residuals generated by these models. It was also possible to show the connections, similarities, and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum-likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed is suitable for simulating earthquake ground motions in the time domain, and appears to be easily adapted to serve as inputs for nonlinear discrete time models of structural motions. 60 references, 19 figures, 9 tables.
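A small sketch of the simulation use suggested in the last sentence is given below, using statsmodels; the ARMA orders, coefficients, and amplitude envelope are invented and are not the values identified for the California records:

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

# illustrative ARMA(2,1) coefficients (not those identified for the California records)
ar = np.array([1, -1.5, 0.75])     # AR polynomial: 1 - 1.5 L + 0.75 L^2
ma = np.array([1, 0.4])            # MA polynomial: 1 + 0.4 L

process = ArmaProcess(ar, ma)
print("stationary:", process.isstationary, "invertible:", process.isinvertible)

# simulate a stationary acceleration-like series and apply a simple amplitude
# envelope so the record builds up and decays like a strong-motion accelerogram
n = 2000                                   # samples (e.g., 40 s at 50 Hz)
t = np.arange(n) / 50.0
envelope = (t / 5.0) ** 2 * np.exp(-t / 5.0)
accel = envelope * process.generate_sample(nsample=n, scale=1.0)
```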
Roux-Rouquié, Magali; Caritey, Nicolas; Gaubert, Laurent; Rosenthal-Sabroux, Camille
2004-07-01
One of the main issues in Systems Biology is to deal with semantic data integration. Previously, we examined the requirements for a reference conceptual model to guide semantic integration based on the systemic principles. In the present paper, we examine the usefulness of the Unified Modelling Language (UML) to describe and specify biological systems and processes. This makes unambiguous representations of biological systems, which would be suitable for translation into mathematical and computational formalisms, enabling analysis, simulation and prediction of these systems behaviours.
FRAP Analysis: Accounting for Bleaching during Image Capture
Wu, Jun; Shekhar, Nandini; Lele, Pushkar P.; Lele, Tanmay P.
2012-01-01
The analysis of Fluorescence Recovery After Photobleaching (FRAP) experiments involves mathematical modeling of the fluorescence recovery process. An important feature of FRAP experiments that tends to be ignored in the modeling is that there can be a significant loss of fluorescence due to bleaching during image capture. In this paper, we explicitly include the effects of bleaching during image capture in the model for the recovery process, instead of correcting for the effects of bleaching using reference measurements. Using experimental examples, we demonstrate the usefulness of such an approach in FRAP analysis. PMID:22912750
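The modelling idea can be sketched with an illustrative recovery function in which a single-exponential recovery is multiplied by a monitor-bleaching decay and fitted directly to the data; the functional form and parameter values are assumptions for demonstration, not the paper's model:

```python
import numpy as np
from scipy.optimize import curve_fit

def frap_with_acquisition_bleaching(t, f0, a, k_rec, k_bleach):
    """Single-exponential recovery multiplied by monitor-bleaching decay.
    f0: post-bleach intensity, a: mobile amplitude, k_rec: recovery rate,
    k_bleach: bleaching rate due to repeated image capture (all illustrative)."""
    return (f0 + a * (1 - np.exp(-k_rec * t))) * np.exp(-k_bleach * t)

# synthetic "measured" recovery curve with acquisition bleaching and noise
rng = np.random.default_rng(5)
t = np.linspace(0, 60, 121)                      # seconds after photobleach
truth = frap_with_acquisition_bleaching(t, 0.25, 0.55, 0.15, 0.004)
data = truth + rng.normal(scale=0.01, size=t.size)

popt, pcov = curve_fit(frap_with_acquisition_bleaching, t, data,
                       p0=[0.2, 0.5, 0.1, 0.001])
print("fitted k_rec and k_bleach:", popt[2], popt[3])
```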
Solid waste projection model: Model version 1. 0 technical reference manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilkins, M.L.; Crow, V.L.; Buska, D.E.
1990-11-01
The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software utilized in developing Version 1.0 of the modeling unit of SWPM. This document is intended for use by experienced software engineers and supports programming, code maintenance, and model enhancement. Those interested in using SWPM should refer to the SWPM Model User's Guide. This document is available from either the PNL project manager (D. L. Stiles, 509-376-4154) or the WHC program monitor (B. C. Anderson, 509-373-2796). 8 figs.
An electronic system for measuring thermophysical properties of wind tunnel models
NASA Technical Reports Server (NTRS)
Corwin, R. R.; Kramer, J. S.
1975-01-01
An electronic system is described which measures the surface temperature of a small portion of the surface of the model or sample at high speeds using an infrared radiometer. This data is processed along with heating rate data from the reference heat gauge in a small computer and prints out the desired thermophysical properties, time, surface temperature, and reference heat rate. This system allows fast and accurate property measurements over thirty temperature increments. The technique, the details of the apparatus, the procedure for making these measurements, and the results of some preliminary tests are presented.
A Successful Creative Process: The Role of Passion and Emotions
ERIC Educational Resources Information Center
St-Louis, Ariane C.; Vallerand, Robert J.
2015-01-01
The creative process refers to a sequence of thoughts and actions leading to a novel, adaptive production (Lubart, 2000). It demands love, time, and devotion; therefore, creators are passionate about their creative work. The Dualistic Model of Passion (Vallerand et al., 2003) defines passion as a strong inclination for a self-defining activity that…
A Model for Logistics Systems Engineering Management Education in Europe.
ERIC Educational Resources Information Center
Naim, M.; Lalwani, C.; Fortuin, L.; Schmidt, T.; Taylor, J.; Aronsson, H.
2000-01-01
Presents the need for a systems and process perspective of logistics, and develops a template for a logistics education course. The template addresses functional, process, and supply chain needs and was developed by a number of university partners with core skills in different traditional disciplines. (Contains 31 references.) (Author/WRM)
NASA Astrophysics Data System (ADS)
Gektin, Yu. M.; Egoshkin, N. A.; Eremeev, V. V.; Kuznecov, A. E.; Moskatinyev, I. V.; Smelyanskiy, M. B.
2017-12-01
A set of standardized models and algorithms for geometric normalization and georeferencing images from geostationary and highly elliptical Earth observation systems is considered. The algorithms can process information from modern scanning multispectral sensors with two-coordinate scanning and represent normalized images in optimal projection. Problems of the high-precision ground calibration of the imaging equipment using reference objects, as well as issues of the flight calibration and refinement of geometric models using the absolute and relative reference points, are considered. Practical testing of the models, algorithms, and technologies is performed in the calibration of sensors for spacecrafts of the Electro-L series and during the simulation of the Arktika prospective system.
Estimation of Atmospheric Methane Surface Fluxes Using a Global 3-D Chemical Transport Model
NASA Astrophysics Data System (ADS)
Chen, Y.; Prinn, R.
2003-12-01
Accurate determination of atmospheric methane surface fluxes is an important and challenging problem in global biogeochemical cycles. We use inverse modeling to estimate annual, seasonal, and interannual CH4 fluxes between 1996 and 2001. The fluxes include 7 time-varying seasonal (3 wetland, rice, and 3 biomass burning) and 3 steady aseasonal (animals/waste, coal, and gas) global processes. To simulate atmospheric methane, we use the 3-D chemical transport model MATCH driven by NCEP reanalyzed observed winds at a resolution of T42 ( ˜2.8° x 2.8° ) in the horizontal and 28 levels (1000 - 3 mb) in the vertical. By combining existing datasets of individual processes, we construct a reference emissions field that represents our prior guess of the total CH4 surface flux. For the methane sink, we use a prescribed, annually-repeating OH field scaled to fit methyl chloroform observations. MATCH is used to produce both the reference run from the reference emissions, and the time-dependent sensitivities that relate individual emission processes to observations. The observational data include CH4 time-series from ˜15 high-frequency (in-situ) and ˜50 low-frequency (flask) observing sites. Most of the high-frequency data, at a time resolution of 40-60 minutes, have not previously been used in global scale inversions. In the inversion, the high-frequency data generally have greater weight than the weekly flask data because they better define the observational monthly means. The Kalman Filter is used as the optimal inversion technique to solve for emissions between 1996-2001. At each step in the inversion, new monthly observations are utilized and new emissions estimates are produced. The optimized emissions represent deviations from the reference emissions that lead to a better fit to the observations. The seasonal processes are optimized for each month, and contain the methane seasonality and interannual variability. The aseasonal processes, which are less variable, are solved as constant emissions over the entire time period. The Kalman Filter also produces emission uncertainties which quantify the ability of the observing network to constrain different processes. The sensitivity of the inversion to different observing sites and model sampling strategies is also tested. In general, the inversion reduces coal and gas emissions, and increases rice and biomass burning emissions relative to the reference case. Increases in both tropical and northern wetland emissions are found to have dominated the strong atmospheric methane increase in 1998. Northern wetlands are the best constrained processes, while tropical regions are poorly constrained and will require additional observations in the future for significant uncertainty reduction. The results of this study also suggest that interannual varying transport like NCEP and high-frequency measurements should be used when solving for methane emissions at monthly time resolution. Better estimates of global OH fluctuations are also necessary to fully describe the interannual behavior of methane observations.
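A generic sketch of one Kalman filter update of the kind described (emission deviations as the state, a sensitivity matrix linking them to observed-minus-reference mixing ratios) is shown below; all dimensions, covariances, and values are invented stand-ins for the MATCH-derived sensitivities and the observations:

```python
import numpy as np

rng = np.random.default_rng(11)
n_proc, n_obs = 10, 65            # emission processes and observing sites (illustrative)

x = np.zeros(n_proc)              # state: deviations from the reference emissions
P = np.eye(n_proc) * 0.5**2       # prior covariance (50% of reference, squared; invented)
R = np.eye(n_obs) * 5.0**2        # observation-error covariance (ppb^2, invented)

H = rng.normal(size=(n_obs, n_proc))   # sensitivity matrix (stand-in for MATCH runs)
y = rng.normal(size=n_obs) * 10.0      # observed-minus-reference mixing ratios (stand-in)

# single Kalman update for one month of observations
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ (y - H @ x)
P = (np.eye(n_proc) - K @ H) @ P

print("posterior emission adjustments:", np.round(x, 2))
print("uncertainty reduction (%):", np.round(100 * (1 - np.sqrt(np.diag(P)) / 0.5), 1))
```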
The Geochemical Earth Reference Model (GERM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Staudigel, H.; Albarede, F.; Shaw, H.
The Geochemical Earth Reference Model (GERM) initiative is a grass-roots effort with the goal of establishing a community consensus on a chemical characterization of the Earth, its major reservoirs, and the fluxes between them. The long-term goal of GERM is a chemical reservoir characterization analogous to the geophysical effort of the Preliminary Reference Earth Model (PREM). Chemical fluxes between reservoirs are included in GERM to illuminate the long-term chemical evolution of the Earth and to characterize the Earth as a dynamic chemical system. In turn, these fluxes control geological processes and influence hydrosphere-atmosphere-climate dynamics. While these long-term goals are clearly the focus of GERM, the process of establishing GERM itself is just as important as its ultimate goal. The GERM initiative is developed in an open community discussion on the World Wide Web (the GERM home page is at http://www-ep.es.llnl.gov/germ/germ-home.html) that is mediated by a series of editors with responsibilities for distinct reservoirs and fluxes. Beginning with the original workshop in Lyons (March 1996), GERM has continued to be developed on the Internet, punctuated by workshops and special sessions at professional meetings. It is planned to complete the first model by mid-1997, followed by a call for papers for a February 1998 GERM conference in La Jolla, California.
Dynamic updating atlas for heart segmentation with a nonlinear field-based model.
Cai, Ken; Yang, Rongqian; Yue, Hongwei; Li, Lihua; Ou, Shanxing; Liu, Feng
2017-09-01
Segmentation of cardiac computed tomography (CT) images is an effective method for assessing the dynamic function of the heart and lungs. In the atlas-based heart segmentation approach, the quality of segmentation usually relies on the atlas images, and the selection of those reference images is a key step. The optimal goal in this selection process is to have the reference images as close to the target image as possible. This study proposes an atlas dynamic update algorithm using a nonlinear deformation field scheme. The proposed method is based on features shared among double-source CT (DSCT) slices. These features form the basis for constructing an average model, and the resulting reference atlas image is updated during the registration process. A nonlinear field-based model was used to effectively implement 4D cardiac segmentation. The proposed segmentation framework was validated with 14 4D cardiac CT sequences. The algorithm achieved an acceptable accuracy (1.0-2.8 mm). Our proposed method, which combines a nonlinear field-based model with dynamic atlas updating strategies, can provide an effective and accurate way to perform whole heart segmentation. The success of the proposed method largely relies on the effective use of the prior knowledge of the atlas and the similarity explored among the to-be-segmented DSCT sequences. Copyright © 2016 John Wiley & Sons, Ltd.
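A minimal sketch of the dynamic-atlas idea described above, on toy 2-D arrays rather than DSCT data and not the authors' algorithm: after each slice is registered, the reference atlas is refreshed as a running average, so subsequent registrations use a reference that stays close to the target sequence.

```python
# Illustrative running-average atlas update (toy data, hypothetical scheme).
import numpy as np

def update_atlas(atlas, registered_image, n_seen):
    """Refresh the reference atlas after one registered image has been added."""
    return (atlas * n_seen + registered_image) / (n_seen + 1)

rng = np.random.default_rng(5)
atlas = rng.random((64, 64))              # initial average model (toy)
for n in range(1, 6):
    new_slice = rng.random((64, 64))      # stand-in for a registered DSCT slice
    atlas = update_atlas(atlas, new_slice, n)
```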
Mobile platform for treatment of stroke: A case study of tele-assistance
Torres Zenteno, Arturo Henry; Fernández, Francisco; Palomino-García, Alfredo; Moniche, Francisco; Escudero, Irene; Jiménez-Hernández, M Dolores; Caballero, Auxiliadora; Escobar-Rodriguez, Germán; Parra, Carlos
2015-01-01
This article presents the technological solution of a tele-assistance process for stroke patients in acute phase in the Seville metropolitan area. The main objective of this process is to reduce time from symptom onset to treatment of acute phase stroke patients by means of telemedicine, regarding mobility between an intensive care unit ambulance and an expert center and activating the pre-hospital care phase. The technological platform covering the process has been defined following an interoperability model based on standards and with a focus on service-oriented architecture focus. Messaging definition has been designed according to the reference model of the CEN/ISO 13606, messages content follows the structure of archetypes. An XDS-b (Cross-Enterprise Document Sharing-b) transaction messaging has been designed according to Integrating the Healthcare Enterprise profile for archetype notifications and update enquiries.This research has been performed by a multidisciplinary group. The Virgen del Rocío University Hospital acts as Reference Hospital and the Public Company for Healthcare as mobility surroundings. PMID:25975806
Memory systems interaction in the pigeon: working and reference memory.
Roberts, William A; Strang, Caroline; Macpherson, Krista
2015-04-01
Pigeons' performance on a working memory task, symbolic delayed matching-to-sample, was used to examine the interaction between working memory and reference memory. Reference memory was established by training pigeons to discriminate between the comparison cues used in delayed matching as S+ and S- stimuli. Delayed matching retention tests then measured accuracy when working and reference memory were congruent and incongruent. In 4 experiments, it was shown that the interaction between working and reference memory is reciprocal: Strengthening either type of memory leads to a decrease in the influence of the other type of memory. A process dissociation procedure analysis of the data from Experiment 4 showed independence of working and reference memory, and a model of working memory and reference memory interaction was shown to predict the findings reported in the 4 experiments. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Ergodicity-breaking bifurcations and tunneling in hyperbolic transport models
NASA Astrophysics Data System (ADS)
Giona, M.; Brasiello, A.; Crescitelli, S.
2015-11-01
One of the main differences between parabolic transport, associated with Langevin equations driven by Wiener processes, and hyperbolic models related to generalized Kac equations driven by Poisson processes, is the occurrence in the latter of multiple stable invariant densities (Frobenius multiplicity) in certain regions of the parameter space. This phenomenon is associated with the occurrence in linear hyperbolic balance equations of a typical bifurcation, referred to as the ergodicity-breaking bifurcation, the properties of which are thoroughly analyzed.
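For intuition, the following sketch simulates a one-dimensional telegraph (Kac-type) process, the kind of Poisson-driven dynamics underlying the hyperbolic models discussed above; the speed, switching rate and step size are illustrative assumptions only.

```python
# Minimal sketch: a particle moves with constant speed c and reverses
# direction at the events of a Poisson process with rate a. Hyperbolic
# transport models of this kind can admit multiple invariant densities
# in parts of the parameter space, as discussed in the abstract above.
import numpy as np

def simulate_kac(c=1.0, a=2.0, dt=1e-3, n_steps=10_000, seed=1):
    rng = np.random.default_rng(seed)
    x, sign = 0.0, 1
    xs = np.empty(n_steps)
    for k in range(n_steps):
        if rng.random() < a * dt:   # Poisson event in this small time step
            sign = -sign            # velocity reversal
        x += sign * c * dt
        xs[k] = x
    return xs

trajectory = simulate_kac()
```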
From scenarios to domain models: processes and representations
NASA Astrophysics Data System (ADS)
Haddock, Gail; Harbison, Karan
1994-03-01
The domain specific software architectures (DSSA) community has defined a philosophy for the development of complex systems. This philosophy improves productivity and efficiency by increasing the user's role in the definition of requirements, increasing the systems engineer's role in the reuse of components, and decreasing the software engineer's role to the development of new components and component modifications only. The scenario-based engineering process (SEP), the first instantiation of the DSSA philosophy, has been adopted by the next generation controller project. It is also the chosen methodology of the trauma care information management system project, and the surrogate semi-autonomous vehicle project. SEP uses scenarios from the user to create domain models and define the system's requirements. Domain knowledge is obtained from a variety of sources including experts, documents, and videos. This knowledge is analyzed using three techniques: scenario analysis, task analysis, and object-oriented analysis. Scenario analysis results in formal representations of selected scenarios. Task analysis of the scenario representations results in descriptions of tasks necessary for object-oriented analysis and also subtasks necessary for functional system analysis. Object-oriented analysis of task descriptions produces domain models and system requirements. This paper examines the representations that support the DSSA philosophy, including reference requirements, reference architectures, and domain models. The processes used to create and use the representations are explained through use of the scenario-based engineering process. Selected examples are taken from the next generation controller project.
ERIC Educational Resources Information Center
Brown, Chris
2012-01-01
The phrase "knowledge adoption" refers to the often-complicated process by which policy makers "take on board" evidence. While models have been put forward to explain this activity, this paper argues that such models are flawed and fail to fully address those complexities affecting the successful realisation of knowledge…
Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network
NASA Astrophysics Data System (ADS)
Li, Zhiqiang; Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu
2018-04-01
This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely, conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model and subsequently extended to a DBN. The results show the state probabilities of an element and of the system without repair, with perfect and imperfect repair, and under CBM, with an absorbing set, as obtained from the differential equations and verified. Through forward inference, the reliability value of the control unit is determined under different kinds of modes. Finally, weak nodes in the control unit are identified.
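A minimal sketch of the Markov part of such a model, with hypothetical transition rates: state probabilities of a three-state element (good, degraded, failed, with the failed state treated as absorbing) obtained by integrating the Kolmogorov forward equations.

```python
# Illustrative three-state Markov model (assumed rates, not the paper's values):
# good -> degraded -> failed, with repair from degraded back to good.
import numpy as np
from scipy.integrate import solve_ivp

lam1, lam2, mu = 0.02, 0.05, 0.10              # degradation and repair rates (assumed)
Q = np.array([[-lam1,         lam1,       0.0],
              [   mu, -(mu + lam2),      lam2],
              [  0.0,          0.0,       0.0]])   # failed state is absorbing

def rhs(t, p):
    return p @ Q                                # Kolmogorov forward equations

sol = solve_ivp(rhs, (0.0, 100.0), [1.0, 0.0, 0.0],
                t_eval=np.linspace(0, 100, 11))
reliability = 1.0 - sol.y[2]                    # probability of not being absorbed
```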
Cortical basis of communication: local computation, coordination, attention.
Alexandre, Frederic
2009-03-01
Human communication emerges from cortical processing, known to be implemented on a regular repetitive neuronal substratum. The supposed genericity of cortical processing has elicited a series of modeling works in computational neuroscience that underline the information flows driven by the cortical circuitry. In the minimalist framework underlying the current theories for the embodiment of cognition, such a generic cortical processing is exploited for the coordination of poles of representation, as is reported in this paper for the case of visual attention. Interestingly, this case emphasizes how abstract internal referents are built to conform to memory requirements. This paper proposes that these referents are the basis for communication in humans, which is firstly a coordination and an attentional procedure with regard to their congeners.
Romero, Nuria; Sanchez, Alvaro; Vazquez, Carmelo
2014-03-01
Cognitive models propose that depression is caused by dysfunctional schemas that endure beyond the depressive episode, representing vulnerability factors for recurrence. However, research testing negative cognitions linked to dysfunctional schemas in formerly depressed individuals is still scarce. Furthermore, negative cognitions are presumed to be linked to biases in recalling negative self-referent information in formerly depressed individuals, but no studies have directly tested this association. In the present study, we evaluated differences between formerly and never-depressed individuals in several experimental indices of negative cognitions and their associations with the recall of emotional self-referent material. Formerly depressed (n = 30) and never-depressed individuals (n = 40) completed measures of explicit (i.e., scrambled sentence test) and automatic (i.e., lexical decision task) processing to evaluate negative cognitions. Furthermore, participants completed a self-referent incidental recall task to evaluate memory biases. Formerly depressed individuals, compared to never-depressed individuals, showed greater negative cognitions at both explicit and automatic levels of processing. Results also showed greater recall of negative self-referent information in formerly compared to never-depressed individuals. Finally, individual differences in negative cognitions at both explicit and automatic levels of processing predicted greater recall of negative self-referent material in formerly depressed individuals. Analyses of the relationship between explicit and automatic processing indices and memory biases were correlational, and the majority of participants in both groups were women. Our findings provide evidence of negative cognitions in formerly depressed individuals at both automatic and explicit levels of processing that may confer a cognitive vulnerability to depression. Copyright © 2013 Elsevier Ltd. All rights reserved.
Modelling Of Flotation Processes By Classical Mathematical Methods - A Review
NASA Astrophysics Data System (ADS)
Jovanović, Ivana; Miljanović, Igor
2015-12-01
Flotation process modelling is not a simple task, mostly because of the process complexity, i.e. the presence of a large number of variables that (to a lesser or a greater extent) affect the final outcome of the separation of mineral particles based on the differences in their surface properties. The attempts toward developing a quantitative predictive model that would fully describe the operation of an industrial flotation plant started in the middle of the last century and continue to this day. This paper gives a review of published research directed toward the development of flotation models based on classical mathematical rules. The description and systematization of classical flotation models were performed according to the available references, with emphasis given exclusively to the modelling of the flotation process, regardless of the model's application in a particular control system. In accordance with contemporary considerations, the models were classified as empirical, probabilistic, kinetic and population-balance types. Each model type is presented through the aspects of flotation modelling at the macro and micro process levels.
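As an example of the kinetic class of models covered by such reviews, the following sketch fits the classical first-order flotation model R(t) = R_inf (1 - exp(-k t)) to hypothetical batch recovery data.

```python
# First-order flotation kinetics fitted by least squares (toy data, assumed values).
import numpy as np
from scipy.optimize import curve_fit

def first_order_recovery(t, r_inf, k):
    return r_inf * (1.0 - np.exp(-k * t))

t_data = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # flotation time, minutes (assumed)
r_data = np.array([0.22, 0.38, 0.58, 0.75, 0.84])    # cumulative recovery (assumed)
(r_inf, k), _ = curve_fit(first_order_recovery, t_data, r_data, p0=(0.9, 0.5))
```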
Semi-Markov adjunction to the Computer-Aided Markov Evaluator (CAME)
NASA Technical Reports Server (NTRS)
Rosch, Gene; Hutchins, Monica A.; Leong, Frank J.; Babcock, Philip S., IV
1988-01-01
The rule-based Computer-Aided Markov Evaluator (CAME) program was expanded in its ability to incorporate the effect of fault-handling processes into the construction of a reliability model. The fault-handling processes are modeled as semi-Markov events, and CAME constructs an appropriate semi-Markov model. To solve the model, the program outputs it in a form which can be directly solved with the Semi-Markov Unreliability Range Evaluator (SURE) program. As a means of evaluating the alterations made to the CAME program, the program is used to model the reliability of portions of the Integrated Airframe/Propulsion Control System Architecture (IAPSA 2) reference configuration. The reliability predictions are compared with a previous analysis. The results bear out the feasibility of utilizing CAME to generate appropriate semi-Markov models of fault-handling processes.
Verification of the Skorohod-Olevsky Viscous Sintering (SOVS) Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, Brian T.
2017-11-16
Sintering refers to a manufacturing process through which mechanically pressed bodies of ceramic (and sometimes metal) powders are heated to drive densification, thereby removing the inherent porosity of green bodies. As the body densifies through the sintering process, the ensuing material flow leads to macroscopic deformations of the specimen, and as such the final configuration differs from the initial one. Therefore, as with any manufacturing step, there is substantial interest in understanding and being able to model the sintering process to predict deformation and residual stress. Efforts in this regard have been pursued for face seals, gear wheels, and consumer products like wash-basins. To understand the sintering process, a variety of modeling approaches have been pursued at different scales.
Han, Guangjie; Liu, Li; Jiang, Jinfang; Shu, Lei; Rodrigues, Joel J.P.C.
2016-01-01
Localization is one of the hottest research topics in Underwater Wireless Sensor Networks (UWSNs), since many important applications of UWSNs, e.g., event sensing, target tracking and monitoring, require location information of sensor nodes. Nowadays, a large number of localization algorithms have been proposed for UWSNs. How to improve localization accuracy has been well studied. However, few of them take location reliability or security into consideration. In this paper, we propose a Collaborative Secure Localization algorithm based on a Trust model (CSLT) for UWSNs to ensure location security. Based on the trust model, the secure localization process can be divided into the following five sub-processes: trust evaluation of anchor nodes, initial localization of unknown nodes, trust evaluation of reference nodes, selection of reference nodes, and secondary localization of unknown nodes. Simulation results demonstrate that the proposed CSLT algorithm performs better than the compared related works in terms of location security, average localization accuracy and localization ratio. PMID:26891300
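The sketch below illustrates only the trust-weighted localization idea in a toy 2-D setting (it is not the CSLT protocol): an unknown node solves a linearised range system from the anchors, down-weighting equations that come from low-trust nodes.

```python
# Trust-weighted linearised trilateration (toy geometry, hypothetical trust values).
import numpy as np

def trust_weighted_position(anchors, ranges, trust):
    anchors, ranges, trust = map(np.asarray, (anchors, ranges, trust))
    xn, yn = anchors[-1]
    dn = ranges[-1]
    A = 2 * (anchors[:-1] - anchors[-1])                       # linearised system
    b = (anchors[:-1] ** 2).sum(1) - xn**2 - yn**2 - ranges[:-1] ** 2 + dn**2
    W = np.diag(trust[:-1])                                    # trust as weights
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_pos = np.array([3.0, 4.0])
ranges = [np.linalg.norm(true_pos - a) for a in anchors]
trust = [1.0, 0.9, 0.8, 1.0]                  # trust values from the trust model (toy)
estimate = trust_weighted_position(anchors, ranges, trust)
```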
NASA Astrophysics Data System (ADS)
Beaudoin, Yanick; Desbiens, André; Gagnon, Eric; Landry, René
2018-01-01
The navigation system of a satellite launcher is of paramount importance. In order to correct the trajectory of the launcher, the position, velocity and attitude must be known with the best possible precision. In this paper, the observability of four navigation solutions is investigated. The first one is the INS/GPS couple. Then, attitude reference sensors, such as magnetometers, are added to the INS/GPS solution. The authors have already demonstrated that the reference trajectory could be used to improve the navigation performance. This approach is added to the two previously mentioned navigation systems. For each navigation solution, the observability is analyzed with different sensor error models. First, sensor biases are neglected. Then, sensor biases are modelled as random walks and as first order Markov processes. The observability is tested with the rank and condition number of the observability matrix, the time evolution of the covariance matrix and sensitivity to measurement outlier tests. The covariance matrix is exploited to evaluate the correlation between states in order to detect structural unobservability problems. Finally, when an unobservable subspace is detected, the result is verified with theoretical analysis of the navigation equations. The results show that evaluating only the observability of a model does not guarantee the ability of the aiding sensors to correct the INS estimates within the mission time. The analysis of the covariance matrix time evolution could be a powerful tool to detect this situation, however in some cases, the problem is only revealed with a sensitivity to measurement outlier test. None of the tested solutions provide GPS position bias observability. For the considered mission, the modelling of the sensor biases as random walks or Markov processes gives equivalent results. Relying on the reference trajectory can improve the precision of the roll estimates. But, in the context of a satellite launcher, the roll estimation error and gyroscope bias are only observable if attitude reference sensors are present.
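For reference, the following sketch shows the standard linear observability test mentioned above: stacking H, HA, HA^2, ... and inspecting the rank and condition number of the resulting observability matrix. The system matrices are generic toy values, not the launcher navigation model.

```python
# Observability matrix rank and condition-number check for a linear system (toy).
import numpy as np

def observability_matrix(A, H):
    n = A.shape[0]
    blocks, M = [], H.copy()
    for _ in range(n):
        blocks.append(M)
        M = M @ A                      # next block: H A^k
    return np.vstack(blocks)

A = np.array([[1.0, 0.1],              # toy state-transition matrix (assumed)
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])             # toy measurement of the first state only
O = observability_matrix(A, H)
rank = np.linalg.matrix_rank(O)        # full rank -> observable
cond = np.linalg.cond(O)               # large condition number -> weakly observable
```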
Kaya, Emine Merve
2017-01-01
Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information—a phenomenon referred to as the ‘cocktail party problem’. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by ‘bottom-up’ sensory-driven factors, as well as ‘top-down’ task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape; with relevance being dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listen to announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044012
Deriving video content type from HEVC bitstream semantics
NASA Astrophysics Data System (ADS)
Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio R.
2014-05-01
As network service providers seek to improve customer satisfaction and retention levels, they are increasingly moving from traditional quality of service (QoS) driven delivery models to customer-centred quality of experience (QoE) delivery models. QoS models only consider metrics derived from the network; QoE models, however, also consider metrics derived from within the video sequence itself. Various spatial and temporal characteristics of a video sequence have been proposed, both individually and in combination, to derive methods of classifying video content either on a continuous scale or as a set of discrete classes. QoE models can be divided into three broad categories: full reference, reduced reference and no-reference models. Due to the need to have the original video available at the client for comparison, full reference metrics are of limited practical value in adaptive real-time video applications. Reduced reference metrics often require metadata to be transmitted with the bitstream, while no-reference metrics typically operate in the decompressed domain at the client side and require significant processing to extract spatial and temporal features. This paper proposes a heuristic, no-reference approach to video content classification which is specific to HEVC encoded bitstreams. The HEVC encoder already makes use of spatial characteristics to determine the partitioning of coding units and of temporal characteristics to determine the splitting of prediction units. We derive a function which approximates the spatio-temporal characteristics of the video sequence by using the weighted averages of the depth at which the coding unit quadtree is split and of the prediction mode decision made by the encoder to estimate spatial and temporal characteristics, respectively. Since the video content type of a sequence is determined using high-level information parsed from the video stream, spatio-temporal characteristics are identified without the need for full decoding and can be used in a timely manner to aid decision making in QoE-oriented adaptive real-time streaming.
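A minimal sketch of the feature idea described above, using hypothetical per-coding-unit records rather than a real HEVC parser: spatial activity is approximated by the area-weighted average quadtree split depth, and temporal character by the area-weighted share of inter-predicted units.

```python
# Weighted-average spatio-temporal features from high-level coding decisions
# (toy records standing in for parsed HEVC syntax, not an actual parser).
import numpy as np

# Each tuple: (CU area in pixels, quadtree split depth, 1 if inter-predicted else 0)
cus = [(64 * 64, 0, 1), (32 * 32, 1, 1), (16 * 16, 2, 0), (16 * 16, 2, 1)]

areas = np.array([a for a, _, _ in cus], dtype=float)
depths = np.array([d for _, d, _ in cus], dtype=float)
inter = np.array([m for _, _, m in cus], dtype=float)

spatial_feature = np.average(depths, weights=areas)   # deeper splits -> more spatial detail
temporal_feature = np.average(inter, weights=areas)   # share of inter PUs as a temporal proxy
```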
Gaussian Process Kalman Filter for Focal Plane Wavefront Correction and Exoplanet Signal Extraction
NASA Astrophysics Data System (ADS)
Sun, He; Kasdin, N. Jeremy
2018-01-01
Currently, the ultimate limitation of space-based coronagraphy is the ability to subtract the residual PSF after wavefront correction to reveal the planet. Called reference difference imaging (RDI), the technique consists of conducting wavefront control to collect the reference point spread function (PSF) by observing a bright star, and then extracting target planet signals by subtracting a weighted sum of reference PSFs. Unfortunately, this technique is inherently inefficient because it spends a significant fraction of the observing time on the reference star rather than the target star with the planet. Recent progress in model based wavefront estimation suggests an alternative approach. A Kalman filter can be used to estimate the stellar PSF for correction by the wavefront control system while simultaneously estimating the planet signal. Without observing the reference star, the (extended) Kalman filter directly utilizes the wavefront correction data and combines the time series observations and model predictions to estimate the stellar PSF and planet signals. Because wavefront correction is used during the entire observation with no slewing, the system has inherently better stability. In this poster we show our results aimed at further improving our Kalman filter estimation accuracy by including not only temporal correlations but also spatial correlations among neighboring pixels in the images. This technique is known as a Gaussian process Kalman filter (GPKF). We also demonstrate the advantages of using a Kalman filter rather than RDI by simulating a real space exoplanet detection mission.
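A minimal sketch of the spatial-correlation ingredient that distinguishes a Gaussian process Kalman filter from a pixel-independent one: a squared-exponential covariance over pixel coordinates, used as a prior or process covariance. The patch size and kernel parameters are assumptions, not the authors' values.

```python
# Squared-exponential (RBF) spatial covariance over a small pixel patch (toy sizes).
import numpy as np

def squared_exponential_cov(coords, sigma=1.0, length_scale=2.0):
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    return sigma**2 * np.exp(-0.5 * d2 / length_scale**2)

ny, nx = 8, 8                                       # small focal-plane patch (assumed)
yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
coords = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
P_prior = squared_exponential_cov(coords)           # correlated prior for neighbouring pixels
```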
NASA Astrophysics Data System (ADS)
Schafhirt, S.; Kaufer, D.; Cheng, P. W.
2014-12-01
In recent years many advanced load simulation tools, allowing aero-servo-hydro-elastic analyses of an entire offshore wind turbine, have been developed and verified. Nowadays, even an offshore wind turbine with a complex support structure such as a jacket can be analysed. However, the computational effort rises significantly with an increasing level of detail. This holds especially for offshore wind turbines with lattice support structures, since such models naturally have a higher number of nodes and elements than simpler monopile structures. During the design process multiple load simulations are required to obtain an optimal solution. In view of pre-design tasks it is crucial to apply load simulations which keep the simulation quality and the computational effort in balance. The paper will introduce a reference wind turbine model consisting of the REpower 5M wind turbine and a jacket support structure with a high level of detail. In total, twelve variations of this reference model are derived and presented. The main focus is on simplifying the models of the support structure and the foundation. The reference model and the simplified models are simulated with the coupled simulation tool Flex5-Poseidon and analysed with regard to frequencies, fatigue loads, and ultimate loads. A model has been found which achieves an adequate increase in simulation speed while keeping the results in an acceptable range compared to the reference results.
Construction of a pulse-coupled dipole network capable of fear-like and relief-like responses
NASA Astrophysics Data System (ADS)
Lungsi Sharma, B.
2016-07-01
The challenge for neuroscience as an interdisciplinary programme is the integration of ideas among the disciplines to achieve a common goal. This paper deals with the problem of deriving a pulse-coupled neural network that is capable of demonstrating behavioural responses (fear-like and relief-like). Current pulse-coupled neural networks are designed mostly for engineering applications, particularly image processing. The discovered neural network was constructed using the method of minimal anatomies approach. The behavioural response of a level-coded, activity-based model was used as a reference. Although the spiking-based model and the activity-based model are of different scales, the use of the model-reference principle means that the characteristic that is referenced is its functional properties. It is demonstrated that this strategy of dissection and systematic construction is effective in the functional design of a pulse-coupled neural network system with nonlinear signalling. The differential equations for the elastic weights in the reference model are replicated geometrically in the pulse-coupled network. The network reflects a possible solution to the problem of punishment and avoidance. The network developed in this work is a new network topology for pulse-coupled neural networks. Therefore, the model-reference principle is a powerful tool in connecting neuroscience disciplines. The continuity of concepts and phenomena is further maintained by systematic construction using methods like the method of minimal anatomies.
NASA Astrophysics Data System (ADS)
Aksenova, Olesya; Pachkina, Anna
2017-11-01
The article deals with the necessity of transforming the educational process to meet the requirements of the modern mining industry: cooperatively developing new educational programs and implementing the educational process with modern manufacturability taken into account. The paper argues for introducing into the training of mining professionals the study of three-dimensional models of the surface technological complex, ore reserves and the underground digging complex, as well as the creation of these models in different graphic editors and work with the information analysis model obtained on the basis of these three-dimensional models. The technological process of manless coal mining at the Polysaevskaya mine, controlled by information analysis models built on the basis of three-dimensional models of individual objects and of the technological process as a whole, and at the same time requiring staff able to use programs for three-dimensional positioning of miners and equipment in a global frame of reference, is covered.
A Four-Stage Model for Planning Computer-Based Instruction.
ERIC Educational Resources Information Center
Morrison, Gary R.; Ross, Steven M.
1988-01-01
Describes a flexible planning process for developing computer based instruction (CBI) in which the CBI design is implemented on paper between the lesson design and the program production. A four-stage model is explained, including (1) an initial flowchart, (2) storyboards, (3) a detailed flowchart, and (4) an evaluation. (16 references)…
Students' Problem-Solving in Mechanics: Preference of a Process Based Model.
ERIC Educational Resources Information Center
Stavy, Ruth; And Others
Research in science and mathematics education has indicated that students often use inappropriate models for solving problems because they tend to mentally represent a problem according to surface features instead of referring to scientific concepts and features. The objective of the study reported in this paper was to determine whether 34 Israeli…
Teaching Supply Chain Management Complexities: A SCOR Model Based Classroom Simulation
ERIC Educational Resources Information Center
Webb, G. Scott; Thomas, Stephanie P.; Liao-Troth, Sara
2014-01-01
The SCOR (Supply Chain Operations Reference) Model Supply Chain Classroom Simulation is an in-class experiential learning activity that helps students develop a holistic understanding of the processes and challenges of supply chain management. The simulation has broader learning objectives than other supply chain related activities such as the…
Mental Visualization of Objects from Cross-Sectional Images
ERIC Educational Resources Information Center
Wu, Bing; Klatzky, Roberta L.; Stetten, George D.
2012-01-01
We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine key processes described in the model, localizing cross-sections within a common frame of reference and spatiotemporal integration of cross sections into a hierarchical object…
The Design and Evaluation of Teaching Experiments in Computer Science.
ERIC Educational Resources Information Center
Forcheri, Paola; Molfino, Maria Teresa
1992-01-01
Describes a relational model that was developed to provide a framework for the design and evaluation of teaching experiments for the introduction of computer science in secondary schools in Italy. Teacher training is discussed, instructional materials are considered, and use of the model for the evaluation process is described. (eight references)…
Growth in Mathematical Understanding: How Can We Characterise It and How Can We Represent It?
ERIC Educational Resources Information Center
Pirie, Susan; Kieren, Thomas
1994-01-01
Proposes a model for the growth of mathematical understanding based on the consideration of understanding as a whole, dynamic, leveled but nonlinear process. Illustrates the model using the concept of fractions. How to map the growth of understanding is explained in detail. (Contains 26 references.) (MKR)
Rodrigues, J M; Trombert-Paviot, B; Baud, R; Wagner, J; Meusnier-Carriot, F
1998-01-01
GALEN has developed a language independent common reference model based on a medically oriented ontology and practical tools and techniques for managing healthcare terminology including natural language processing. GALEN-IN-USE is the current phase which applied the modelling and the tools to the development or the updating of coding systems for surgical procedures in different national coding centers co-operating within the European Federation of Coding Centre (EFCC) to create a language independent knowledge repository for multicultural Europe. We used an integrated set of artificial intelligence terminology tools named CLAssification Manager workbench to process French professional medical language rubrics into intermediate dissections and to the Grail reference ontology model representation. From this language independent concept model representation we generate controlled French natural language. The French national coding centre is then able to retrieve the initial professional rubrics with different categories of concepts, to compare the professional language proposed by expert clinicians to the French generated controlled vocabulary and to finalize the linguistic labels of the coding system in relation with the meanings of the conceptual system structure.
The Swedish strategy and method for development of a national healthcare information architecture.
Rosenälv, Jessica; Lundell, Karl-Henrik
2012-01-01
"We need a precise framework of regulations in order to maintain appropriate and structured health care documentation that ensures that the information maintains a sufficient level of quality to be used in treatment, in research and by the actual patient. The users shall be aided by clearly and uniformly defined terms and concepts, and there should be an information structure that clarifies what to document and how to make the information more useful. Most of all, we need to standardize the information, not just the technical systems." (eHälsa - nytta och näring, Riksdag report 2011/12:RFR5, p. 37). In 2010, the Swedish Government adopted the National e-Health - the national strategy for accessible and secure information in healthcare. The strategy is a revision and extension of the previous strategy from 2006, which was used as input for the most recent efforts to develop a national information structure utilizing business-oriented generic models. A national decision on healthcare informatics standards was made by the Swedish County Councils, which decided to follow and use EN/ISO 13606 as a standard for the development of a universally applicable information structure, including archetypes and templates. The overall aim of the Swedish strategy for development of National Healthcare Information Architecture is to achieve high level semantic interoperability for clinical content and clinical contexts. High level semantic interoperability requires consistently structured clinical data and other types of data with coherent traceability to be mapped to reference clinical models. Archetypes that are formal definitions of the clinical and demographic concepts and some administrative data were developed. Each archetype describes the information structure and content of overarching core clinical concepts. Information that is defined in archetypes should be used for different purposes. Generic clinical process model was made concrete and analyzed. For each decision-making step in the process where information is processed, the amount and type of information and its structure were defined in terms of reference templates. Reference templates manage clinical, administrative and demographic types of information in a specific clinical context. Based on a survey of clinical processes at the reference level, the identification of specific clinical processes such as diabetes and congestive heart failure in adults were made. Process-specific templates were defined by using reference templates and populated with information that was relevant to each health problem in a specific clinical context. Throughout this process, medical data for knowledge management were collected for each health problem. Parallel with the efforts to define archetypes and templates, terminology binding work is on-going. Different strategies are used depending on the terminology binding level.
Knowing the SCOR: using business metrics to gain measurable improvements.
Malin, Jane H
2006-07-01
By using the Supply Chain Operations Reference model, one New York hospital was able to define and measure its supply chains, determine the weak links in its processes, and identify necessary improvements.
Rosenthal, Jennifer L; Okumura, Megumi J; Hernandez, Lenore; Li, Su-Ting T; Rehm, Roberta S
2016-01-01
Children with special health care needs often require health services that are only provided at subspecialty centers. Such children who present to nonspecialty hospitals might require a hospital-to-hospital transfer. When transitioning between medical settings, communication is an integral aspect that can affect the quality of patient care. The objectives of the study were to identify barriers and facilitators to effective interfacility pediatric transfer communication to general pediatric floors from the perspectives of referring and accepting physicians, and then develop a conceptual model for effective interfacility transfer communication. This was a single-center qualitative study using grounded theory methodology. Referring and accepting physicians of children with special health care needs were interviewed. Four researchers coded the data using ATLAS.ti (version 7, Scientific Software Development GMBH, Berlin, Germany), using a 2-step process of open coding, followed by focused coding until no new codes emerged. The research team reached consensus on the final major categories and subsequently developed a conceptual model. Eight referring and 9 accepting physicians were interviewed. Theoretical coding resulted in 3 major categories: streamlined transfer process, quality handoff and 2-way communication, and positive relationships between physicians across facilities. The conceptual model unites these categories and shows how these categories contribute to effective interfacility transfer communication. Proposed interventions involved standardizing the communication process and incorporating technology such as telemedicine during transfers. Communication is perceived to be an integral component of interfacility transfers. We recommend that transfer systems be re-engineered to make the process more streamlined, to improve the quality of the handoff and 2-way communication, and to facilitate positive relationships between physicians across facilities. Copyright © 2016 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
A reference architecture for the component factory
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Caldiera, Gianluigi; Cantone, Giovanni
1992-01-01
Software reuse can be achieved through an organization that focuses on utilization of life cycle products from previous developments. The component factory is both an example of the more general concepts of experience and domain factory and an organizational unit worth being considered independently. The critical features of such an organization are flexibility and continuous improvement. In order to achieve these features we can represent the architecture of the factory at different levels of abstraction and define a reference architecture from which specific architectures can be derived by instantiation. A reference architecture is an implementation and organization independent representation of the component factory and its environment. The paper outlines this reference architecture, discusses the instantiation process, and presents some examples of specific architectures by comparing them in the framework of the reference model.
NASA Astrophysics Data System (ADS)
Kosnikov, Yu N.; Kuzmin, A. V.; Ho, Hoang Thai
2018-05-01
The article is devoted to the visualization of the morphing of spatial objects described by a set of unordered reference points. A two-stage model construction is proposed to change the object's form in real time. The first (preliminary) stage is interpolation of the object's surface by radial basis functions, in which the initial reference points are replaced by new, spatially ordered ones. Patterns of change in the reference points' coordinates during morphing are then assigned. The second (real-time) stage is surface reconstruction by blending functions of an orthogonal basis. Finite-difference formulas are applied to increase the productivity of the calculations.
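A minimal sketch of the first, preliminary stage described above: Gaussian radial basis functions interpolate a height field through unordered reference points, which is then resampled at new, spatially ordered points. The data and kernel width are toy assumptions, not the paper's construction.

```python
# Gaussian RBF interpolation of a scattered height field, resampled on a grid (toy data).
import numpy as np

def rbf_fit(points, values, eps=1.0):
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    A = np.exp(-eps * d2)                    # Gaussian RBF kernel matrix
    return np.linalg.solve(A, values)        # interpolation weights

def rbf_eval(points, weights, query, eps=1.0):
    d2 = ((query[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2) @ weights

rng = np.random.default_rng(2)
pts = rng.uniform(0, 1, size=(30, 2))        # unordered reference points (toy)
z = np.sin(3 * pts[:, 0]) * np.cos(3 * pts[:, 1])
w = rbf_fit(pts, z)
gx, gy = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
grid = np.column_stack([gx.ravel(), gy.ravel()])
z_grid = rbf_eval(pts, w, grid)              # spatially ordered replacement points
```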
The CMMI Product Suite and International Standards
2006-07-01
standards: "2.3 Reference Documents 2.3.1 Applicable ISO/IEC documents, including ISO/IEC 12207 and ISO/IEC 15504." "3.1 Development User Requirements..." Related international standards such as ISO 9001:2000, ISO/IEC 12207, ISO/IEC 15288. Key supplements needed: the Measurement Framework in ISO/IEC 15504; and the Process Reference Model included in ISO/IEC 12207. A possible approach has been developed for
Multiple reference frames in haptic spatial processing
NASA Astrophysics Data System (ADS)
Volčič, R.
2008-08-01
The present thesis focused on haptic spatial processing. In particular, our interest was directed to the perception of spatial relations with the main focus on the perception of orientation. To this end, we studied haptic perception in different tasks, either in isolation or in combination with vision. The parallelity task, where participants have to match the orientations of two spatially separated bars, was used in its two-dimensional and three-dimensional versions in Chapter 2 and Chapter 3, respectively. The influence of non-informative vision and visual interference on performance in the parallelity task was studied in Chapter 4. A different task, the mental rotation task, was introduced in a purely haptic study in Chapter 5 and in a visuo-haptic cross-modal study in Chapter 6. The interaction of multiple reference frames and their influence on haptic spatial processing were the common denominators of these studies. In this thesis we approached the problems of which reference frames play the major role in haptic spatial processing and how the relative roles of distinct reference frames change depending on the available information and the constraints imposed by different tasks. We found that the influence of a reference frame centered on the hand was the major cause of the deviations from veridicality observed in both the two-dimensional and three-dimensional studies. The results were described by a weighted average model, in which the hand-centered egocentric reference frame is supposed to have a biasing influence on the allocentric reference frame. Performance in haptic spatial processing has been shown to depend also on sources of information or processing that are not strictly connected to the task at hand. When non-informative vision was provided, a beneficial effect was observed in the haptic performance. This improvement was interpreted as a shift from the egocentric to the allocentric reference frame. Moreover, interfering visual information presented in the vicinity of the haptic stimuli parametrically modulated the magnitude of the deviations. The influence of the hand-centered reference frame was shown also in the haptic mental rotation task where participants were quicker in judging the parity of objects when these were aligned with respect to the hands than when they were physically aligned. Similarly, in the visuo-haptic cross-modal mental rotation task the parity judgments were influenced by the orientation of the exploring hand with respect to the viewing direction. This effect was shown to be modulated also by an intervening temporal delay that supposedly counteracts the influence of the hand-centered reference frame. We suggest that the hand-centered reference frame is embedded in a hierarchical structure of reference frames where some of these emerge depending on the demands and the circumstances of the surrounding environment and the needs of an active perceiver.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowther, M.A.; Moskowitz, P.D.
1981-07-01
Sample analyses and detailed documentation are presented for a Reference Material System (RMS) to estimate health and environmental risks of different material cycles and energy systems. Data inputs described include: end-use material demands, efficiency coefficients, environmental emission coefficients, fuel demand coefficients, labor productivity estimates, and occupational health and safety coefficients. Application of this model permits analysts to estimate fuel use (e.g., Btu), occupational risk (e.g., fatalities), and environmental emissions (e.g., sulfur oxide) for specific material trajectories or complete energy systems. Model uncertainty is quantitatively defined by presenting a range of estimates for each data input. Systematic uncertainty not quantified relates to the boundaries chosen for analysis and reference system specification. Although the RMS can be used to analyze material system impacts for many different energy technologies, it was specifically used to examine the health and environmental risks of producing the following four types of photovoltaic devices: silicon n/p single-crystal cells produced by a Czochralski process; silicon metal/insulator/semiconductor (MIS) cells produced by a ribbon-growing process; cadmium sulfide/copper sulfide backwall cells produced by a spray deposition process; and gallium arsenide cells with 500X concentrator produced by a modified Czochralski process. Emission coefficients for particulates, sulfur dioxide and nitrogen dioxide; solid waste; total suspended solids in water; and, where applicable, air and solid waste residuals for arsenic, cadmium, gallium, and silicon are examined and presented. Where data are available the coefficients for particulates, sulfur oxides, and nitrogen oxides include both process and on-site fuel-burning emissions.
Application of a mathematical model for ergonomics in lean manufacturing.
Botti, Lucia; Mora, Cristina; Regattieri, Alberto
2017-10-01
The data presented in this article are related to the research article "Integrating ergonomics and lean manufacturing principles in a hybrid assembly line" (Botti et al., 2017) [1]. The results refer to the application of the mathematical model for the design of lean processes in hybrid assembly lines, meeting both the lean principles and the ergonomic requirements for safe assembly work. Data show that the success of a lean strategy is possible when ergonomics of workers is a parameter of the assembly process design.
Leveraging People-Related Maturity Issues for Achieving Higher Maturity and Capability Levels
NASA Astrophysics Data System (ADS)
Buglione, Luigi
During the past 20 years, Maturity Models (MM) have become a buzzword in the ICT world. Since Crosby's initial idea in 1979, plenty of models have been created in the Software & Systems Engineering domains, addressing various perspectives. By analyzing the content of the Process Reference Models (PRM) in many of them, it can be noticed that people-related issues have little weight in the appraisals of the capabilities of organizations, while in practice they are considered significant contributors in traditional process and organizational performance appraisals, as stressed in well-known Performance Management models such as MBQA, EFQM and BSC. This paper proposes some ways of leveraging people-related maturity issues by merging HR practices from several types of maturity models into the organizational Business Process Model (BPM) in order to achieve higher organizational maturity and capability levels.
Profit Maximization Models for Exponential Decay Processes.
1980-08-01
assumptions could easily be analyzed in similar fashion. References: [1] Bensoussan, A., Hurst, E.G. and Näslund, B., Management Applications of Modern...
Toward a Unified Modeling of Learner's Growth Process and Flow Theory
ERIC Educational Resources Information Center
Challco, Geiser C.; Andrade, Fernando R. H.; Borges, Simone S.; Bittencourt, Ig I.; Isotani, Seiji
2016-01-01
Flow is the affective state in which a learner is so engaged and involved in an activity that nothing else seems to matter. In this sense, to help students in the skill development and knowledge acquisition (referred to as learners' growth process) under optimal conditions, the instructional designers should create learning scenarios that favor…
Effect of Time Varying Gravity on DORIS processing for ITRF2013
NASA Astrophysics Data System (ADS)
Zelensky, N. P.; Lemoine, F. G.; Chinn, D. S.; Beall, J. W.; Melachroinos, S. A.; Beckley, B. D.; Pavlis, D.; Wimert, J.
2013-12-01
Computations are under way to develop a new time series of DORIS SINEX solutions to contribute to the development of the new realization of the terrestrial reference frame (c.f. ITRF2013). One of the improvements that are envisaged is the application of improved models of time-variable gravity in the background orbit modeling. At GSFC we have developed a time series of spherical harmonics to degree and order 5 (using the GOC02S model as a base), based on the processing of SLR and DORIS data to 14 satellites from 1993 to 2013. This is compared with the standard approach used in ITRF2008, based on the static model EIGEN-GL04S1 which included secular variations in only a few select coefficients. Previous work on altimeter satellite POD (c.f. TOPEX/Poseidon, Jason-1, Jason-2) has shown that the standard model is not adequate and orbit improvements are observed with application of more detailed models of time-variable gravity. In this study, we quantify the impact of TVG modeling on DORIS satellite POD, and ascertain the impact on DORIS station positions estimated weekly from 1993 to 2013. The numerous recent improvements to SLR and DORIS processing at GSFC include a more complete compliance to IERS2010 standards, improvements to SLR/DORIS measurement modeling, and improved non-conservative force modeling to DORIS satellites. These improvements will affect gravity coefficient estimates, POD, and the station solutions. Tests evaluate the impact of time varying gravity on tracking data residuals, station consistency, and the geocenter and scale reference frame parameters.
Syntactic Approach To Geometric Surface Shell Determination
NASA Astrophysics Data System (ADS)
DeGryse, Donald G.; Panton, Dale J.
1980-12-01
Autonomous terminal homing of a smart missile requires a stored reference scene of the target for which the missile is destined. The reference scene is produced from stereo source imagery by deriving a three-dimensional model containing cultural structures such as buildings, towers, bridges, and tanks. This model is obtained by the precise matching of cultural features from one image of the stereo pair to the other. In the past, this stereo matching process has relied heavily on local edge operators and a gray-scale matching metric. The processing is performed line by line over the imagery, and the amount of geometric control is minimal. As a result, the gross structure of the scene is determined, but the derived three-dimensional data are noisy, oscillatory, and at times significantly inaccurate. This paper discusses new concepts that are currently being developed to stabilize this geometric reference preparation process. The new concepts involve the use of a structural syntax which will be used as a geometric constraint on automatic stereo matching. The syntax arises from the stereo configuration of the imaging platforms at the time of exposure and the knowledge of how various cultural structures are constructed. The syntax is used to parse a scene in terms of its cultural surfaces and to dictate to the matching process the allowable relative positions and orientations of surface edges in the image planes. Using the syntax, extensive searches using a gray-scale matching metric are reduced.
Xu, Min; Zhang, Lei; Yue, Hong-Shui; Pang, Hong-Wei; Ye, Zheng-Liang; Ding, Li
2017-10-01
An on-line monitoring method for the extraction process of Schisandrae Chinensis Fructus, a medicinal material in the formula of Yiqi Fumai lyophilized injection, was established by combining near infrared spectroscopy with multivariate data analysis technology. The multivariate statistical process control (MSPC) model was established based on 5 normal batches in production, and 2 test batches were monitored by PC scores, DModX and Hotelling T2 control charts. The results showed that the MSPC model had a good monitoring ability for the extraction process. The application of the MSPC model to the actual production process could effectively achieve on-line monitoring for the extraction process of Schisandrae Chinensis Fructus, and can reflect changes in material properties in the production process in real time. This established process monitoring method could provide a reference for the application of process analysis technology in the process quality control of traditional Chinese medicine injections. Copyright© by the Chinese Pharmaceutical Association.
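A minimal sketch (on synthetic data, not NIR spectra) of the kind of PCA-based Hotelling T2 monitoring used in MSPC: principal components are fitted on normal batches and a new batch is scored against them; large T2 values flag deviations from normal operation.

```python
# PCA-based Hotelling T^2 scoring for multivariate statistical process control (toy data).
import numpy as np

rng = np.random.default_rng(3)
X_normal = rng.normal(size=(50, 20))                 # 50 normal spectra x 20 variables (toy)
mu, sd = X_normal.mean(0), X_normal.std(0)
Xc = (X_normal - mu) / sd                            # autoscale to the normal batches

U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                                                # retained components (assumed)
P, var = Vt[:k].T, (S[:k] ** 2) / (len(Xc) - 1)      # loadings and score variances

def hotelling_t2(x_new):
    scores = ((x_new - mu) / sd) @ P
    return float((scores**2 / var).sum())

t2_test = hotelling_t2(rng.normal(size=20))          # compare against a control limit
```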
Passive serialization in a multitasking environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hennessey, J.P.; Osisek, D.L.; Seigh, J.W. II
1989-02-28
In a multiprocessing system having a control program in which data objects are shared among processes, this patent describes a method for serializing references to a data object by the processes so as to prevent invalid references to the data object by any process when an operation requiring exclusive access is performed by another process, comprising the steps of: permitting the processes to reference data objects on a shared access basis without obtaining a shared lock; monitoring a point of execution of the control program which is common to all processes in the system, which occurs regularly in the process' execution and across which no references to any data object can be maintained by any process, except references using locks; establishing a system reference point which occurs after each process in the system has passed the point of execution at least once since the last such system reference point; requesting an operation requiring exclusive access on a selected data object; preventing subsequent references by other processes to the selected data object; waiting until two of the system reference points have occurred; and then performing the requested operation.
Process for computing geometric perturbations for probabilistic analysis
Fitch, Simeon H. K. [Charlottesville, VA; Riha, David S [San Antonio, TX; Thacker, Ben H [San Antonio, TX
2012-04-10
A method for computing geometric perturbations for probabilistic analysis. The probabilistic analysis is based on finite element modeling, in which uncertainties in the modeled system are represented by changes in the nominal geometry of the model, referred to as "perturbations". These changes are accomplished using displacement vectors, which are computed for each node of a region of interest and are based on mean-value coordinate calculations.
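A toy sketch of the basic idea (not the patented implementation): each node in a region of interest carries a precomputed displacement vector, and one geometric realisation is obtained by moving the nodes along those vectors by a sampled amount for the uncertain parameter.

```python
# Geometric perturbation of mesh nodes via displacement vectors (toy mesh, assumed values).
import numpy as np

rng = np.random.default_rng(4)
nodes = rng.uniform(0, 1, size=(100, 3))       # nominal node coordinates (toy)
roi = np.arange(20)                            # indices of the region of interest
disp = np.zeros_like(nodes)
disp[roi] = np.array([0.0, 0.0, 1.0])          # unit displacement direction (assumed)

def perturbed_geometry(scale):
    """Node coordinates for one realisation of the geometric uncertainty."""
    return nodes + scale * disp

sample = perturbed_geometry(rng.normal(loc=0.0, scale=0.01))
```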
Design and Principles Enabling the Space Reference FOM
NASA Technical Reports Server (NTRS)
Moeller, Bjoern; Dexter, Dan; Madden, Michael; Crues, Edwin Z.; Garro, Alfredo; Skuratovskiy, Anton
2017-01-01
A first complete draft of the Simulation Interoperability Standards Organization (SISO) Space Reference Federation Object Model (FOM) has now been produced. This paper provides some insights into its capabilities and discusses the opportunity for reuse in other domains. The focus of this first version of the standard is execution control, time management and coordinate systems, well-known reference frames, as well as some basic support for physical entities. The biggest part of the execution control is the coordinated start-up process. This process contains a number of steps, including checking of required federates, handling of early versus late joiners, sharing of federation wide configuration data and multi-phase initialization. An additional part of Execution Control is the coordinated and synchronized transition between Run mode, Freeze mode and Shutdown. For time management, several time lines are defined, including real-time, scenario time, High Level Architecture (HLA) logical time and physical time. A strategy for mixing simulations that use different time steps is introduced, as well as an approach for finding common boundaries for fully synchronized freeze. For describing spatial information, a mechanism with a set of reference frames is specified. Each reference frame has a position and orientation related to a parent reference frame. This makes it possible for federates to perform calculations in reference frames that are convenient to them. An operation on the Moon can be performed using lunar coordinates whereas an operation on Earth can be performed using Earth coordinates. At the same time, coordinates in one reference frame have an unambiguous relationship to a coordinate in another reference frame. While the Space Reference FOM is originally being developed for Space operations, the authors believe that many parts of it can be reused for any simulation that has a focus on physical processes with one or more coordinate systems, and require high fidelity and repeatability.
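A minimal sketch of the reference-frame mechanism described above (not the SISO Space Reference FOM object model itself): each frame stores a parent plus a translation and rotation, so a position expressed in any frame can be walked up the tree to a common root frame.

```python
# Tree of reference frames with unambiguous coordinate conversion to a root frame (toy values).
import numpy as np

class ReferenceFrame:
    def __init__(self, name, parent=None, translation=(0, 0, 0), rotation=np.eye(3)):
        self.name, self.parent = name, parent
        self.t = np.asarray(translation, dtype=float)   # origin in the parent frame
        self.R = np.asarray(rotation, dtype=float)      # orientation relative to the parent

    def to_root(self, p):
        """Express point p (given in this frame) in the root frame."""
        p = np.asarray(p, dtype=float)
        frame = self
        while frame is not None:
            p = frame.R @ p + frame.t
            frame = frame.parent
        return p

root = ReferenceFrame("EarthCentered")
moon = ReferenceFrame("MoonCentered", parent=root, translation=(384_400e3, 0, 0))
p_in_root = moon.to_root([1_737e3, 0, 0])   # a lunar-surface point, toy numbers
```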
Modeling Cross-Situational Word–Referent Learning: Prior Questions
Yu, Chen; Smith, Linda B.
2013-01-01
Both adults and young children possess powerful statistical computation capabilities—they can infer the referent of a word from highly ambiguous contexts involving many words and many referents by aggregating cross-situational statistical information across contexts. This ability has been explained by models of hypothesis testing and by models of associative learning. This article describes a series of simulation studies and analyses designed to understand the different learning mechanisms posited by the 2 classes of models and their relation to each other. Variants of a hypothesis-testing model and a simple or dumb associative mechanism were examined under different specifications of information selection, computation, and decision. Critically, these 3 components of the models interact in complex ways. The models illustrate a fundamental tradeoff between amount of data input and powerful computations: With the selection of more information, dumb associative models can mimic the powerful learning that is accomplished by hypothesis-testing models with fewer data. However, because of the interactions among the component parts of the models, the associative model can mimic various hypothesis-testing models, producing the same learning patterns but through different internal components. The simulations argue for the importance of a compositional approach to human statistical learning: the experimental decomposition of the processes that contribute to statistical learning in human learners and models with the internal components that can be evaluated independently and together. PMID:22229490
A Course on Multimedia Environmental Transport, Exposure, and Risk Assessment.
ERIC Educational Resources Information Center
Cohen, Yoram; And Others
1990-01-01
Included are the general guidelines, outline, a summary of major intermedia transport processes, model features, a discussion of multimedia exposure and health risk, and a list of 50 suggested references for this course. (CW)
An Environmental Management Maturity Model of Construction Programs Using the AHP-Entropy Approach.
Bai, Libiao; Wang, Hailing; Huang, Ning; Du, Qiang; Huang, Youdan
2018-06-23
The accelerating process of urbanization in China has created considerable opportunities for the development of construction projects; however, environmental issues have become an important constraint on the implementation of these projects. To quantitatively describe the environmental management capabilities of such projects, this paper proposes a 2-dimensional Environmental Management Maturity Model of Construction Program (EMMMCP) based on an analysis of existing projects, group management theory and a management maturity model. In this model, a synergetic process is included to compensate for the lack of consideration of synergies in previous studies, and it is involved in the construction of the first dimension, i.e., the environmental management index system. The second dimension, i.e., the maturity level of environmental management, is then constructed by redefining the hierarchical characteristics of construction program (CP) environmental management maturity. Additionally, a mathematical solution to the proposed model is derived via the Analytic Hierarchy Process (AHP)-entropy approach. To verify the effectiveness and feasibility of the proposed model, a computational experiment was conducted. The results show that this approach can not only measure the maturity levels of individual processes but also provide a reference for stakeholders when making decisions on the environmental management of a construction program, indicating that the model is a reasonable way to evaluate the level of environmental management maturity in CPs. To our knowledge, this paper is the first study to evaluate the environmental management maturity levels of CPs; it fills a gap between program management and environmental management and provides a reference for management personnel seeking to enhance their environmental management capabilities.
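The entropy half of the AHP-entropy approach mentioned above can be illustrated with a short sketch: objective criterion weights are derived from the dispersion of scores in a decision matrix. This is a generic entropy-weighting routine under the usual formulation, not the specific index system of the EMMMCP; the score matrix is invented for illustration.

```python
import numpy as np

def entropy_weights(decision_matrix):
    """Entropy weighting for m alternatives x n criteria (larger score = better)."""
    X = np.asarray(decision_matrix, dtype=float)
    P = X / X.sum(axis=0)                          # normalize each criterion column
    m = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logP).sum(axis=0) / np.log(m)        # entropy per criterion
    d = 1.0 - e                                    # degree of divergence
    return d / d.sum()                             # objective (entropy) weights

# In an AHP-entropy approach these objective weights are typically combined with
# subjective AHP weights (for example multiplied element-wise and renormalized).
scores = [[0.7, 0.8, 0.6],
          [0.5, 0.9, 0.7],
          [0.9, 0.6, 0.8]]
print(entropy_weights(scores))
```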
Bayne, Jay S
2008-06-01
In support of a generalization of systems theory, this paper introduces a new approach in modeling complex distributed systems. It offers an analytic framework for describing the behavior of interactive cyberphysical systems (CPSs), which are networked stationary or mobile information systems responsible for the real-time governance of physical processes whose behaviors unfold in cyberspace. The framework is predicated on a cyberspace-time reference model comprising three spatial dimensions plus time. The spatial domains include geospatial, infospatial, and sociospatial references, the latter describing relationships among sovereign enterprises (rational agents) that choose voluntarily to organize and interoperate for individual and mutual benefit through geospatial (physical) and infospatial (logical) transactions. Of particular relevance to CPSs are notions of timeliness and value, particularly as they relate to the real-time governance of physical processes and engagements with other cooperating CPS. Our overarching interest, as with celestial mechanics, is in the formation and evolution of clusters of cyberspatial objects and the federated systems they form.
Adaptive control and noise suppression by a variable-gain gradient algorithm
NASA Technical Reports Server (NTRS)
Merhav, S. J.; Mehta, R. S.
1987-01-01
An adaptive control system based on normalized LMS filters is investigated. The finite impulse response of the nonparametric controller is adaptively estimated using a given reference model. Specifically, the following issues are addressed: The stability of the closed-loop system is analyzed and heuristically established. Next, the adaptation process is studied for piecewise constant plant parameters. It is shown that by introducing a variable gain in the gradient algorithm, a substantial reduction in the LMS adaptation rate can be achieved. Finally, process noise at the plant output generally causes a biased estimate of the controller. By introducing a noise suppression scheme, this bias can be substantially reduced and the response of the adapted system becomes very close to that of the reference model. Extensive computer simulations validate these assertions and demonstrate that the system can rapidly adapt to random jumps in plant parameters.
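The core adaptation mechanism referred to above, a normalized LMS gradient update of an FIR impulse response, can be sketched as below. This is a minimal system-identification example, not the paper's full closed-loop controller or its specific variable-gain schedule; the signal lengths and gain values are assumptions.

```python
import numpy as np

def nlms_identify(x, d, num_taps=16, mu=0.5, eps=1e-6):
    """Normalized LMS estimation of an FIR impulse response.

    x : input signal, d : desired (reference-model) output, mu : adaptation gain.
    A variable gain would reduce mu once the error settles (see comment in the loop).
    """
    w = np.zeros(num_taps)
    buf = np.zeros(num_taps)
    errors = np.zeros(len(x))
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]
        e = d[n] - w @ buf
        errors[n] = e
        # variable-gain idea: e.g. mu_n = mu / (1 + n / tau), or gain scheduled on |e|
        w += (mu / (eps + buf @ buf)) * e * buf
    return w, errors

# Example: identify a simple 3-tap "plant" from white-noise excitation
rng = np.random.default_rng(1)
x = rng.normal(size=2000)
true_w = np.array([0.8, -0.3, 0.1])
d = np.convolve(x, true_w)[: len(x)]
w_hat, err = nlms_identify(x, d, num_taps=8)
```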
NASA Astrophysics Data System (ADS)
Briscoe, Carol
This qualitative case study focused on the role of cognitive referents in the sense-making process of one teacher as he attempted to change his classroom science assessment. The interpretations identify cultural myths, conceptual metonymies, as well as personally constructed beliefs as referents that constrained change. The teacher's cognitive struggle to make sense of assessment and his role as assessor are linked to conflicting referents he used in varying contexts including day-to-day assessment and summative assessment settings. The results of the study suggest that cognitive referents are important influences in driving how a teacher thinks about assessment and may constrain an individual teacher's implementation of innovative practices. Accordingly, identification of referents such as myths, their associated beliefs, and metonymic conceptual models that teachers use to make sense of their actions is an important first step in developing an understanding of constraints to educational change.
A new model integrating short- and long-term aging of copper added to soils
Zeng, Saiqi; Li, Jumei; Wei, Dongpu
2017-01-01
Aging refers to the processes by which the bioavailability/toxicity, isotopic exchangeability, and extractability of metals added to soils decline over time. We studied the characteristics of the aging process in copper (Cu) added to soils and the factors that affect this process. We then developed a semi-mechanistic model to predict the lability of Cu during the aging process, describing the diffusion process with the complementary error function. In previous studies, two semi-mechanistic models to separately predict short-term and long-term aging of Cu added to soils were developed with individual descriptions of the diffusion process. In the short-term model, the diffusion process was linearly related to the square root of incubation time (t1/2), and in the long-term model, the diffusion process was linearly related to the natural logarithm of incubation time (ln t). Both models could predict short-term or long-term aging separately, but could not predict the short- and long-term aging processes within one model. By analyzing and combining the two models, we found that the short- and long-term behaviors of the diffusion process could be described adequately using the complementary error function. The effect of temperature on the diffusion process was also incorporated in this model. The model can predict the aging process continuously based on four factors—soil pH, incubation time, soil organic matter content and temperature. PMID:28820888
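To make the erfc-based idea concrete, the sketch below fits an illustrative aging curve, a labile fraction that declines with an erfc-shaped diffusion term, to hypothetical lability observations. The functional form, parameter names and data are assumptions for illustration only and are not the fitted model or coefficients of the paper, which also includes pH, organic matter and temperature effects.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def lability(t, e_res, e_labile, d_rate):
    """Illustrative aging curve: labile Cu fraction declining with an erfc-shaped
    diffusion term. e_res is a residual labile fraction, e_labile the part that is
    slowly fixed by diffusion, d_rate an effective diffusion rate constant."""
    return e_res + e_labile * erfc(np.sqrt(d_rate * t))

# Hypothetical isotopic-exchangeability observations (labile fraction vs. days)
t_obs = np.array([1, 7, 30, 90, 180, 365, 730], dtype=float)
e_obs = np.array([0.93, 0.85, 0.74, 0.65, 0.60, 0.55, 0.52])

params, _ = curve_fit(lability, t_obs, e_obs, p0=[0.5, 0.5, 0.01])
print(params)   # fitted (e_res, e_labile, d_rate)
```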
NASA Astrophysics Data System (ADS)
Candra, S.; Batan, I. M. L.; Berata, W.; Pramono, A. S.
2017-11-01
This paper presents a mathematical approach to the minimum blank holder force needed to prevent wrinkling in the deep drawing process of a cylindrical cup. Based on the maximum of the minor-major strain ratio, the slab method was applied to derive a model of the minimum variable blank holder force (VBHF), which was compared with FE simulation. Tin steel sheet of T4-CA grade with a thickness of 0.2 mm was used in this study. The model of the minimum VBHF can be used as a simple reference for preventing wrinkling in deep drawing.
Computational Models of Cognitive Control
O’Reilly, Randall C.; Herd, Seth A.; Pauli, Wolfgang M.
2010-01-01
Cognitive control refers to the ability to perform task-relevant processing in the face of other distractions or other forms of interference, in the absence of strong environmental support. It depends on the integrity of the prefrontal cortex and associated biological structures (e.g., the basal ganglia). Computational models have played an influential role in developing our understanding of this system, and we review current developments in three major areas: dynamic gating of prefrontal representations, hierarchies in the prefrontal cortex, and reward, motivation, and goal-related processing in prefrontal cortex. Models in these and other areas are advancing the field further forward. PMID:20185294
Effects of Uncertainties in Electric Field Boundary Conditions for Ring Current Simulations
NASA Astrophysics Data System (ADS)
Chen, Margaret W.; O'Brien, T. Paul; Lemon, Colby L.; Guild, Timothy B.
2018-01-01
Physics-based simulation results can vary widely depending on the applied boundary conditions. As a first step toward assessing the effect of boundary conditions on ring current simulations, we analyze the uncertainty of cross-polar cap potentials (CPCP) on electric field boundary conditions applied to the Rice Convection Model-Equilibrium (RCM-E). The empirical Weimer model of CPCP is chosen as the reference model and Defense Meteorological Satellite Program CPCP measurements as the reference data. Using temporal correlations from a statistical analysis of the "errors" between the reference model and data, we construct a Monte Carlo CPCP discrete time series model that can be generalized to other model boundary conditions. RCM-E simulations using electric field boundary conditions from the reference model and from 20 randomly generated Monte Carlo discrete time series of CPCP are performed for two large storms. During the 10 August 2000 storm main phase, the proton density at 10
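The Monte Carlo construction of boundary-condition time series described above can be sketched as follows: temporally correlated errors are added to a reference CPCP series. The sketch assumes a first-order autoregressive error model with invented statistics; the paper's actual error statistics and time-series construction are not reproduced here.

```python
import numpy as np

def monte_carlo_cpcp(cpcp_ref, error_std, lag1_corr, n_series=20, seed=0):
    """Generate Monte Carlo CPCP time series by adding temporally correlated
    (AR(1)) errors to a reference-model time series.

    cpcp_ref  : reference CPCP values (e.g. from an empirical model), in kV
    error_std : standard deviation of model-data errors
    lag1_corr : lag-1 autocorrelation of the errors
    """
    rng = np.random.default_rng(seed)
    n = len(cpcp_ref)
    innov_std = error_std * np.sqrt(1.0 - lag1_corr ** 2)
    series = np.empty((n_series, n))
    for k in range(n_series):
        e = np.zeros(n)
        e[0] = rng.normal(0.0, error_std)
        for t in range(1, n):
            e[t] = lag1_corr * e[t - 1] + rng.normal(0.0, innov_std)
        series[k] = np.clip(cpcp_ref + e, 0.0, None)   # keep potentials non-negative
    return series

cpcp_ref = 60 + 40 * np.sin(np.linspace(0, 2 * np.pi, 96))   # synthetic storm-time CPCP, kV
ensemble = monte_carlo_cpcp(cpcp_ref, error_std=15.0, lag1_corr=0.8)
```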
Discrete post-processing of total cloud cover ensemble forecasts
NASA Astrophysics Data System (ADS)
Hemri, Stephan; Haiden, Thomas; Pappenberger, Florian
2017-04-01
This contribution presents an approach to post-process ensemble forecasts for the discrete and bounded weather variable of total cloud cover. Two methods for discrete statistical post-processing of ensemble predictions are tested. The first approach is based on multinomial logistic regression, the second involves a proportional odds logistic regression model. Applying them to total cloud cover raw ensemble forecasts from the European Centre for Medium-Range Weather Forecasts improves forecast skill significantly. Based on station-wise post-processing of raw ensemble total cloud cover forecasts for a global set of 3330 stations over the period from 2007 to early 2014, the more parsimonious proportional odds logistic regression model proved to slightly outperform the multinomial logistic regression model. Reference Hemri, S., Haiden, T., & Pappenberger, F. (2016). Discrete post-processing of total cloud cover ensemble forecasts. Monthly Weather Review 144, 2565-2577.
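As a small illustration of discrete statistical post-processing of categorical cloud-cover forecasts, the sketch below fits a multinomial logistic regression on synthetic ensemble statistics. The predictors, data and category definition are invented and the proportional odds variant discussed above is not shown; this is not the authors' configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic example: post-process ensemble forecasts of total cloud cover (oktas 0-8).
rng = np.random.default_rng(2)
n = 5000
ens_mean = rng.uniform(0, 8, n)                   # ensemble mean cloud cover
ens_spread = rng.uniform(0.1, 2.0, n)             # ensemble standard deviation
observed = np.clip(np.rint(ens_mean + rng.normal(0, ens_spread)), 0, 8).astype(int)

X = np.column_stack([ens_mean, ens_spread])
model = LogisticRegression(max_iter=1000)         # multinomial with the default lbfgs solver
model.fit(X, observed)

# Calibrated category probabilities for a new forecast (mean 5.2 oktas, spread 1.0)
probs = model.predict_proba([[5.2, 1.0]])
print(np.round(probs, 3))
```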
Practical Use of Operation Data in the Process Industry
NASA Astrophysics Data System (ADS)
Kano, Manabu
This paper aims to reveal real problems in the process industry and introduce recent developments to solve such problems from the viewpoint of effective use of operation data. Two topics are discussed: virtual sensors and process control. First, in order to clarify the present state and problems, a part of our recent questionnaire survey of process control is quoted. It is emphasized that maintenance is a key issue not only for soft-sensors but also for controllers. Then, new techniques are explained. The first one is correlation-based just-in-time modeling (CoJIT), which can realize higher prediction performance than conventional methods and simplify model maintenance. The second is extended fictitious reference iterative tuning (E-FRIT), which can realize data-driven PID control parameter tuning without process modeling. The great usefulness of these techniques is demonstrated through their industrial applications.
Effects of non-tidal atmospheric loading on a Kalman filter-based terrestrial reference frame
NASA Astrophysics Data System (ADS)
Abbondanza, C.; Altamimi, Z.; Chin, T. M.; Collilieux, X.; Dach, R.; Heflin, M. B.; Gross, R. S.; König, R.; Lemoine, F. G.; MacMillan, D. S.; Parker, J. W.; van Dam, T. M.; Wu, X.
2013-12-01
The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS global networks used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, the effect of non-tidal atmospheric loading (NTAL) corrections on the TRF is assessed adopting a Remove/Restore approach: (i) Focusing on the a-posteriori approach, the NTAL model derived from the National Center for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations. (ii) Adopting a Kalman-filter based approach, a linear TRF is estimated combining the 4 SG solutions free from NTAL displacements. (iii) Linear fits to the NTAL displacements removed at step (i) are restored to the linear reference frame estimated at (ii). The velocity fields of the (standard) linear reference frame in which the NTAL model has not been removed and the one in which the model has been removed/restored are compared and discussed.
NASA Astrophysics Data System (ADS)
Dungan, J. L.; Wang, W.; Hashimoto, H.; Michaelis, A.; Milesi, C.; Ichii, K.; Nemani, R. R.
2009-12-01
In support of NACP, we are conducting an ensemble modeling exercise using the Terrestrial Observation and Prediction System (TOPS) to evaluate uncertainties among ecosystem models, satellite datasets, and in-situ measurements. The models used in the experiment include public-domain versions of Biome-BGC, LPJ, TOPS-BGC, and CASA, driven by a consistent set of climate fields for North America at 8km resolution and daily/monthly time steps over the period of 1982-2006. The reference datasets include MODIS Gross Primary Production (GPP) and Net Primary Production (NPP) products, Fluxnet measurements, and other observational data. The simulation results and the reference datasets are consistently processed and systematically compared in the climate (temperature-precipitation) space; in particular, an alternative to the Taylor diagram is developed to facilitate model-data intercomparisons in multi-dimensional space. The key findings of this study indicate that: the simulated GPP/NPP fluxes are in general agreement with observations over forests, but are biased low (underestimated) over non-forest types; large uncertainties of biomass and soil carbon stocks are found among the models (and reference datasets), often induced by seemingly “small” differences in model parameters and implementation details; the simulated Net Ecosystem Production (NEP) mainly responds to non-respiratory disturbances (e.g. fire) in the models and therefore is difficult to compare with flux data; and the seasonality and interannual variability of NEP varies significantly among models and reference datasets. These findings highlight the problem inherent in relying on only one modeling approach to map surface carbon fluxes and emphasize the pressing necessity of expanded and enhanced monitoring systems to narrow critical structural and parametrical uncertainties among ecosystem models.
Rapid Prototyping Integrated With Nondestructive Evaluation and Finite Element Analysis
NASA Technical Reports Server (NTRS)
Abdul-Aziz, Ali; Baaklini, George Y.
2001-01-01
Most reverse engineering approaches involve imaging or digitizing an object then creating a computerized reconstruction that can be integrated, in three dimensions, into a particular design environment. Rapid prototyping (RP) refers to the practical ability to build high-quality physical prototypes directly from computer aided design (CAD) files. Using rapid prototyping, full-scale models or patterns can be built using a variety of materials in a fraction of the time required by more traditional prototyping techniques (refs. 1 and 2). Many software packages have been developed and are being designed to tackle the reverse engineering and rapid prototyping issues just mentioned. For example, image processing and three-dimensional reconstruction visualization software such as Velocity2 (ref. 3) are being used to carry out the construction process of three-dimensional volume models and the subsequent generation of a stereolithography file that is suitable for CAD applications. Producing three-dimensional models of objects from computed tomography (CT) scans is becoming a valuable nondestructive evaluation methodology (ref. 4). Real components can be rendered and subjected to temperature and stress tests using structural engineering software codes. For this to be achieved, accurate high-resolution images have to be obtained via CT scans and then processed, converted into a traditional file format, and translated into finite element models. Prototyping a three-dimensional volume of a composite structure by reading in a series of two-dimensional images generated via CT and by using and integrating commercial software (e.g. Velocity2, MSC/PATRAN (ref. 5), and Hypermesh (ref. 6)) is being applied successfully at the NASA Glenn Research Center. The building process from structural modeling to the analysis level is outlined in reference 7. Subsequently, a stress analysis of a composite cooling panel under combined thermomechanical loading conditions was performed to validate this process.
Alternative Methods for Estimating Plane Parameters Based on a Point Cloud
NASA Astrophysics Data System (ADS)
Stryczek, Roman
2017-12-01
Non-contact measurement techniques based on triangulation optical sensors are increasingly popular for measurements with industrial robots directly on production lines. The result of such measurements is often a cloud of measurement points characterized by considerable measurement noise, the presence of a number of points that deviate from the reference model, and excessive errors that must be eliminated from the analysis. To extract vector information about the reference model from the points contained in the cloud, the data obtained during a measurement must be subjected to appropriate processing operations. The present paper analyzes the suitability of methods known as RANdom SAmple Consensus (RANSAC), the Monte Carlo Method (MCM), and Particle Swarm Optimization (PSO) for extraction of the reference model. The effectiveness of the tested methods is illustrated by examples of measuring the height of an object and the angle of a plane, based on experiments carried out under workshop conditions.
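The RANSAC variant mentioned above can be illustrated with a short plane-fitting routine robust to outliers in a point cloud. This is a generic RANSAC sketch with invented data and thresholds, not the exact procedure or parameters used in the paper.

```python
import numpy as np

def ransac_plane(points, n_iters=500, dist_thresh=0.01, seed=0):
    """Fit a plane n.x + d = 0 to a noisy point cloud with RANSAC.

    Returns (unit normal, d, inlier mask) of the best consensus plane.
    """
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                      # degenerate (nearly collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers

# Example: plane z = 0.5 with measurement noise and 20% gross outliers
rng = np.random.default_rng(3)
pts = np.column_stack([rng.uniform(0, 1, 1000), rng.uniform(0, 1, 1000),
                       0.5 + rng.normal(0, 0.002, 1000)])
pts[rng.choice(1000, 200, replace=False), 2] += rng.uniform(0.1, 0.5, 200)
normal, d, inliers = ransac_plane(pts)
print(normal, d, inliers.sum())
```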
Hou, Xiang-Mei; Zhang, Lei; Yue, Hong-Shui; Ju, Ai-Chun; Ye, Zheng-Liang
2016-07-01
To study and establish a monitoring method for the macroporous resin column chromatography process of salvianolic acids using near-infrared spectroscopy (NIR) as a process analytical technology (PAT), a multivariate statistical process control (MSPC) model was developed based on 7 normal operation batches, and 2 test batches (one normal operation batch and one abnormal operation batch) were used to verify the monitoring performance of this model. The results showed that the MSPC model had a good monitoring ability for the column chromatography process. Meanwhile, an NIR quantitative calibration model was established for three key quality indexes (rosmarinic acid, lithospermic acid and salvianolic acid B) using the partial least squares (PLS) algorithm. The verification results demonstrated that this model had satisfactory prediction performance. The combined application of the above two models can effectively achieve real-time monitoring of the macroporous resin column chromatography process of salvianolic acids and can be used for on-line analysis of the key quality indexes. The established process monitoring method provides a reference for the development of process analytical technology for traditional Chinese medicine manufacturing. Copyright© by the Chinese Pharmaceutical Association.
ERIC Educational Resources Information Center
Rampai, Nattaphon; Sopeerak, Saroch
2011-01-01
This research explores the model of knowledge management and web technology for teachers' professional development as well as its impact in the classroom on learning and teaching, especially on pre-service teachers' competency and practices that refer to the knowledge creating, analyzing, nurturing, disseminating, and optimizing process as part…
Learner Perception of Personal Spaces of Information (PSIs): A Mental Model Analysis
ERIC Educational Resources Information Center
Hardof-Jaffe, Sharon; Aladjem, Ruthi
2018-01-01
A personal space of information (PSI) refers to the collection of digital information items created, saved and organized on digital devices. PSIs play a central and significant role in learning processes. This study explores learners' mental models and perceptions of PSIs, using drawing analysis. Sixty-three graduate students were asked to…
The Role of Metacognition in the Language Teaching Profession
ERIC Educational Resources Information Center
Nodoushan, Mohammad Ali Salmani
2008-01-01
Metacognition is a concept in psychology that refers to a variety of self-awareness processes that help learners learn better. It grew out of developments over the past few decades in cognitive models of learning. This paper presents a brief overview of these models and discusses their main features. It begins with a discussion of behavioristic…
The Role of Metacognition in the Language Teaching Profession
ERIC Educational Resources Information Center
Salmani Nodoushan, Mohammad Ali
2008-01-01
Metacognition is a concept in psychology that refers to a variety of self-awareness processes that help learners learn better. It grew out of developments over the past few decades in cognitive models of learning. This paper will present a brief overview of these models and discuss their main features. It begins with a discussion of…
Hindsight Bias Doesn't Always Come Easy: Causal Models, Cognitive Effort, and Creeping Determinism
ERIC Educational Resources Information Center
Nestler, Steffen; Blank, Hartmut; von Collani, Gernot
2008-01-01
Creeping determinism, a form of hindsight bias, refers to people's hindsight perceptions of events as being determined or inevitable. This article proposes, on the basis of a causal-model theory of creeping determinism, that the underlying processes are effortful, and hence creeping determinism should disappear when individuals lack the cognitive…
Modeling the Virtual Machine Launching Overhead under Fermicloud
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzoglio, Gabriele; Wu, Hao; Ren, Shangping
FermiCloud is a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows. The Cloud Bursting module of FermiCloud enables it, when more computational resources are needed, to automatically launch virtual machines on available resources such as public clouds. One of the main challenges in developing the cloud bursting module is deciding when and where to launch a VM so that all resources are utilized most effectively and efficiently and system performance is optimized. However, based on FermiCloud's system operational data, the VM launching overhead is not constant. It varies with physical resource (CPU, memory, I/O device) utilization at the time the VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launch overhead reference model is needed. This paper develops a VM launch overhead reference model based on operational data obtained on FermiCloud and uses the reference model to guide the cloud bursting process.
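One simple way such a reference model could be built from operational records is a regression of observed launch overhead on host utilization at launch time, as sketched below. The functional form, feature set and numbers are invented for illustration; the actual FermiCloud model may differ.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical operational records: host utilization at launch time -> observed overhead (s).
# Column order: CPU utilization, memory utilization, I/O utilization (all in [0, 1]).
X = np.array([[0.10, 0.20, 0.05],
              [0.55, 0.40, 0.30],
              [0.80, 0.75, 0.60],
              [0.30, 0.35, 0.10],
              [0.90, 0.85, 0.70],
              [0.20, 0.50, 0.25]])
launch_overhead = np.array([22.0, 41.0, 93.0, 30.0, 118.0, 35.0])

model = LinearRegression().fit(X, launch_overhead)

# Predicted overhead for a candidate host; a bursting scheduler could pick the host
# (or the decision time) with the smallest predicted overhead.
print(model.predict([[0.60, 0.55, 0.40]]))
```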
Granularity as a Cognitive Factor in the Effectiveness of Business Process Model Reuse
NASA Astrophysics Data System (ADS)
Holschke, Oliver; Rake, Jannis; Levina, Olga
Reusing design models is an attractive approach in business process modeling as modeling efficiency and quality of design outcomes may be significantly improved. However, reusing conceptual models is not a cost-free effort, but has to be carefully designed. While factors such as psychological anchoring and task-adequacy in reuse-based modeling tasks have been investigated, information granularity as a cognitive concept has not been at the center of empirical research yet. We hypothesize that business process granularity as a factor in design tasks under reuse has a significant impact on the effectiveness of resulting business process models. We test our hypothesis in a comparative study employing high and low granularities. The reusable processes provided were taken from widely accessible reference models for the telecommunication industry (enhanced Telecom Operations Map). First experimental results show that Recall in tasks involving coarser granularity is lower than in cases of finer granularity. These findings suggest that decision makers in business process management should carefully consider the granularity at which reuse mechanisms are implemented. We realize that, due to our small sample size, the results are not statistically significant, but this preliminary run shows that the study is ready to be run on a larger scale.
Quality control for quantitative PCR based on amplification compatibility test.
Tichopad, Ales; Bar, Tzachi; Pecen, Ladislav; Kitchen, Robert R; Kubista, Mikael; Pfaffl, Michael W
2010-04-01
Quantitative PCR (qPCR) is a routinely used method for the accurate quantification of nucleic acids. Yet it may generate erroneous results if the amplification process is obscured by inhibition or generation of aberrant side-products such as primer dimers. Several methods have been established to control for pre-processing performance that rely on the introduction of a co-amplified reference sequence; however, there is currently no method to allow for reliable control of the amplification process without directly modifying the sample mix. Herein we present a statistical approach based on multivariate analysis of the amplification response data generated in real-time. The amplification trajectory in its most resolved and dynamic phase is fitted with a suitable model. Two parameters of this model, related to amplification efficiency, are then used for calculation of the Z-score statistics. Each studied sample is compared to a predefined reference set of reactions, typically calibration reactions. A probabilistic decision for each individual Z-score is then used to identify the majority of inhibited reactions in our experiments. We compare this approach to univariate methods using only the sample-specific amplification efficiency as reporter of the compatibility. We demonstrate improved identification performance using the multivariate approach compared to the univariate approach. Finally we stress that the performance of the amplification compatibility test as a quality control procedure depends on the quality of the reference set. Copyright 2010 Elsevier Inc. All rights reserved.
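The Z-score comparison against a reference set of calibration reactions can be sketched as below. This simplified version standardizes each amplification parameter independently, whereas the paper uses a multivariate analysis and a probabilistic decision rule; the parameter values and flagging threshold here are assumptions.

```python
import numpy as np

def amplification_z_scores(sample_params, reference_params):
    """Z-scores of per-reaction amplification parameters against a reference set.

    sample_params    : (n_samples, 2) efficiency-related parameters per tested reaction
    reference_params : (n_ref, 2) the same parameters for calibration reactions
    """
    ref = np.asarray(reference_params, dtype=float)
    mu, sigma = ref.mean(axis=0), ref.std(axis=0, ddof=1)
    return (np.asarray(sample_params, dtype=float) - mu) / sigma

# Flag reactions whose parameters fall outside, say, |z| > 3 in either dimension
ref = np.random.default_rng(4).normal([1.9, 0.05], [0.05, 0.01], size=(24, 2))
test = [[1.88, 0.05], [1.55, 0.12]]          # the second reaction looks inhibited
z = amplification_z_scores(test, ref)
print(np.abs(z) > 3)
```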
A three-talk model for shared decision making: multistage consultation process
Durand, Marie Anne; Song, Julia; Aarts, Johanna; Barr, Paul J; Berger, Zackary; Cochran, Nan; Frosch, Dominick; Galasiński, Dariusz; Gulbrandsen, Pål; Han, Paul K J; Härter, Martin; Kinnersley, Paul; Lloyd, Amy; Mishra, Manish; Perestelo-Perez, Lilisbeth; Scholl, Isabelle; Tomori, Kounosuke; Trevena, Lyndal; Witteman, Holly O; Van der Weijden, Trudy
2017-01-01
Objectives To revise an existing three-talk model for learning how to achieve shared decision making, and to consult with relevant stakeholders to update and obtain wider engagement. Design Multistage consultation process. Setting Key informant group, communities of interest, and survey of clinical specialties. Participants 19 key informants, 153 member responses from multiple communities of interest, and 316 responses to an online survey from medically qualified clinicians from six specialties. Results After extended consultation over three iterations, we revised the three-talk model by making changes to one talk category, adding the need to elicit patient goals, providing a clear set of tasks for each talk category, and adding suggested scripts to illustrate each step. A new three-talk model of shared decision making is proposed, based on “team talk,” “option talk,” and “decision talk,” to depict a process of collaboration and deliberation. Team talk places emphasis on the need to provide support to patients when they are made aware of choices, and to elicit their goals as a means of guiding decision making processes. Option talk refers to the task of comparing alternatives, using risk communication principles. Decision talk refers to the task of arriving at decisions that reflect the informed preferences of patients, guided by the experience and expertise of health professionals. Conclusions The revised three-talk model of shared decision making depicts conversational steps, initiated by providing support when introducing options, followed by strategies to compare and discuss trade-offs, before deliberation based on informed preferences. PMID:29109079
The Role of Climate and Socialization in Developing Interfunctional Coordination.
ERIC Educational Resources Information Center
Wooldridge, Barbara Ross; Minsky, Barbara D.
2002-01-01
Develops a model illustrating that two elements of organizational culture--climate and socialization processes--foster acceptance of organizational values and facilitate the development of interfunctional coordination, which in turn influences firm performance. (Contains 42 references.) (JOW)
Mountain-Scale Coupled Processes (TH/THC/THM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
P. Dixon
The purpose of this Model Report is to document the development of the Mountain-Scale Thermal-Hydrological (TH), Thermal-Hydrological-Chemical (THC), and Thermal-Hydrological-Mechanical (THM) Models and evaluate the effects of coupled TH/THC/THM processes on mountain-scale UZ flow at Yucca Mountain, Nevada. This Model Report was planned in "Technical Work Plan (TWP) for: Performance Assessment Unsaturated Zone" (BSC 2002 [160819], Section 1.12.7), and was developed in accordance with AP-SIII.10Q, Models. In this Model Report, any reference to "repository" means the nuclear waste repository at Yucca Mountain, and any reference to "drifts" means the emplacement drifts at the repository horizon. This Model Report provides the necessary framework to test conceptual hypotheses for analyzing mountain-scale hydrological/chemical/mechanical changes and predict flow behavior in response to heat release by radioactive decay from the nuclear waste repository at the Yucca Mountain site. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH Model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH Model captures mountain-scale three dimensional (3-D) flow effects, including lateral diversion at the PTn/TSw interface and mountain-scale flow patterns. The Mountain-Scale THC Model evaluates TH effects on water and gas chemistry, mineral dissolution/precipitation, and the resulting impact to UZ hydrological properties, flow and transport. The THM Model addresses changes in permeability due to mechanical and thermal disturbances in stratigraphic units above and below the repository host rock. The Mountain-Scale THM Model focuses on evaluating the changes in 3-D UZ flow fields arising out of thermal stress and rock deformation during and after the thermal periods.
Performance analysis of Supply Chain Management with Supply Chain Operation reference model
NASA Astrophysics Data System (ADS)
Hasibuan, Abdurrozzaq; Arfah, Mahrani; Parinduri, Luthfi; Hernawati, Tri; Suliawati; Harahap, Bonar; Rahmah Sibuea, Siti; Krianto Sulaiman, Oris; Purwadi, Adi
2018-04-01
This research was conducted at PT. Shamrock Manufacturing Corpora, a company that is required to think creatively and implement a competitive strategy by producing goods/services of higher quality at lower cost. It is therefore necessary to measure the performance of Supply Chain Management in order to improve competitiveness, and the company is required to optimize its production output to meet export quality standards. The research begins with the creation of initial dimensions based on the Supply Chain Management processes, i.e., Plan, Source, Make, Delivery, and Return, with a hierarchy based on the Supply Chain Operation Reference attributes of Reliability, Responsiveness, Agility, Cost, and Asset. Key Performance Indicator identification becomes the benchmark in performance measurement, whereas Snorm De Boer normalization serves to equalize Key Performance Indicator values. The Analytical Hierarchy Process is used to assist in determining priority criteria. Measurement of Supply Chain Management performance at PT. Shamrock Manufacturing Corpora shows that Responsiveness (0.649) has a higher weight (priority) than the other attributes. The result of the performance analysis using the Supply Chain Operation Reference model indicates that Supply Chain Management performance at PT. Shamrock Manufacturing Corpora is good, since the monitoring scores fall within the good range of 50-100.
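The Snorm (De Boer) normalization mentioned above maps each KPI onto a common 0-100 scale so that heterogeneous indicators can be aggregated; a minimal sketch of the commonly cited formula follows. The KPI value and performance range in the example are invented.

```python
def snorm_de_boer(score, s_min, s_max, larger_is_better=True):
    """Snorm normalization of a KPI score onto a 0-100 scale.

    For larger-is-better KPIs: 100 * (score - s_min) / (s_max - s_min).
    For smaller-is-better KPIs the ratio is inverted.
    """
    if larger_is_better:
        return 100.0 * (score - s_min) / (s_max - s_min)
    return 100.0 * (s_max - score) / (s_max - s_min)

# Example: a delivery-reliability KPI of 92% against a 75-98% performance range
print(snorm_de_boer(92, 75, 98))   # ~65, which falls in the 50-100 "good" monitoring band
```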
Development and evaluation of spatial point process models for epidermal nerve fibers.
Olsbo, Viktor; Myllymäki, Mari; Waller, Lance A; Särkkä, Aila
2013-06-01
We propose two spatial point process models for the spatial structure of epidermal nerve fibers (ENFs) across human skin. The models derive from two point processes, Φb and Φe, describing the locations of the base and end points of the fibers. Each point of Φe (the end point process) is connected to a unique point in Φb (the base point process). In the first model, both Φe and Φb are Poisson processes, yielding a null model of uniform coverage of the skin by end points and general baseline results and reference values for moments of key physiologic indicators. The second model provides a mechanistic model to generate end points for each base, and we model the branching structure more directly by defining Φe as a cluster process conditioned on the realization of Φb as its parent points. In both cases, we derive distributional properties for observable quantities of direct interest to neurologists such as the number of fibers per base, and the direction and range of fibers on the skin. We contrast both models by fitting them to data from skin blister biopsy images of ENFs and provide inference regarding physiological properties of ENFs. Copyright © 2013 Elsevier Inc. All rights reserved.
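A toy simulation of the base/end point structure described above is sketched below: base points follow a homogeneous Poisson process and each base sprouts a Poisson-distributed cluster of end points. The intensities, cluster sizes and displacement scale are invented; this is not the fitted model of the paper.

```python
import numpy as np

def simulate_enf_pattern(lambda_base, mean_fibers, window=(1.0, 1.0), seed=0):
    """Simulate a simple base/end point pattern for epidermal nerve fibers.

    Base points follow a homogeneous Poisson process with intensity lambda_base;
    each base point sprouts a Poisson-distributed number of end points (a cluster
    process), displaced by isotropic Gaussian offsets.
    """
    rng = np.random.default_rng(seed)
    area = window[0] * window[1]
    n_base = rng.poisson(lambda_base * area)
    base = rng.uniform([0.0, 0.0], window, size=(n_base, 2))
    ends, parents = [], []
    for i, b in enumerate(base):
        for _ in range(rng.poisson(mean_fibers)):
            ends.append(b + rng.normal(0.0, 0.03, size=2))   # fiber reach ~0.03 units
            parents.append(i)
    return base, np.array(ends), np.array(parents)

base, ends, parents = simulate_enf_pattern(lambda_base=100, mean_fibers=2.5)
print(len(base), len(ends))
```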
Reusing models of actors and services in smart homecare to improve sustainability.
Walderhaug, Ståle; Stav, Erlend; Mikalsen, Marius
2008-01-01
Industrial countries are faced with a growing elderly population. Homecare systems with assistive smart house technology enable the elderly to live independently at home. Development of such smart homecare systems is complex and expensive, and there is no common reference model that can facilitate service reuse. This paper proposes reusable actor and service models based on a model-driven development process in which end-user organizations and domain healthcare experts from four European countries have been involved. The models, specified using UML, can be actively reused as assets in the system design and development process and can reduce development costs and improve the interoperability and sustainability of systems. The models are being evaluated in the European IST project MPOWER.
Requirements engineering for cross-sectional information chain models
Hübner, U; Cruel, E; Gök, M; Garthaus, M; Zimansky, M; Remmers, H; Rienhoff, O
2012-01-01
Despite the wealth of literature on requirements engineering, little is known about engineering very generic, innovative and emerging requirements, such as those for cross-sectional information chains. The IKM health project aims at building information chain reference models for the care of patients with chronic wounds, cancer-related pain and back pain. Our question therefore was how to appropriately capture information and process requirements that are both generally applicable and practically useful. To this end, we started with recommendations from clinical guidelines and put them up for discussion in Delphi surveys and expert interviews. Despite the heterogeneity we encountered in all three methods, it was possible to obtain requirements suitable for building reference models. We evaluated three modelling languages and then chose to write the models in UML (class and activity diagrams). On the basis of the current project results, the pros and cons of our approach are discussed. PMID:24199080
A Structured Approach for Reviewing Architecture Documentation
2009-12-01
as those found in ISO 12207 [ISO/IEC 12207:2008] (for software engineering), ISO 15288 [ISO/IEC 15288:2008] (for systems engineering), the Rational... Open Distributed Processing - Reference Model: Foundations (ISO/IEC 10746-2). 1996. [ISO/IEC 12207:2008] International Organization for Standardization & International Electrotechnical Commission. Systems and software engineering - Software life cycle processes (ISO/IEC 12207). 2008. [ISO
75 FR 35616 - Airworthiness Directives; Air Tractor, Inc. Models AT-802 and AT-802A Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-23
... incorporation by reference of Snow Engineering Co. Process Specification 197, page 1, revised June 4, 2002; pages 2 through 4, dated February 23, 2001; and page 5, dated May 3, 2002; Snow Engineering Co. Process Specification 204, Rev. C, dated November 16, 2004; Snow Engineering Co. Service Letter 215, page 5, titled...
The reaction of secondary aluminum processing waste (referred herein to as salt cake) with water has been documented to produce heat and gases such as hydrogen, methane, and ammonia (US EPA 2015). The objective of this project was to assess the impact of salt cake disposal on MS...
Biofouling reduction in recirculating cooling systems through biofiltration of process water.
Meesters, K P H; Van Groenestijn, J W; Gerritse, J
2003-02-01
Biofouling is a serious problem in industrial recirculating cooling systems. It damages equipment through biocorrosion and causes clogging and increased energy consumption through decreased heat transfer. In this research a fixed-bed biofilter was developed which removed assimilable organic carbon (AOC) from process water, thus limiting the major substrate for the growth of biofouling. The biofilter was tested in a laboratory model recirculating cooling water system, including a heat exchanger and a cooling tower. A second identical model system without a biofilter served as a reference. Both installations were challenged with organic carbon (sucrose and yeast extract) to provoke biofouling. The biofilter improved the quality of the recirculating cooling water by reducing the AOC content, the ATP concentration, bacterial numbers (30-40 fold) and the turbidity (OD660). The process of biofouling in the heat exchangers, the process water pipelines and the cooling towers was monitored by protein increase, heat transfer resistance, and the chlorine demand for maintenance. This revealed that biofouling was lower in the system with the biofilter than in the reference installation. It was concluded that AOC removal through biofiltration provides an attractive, environmentally friendly means of reducing biofouling in industrial cooling systems.
Liaw, Siaw-Teng; Deveny, Elizabeth; Morrison, Iain; Lewis, Bryn
2006-09-01
Using a factorial vignette survey and modeling methodology, we developed clinical and information models - incorporating evidence base, key concepts, relevant terms, decision-making and workflow needed to practice safely and effectively - to guide the development of an integrated rule-based knowledge module to support prescribing decisions in asthma. We identified workflows, decision-making factors, factor use, and clinician information requirements. The Unified Modeling Language (UML) and public domain software and knowledge engineering tools (e.g. Protégé) were used, with the Australian GP Data Model as the starting point for expressing information needs. A Web Services service-oriented architecture approach was adopted within which to express functional needs, and clinical processes and workflows were expressed in the Business Process Execution Language (BPEL). This formal analysis and modeling methodology to define and capture the process and logic of prescribing best practice in a reference implementation is fundamental to tackling deficiencies in prescribing decision support software.
Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network
Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu
2018-01-01
This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model and subsequently extended to a DBN. The results show the state probabilities of an element and of the system without repair, with perfect and imperfect repair, and under CBM, with the absorbing-set results obtained from differential equations and verified. Through forward inference, the reliability of the control unit is determined under different modes. Finally, weak nodes in the control unit are identified. PMID:29765629
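The Markov backbone of such a multi-state reliability model can be illustrated by propagating state probabilities through a transition matrix, as in the discrete-time sketch below. The states, transition probabilities and absorbing-failure assumption are invented for illustration; the paper works with continuous-time Markov models and a DBN built on top of them.

```python
import numpy as np

def state_probabilities(P, p0, n_steps):
    """Propagate the state distribution of a discrete-time Markov chain.

    P  : (k, k) transition matrix, rows summing to 1
    p0 : initial state distribution
    """
    P = np.asarray(P, dtype=float)
    p = np.asarray(p0, dtype=float)
    history = [p]
    for _ in range(n_steps):
        p = p @ P
        history.append(p)
    return np.array(history)

# Three-state element: perfect (0), degraded (1), failed (2); with no repair the
# failed state is absorbing, so reliability = P(not yet in state 2).
P_no_repair = [[0.95, 0.04, 0.01],
               [0.00, 0.90, 0.10],
               [0.00, 0.00, 1.00]]
probs = state_probabilities(P_no_repair, [1.0, 0.0, 0.0], n_steps=50)
reliability = 1.0 - probs[:, 2]
print(reliability[-1])
```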
Simulating multi-scale oceanic processes around Taiwan on unstructured grids
NASA Astrophysics Data System (ADS)
Yu, Hao-Cheng; Zhang, Yinglong J.; Yu, Jason C. S.; Terng, C.; Sun, Weiling; Ye, Fei; Wang, Harry V.; Wang, Zhengui; Huang, Hai
2017-11-01
We validate a 3D unstructured-grid (UG) model for simulating the multi-scale processes that occur in the Northwestern Pacific around Taiwan, using recently developed techniques (Zhang et al., Ocean Modeling, 102, 64-81, 2016) that require no bathymetry smoothing even for this region with prevalent steep bottom slopes and many islands. The focus is on short-term forecasts over several months rather than long-term variability. Compared with satellite products, the errors for the simulated Sea-surface Height (SSH) and Sea-surface Temperature (SST) are similar to those of a reference data-assimilated global model. In the nearshore region, comparison with 34 tide gauges located around Taiwan indicates an average RMSE of 13 cm for the tidal elevation. The average RMSE for SST at 6 coastal buoys is 1.2 °C. The mean transport and eddy kinetic energy compare reasonably with previously published values and with the reference model used to provide boundary and initial conditions. The model suggests a ∼2-day interruption of the Kuroshio east of Taiwan during a typhoon period. The effect of tidal mixing is shown to be significant nearshore. The multi-scale model is easily extendable to target regions of interest due to its UG framework and a flexible vertical gridding system, which is shown to be superior to terrain-following coordinates.
Large-scale seismic waveform quality metric calculation using Hadoop
NASA Astrophysics Data System (ADS)
Magana-Zook, S.; Gaylord, J. M.; Knapp, D. R.; Dodge, D. A.; Ruppert, S. D.
2016-09-01
In this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data of which 5.1 TB of data were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of 0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. These experiments were conducted multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster because the I/O performance cannot be made to scale. Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will likely require significant changes in other parts of our infrastructure. Nevertheless, we anticipate that as the technology matures and third-party tool vendors make it easier to manage and operate clusters, Hadoop (or a successor) will play a large role in our seismic data processing.
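The map-style parallelism that makes Spark attractive for this workload can be sketched with a toy job that computes simple per-trace quality metrics. The records, metric choices and cluster configuration are invented for illustration and do not represent the LLNL processing pipeline.

```python
from pyspark.sql import SparkSession
import numpy as np

spark = SparkSession.builder.appName("waveform-quality").getOrCreate()
sc = spark.sparkContext

# Hypothetical records: (station_id, numpy array of waveform samples).
records = sc.parallelize([
    ("STA1", np.random.default_rng(0).normal(size=86400)),
    ("STA2", np.zeros(86400)),                 # dead channel
    ("STA3", np.random.default_rng(1).normal(size=86400)),
])

def quality_metrics(samples):
    """Toy per-trace quality metrics: RMS amplitude and fraction of zero samples."""
    rms = float(np.sqrt(np.mean(samples ** 2)))
    dead_fraction = float(np.mean(samples == 0))
    return {"rms": rms, "dead_fraction": dead_fraction}

for station, m in records.mapValues(quality_metrics).collect():
    print(station, m)
spark.stop()
```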
NASA Astrophysics Data System (ADS)
Li, Dongna; Li, Xudong; Dai, Jianfeng
2018-06-01
In this paper, two kinds of transient models, a viscoelastic model and a linear elastic model, are established to analyze the curing deformation of thermosetting resin composites and are solved with the COMSOL Multiphysics software. The two models consider the complicated coupling between physical and chemical changes during the curing process of the composites and the time-variant characteristics of the material performance parameters. Subsequently, the two proposed models are each applied to a three-dimensional composite laminate structure, and a simple and convenient local coordinate system method is used to calculate the development of residual stresses, curing shrinkage and curing deformation for the composite laminate. The temperature, degree of curing (DOC) and residual stresses during the curing process are consistent with studies in the literature, so the curing shrinkage and curing deformation obtained on this basis provide a useful reference. Comparison of the two numerical results indicates that the residual stress and deformation calculated by the viscoelastic model are closer to the reference values than those of the linear elastic model.
Characterizing diurnal and seasonal cycles in monsoon systems from TRMM and CEOP observations
NASA Technical Reports Server (NTRS)
Lau, William K. M.
2006-01-01
The CEOP Inter-Monsoon Study (CIMS) is one of the two main science drivers of CEOP that aims to (a) provide better understanding of fundamental physical processes in monsoon regions around the world, and (b) demonstrate the synergy and utility of CEOP data in providing a pathway for model physics evaluation and improvement. As the data collection phase for EOP-3 and EOP-4 is being completed, two full annual cycles (2003-2004) of research-quality data sets from satellites, reference sites, and model output location time series (MOLTS) have been processed and made available for data analyses and model validation studies. This article presents preliminary results of a CIMS study aimed at the characterization and intercomparison of all major monsoon systems. The CEOP reference site data proved their value in such exercises by being a powerful tool to cross-validate the TRMM data, and to intercompare with multi-model results in ongoing work. We use 6 years (1998-2003) of pentad CEOP/TRMM data on a 2deg x 2.5deg latitude-longitude grid, over the domains of interest, to define the monsoon climatological diurnal and annual cycles for the East Asian Monsoon (EAM), the South Asian Monsoon (SAM), the West Africa Monsoon (WAM), the North America/Mexican Monsoon (NAM), the South American Summer Monsoon (SASM) and the Australian Monsoon (AUM). As noted, the TRMM data used in the study were cross-validated using CEOP reference site data, where applicable. Results show that the observed diurnal cycle of rain peaked around late afternoon over monsoon land, and early morning over the oceans. The diurnal cycles in models tend to peak 2-3 hours earlier than observed. The seasonal cycles of the EAM and SAM show the strongest continentality, i.e., strong control by continental processes away from the ITCZ. The WAM and the AUM show less continentality, i.e., stronger control by the oceanic ITCZ.
Characterizing Diurnal and Seasonal Cycles in Monsoon Systems from TRMM and CEOP Observations
NASA Technical Reports Server (NTRS)
Lau, William K. M.
2007-01-01
The CEOP Inter-Monsoon Study (CIMS) is one of the two main science drivers of CEOP that aims to (a) provide better understanding of fundamental physical processes in monsoon regions around the world, and (b) demonstrate the synergy and utility of CEOP data in providing a pathway for model physics evaluation and improvement. As the data collection phase for EOP-3 and EOP-4 is being completed, two full annual cycles (2003-2004) of research-quality data sets from satellites, reference sites, and model output location time series (MOLTS) have been processed and made available for data analyses and model validation studies. This article presents preliminary results of a CIMS study aimed at the characterization and intercomparison of all major monsoon systems. The CEOP reference site data proved their value in such exercises by being a powerful tool to cross-validate the TRMM data, and to intercompare with multi-model results in ongoing work. We use 6 years (1998-2003) of pentad CEOP/TRMM data on a 2 deg x 2.5 deg latitude-longitude grid, over the domains of interest, to define the monsoon climatological diurnal and annual cycles for the East Asian Monsoon (EAM), the South Asian Monsoon (SAM), the West Africa Monsoon (WAM), the North America/Mexican Monsoon (NAM), the South American Summer Monsoon (SASM) and the Australian Monsoon (AUM). As noted, the TRMM data used in the study were cross-validated using CEOP reference site data, where applicable. Results show that the observed diurnal cycle of rain peaked around late afternoon over monsoon land, and early morning over the oceans. The diurnal cycles in models tend to peak 2-3 hours earlier than observed. The seasonal cycles of the EAM and SAM show the strongest continentality, i.e., strong control by continental processes away from the ITCZ. The WAM and the AUM show less continentality, i.e., stronger control by the oceanic ITCZ.
Introduction to the special issue: parsimony and redundancy in models of language.
Wiechmann, Daniel; Kerz, Elma; Snider, Neal; Jaeger, T Florian
2013-09-01
One of the most fundamental goals in linguistic theory is to understand the nature of linguistic knowledge, that is, the representations and mechanisms that figure in a cognitively plausible model of human language-processing. The past 50 years have witnessed the development and refinement of various theories about what kind of 'stuff' human knowledge of language consists of, and technological advances now permit the development of increasingly sophisticated computational models implementing key assumptions of different theories from both rationalist and empiricist perspectives. The present special issue does not aim to present or discuss the arguments for and against the two epistemological stances or discuss evidence that supports either of them (cf. Bod, Hay, & Jannedy, 2003; Christiansen & Chater, 2008; Hauser, Chomsky, & Fitch, 2002; Oaksford & Chater, 2007; O'Donnell, Hauser, & Fitch, 2005). Rather, the research presented in this issue, which we label usage-based here, conceives of linguistic knowledge as being induced from experience. According to the strongest of such accounts, the acquisition and processing of language can be explained with reference to general cognitive mechanisms alone (rather than with reference to innate language-specific mechanisms). Defined in these terms, usage-based approaches encompass approaches referred to as experience-based, performance-based and/or emergentist approaches (Arnon & Snider, 2010; Bannard, Lieven, & Tomasello, 2009; Bannard & Matthews, 2008; Chater & Manning, 2006; Clark & Lappin, 2010; Gerken, Wilson, & Lewis, 2005; Gomez, 2002;
Mobile platform for treatment of stroke: A case study of tele-assistance.
Torres Zenteno, Arturo Henry; Fernández, Francisco; Palomino-García, Alfredo; Moniche, Francisco; Escudero, Irene; Jiménez-Hernández, M Dolores; Caballero, Auxiliadora; Escobar-Rodriguez, Germán; Parra, Carlos
2016-09-01
This article presents the technological solution of a tele-assistance process for stroke patients in the acute phase in the Seville metropolitan area. The main objective of this process is to reduce the time from symptom onset to treatment of acute-phase stroke patients by means of telemedicine, covering mobility between an intensive care unit ambulance and an expert center and activating the pre-hospital care phase. The technological platform covering the process has been defined following an interoperability model based on standards and with a service-oriented architecture focus. The messaging definition has been designed according to the reference model of CEN/ISO 13606, and message content follows the structure of archetypes. XDS-b (Cross-Enterprise Document Sharing-b) transaction messaging has been designed according to the Integrating the Healthcare Enterprise profile for archetype notifications and update enquiries. This research has been performed by a multidisciplinary group. The Virgen del Rocío University Hospital acts as the reference hospital and the Public Company for Healthcare provides the mobility setting. © The Author(s) 2015.
Impacts of weighting climate models for hydro-meteorological climate change studies
NASA Astrophysics Data System (ADS)
Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe; Caya, Daniel
2017-06-01
Weighting climate models is controversial in climate change impact studies using an ensemble of climate simulations from different climate models. In climate science, there is a general consensus that all climate models should be considered as having equal performance, or in other words that all projections are equiprobable. On the other hand, in the impacts and adaptation community, many believe that climate models should be weighted based on their ability to better represent various metrics over a reference period. The debate appears to be partly philosophical in nature, as few studies have investigated the impact of using weights in projecting future climate changes. The present study focuses on the impact of assigning weights to climate models for hydrological climate change studies. Five methods are used to determine weights for an ensemble of 28 global climate models (GCMs) adapted from the Coupled Model Intercomparison Project Phase 5 (CMIP5) database. Using a hydrological model, streamflows are computed over reference (1961-1990) and future (2061-2090) periods, with and without post-processing of climate model outputs. The impacts of using different weighting schemes for GCM simulations are then analyzed in terms of ensemble mean and uncertainty. The results show that weighting GCMs has a limited impact both on the projected future climate, in terms of precipitation and temperature changes, and on hydrology, in terms of nine different streamflow criteria. These results apply to both raw and post-processed GCM outputs, thus supporting the view that climate models should be considered equiprobable.
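The core of such a comparison can be illustrated with a few lines of code. The sketch below is not the study's code; the projected changes, skill scores and the skill-proportional weighting rule are invented placeholders used only to contrast an equal-weight ensemble mean with a performance-weighted one.

```python
import numpy as np

# Illustrative sketch: equal-weight vs performance-weighted ensemble statistics.
rng = np.random.default_rng(0)

n_models = 28                              # ensemble size used in the study (CMIP5 subset)
delta_p = rng.normal(5.0, 3.0, n_models)   # hypothetical precipitation changes (%)
skill = rng.uniform(0.2, 1.0, n_models)    # hypothetical skill scores over 1961-1990

# One possible weighting scheme: weights proportional to skill, normalized to 1.
weights = skill / skill.sum()

equal_mean = delta_p.mean()
weighted_mean = np.sum(weights * delta_p)

# Ensemble spread (uncertainty) with and without weights.
equal_std = delta_p.std(ddof=1)
weighted_std = np.sqrt(np.sum(weights * (delta_p - weighted_mean) ** 2))

print(f"equal-weight mean change:  {equal_mean:.2f} %  (spread {equal_std:.2f})")
print(f"performance-weighted mean: {weighted_mean:.2f} %  (spread {weighted_std:.2f})")
```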
NASA Astrophysics Data System (ADS)
Multsch, Sebastian; Kraft, Philipp; Frede, Hans-Georg; Breuer, Lutz
2010-05-01
Today, crop models are widely applied in the natural sciences, because plant growth interacts with and modifies the environment. Transport processes involve water and nutrient uptake from the saturated and unsaturated zone in the pedosphere. Turnover processes include the conversion of dead root biomass into organic matter. Transpiration and the interception of radiation influence the energy exchange between atmosphere and biosphere. But many more feedback mechanisms might be of interest, including erosion, soil compaction or trace gas exchanges. Most of the existing crop models have a closed structure and do not provide interfaces or code design elements for easy data transfer or process exchange with other models during runtime. Changes in the model structure, the inclusion of alternative process descriptions or the implementation of additional functionalities require a lot of coding. The same is true if models are being upscaled from the field to the landscape or catchment scale. We therefore conclude that future integrated model developments would benefit from a model structure that satisfies the following requirements: replaceability, expandability and independence. In addition to these requirements we also propose the interactivity of models, meaning that coupled models interact strongly and depend on each other, i.e. a model should be open to influences from other independent models and react to them directly. Hence, a model which consists of building blocks seems reasonable. The aim of this study is to present a new crop model type, the plant growth model framework PMF. The software concept follows an object-oriented approach, designed with the Unified Modeling Language (UML). The model is implemented in Python, a high-level object-oriented programming language. The integration of the models through a setup code enables data transfer at the computer memory level and direct exchange of information about changing boundary conditions. The crop model concept consists of two main elements: a plant model, which represents an abstract network of plant organs and processes, and a process library, which holds mathematical solutions for the growth processes. Growth processes were mainly taken from existing, well-known crop models such as SUCROS and CERES. The crop-specific properties of root architecture are described based on a maximum rooting depth and a vertical growth rate. The biomass distribution depends on an interactive allocation process across the soil layers with a daily time step. In order to show the performance and capabilities of PMF, the model is coupled with the Catchment Modeling Framework (CMF) and the simple nitrogen mineralization model DeComp. The main feature of the integrated model setup is the interaction between root growth, water uptake and nitrogen supply of the soil. We show a virtual case study at the hillslope scale and the spatial dependence of water and nitrogen stress on topographic position and seasonal development.
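The plant-model/process-library separation described above can be sketched in a few lines of Python. The classes and method names below are hypothetical illustrations of the design idea, not the actual PMF API, and the growth equations are deliberately trivial.

```python
# Minimal sketch of the plant-model / process-library idea described above.
# Class and method names are hypothetical illustrations, not the actual PMF API.
from dataclasses import dataclass


class LinearRootGrowth:
    """Process-library entry: vertical root growth limited by a maximum depth."""
    def __init__(self, max_depth_cm, rate_cm_per_day):
        self.max_depth_cm = max_depth_cm
        self.rate_cm_per_day = rate_cm_per_day

    def step(self, depth_cm):
        return min(depth_cm + self.rate_cm_per_day, self.max_depth_cm)


class RadiationDrivenBiomass:
    """Process-library entry: daily biomass gain from intercepted radiation."""
    def __init__(self, rue_g_per_mj):
        self.rue = rue_g_per_mj            # radiation-use efficiency

    def step(self, par_mj, water_stress):
        return self.rue * par_mj * water_stress


@dataclass
class Plant:
    """Abstract plant that delegates growth to exchangeable process objects."""
    roots: LinearRootGrowth
    shoot: RadiationDrivenBiomass
    root_depth_cm: float = 5.0
    biomass_g: float = 1.0

    def daily_update(self, par_mj, water_stress):
        self.root_depth_cm = self.roots.step(self.root_depth_cm)
        self.biomass_g += self.shoot.step(par_mj, water_stress)


plant = Plant(LinearRootGrowth(max_depth_cm=120, rate_cm_per_day=1.5),
              RadiationDrivenBiomass(rue_g_per_mj=2.8))
for day in range(120):
    plant.daily_update(par_mj=8.0, water_stress=0.9)   # forcing from a coupled soil model
print(round(plant.root_depth_cm, 1), round(plant.biomass_g, 1))
```

Swapping `LinearRootGrowth` for another process class with the same `step` interface changes the process description without touching the plant model, which is the point of the building-block design.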
Fiebach, Christian J; Schubotz, Ricarda I
2006-05-01
This paper proposes a domain-general model for the functional contribution of ventral premotor cortex (PMv) and adjacent Broca's area to perceptual, cognitive, and motor processing. We propose to understand this frontal region as a highly flexible sequence processor, with the PMv mapping sequential events onto stored structural templates and Broca's Area involved in more complex, hierarchical or hypersequential processing. This proposal is supported by reference to previous functional neuroimaging studies investigating abstract sequence processing and syntactic processing.
Research on key technologies of data processing in internet of things
NASA Astrophysics Data System (ADS)
Zhu, Yangqing; Liang, Peiying
2017-08-01
The data of the Internet of Things (IoT) is characterized by polymorphism, heterogeneity, large volume, and real-time processing requirements. Traditional structured, static batch processing methods do not meet the data processing requirements of the IoT. This paper studies a middleware that can integrate heterogeneous IoT data and convert different data formats into a unified format. A data processing model for the IoT based on the Storm stream-computing architecture is designed, and existing Internet security technology is integrated to build a security system for IoT data processing, providing a reference for the efficient transmission and processing of IoT data.
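A minimal sketch of the format-unification step such a middleware might perform is given below. The gateway names, record layouts and unified schema are assumptions for illustration and are not taken from the paper; the Storm topology itself is omitted.

```python
import json
from datetime import datetime

# Illustrative normalization step: map heterogeneous sensor records into one
# unified dictionary format before handing them to the stream-processing layer.

def normalize(raw, source):
    """Convert a raw record from a given (hypothetical) gateway into the unified format."""
    if source == "json_gateway":            # e.g. {"id": "t1", "temp": 21.4, "ts": 1502870400}
        msg = json.loads(raw)
        return {"sensor_id": msg["id"], "value": float(msg["temp"]),
                "unit": "C", "timestamp": msg["ts"]}
    if source == "csv_gateway":              # e.g. "t2,2017-08-16T08:00:00Z,70.1,F"
        sid, ts, val, unit = raw.split(",")
        t = datetime.fromisoformat(ts.replace("Z", "+00:00")).timestamp()
        value = (float(val) - 32) * 5 / 9 if unit == "F" else float(val)
        return {"sensor_id": sid, "value": value, "unit": "C", "timestamp": int(t)}
    raise ValueError(f"unknown source: {source}")

print(normalize('{"id": "t1", "temp": 21.4, "ts": 1502870400}', "json_gateway"))
print(normalize("t2,2017-08-16T08:00:00Z,70.1,F", "csv_gateway"))
```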
Vreck, D; Gernaey, K V; Rosen, C; Jeppsson, U
2006-01-01
In this paper, implementation of the Benchmark Simulation Model No 2 (BSM2) within Matlab-Simulink is presented. The BSM2 is developed for plant-wide WWTP control strategy evaluation on a long-term basis. It consists of a pre-treatment process, an activated sludge process and sludge treatment processes. Extended evaluation criteria are proposed for plant-wide control strategy assessment. Default open-loop and closed-loop strategies are also proposed to be used as references with which to compare other control strategies. Simulations indicate that the BSM2 is an appropriate tool for plant-wide control strategy evaluation.
On a model of the processes of maintaining a technological area by a manipulator
NASA Astrophysics Data System (ADS)
Ghukasyan, A. A.; Ordyan, A. Ya
2018-04-01
The research refers to the results of mathematical modeling of the process of maintaining a technological area which consists of unstable or fixed objects (targets) and a controlled multi-link manipulator [1–9]. It is assumed that, in the maintenance process, the dynamic characteristics and the phase vector of the manipulator state can change at certain finite times depending on the mass of the cargo or instrument [10, 11]. Some controllability problems are investigated in the case where the manipulator motion on each maintenance interval is described by linear differential equations with constant coefficients and the motions of the objects are given.
Quantitative Modeling of Earth Surface Processes
NASA Astrophysics Data System (ADS)
Pelletier, Jon D.
This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.
Dynamic reduction of dimensions of a document vector in a document search and retrieval system
Jiao, Yu; Potok, Thomas E.
2011-05-03
The method and system of the invention involves processing each new document (20) coming into the system into a document vector (16), and creating a document vector with reduced dimensionality (17) for comparison with the data model (15) without recomputing the data model (15). These operations are carried out by a first computer (11) while a second computer (12) updates the data model (18), which can comprise an initial large group of documents (19) and is premised on computing an initial data model (13, 14, 15) to provide a reference point for determining document vectors from documents processed from the data stream (20).
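A generic way to achieve this behaviour is to project each incoming document vector onto a basis computed once from the initial corpus, so that new documents can be reduced and compared without refitting the model. The sketch below uses a plain truncated-SVD (LSA-style) basis as a stand-in; it illustrates the idea only and is not the patented procedure.

```python
import numpy as np

# Sketch: reduce new document vectors with a basis computed once from an
# initial corpus, without recomputing the data model for each new document.
rng = np.random.default_rng(1)

corpus = rng.random((500, 2000))              # initial corpus: 500 docs x 2000 terms
_, _, vt = np.linalg.svd(corpus, full_matrices=False)
basis = vt[:50]                               # fixed "data model": top 50 term directions

def reduce(doc_vector, basis):
    """Return the reduced-dimensionality representation of one document vector."""
    return basis @ doc_vector                 # 2000-d -> 50-d

new_doc = rng.random(2000)                    # a new document arriving from the stream
reduced = reduce(new_doc, basis)
print(reduced.shape)                          # shape (50,), comparable against the model
```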
Structural model constructing for optical handwritten character recognition
NASA Astrophysics Data System (ADS)
Khaustov, P. A.; Spitsyn, V. G.; Maksimova, E. I.
2017-02-01
The article is devoted to the development of algorithms for optical handwritten character recognition based on structural model construction. The main advantage of these algorithms is that they require only a small number of reference images. A one-pass approach to thinning of the binary character representation has been proposed. This approach is based on the joint use of the Zhang-Suen and Wu-Tsai algorithms. The effectiveness of the proposed approach is confirmed by the results of the experiments. The article includes a detailed description of the steps of the structural model construction algorithm. The proposed algorithm has been implemented in a character processing application and validated on the MNIST handwritten characters database. Algorithms that can operate with a limited number of reference images were used for comparison.
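For reference, a standard (iterative, two-subpass) Zhang-Suen thinning routine is sketched below; the paper's one-pass scheme additionally draws on the Wu-Tsai algorithm, which is not reproduced here.

```python
import numpy as np

def zhang_suen_thin(img):
    """Standard Zhang-Suen thinning of a binary image (1 = foreground).
    Shown for reference only; the paper's one-pass scheme also uses Wu-Tsai."""
    img = img.copy().astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    # 8-neighbours P2..P9, clockwise starting above the pixel
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                    b = sum(p)                                   # non-zero neighbours
                    a = sum(p[i] == 0 and p[(i+1) % 8] == 1 for i in range(8))
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    if step == 0 and p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0:
                        to_delete.append((y, x))
                    if step == 1 and p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0:
                        to_delete.append((y, x))
            for y, x in to_delete:
                img[y, x] = 0
            changed = changed or bool(to_delete)
    return img

img = np.zeros((7, 7), dtype=np.uint8)
img[2:5, 1:6] = 1                                  # a thick horizontal bar
print(zhang_suen_thin(img))                        # reduced to a one-pixel skeleton
```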
Stochastic Online Learning in Dynamic Networks under Unknown Models
2016-08-02
Repeated Game with Incomplete Information, IEEE International Conference on Acoustics, Speech, and Signal Processing. 20-MAR-16, Shanghai, China...in a game theoretic framework for the application of multi-seller dynamic pricing with unknown demand models. We formulated the problem as an...infinitely repeated game with incomplete information and developed a dynamic pricing strategy referred to as Competitive and Cooperative Demand Learning
Prospective Elementary Teachers' Perceptions of the Processes of Modeling: A Case Study
ERIC Educational Resources Information Center
Fazio, Claudio; Di Paola, Benedetto; Guastella, Ivan
2012-01-01
In this paper we discuss a study on the approaches to modeling of students of the 4-year elementary school teacher program at the University of Palermo, Italy. The answers to a specially designed questionnaire are analyzed on the basis of an "a priori" analysis made using a general scheme of reference on the epistemology of mathematics…
Towards inverse modeling of turbidity currents: The inverse lock-exchange problem
NASA Astrophysics Data System (ADS)
Lesshafft, Lutz; Meiburg, Eckart; Kneller, Ben; Marsden, Alison
2011-04-01
A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics methods to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for the purpose of turbidite modeling so far is hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, as in practice may be obtained from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and local grain size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current. The optimization approach is demonstrated to accurately retrieve the initial conditions used in a reference calculation.
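The match-against-reference loop can be illustrated with a cheap stand-in forward model. In the sketch below, an analytic deposit-thickness function replaces the direct numerical simulation and SciPy's Nelder-Mead replaces the surrogate-based optimizer; the parameterization and well locations are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative inversion loop: adjust flow parameters until the predicted deposit
# matches localized reference data, as the study does with DNS and a surrogate-
# based derivative-free optimizer.

x_wells = np.array([2.0, 5.0, 9.0])               # locations of "well log" reference data

def forward_model(params, x):
    """Hypothetical deposit-thickness profile for initial height h0 and concentration c0."""
    h0, c0 = params
    return c0 * h0 * np.exp(-x / (4.0 * h0))

true_params = np.array([1.2, 0.8])
reference = forward_model(true_params, x_wells)   # localized reference deposit data

def misfit(params):
    return np.sum((forward_model(params, x_wells) - reference) ** 2)

result = minimize(misfit, x0=[0.5, 0.5], method="Nelder-Mead")
print(result.x)        # converges towards the parameters used in the reference run
```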
The tropopause inversion layer in baroclinic life-cycle experiments: the role of diabatic processes
NASA Astrophysics Data System (ADS)
Kunkel, D.; Hoor, P.; Wirth, V.
2016-01-01
Recent studies on the formation of a quasi-permanent layer of enhanced static stability above the thermal tropopause revealed the contributions of dynamical and radiative processes. Dry dynamics leads to the evolution of a tropopause inversion layer (TIL), which is, however, too weak compared to observations and thus diabatic contributions are required. In this study we aim to assess the importance of diabatic processes in the understanding of TIL formation at midlatitudes. The non-hydrostatic model COSMO (COnsortium for Small-scale MOdelling) is applied in an idealized midlatitude channel configuration to simulate baroclinic life cycles. The effect of individual diabatic processes related to humidity, radiation, and turbulence is studied first to estimate the contribution of each of these processes to the TIL formation in addition to dry dynamics. In a second step these processes are stepwise included in the model to increase the complexity and finally estimate the relative importance of each process. The results suggest that including turbulence leads to a weaker TIL than in a dry reference simulation. In contrast, the TIL evolves stronger when radiation is included but the temporal evolution is still comparable to the reference. Using various cloud schemes in the model shows that latent heat release and consecutive increased vertical motions foster an earlier and stronger appearance of the TIL than in all other life cycles. Furthermore, updrafts moisten the upper troposphere and as such increase the radiative effect from water vapor. Particularly, this process becomes more relevant for maintaining the TIL during later stages of the life cycles. Increased convergence of the vertical wind induced by updrafts and by propagating inertia-gravity waves, which potentially dissipate, further contributes to the enhanced stability of the lower stratosphere. Finally, radiative feedback of ice clouds reaching up to the tropopause is identified to potentially further affect the strength of the TIL in the region of the clouds.
NASA Astrophysics Data System (ADS)
Hutton, J. J.; Gopaul, N.; Zhang, X.; Wang, J.; Menon, V.; Rieck, D.; Kipka, A.; Pastor, F.
2016-06-01
For almost two decades mobile mapping systems have done their georeferencing using Global Navigation Satellite Systems (GNSS) to measure position and inertial sensors to measure orientation. In order to achieve cm level position accuracy, a technique referred to as post-processed carrier phase differential GNSS (DGNSS) is used. For this technique to be effective the maximum distance to a single Reference Station should be no more than 20 km, and when using a network of Reference Stations the distance to the nearest station should be no more than about 70 km. This need to set up local Reference Stations limits productivity and increases costs, especially when mapping large areas or long linear features such as roads or pipelines. An alternative technique to DGNSS for high-accuracy positioning from GNSS is the so-called Precise Point Positioning or PPP method. In this case, instead of differencing the rover observables with the Reference Station observables to cancel out common errors, an advanced model for every aspect of the GNSS error chain is developed and parameterized to within an accuracy of a few cm. The Trimble Centerpoint RTX positioning solution combines the methodology of PPP with advanced ambiguity resolution technology to produce cm level accuracies without the need for local reference stations. It achieves this through a global deployment of highly redundant monitoring stations that are connected through the internet and are used to determine the precise satellite data with maximum accuracy, robustness, continuity and reliability, along with advanced algorithms and receiver and antenna calibrations. This paper presents a new post-processed realization of the Trimble Centerpoint RTX technology integrated into the Applanix POSPac MMS GNSS-Aided Inertial software for mobile mapping. Real-world results from over 100 airborne flights evaluated against a DGNSS network reference are presented, which show that the post-processed Centerpoint RTX solution agrees with the DGNSS solution to better than 2.9 cm RMSE horizontal and 5.5 cm RMSE vertical. Such accuracies are sufficient to meet the requirements for a majority of airborne mapping applications.
NASA Astrophysics Data System (ADS)
Moreaux, Guilhem; Lemoine, Frank G.; Capdeville, Hugues; Kuzin, Sergey; Otten, Michiel; Štěpánek, Petr; Willis, Pascal; Ferrage, Pascale
2016-12-01
In preparation for the 2014 realization of the International Terrestrial Reference Frame (ITRF2014), the International DORIS Service delivered to the International Earth Rotation and Reference Systems Service a set of 1140 weekly solution files including station coordinates and Earth orientation parameters, covering the time period from 1993.0 to 2015.0. The data come from eleven DORIS satellites: TOPEX/Poseidon, SPOT2, SPOT3, SPOT4, SPOT5, Envisat, Jason-1, Jason-2, Cryosat-2, Saral and HY-2A. In their processing, the six analysis centers which contributed to the DORIS combined solution used the latest time variable gravity models and estimated DORIS ground beacon frequency variations. Furthermore, all the analysis centers but one included phase center variations for ground antennas in their processing. The main objective of this study is to present the combination process and to analyze the impact of the new modeling on the performance of the new combined solution. Comparisons with the IDS contribution to ITRF2008 show that (i) the application of the DORIS ground phase center variations in the data processing shifts the combined scale upward by nearly 7-11 mm and (ii) thanks to the estimation of DORIS ground beacon frequency variations, the new combined solution no longer shows any scale discontinuity in early 2002 and does not present unexplained vertical discontinuities in any station position time series. However, analysis of the new series with respect to ITRF2008 exhibits a scale increase in late 2011 which is not yet explained. A new DORIS Terrestrial Reference Frame was computed to evaluate the intrinsic quality of the new combined solution. That evaluation shows that the addition of data from the new missions equipped with the latest generation of DORIS receiver (Jason-2, Cryosat-2, HY-2A, Saral) results in an internal position consistency of 10 mm or better after mid-2008.
NASA Astrophysics Data System (ADS)
Hieu, Nguyen Huu
2017-09-01
Pervaporation is a potential process for the final step of ethanol biofuel production. In this study, a mathematical model was developed based on the resistance-in-series model, and a simulation was carried out using the specialized simulation software COMSOL Multiphysics to describe a tubular pervaporation module with membranes for the dehydration of an ethanol solution. The membrane permeances, operating conditions, and feed conditions used in the simulation were taken from experimental data previously reported in the literature. Accordingly, the simulated temperature and density profiles of pure water and of the ethanol-water mixture were validated against existing published data.
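The resistance-in-series idea amounts to adding a boundary-layer resistance and a membrane resistance and dividing the driving force by their sum. The sketch below shows this for a single membrane segment; all coefficients are illustrative placeholders, not the values used in the paper.

```python
# Minimal resistance-in-series flux estimate for one membrane segment.
# All numbers are illustrative placeholders, not the values used in the paper.

k_bl = 2.0e-4        # liquid boundary-layer mass-transfer coefficient, m/s
perm_m = 5.0e-7      # membrane permeance for water, mol/(m^2 s Pa)
H = 8.0e3            # linearized partition (Henry-type) coefficient, Pa m^3/mol

# With the driving force expressed as a partial-pressure difference, the
# boundary-layer resistance is H/k_bl and the membrane resistance is 1/perm_m.
r_boundary = H / k_bl          # Pa m^2 s / mol
r_membrane = 1.0 / perm_m      # Pa m^2 s / mol
r_total = r_boundary + r_membrane

dp = 3.0e3                     # water partial-pressure driving force, Pa
flux = dp / r_total            # mol/(m^2 s)
print(f"overall resistance {r_total:.3e}, water flux {flux:.3e} mol m^-2 s^-1")
```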
NASA Astrophysics Data System (ADS)
Bondarenko, Y.
I. Goal and Scope. Human birth rate decrease, death-rate growth and an increased risk of mutagenic deviations take place in geopathogenic and anthropogenic hazard zones. Such zones create unfavourable conditions for the reproductive process of future generations. These negative trends should be considered as a protective answer of the complex biosocial system to the appearance of natural and anthropogenic risk factors that are unfavourable for human health. The major goals of the scientific evaluation and reduction of the risk of hazardous processes on the territory of Dnipropetrovsk, along with the creation of the multi-factor predictive Spirit-Energy-Information Space "SEIS" & GIS model of ecological, genetic and population health risk in connection with dangerous bio-geodynamic processes, were: multi-factor modeling and correlation of natural and anthropogenic environmental changes and those of human health; determination of indicators that show the risk of destructive structures appearing at different levels of organization and functioning of the city ecosystem (geophysical and geochemical fields, soil, hydrosphere, atmosphere, biosphere); and analysis of regularities of the interactions of natural, anthropogenic and biological rhythms. II. Methods. Long spatio-temporal studies (Y. Bondarenko, 1996, 2000) have shown that ecological, genetic and epidemiological processes are connected with the development of dangerous bio-geophysical and bio-geodynamic processes. Mathematical processing of space photos and of lithogeochemical and geophysical maps with the JEIS and ERDAS computer systems was executed at the first stage of the formation of the multi-layer geoinformation model "Dnipropetrovsk ARC View GIS". The multi-factor nonlinear correlations between solar activity and cosmic ray variations, geophysical, geodynamic, geochemical, atmospheric, technological, biological and socio-economic processes, and oncologic case rate frequency and general and primary population sickness cases in Dnipropetrovsk City (1.2 million persons) are described by the multi-factor predictive SEIS & GIS model of geopathogenic zones that determines the human health risk and hazards. III. Results and Conclusions. We have created the SEIS system and a multi-factor predictive SEIS model for the analysis of phase-metric spatio-temporal nonlinear correlations and variations of rhythms of human health; ecological, genetic and epidemiological risks; and demographic, socio-economic, bio-geophysical and bio-geodynamic processes in geopathogenic hazard zones. Cosmophotomaps "CPM" of vegetation index and of anthropogenic-landscape and landscape-geophysical human health risk of Dnipropetrovsk City present synthesis-based elements of the multi-layer GIS, which include SPOT multispectral images, maps of different geophysical, geochemical, anthropogenic and cytogenic risk factors, and maps of integral oncologic case rate frequency and general and primary population sickness cases for administrative districts. Results of multi-layer spatio-temporal correlation of geophysical field parameters and variations of population sickness rate rhythms have enabled us to state the grounds for, and to develop, a medico-biological and bio-geodynamic classification of geopathogenic zones. The bio-geodynamic model has served to define contours of anthropogenic-landscape and landscape-geophysical human health risk in Dnipropetrovsk City.
Biorhythmic variations give a foundation for understanding the physiological mechanisms of an organism's adaptation to extreme helio-geophysical and bio-geodynamic environmental conditions, which are dictated by changes in the Multi-factor Correlation Stress Field "MCSF" with deformation of the 5D SEIS. Interaction between organism and environment results in continuous superpositioning of external (exogenic) Nuclear-Molecular-Crystallic "NMC" MCSF rhythms on internal (endogenic) Nuclear-Molecular-Cellular "NMCl" MCSF rhythms. Their resonance wave (energy-information) integration and disintegration are responsible for the structural and functional state of different physiological systems. Herewith, a complex restructurization of defense functions blocks the adaptation process and may turn out to be the primary reason for phase shifting, hindering of processes and biorhythms, and the appearance of different diseases. The interaction of biorhythms with natural and anthropogenic rhythms specifies the peculiar features of the environmental adaptation of living species. Such interaction results in the correlation of seasonal rhythms in variations of thermo-baro-geodynamic "TBG" parameters of ambient air with toxic concentrations and human health risk in Dnipropetrovsk City. Bio-geodynamic analysis of medical and demographic situations has provided for a search of spatio-temporal correlations between rhythms of general and primary population sickness cases and oncologic case rate frequency, other medico-demographic rhythms, natural processes (helio-geophysical, thermodynamic, geodynamic) and anthropogenic processes (industrial and household waste disposal, toxic emissions and their concentration in ambient air). The year 1986, the year of minimum helio-geophysical activity "2G1dG1" and maximum anthropogenic processes associated with changes in sickness and death rates of the population of Earth, was a point of synchronization. Taking into account the quantum character of SEIS rhythms, five reference levels of desynchronized helio-geophysical and bio-geodynamic processes affecting the population sickness rate have been specified within bio-geodynamic models. The first reference level of SEIS desynchronization includes rhythms with a period of 22.5 years: ... 1958.2; 1980.7; 2003.2; .... The second reference level of SEIS desynchronization includes rhythms with a period of 11.25 years: ... 1980.7; 1992; 2003.2; .... The third reference level covers 5.625-year periodic rhythms: ... 1980.7; 1986.3; 1992; 1997.6; 2003.2; .... The fourth quantum reference level includes rhythms with a period of 2.8125 years: ... 1980.7; 1983.5; 1986.3; 1989.1; 1992; 1994.8; 1997.6; 2000.4; 2003.2; .... Rhythms with a 1.40625-year period fall into the fifth reference level of SEIS desynchronization: ... 1980.7; 1982.1; 1983.5; 1984.9; 1986.3; 1987.7; 1989.1; 1990.5; 1992; 1993.3; 1994.8; 1996.2; 1997.6; 1999; 2000.4; 2001.8; 2003.2; .... Analysis of the changing medical and demographic situation in Ukraine (1981-1992) and in Dnipropetrovsk (1988-1995) has allowed us to back up the theoretical model of various-level rhythm quanta, with non-linear regularities due to phase-metric spatio-temporal deformation being specified. Application of the new technologies of Risk Analysis, Synthesis and SEIS Modeling to the choice of a burial place for dangerous radioactive wastes in the zone of the Chernobyl nuclear disaster (Shestopalov V., Bondarenko Y. et al., 1998) has shown their very high efficiency in comparison with GIS analysis. IV. Recommendations and Outlook.
In order to draw a conclusion regarding bio-geodynamic modeling of the spatio-temporal structure of areas where a common childhood sickness rate exists, it is necessary to mention that the only thing that can favour exact prediction of where and when important catastrophes and epidemics will take place is correct and complex bio-geodynamic modeling. The imperfection of present GIS is the result of the lack of interactive facilities for multi-factor modeling of nonlinear natural and anthropogenic processes. Equation coefficients calculated for some areas are often irrelevant when applied to others. In this connection a number of problems arise concerning the practical application and reliability of GIS models that are used to carry out efficient ecological monitoring. References: Bondarenko Y., 1997, Drawing up Cosmophotomaps and Multi-factor Forecasting of Hazard of Development of Dangerous Geodynamic Processes in Dnipropetrovsk, The Technically-Natural Problems of Failures and Catastrophes in Connection with Development of Dangerous Geological Processes, Kiev, Ukraine, 1997. Bondarenko Y., 1997, The Methodology of a State the Value of Quality of the Ground and the House Level them Ecology-Genetic-Toxic of the Human Health Risk Based on a Multi-layer Cartographical Model, Experience of Application of GIS Technologies for Creating Cadastral Systems, Yalta, Ukraine, 1997, p. 39-40. Shestopalov V., Bondarenko Y., Zayonts I., Rudenko Y., Bohuslavsky A., 1998, Complexation of Structural-Geodynamical and Hydrogeological Methods of Studying Areas to Reveal Geological Structural Perspectives for Deep Isolation of Radioactive Wastes, Field Testing and Associated Modeling of Potential High-Level Nuclear Waste Geologic Disposal Sites, Berkeley, USA, 1998, p. 81-82.
Using a logical information model-driven design process in healthcare.
Cheong, Yu Chye; Bird, Linda; Tun, Nwe Ni; Brooks, Colleen
2011-01-01
A hybrid standards-based approach has been adopted in Singapore to develop a Logical Information Model (LIM) for healthcare information exchange. The Singapore LIM uses a combination of international standards, including ISO13606-1 (a reference model for electronic health record communication), ISO21090 (healthcare datatypes), SNOMED CT (healthcare terminology) and HL7 v2 (healthcare messaging). This logic-based design approach also incorporates mechanisms for achieving bi-directional semantic interoperability.
Biological Engineering: A New Discipline for the Next Century.
ERIC Educational Resources Information Center
Tao, Bernard Y.
1993-01-01
Reviews the issues driving the need for a biological engineering discipline and summarizes current curricula at several universities. The Purdue Biochemical and Food Processing Engineering program is presented as a model for the implementation of curriculum objectives. (23 references) (Author/MCO)
Non-Sexist Education for Survival.
ERIC Educational Resources Information Center
National Education Association, Washington, DC.
This collection of 11 articles focuses on sexism in education. "The Socialization Process" refers to schools which, intentionally or not, reinforce cultural and sexual stereotypes, and maintains that schools should provide a model of nonstereotypic education. "Sex Role Stereotypes" discusses certain educational conventions which still stereotype…
NASA Astrophysics Data System (ADS)
Holzmann, Hubert; Massmann, Carolina
2015-04-01
A wide variety of hydrological model types has been developed during the past decades. Most of them use a fixed design to describe the variable hydrological processes, assumed to be representative for the whole range of spatial and temporal scales. This assumption is questionable, as it is evident that runoff formation is driven by dominant processes which can vary among basins. Furthermore, model application and the interpretation of results are limited by the data available to identify the particular sub-processes, since most models are calibrated and validated only with discharge data. Therefore it can be hypothesized that simpler model designs, focusing only on the dominant processes, can achieve comparable results with the benefit of fewer parameters. In the current contribution a modular model concept is introduced, which allows hydrological sub-processes to be included or excluded depending on catchment characteristics and data availability. Key elements of the process modules refer to (1) storage effects (interception, soil), (2) transfer processes (routing), (3) threshold processes (percolation, saturation overland flow) and (4) split processes (rainfall excess). Based on hydro-meteorological observations in an experimental catchment in the Slovak region of the Carpathian mountains, a comparison of several model realizations with different degrees of complexity is discussed. A special focus is given to model parameter sensitivity estimated by a Markov chain Monte Carlo approach. Furthermore, the identification of dominant processes by means of Sobol's method is introduced. It could be shown that a flexible model design - and even the simplest concept - can reach performance comparable to the standard model type (HBV-type). The main benefit of the modular concept is the individual adaptation of the model structure with respect to data and process availability and the option for parsimonious model design.
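The modular idea can be illustrated by assembling a toy rainfall-runoff model from a handful of process components of the four kinds listed above. The functions and parameter values below are simple illustrations, not the authors' implementation.

```python
# Sketch of the modular idea: assemble a rainfall-runoff model only from the
# process components needed for a given catchment.

def interception(rain, capacity=2.0):
    """Split process: part of rainfall never reaches the soil."""
    intercepted = min(rain, capacity)
    return rain - intercepted

def soil_storage(state, infiltration, field_capacity=60.0):
    """Storage + threshold process: excess above field capacity percolates."""
    state = state + infiltration
    excess = max(0.0, state - field_capacity)
    return state - excess, excess

def linear_reservoir(storage, inflow, k=0.2):
    """Transfer (routing) process: outflow proportional to storage."""
    storage += inflow
    outflow = k * storage
    return storage - outflow, outflow

soil, reservoir = 50.0, 10.0
for rain in [0.0, 12.0, 30.0, 5.0, 0.0]:          # daily rainfall in mm
    effective = interception(rain)
    soil, percolation = soil_storage(soil, effective)
    reservoir, discharge = linear_reservoir(reservoir, percolation)
    print(f"rain {rain:5.1f}  discharge {discharge:5.2f} mm/day")
```

Dropping or exchanging one of the functions changes the model structure without touching the rest, which is the adaptation the modular concept is meant to allow.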
On the consistency of tomographically imaged lower mantle slabs
NASA Astrophysics Data System (ADS)
Shephard, Grace E.; Matthews, Kara J.; Hosseini, Kasra; Domeier, Mathew
2017-04-01
Over the last few decades numerous seismic tomography models have been published, each constructed with choices of data input, parameterization and reference model. The broader geoscience community is increasingly utilizing these models, or a selection thereof, to interpret Earth's mantle structure and processes. It follows that seismically identified remnants of subducted slabs have been used to validate, test or refine relative plate motions, absolute plate reference frames, and mantle sinking rates. With an increasing number of models to include or exclude, the question arises: how robust is a given positive seismic anomaly, inferred to be a slab, across a given suite of tomography models? Here we generate a series of "vote maps" for the lower mantle by comparing 14 seismic tomography models, including 7 S-wave and 7 P-wave models. Considerations include the retention or removal of the mean, the use of a consistent or variable reference model, the statistical value which defines the slab "contour", and the effect of depth interpolation. Preliminary results will be presented that address the depth, location and degree of agreement between seismic tomography models, both for all 14 combined and between the P-wave and S-wave models. The analysis also permits a broader discussion of slab volumes and subduction flux. Whilst the location and geometry of some slabs match documented regions of long-lived subduction, other features do not, illustrating the importance of a robust approach to slab identification.
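The counting step behind a vote map is straightforward: at every grid node, tally how many models exceed a chosen positive-anomaly contour. The sketch below uses synthetic anomaly fields and a per-model percentile threshold purely for illustration.

```python
import numpy as np

# Synthetic illustration of the "vote map" idea: at every grid node, count how
# many tomography models show a positive velocity anomaly above a chosen contour.
rng = np.random.default_rng(3)

n_models, n_lat, n_lon = 14, 90, 180
models = rng.normal(0.0, 1.0, (n_models, n_lat, n_lon))   # stand-in dVs/dVp fields (%)

# Each model can use its own "slab" contour, here a per-model percentile.
thresholds = np.percentile(models.reshape(n_models, -1), 80, axis=1)
votes = (models > thresholds[:, None, None]).sum(axis=0)   # 0..14 votes per node

print(votes.shape, votes.max())        # nodes where all 14 models agree score 14
```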
Nonstandard working schedules and health: the systematic search for a comprehensive model.
Merkus, Suzanne L; Holte, Kari Anne; Huysmans, Maaike A; van Mechelen, Willem; van der Beek, Allard J
2015-10-23
Theoretical models on shift work fall short of describing relevant health-related pathways associated with the broader concept of nonstandard working schedules. Shift work models neither combine relevant working time characteristics applicable to nonstandard schedules nor include the role of rest periods and recovery in the development of health complaints. Therefore, this paper aimed to develop a comprehensive model on nonstandard working schedules to address these shortcomings. A literature review was conducted using a systematic search and selection process. Two searches were performed: one associating the working time characteristics time-of-day and working time duration with health, and one associating recovery after work with health. Data extracted from the models were used to develop a comprehensive model on nonstandard working schedules and health. For models on the working time characteristics, the search strategy yielded 3044 references, of which 26 met the inclusion criteria and contained 22 distinct models. For models on recovery after work, the search strategy yielded 896 references, of which seven met the inclusion criteria and contained seven distinct models. Of the models on the working time characteristics, three combined time-of-day with working time duration, 18 were on time-of-day (i.e. shift work), and one was on working time duration. The model developed in the paper takes a comprehensive approach to working hours and other work-related risk factors and proposes that they should be balanced by positive non-work factors to maintain health. Physiological processes leading to health complaints are circadian disruption, sleep deprivation, and activation, which should be counterbalanced by (re-)entrainment, restorative sleep, and recovery, respectively, to maintain health. A comprehensive model on nonstandard working schedules and health was developed. The model proposes that work and non-work, as well as their associated physiological processes, need to be balanced to maintain good health. The model gives researchers a useful overview of the various risk factors and pathways associated with health that should be considered when studying any form of nonstandard working schedule.
A Distributed Simulation Software System for Multi-Spacecraft Missions
NASA Technical Reports Server (NTRS)
Burns, Richard; Davis, George; Cary, Everett
2003-01-01
The paper will provide an overview of the web-based distributed simulation software system developed for end-to-end, multi-spacecraft mission design, analysis, and test at the NASA Goddard Space Flight Center (GSFC). This software system was developed for an internal research and development (IR&D) activity at GSFC called the Distributed Space Systems (DSS) Distributed Synthesis Environment (DSE). The long-term goal of the DSS-DSE is to integrate existing GSFC stand-alone test beds, models, and simulation systems to create a "hands on", end-to-end simulation environment for mission design, trade studies and simulations. The short-term goal of the DSE was therefore to develop the system architecture, and then to prototype the core software simulation capability based on a distributed computing approach, with demonstrations of some key capabilities by the end of Fiscal Year 2002 (FY02). To achieve the DSS-DSE IR&D objective, the team adopted a reference model and mission upon which FY02 capabilities were developed. The software was prototyped according to the reference model, and demonstrations were conducted for the reference mission to validate interfaces, concepts, etc. The reference model, illustrated in Fig. 1, included both space and ground elements, with functional capabilities such as spacecraft dynamics and control, science data collection, space-to-space and space-to-ground communications, mission operations, science operations, and data processing, archival and distribution addressed.
Fast auto-focus scheme based on optical defocus fitting model
NASA Astrophysics Data System (ADS)
Wang, Yeru; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting; Cen, Min
2018-04-01
An optical defocus fitting model-based (ODFM) auto-focus scheme is proposed. Considering the basic optical defocus principle, the optical defocus fitting model is derived to approximate the potential-focus position. By this accurate modelling, the proposed auto-focus scheme can make the stepping motor approach the focal plane more accurately and rapidly. Two fitting positions are first determined for an arbitrary initial stepping motor position. Three images (initial image and two fitting images) at these positions are then collected to estimate the potential-focus position based on the proposed ODFM method. Around the estimated potential-focus position, two reference images are recorded. The auto-focus procedure is then completed by processing these two reference images and the potential-focus image to confirm the in-focus position using a contrast based method. Experimental results prove that the proposed scheme can complete auto-focus within only 5 to 7 steps with good performance even under low-light condition.
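The estimation step can be illustrated by fitting a model through the focus measures of the initial image and the two fitting images and taking its extremum as the potential-focus position. In the sketch below a simple quadratic in the reciprocal focus measure stands in for the paper's optical defocus fitting model, and the sharpness curve is synthetic.

```python
import numpy as np

# Minimal sketch: estimate the potential-focus motor position from three images.
# A quadratic fit to 1/(focus measure) stands in for the paper's ODFM model.

def focus_measure(position, true_focus=52.0, width=18.0):
    """Synthetic sharpness curve (in practice: gradient energy of the captured image)."""
    return 1.0 / (1.0 + ((position - true_focus) / width) ** 2)

initial = 20.0
fit_positions = np.array([initial, initial + 15.0, initial + 30.0])   # two extra fitting stops
measures = np.array([focus_measure(p) for p in fit_positions])

# Fit 1/measure = a2*x^2 + a1*x + a0, then take the vertex as the focus estimate.
a2, a1, a0 = np.polyfit(fit_positions, 1.0 / measures, 2)
estimated_focus = -a1 / (2.0 * a2)

print(f"estimated potential-focus position: {estimated_focus:.1f} (true 52.0)")
```

In the actual scheme, the motor would then step to this estimated position and a contrast-based check on two nearby reference images would confirm the in-focus position.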
Space shuttle propulsion estimation development verification
NASA Technical Reports Server (NTRS)
Rogers, Robert M.
1989-01-01
The application of extended Kalman filtering to estimating Space Shuttle propulsion performance, i.e., specific impulse, from flight data in a post-flight processing computer program is detailed. The flight data used include inertial platform acceleration, SRB head pressure, SSME chamber pressure and flow rates, and ground based radar tracking data. The key feature in this application is the model used for the SRBs, which is a nominal or reference quasi-static internal ballistics model normalized to the propellant burn depth. Dynamic states of mass overboard and propellant burn depth are included in the filter model to account for real-time deviations from the reference model used. Aerodynamic, plume, wind and main engine uncertainties are also included for an integrated system model. Assuming uncertainty within the propulsion system model and attempting to estimate its deviations represents a new application of parameter estimation for rocket-powered vehicles. Illustrations from the results of applying this estimation approach to several missions show good quality propulsion estimates.
Ko, Jordon; Su, Wen-Jun; Chien, I-Lung; Chang, Der-Ming; Chou, Sheng-Hsin; Zhan, Rui-Yu
2010-02-01
Rice straw, an agricultural waste from Asia's staple crop, was collected as feedstock to convert cellulose into ethanol through enzymatic hydrolysis followed by fermentation. When the two process steps are performed sequentially, the process is referred to as separate hydrolysis and fermentation (SHF). The steps can also be performed simultaneously, i.e., simultaneous saccharification and fermentation (SSF). In this research, the kinetic model parameters of the cellulose saccharification process step using rice straw as feedstock are obtained from experimental data of cellulase hydrolysis. Furthermore, this model can be combined with a fermentation model valid at high glucose and ethanol concentrations to form an SSF model. The fermentation model is based on a cybernetic approach from the literature, extended to include both glucose and ethanol inhibition terms so as to better approximate actual plant behavior. Dynamic effects of the operating variables in the enzymatic hydrolysis and fermentation models are analyzed. The operation of the SSF process is compared to the SHF process. It is shown that the SSF process is better in reducing the processing time when the product (ethanol) concentration is high. The means to improve the productivity of the overall SSF process by properly using aeration during batch operation are also discussed.
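An illustrative version of such a fermentation model with substrate and product inhibition can be written as a small ODE system. The rate expressions and parameter values below are placeholders, not the fitted kinetics from this work, and the saccharification step is reduced to a constant glucose release rate.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative Monod-type fermentation kinetics with glucose (substrate) and
# ethanol (product) inhibition; all parameters are placeholders, not the
# fitted values from the paper.

mu_max, Ks, Ki, P_max = 0.4, 2.0, 150.0, 90.0     # 1/h, g/L, g/L, g/L
Yxs, Yps = 0.1, 0.45                              # biomass and ethanol yields, g/g
r_sacch = 1.5                                     # glucose released by hydrolysis, g/(L h)

def rhs(t, y):
    X, S, P = y                                   # biomass, glucose, ethanol (g/L)
    mu = mu_max * S / (Ks + S + S**2 / Ki) * max(0.0, 1.0 - P / P_max)
    dX = mu * X
    dS = r_sacch - dX / Yxs
    dP = (Yps / Yxs) * dX
    return [dX, dS, dP]

sol = solve_ivp(rhs, (0.0, 72.0), [0.5, 20.0, 0.0], max_step=0.5)
print(f"after 72 h: glucose {sol.y[1, -1]:.1f} g/L, ethanol {sol.y[2, -1]:.1f} g/L")
```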
Paul, Sarbajit; Chang, Junghwan
2017-07-01
This paper presents a design approach for a magnetic sensor module to detect mover position using the proper orthogonal decomposition-dynamic mode decomposition (POD-DMD)-based nonlinear parametric model order reduction (PMOR). The parameterization of the sensor module is achieved by using the multipolar moment matching method. Several geometric variables of the sensor module are considered while developing the parametric study. The operation of the sensor module is based on the principle of the airgap flux density distribution detection by the Hall Effect IC. Therefore, the design objective is to achieve a peak flux density (PFD) greater than 0.1 T and total harmonic distortion (THD) less than 3%. To fulfill the constraint conditions, the specifications for the sensor module is achieved by using POD-DMD based reduced model. The POD-DMD based reduced model provides a platform to analyze the high number of design models very fast, with less computational burden. Finally, with the final specifications, the experimental prototype is designed and tested. Two different modes, 90° and 120° modes respectively are used to obtain the position information of the linear motor mover. The position information thus obtained are compared with that of the linear scale data, used as a reference signal. The position information obtained using the 120° mode has a standard deviation of 0.10 mm from the reference linear scale signal, whereas the 90° mode position signal shows a deviation of 0.23 mm from the reference. The deviation in the output arises due to the mechanical tolerances introduced into the specification during the manufacturing process. This provides a scope for coupling the reliability based design optimization in the design process as a future extension.
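The POD half of a POD-DMD reduction can be sketched compactly: collect snapshots of the airgap flux-density distribution over sampled design variants, extract a truncated basis by SVD, and evaluate new cases in the reduced coordinates. The snapshot data below are synthetic and the energy threshold is an arbitrary choice, so this is only an outline of the reduction step, not the authors' model.

```python
import numpy as np

# Sketch of the POD step of a POD-DMD reduction: snapshot matrix -> truncated
# SVD basis -> reduced coordinates for new design cases. Data are synthetic.
rng = np.random.default_rng(5)

x = np.linspace(0.0, 2 * np.pi, 200)                      # positions along the airgap
params = rng.uniform(0.8, 1.2, size=40)                   # 40 sampled design variants
snapshots = np.array([p * np.sin(x) + 0.05 * p**2 * np.sin(3 * x) for p in params]).T

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.999) + 1
basis = U[:, :r]                                          # truncated POD basis

new_case = 1.05 * np.sin(x) + 0.05 * 1.05**2 * np.sin(3 * x)
coords = basis.T @ new_case                               # reduced representation
error = np.linalg.norm(new_case - basis @ coords) / np.linalg.norm(new_case)
print(r, f"{error:.2e}")                                  # few modes, small error
```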
A three-talk model for shared decision making: multistage consultation process.
Elwyn, Glyn; Durand, Marie Anne; Song, Julia; Aarts, Johanna; Barr, Paul J; Berger, Zackary; Cochran, Nan; Frosch, Dominick; Galasiński, Dariusz; Gulbrandsen, Pål; Han, Paul K J; Härter, Martin; Kinnersley, Paul; Lloyd, Amy; Mishra, Manish; Perestelo-Perez, Lilisbeth; Scholl, Isabelle; Tomori, Kounosuke; Trevena, Lyndal; Witteman, Holly O; Van der Weijden, Trudy
2017-11-06
Objectives To revise an existing three-talk model for learning how to achieve shared decision making, and to consult with relevant stakeholders to update and obtain wider engagement. Design Multistage consultation process. Setting Key informant group, communities of interest, and survey of clinical specialties. Participants 19 key informants, 153 member responses from multiple communities of interest, and 316 responses to an online survey from medically qualified clinicians from six specialties. Results After extended consultation over three iterations, we revised the three-talk model by making changes to one talk category, adding the need to elicit patient goals, providing a clear set of tasks for each talk category, and adding suggested scripts to illustrate each step. A new three-talk model of shared decision making is proposed, based on "team talk," "option talk," and "decision talk," to depict a process of collaboration and deliberation. Team talk places emphasis on the need to provide support to patients when they are made aware of choices, and to elicit their goals as a means of guiding decision making processes. Option talk refers to the task of comparing alternatives, using risk communication principles. Decision talk refers to the task of arriving at decisions that reflect the informed preferences of patients, guided by the experience and expertise of health professionals. Conclusions The revised three-talk model of shared decision making depicts conversational steps, initiated by providing support when introducing options, followed by strategies to compare and discuss trade-offs, before deliberation based on informed preferences. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
[Lean thinking and brain-dead patient assistance in the organ donation process].
Pestana, Aline Lima; dos Santos, José Luís Guedes; Erdmann, Rolf Hermann; da Silva, Elza Lima; Erdmann, Alacoque Lorenzini
2013-02-01
Organ donation is a complex process that challenges health system professionals and managers. This study aimed to introduce a theoretical model to organize brain-dead patient assistance and the organ donation process guided by the main lean thinking ideas, which enable production improvement through planning cycles and the development of a proper environment for successful implementation. Lean thinking may make the process of organ donation more effective and efficient and may contribute to improvements in information systematization and professional qualifications for excellence of assistance. The model is configured as a reference that is available for validation and implementation by health and nursing professionals and managers in the management of potential organ donors after brain death assistance and subsequent transplantation demands.
Error modeling for differential GPS. M.S. Thesis - MIT, 12 May 1995
NASA Technical Reports Server (NTRS)
Blerman, Gregory S.
1995-01-01
Differential Global Positioning System (DGPS) positioning is used to accurately locate a GPS receiver based upon the well-known position of a reference site. In utilizing this technique, several error sources contribute to position inaccuracy. This thesis investigates the error in DGPS operation and attempts to develop a statistical model for the behavior of this error. The model for DGPS error is developed using GPS data collected by Draper Laboratory. The Marquardt method for nonlinear curve-fitting is used to find the parameters of a first order Markov process that models the average errors from the collected data. The results show that a first order Markov process can be used to model the DGPS error as a function of baseline distance and time delay. The model's time correlation constant is 3847.1 seconds (1.07 hours) for the mean square error. The distance correlation constant is 122.8 kilometers. The total process variance for the DGPS model is 3.73 sq meters.
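A first-order Markov (Gauss-Markov) error process with the reported time constant can be simulated in a few lines; the discrete-time update below is the standard exponential form and is shown only to illustrate how such an error model behaves, not as code from the thesis.

```python
import numpy as np

# First-order Gauss-Markov sketch using the correlation constants reported above
# (time constant 3847.1 s, process variance 3.73 m^2).
rng = np.random.default_rng(7)

tau = 3847.1                 # time correlation constant, seconds
sigma2 = 3.73                # total process variance, m^2
dt = 30.0                    # measurement interval, seconds
phi = np.exp(-dt / tau)      # one-step correlation
q = sigma2 * (1.0 - phi**2)  # driving-noise variance keeping the process stationary

n = 2000
err = np.zeros(n)
for k in range(1, n):
    err[k] = phi * err[k - 1] + rng.normal(0.0, np.sqrt(q))

# Empirical autocorrelation at a one-hour lag should be roughly exp(-3600/tau).
lag = int(3600 / dt)
rho = np.corrcoef(err[:-lag], err[lag:])[0, 1]
print(f"lag-1h autocorrelation: {rho:.2f} (model predicts {np.exp(-3600/tau):.2f})")
```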
Sensor trustworthiness in uncertain time varying stochastic environments
NASA Astrophysics Data System (ADS)
Verma, Ajay; Fernandes, Ronald; Vadakkeveedu, Kalyan
2011-06-01
Persistent surveillance applications require unattended sensors deployed in remote regions to track and monitor some physical stimulant of interest that can be modeled as the output of a time-varying stochastic process. However, the accuracy or trustworthiness of the information received through a remote, unattended sensor or sensor network cannot be readily assumed, since sensors may get disabled, corrupted, or even compromised, resulting in unreliable information. The aim of this paper is to develop an information theory based metric to determine sensor trustworthiness from the sensor data in an uncertain and time-varying stochastic environment. In this paper we show an information theory based determination of sensor data trustworthiness using an adaptive stochastic reference sensor model that tracks the sensor performance for the time-varying physical feature and provides a baseline model that is used to compare and analyze the observed sensor output. We present an approach in which relative entropy is used for reference model adaptation and for determination of the divergence of the sensor signal from the estimated reference baseline. We show that KL-divergence is a useful metric that can be successfully used to detect sensor failures or sensor malice of various types.
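A minimal version of such a check compares the histogram of a window of sensor readings against the distribution predicted by the reference model and flags a large Kullback-Leibler divergence. The binning, window length and threshold below are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Illustrative KL-divergence check of a sensor stream against a reference
# model's predicted distribution; bins, window and threshold are assumptions.
rng = np.random.default_rng(11)

def kl_divergence(p, q, eps=1e-12):
    p = p + eps; q = q + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

bins = np.linspace(-5, 5, 41)
reference = np.histogram(rng.normal(0.0, 1.0, 50_000), bins=bins)[0].astype(float)

healthy = rng.normal(0.0, 1.0, 500)              # sensor tracking the stimulant
corrupted = rng.normal(1.5, 2.0, 500)            # drifted / compromised sensor

for label, window in [("healthy", healthy), ("corrupted", corrupted)]:
    observed = np.histogram(window, bins=bins)[0].astype(float)
    d = kl_divergence(observed, reference)
    print(f"{label:9s} D_KL = {d:.3f}  ->  {'trusted' if d < 0.2 else 'flagged'}")
```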
NASA Technical Reports Server (NTRS)
Xia, Youlong; Cosgrove, Brian A.; Mitchell, Kenneth E.; Peters-Lidard, Christa D.; Ek, Michael B.; Brewer, Michael; Mocko, David; Kumar, Sujay V.; Wei, Helin; Meng, Jesse;
2016-01-01
The purpose of this study is to evaluate the components of the land surface water budget in the four land surface models (Noah, SAC-Sacramento Soil Moisture Accounting Model, (VIC) Variable Infiltration Capacity Model, and Mosaic) applied in the newly implemented National Centers for Environmental Prediction (NCEP) operational and research versions of the North American Land Data Assimilation System version 2 (NLDAS-2). This work focuses on monthly and annual components of the water budget over 12 National Weather Service (NWS) River Forecast Centers (RFCs). Monthly gridded FLUX Network (FLUXNET) evapotranspiration (ET) from the Max-Planck Institute (MPI) of Germany, U.S. Geological Survey (USGS) total runoff (Q), changes in total water storage (dS/dt, derived as a residual by utilizing MPI ET and USGS Q in the water balance equation), and Gravity Recovery and Climate Experiment (GRACE) observed total water storage anomaly (TWSA) and change (TWSC) are used as reference data sets. Compared to these ET and Q benchmarks, Mosaic and SAC (Noah and VIC) in the operational NLDAS-2 overestimate (underestimate) mean annual reference ET and underestimate (overestimate) mean annual reference Q. The multimodel ensemble mean (MME) is closer to the mean annual reference ET and Q. An anomaly correlation (AC) analysis shows good AC values for simulated monthly mean Q and dS/dt but significantly smaller AC values for simulated ET. Upgraded versions of the models utilized in the research side of NLDAS-2 yield largely improved performance in the simulation of these mean annual and monthly water component diagnostics. These results demonstrate that the three intertwined efforts of improving (1) the scientific understanding of parameterization of land surface processes, (2) the spatial and temporal extent of systematic validation of land surface processes, and (3) the engineering-oriented aspects such as parameter calibration and optimization are key to substantially improving product quality in various land data assimilation systems.
Valle-Maldonado, Marco I; Jácome-Galarza, Irvin E; Gutiérrez-Corona, Félix; Ramírez-Díaz, Martha I; Campos-García, Jesús; Meza-Carmen, Víctor
2015-03-01
Mucor circinelloides is a dimorphic fungal model for studying several biological processes including cell differentiation (yeast-mold transitions) as well as biodiesel and carotene production. The recent release of the first draft sequence of the M. circinelloides genome, combined with the availability of analytical methods to determine patterns of gene expression, such as quantitative Reverse transcription-Polymerase chain reaction (qRT-PCR), and the development of molecular genetic tools for the manipulation of the fungus, may help identify M. circinelloides gene products and analyze their relevance in different biological processes. However, no information is available on M. circinelloides genes of stable expression that could serve as internal references in qRT-PCR analyses. One approach to solve this problem consists in the use of housekeeping genes as internal references. However, validation of the usability of these reference genes is a fundamental step prior to initiating qRT-PCR assays. This work evaluates expression of several constitutive genes by qRT-PCR throughout the morphological differentiation stages of M. circinelloides; our results indicate that tfc-1 and ef-1 are the most stable genes for qRT-PCR assays during differentiation studies and they are proposed as reference genes to carry out gene expression studies in this fungus.
Hudson, Kerry D; Farran, Emily K
2017-09-01
Successfully completing a drawing relies on the ability to accurately impose and manipulate spatial frames of reference for the object that is being drawn and for the drawing space. Typically developing (TD) children use cues such as the page boundary as a frame of reference to guide the orientation of drawn lines. Individuals with Williams syndrome (WS) typically produce incohesive drawings; this is proposed to reflect a local processing bias. Across two studies, we provide the first investigation of the effect of using a frame of reference when drawing simple lines and shapes in WS and TD groups (matched for non-verbal ability). Individuals with WS (N=17 Experiment 1; N=18 Experiment 2) and TD children matched by non-verbal ability drew single lines (Experiment One) and whole shapes (Experiment Two) within a neutral, incongruent or congruent frame. The angular deviation of the drawn line/shape, relative to the model line/shape, was measured. Both groups were sensitive to spatial frames of reference when drawing single lines and whole shapes, imposed by a frame around the drawing space. A local processing bias in WS cannot explain poor drawing performance in WS. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
She, Hsiao-Ching
2002-01-01
Examines the process of students' conceptual changes with regard to air pressure and buoyancy as a result of teaching with the dual situated learning model. Uses a model designed according to the students' ontological viewpoint on science concepts as well as the nature of these concepts. (Contains 40 references.) (Author/YDS)
ERIC Educational Resources Information Center
Woodruff, Joseph
2014-01-01
The purpose of this program evaluation was to examine the four components of the CIPP evaluation model (Context, Input, Process, and Product evaluations) in the diversity training program conceptualization and design delivered to College of Education K-12 preservice teachers at a large university in the southeastern United States (referred to in…
Testing Neuronal Accounts of Anisotropic Motion Perception with Computational Modelling
Wong, William; Chiang Price, Nicholas Seow
2014-01-01
There is an over-representation of neurons in early visual cortical areas that respond most strongly to cardinal (horizontal and vertical) orientations and directions of visual stimuli, and cardinal- and oblique-preferring neurons are reported to have different tuning curves. Collectively, these neuronal anisotropies can explain two commonly-reported phenomena of motion perception – the oblique effect and reference repulsion – but it remains unclear whether neuronal anisotropies can simultaneously account for both perceptual effects. We show in psychophysical experiments that reference repulsion and the oblique effect do not depend on the duration of a moving stimulus, and that brief adaptation to a single direction simultaneously causes a reference repulsion in the orientation domain, and the inverse of the oblique effect in the direction domain. We attempted to link these results to underlying neuronal anisotropies by implementing a large family of neuronal decoding models with parametrically varied levels of anisotropy in neuronal direction-tuning preferences, tuning bandwidths and spiking rates. Surprisingly, no model instantiation was able to satisfactorily explain our perceptual data. We argue that the oblique effect arises from the anisotropic distribution of preferred directions evident in V1 and MT, but that reference repulsion occurs separately, perhaps reflecting a process of categorisation occurring in higher-order cortical areas. PMID:25409518
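The decoding models explored in this abstract are not specified here in detail; the following is only a minimal sketch of the general idea under assumed parameters (von Mises direction tuning, a cardinal-biased distribution of preferred directions, Poisson spiking, and a simple population-vector read-out), not the authors' model family.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 360

# Anisotropic preferred directions: oversample the cardinals (0, 90, 180, 270 deg).
cardinals = np.deg2rad([0, 90, 180, 270])
prefs = np.concatenate([
    rng.uniform(0, 2 * np.pi, n_neurons // 2),
    (rng.choice(cardinals, n_neurons // 2) + rng.normal(0, 0.1, n_neurons // 2)) % (2 * np.pi),
])

def population_response(direction, prefs, kappa=2.0, gain=20.0):
    """Poisson spike counts from von Mises tuning curves."""
    rates = gain * np.exp(kappa * (np.cos(direction - prefs) - 1))
    return rng.poisson(rates)

def population_vector_decode(counts, prefs):
    """Vector-average read-out of the stimulus direction."""
    return np.arctan2((counts * np.sin(prefs)).sum(),
                      (counts * np.cos(prefs)).sum()) % (2 * np.pi)

# Decoding error as a function of true direction: errors tend to be smallest
# near the over-represented cardinal directions (an oblique-effect-like pattern).
for deg in (0, 22, 45, 68, 90):
    true = np.deg2rad(deg)
    errs = [np.rad2deg(np.angle(np.exp(1j * (population_vector_decode(
        population_response(true, prefs), prefs) - true)))) for _ in range(200)]
    print(f"true={deg:3d} deg  mean abs error={np.mean(np.abs(errs)):.2f} deg")
```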
A COMPARATIVE ANALYSIS OF THE RESEARCH UTILIZATION PROCESS.
ERIC Educational Resources Information Center
LIPPITT, RONALD; AND OTHERS
A SUGGESTED MODEL FOR ADEQUATE DISSEMINATION OF RESEARCH FINDINGS CONSIDERS FOUR PRIMARY BARRIERS TO EFFECTIVE COMMUNICATION--(1) DIVISION OF PERSONNEL LABOR INTO TASK ROLES, (2) INSTITUTIONAL DISTINCTIONS, (3) DEVELOPMENT OF PROFESSIONAL REFERENCE GROUPS, AND (4) GEOGRAPHICAL DIVISIONS. SUGGESTED SOLUTIONS INCLUDE LINKING SYSTEMS AND ROLES,…
Post Occupancy Evaluation of Educational Buildings and Equipment.
ERIC Educational Resources Information Center
Watson, Chris
1997-01-01
Details the post occupancy evaluation (POE) process for public buildings. POEs are used to improve design and optimize educational building and equipment use. The evaluation participants, the method used, the results and recommendations, model schools, and classroom alterations using POE are described. (9 references.) (RE)
Demodulation processes in auditory perception
NASA Astrophysics Data System (ADS)
Feth, Lawrence L.
1994-08-01
The long-range goal of this project is the understanding of human auditory processing of information conveyed by complex, time-varying signals such as speech, music, or important environmental sounds. Our work is guided by the assumption that human auditory communication is a 'modulation-demodulation' process. That is, we assume that sound sources produce a complex stream of sound pressure waves with information encoded as variations (modulations) of the signal amplitude and frequency. The listener's task is then one of demodulation. Much of past psychoacoustics work has been based on what we characterize as 'spectrum picture processing.' Complex sounds are Fourier analyzed to produce an amplitude-by-frequency 'picture' and the perception process is modeled as if the listener were analyzing the spectral picture. This approach leads to studies such as 'profile analysis' and the power-spectrum model of masking. Our approach leads us to investigate time-varying, complex sounds. We refer to them as dynamic signals, and we have developed auditory signal processing models to help guide our experimental work.
Modeling Atmospheric Aerosols in WRF/Chem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yang; Hu, X.-M.; Howell, G.
2005-06-01
In this study, three aerosol modules are tested and compared. The first module is the Modal Aerosol Dynamics Model for Europe (MADE) with the secondary organic aerosol model (SORGAM) (referred to as MADE/SORGAM). The second module is the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC). The third module is the Model of Aerosol Dynamics, Reaction, Ionization and Dissolution (MADRID). The three modules differ in terms of size representation used, chemical species treated, assumptions and numerical algorithms used. Table 1 compares the major processes among the three aerosol modules.
Bebeau, Muriel J
2009-01-01
Pretest scores were analyzed for 41 professionals referred for ethics assessment by a dental licensing board. Two were exempt from instruction based on pretest performance on five well-validated measures; 38 completed an individualized course designed to remediate deficiencies in ethical abilities. Statistically significant change (effect sizes ranging from .55 to 5.0) was observed for ethical sensitivity (DEST scores), moral reasoning (DIT scores), and role concept (essays and PROI scores). Analysis of the relationships between ability deficiencies and disciplinary actions supports the explanatory power of Rest's Four Component Model of Morality. Of particular interest is the way the model helped referred professionals deconstruct summary judgments about character and see them as capacities that can be further developed. The performance-based assessments, especially the DEST, were particularly useful in identifying shortcomings in ethical implementation. Referred practitioners highly valued the emphasis on ethical implementation, suggesting the importance of addressing what to do and say in ethically challenging cases. Finally, the required self-assessments of learning confirm the value of the process for professional renewal (i.e., a renewed commitment to professional ideals) and of enhanced abilities not only to reason about moral problems, but to implement actions.
Cao, Hongliang; Xin, Ya; Yuan, Qiaoxia
2016-02-01
To conveniently predict the biochar yield from cattle manure pyrolysis, an intelligent modeling approach was introduced in this research. A traditional artificial neural network (ANN) model and a novel least squares support vector machine (LS-SVM) model were developed. For model identification and prediction evaluation, a data set of 33 experimental observations was used, obtained with a laboratory-scale fixed bed reaction system. The results demonstrated that the intelligent modeling approach is highly convenient and effective for predicting the biochar yield. In particular, the novel LS-SVM model has more satisfactory predictive performance and better robustness than the traditional ANN model. The introduction and application of the LS-SVM modeling method provides a successful example and a good reference for modeling the cattle manure pyrolysis process and other similar processes. Copyright © 2015 Elsevier Ltd. All rights reserved.
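For readers unfamiliar with LS-SVM regression, the sketch below shows the standard least-squares SVM formulation (solving a single linear system rather than a quadratic program). The kernel width, regularization value, and the two "pyrolysis" features and yields are invented stand-ins, not the paper's data or settings.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Least-squares SVM regression: solve the linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvm_predict(Xnew, X, b, alpha, sigma=1.0):
    return rbf_kernel(Xnew, X, sigma) @ alpha + b

# Hypothetical stand-in for the 33 pyrolysis runs: predict biochar yield (%)
# from temperature (degC) and heating rate (degC/min); all values are invented.
rng = np.random.default_rng(2)
X = np.column_stack([np.linspace(300, 700, 33), np.linspace(5, 30, 33)])
y = 55 - 0.04 * (X[:, 0] - 300) + 0.1 * X[:, 1] + rng.normal(0, 1, 33)

Xs = (X - X.mean(0)) / X.std(0)                  # simple feature scaling
b, alpha = lssvm_fit(Xs, y, gamma=50.0, sigma=1.5)
print("fitted yields:", np.round(lssvm_predict(Xs[:3], Xs, b, alpha, sigma=1.5), 2))
```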
NASA Technical Reports Server (NTRS)
Nagaraja, K. S.; Kraft, R. H.
1999-01-01
The HSCT Flight Controls Group has developed longitudinal control laws, utilizing PTC aeroelastic flexible models to minimize aeroservoelastic interaction effects, for a number of flight conditions. The control law design process resulted in a higher-order controller and utilized a large number of sensors distributed along the body to minimize flexibility effects. Processes were developed to implement these higher-order control laws for performing the dynamic gust loads and flutter analyses. The processes and their validation were documented in Reference 2 for a selected flight condition. The analytical results for additional flight conditions are presented in this document for further validation.
Research a Novel Integrated and Dynamic Multi-object Trade-Off Mechanism in Software Project
NASA Astrophysics Data System (ADS)
Jiang, Weijin; Xu, Yuhui
Aiming at the practical requirements of present software project management and control, the paper proposes an integrated multi-object trade-off model based on software project process management, so as to realize integrated and dynamic trade-offs across the project's multi-object system. Building on an analysis of the basic principles of dynamic control and of the integrated multi-object trade-off system process, the paper combines methods from cybernetics with network technology and, by monitoring critical reference points defined by the control objects, discusses the integrated and dynamic multi-object trade-off model and the corresponding rules and mechanism needed to integrate process management with trade-offs across the multi-object system.
NASA Astrophysics Data System (ADS)
Gaur, Vinod K.
The article begins with a reference to the first rational approaches to explaining the Earth's magnetic field, notably Elsasser's application of magnetohydrodynamics, followed by brief outlines of the characteristics of planetary magnetic fields and of the potentially insightful homopolar dynamo, which illuminates the basic issues: the theoretical requirements of asymmetry and finite conductivity for sustaining the dynamo process. It concludes with sections on dynamo modeling and, in particular, the geodynamo, but not before explaining some of the evocative physical processes mediated by the Lorentz force, the behaviour of a flux tube embedded in a perfectly conducting fluid (using Alfvén's theorem), and the traditional intermediate approaches to investigating dynamo processes using the more tractable kinematic models.
Programming and machining of complex parts based on CATIA solid modeling
NASA Astrophysics Data System (ADS)
Zhu, Xiurong
2017-09-01
Complex parts were designed, programmed, and simulated for machining using CATIA solid modeling, illustrating the importance of programming and process planning in CNC machining. In the part design process, the working principle is first analyzed in depth; the dimensions and dimension chains are then designed and linked to one another, and backstepping together with several other methods is used to calculate the final part dimensions. For the part material, after careful study and repeated testing, 6061 aluminum alloy was finally selected. According to the actual conditions of the machining site, various factors in the machining process must be considered comprehensively. The simulation should be based on the actual machining process rather than on shape alone. The result can be used as a reference for machining.
Ludwig, T; Kern, P; Bongards, M; Wolf, C
2011-01-01
The optimization of relaxation and filtration times of submerged microfiltration flat modules in membrane bioreactors used for municipal wastewater treatment is essential for efficient plant operation. However, the optimization and control of such plants and their filtration processes is a challenging problem due to the underlying highly nonlinear and complex processes. This paper presents the use of genetic algorithms for this optimization problem in conjunction with a fully calibrated simulation model, as computational intelligence methods are perfectly suited to the nonconvex multi-objective nature of the optimization problems posed by these complex systems. The simulation model is developed and calibrated using membrane modules from the wastewater simulation software GPS-X based on the Activated Sludge Model No.1 (ASM1). Simulation results have been validated at a technical reference plant. They clearly show that filtration process costs for cleaning and energy can be reduced significantly by intelligent process optimization.
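As a toy illustration of the genetic-algorithm optimization described above, the sketch below tunes filtration and relaxation times against a made-up cost function standing in for the calibrated GPS-X simulation. The cost model, bounds, and GA settings are all assumptions, and the problem is scalarized into a single objective for brevity, whereas the paper poses a multi-objective problem.

```python
import numpy as np

rng = np.random.default_rng(3)

def cost(x):
    """Hypothetical stand-in for the calibrated membrane-bioreactor simulation:
    combines a fouling/cleaning cost with a penalty for lost permeate production.
    x = [filtration_time_min, relaxation_time_min]."""
    filt, relax = x
    duty = filt / (filt + relax)             # fraction of the cycle spent filtering
    fouling_cost = 0.5 * duty ** 2 * filt    # longer uninterrupted filtration fouls more
    lost_production = 1.0 - duty             # relaxation produces no permeate
    return fouling_cost + lost_production

bounds = np.array([[4.0, 15.0],   # filtration time (min)
                   [0.5, 3.0]])   # relaxation time (min)

def ga(pop_size=40, generations=60, mut_sigma=0.3):
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], (pop_size, 2))
    for _ in range(generations):
        fitness = np.array([cost(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]       # truncation selection
        kids = []
        while len(kids) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = rng.random() * a + (1 - rng.random()) * b     # blend crossover
            child += rng.normal(0, mut_sigma, 2)                  # Gaussian mutation
            kids.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
        pop = np.vstack([parents, kids])
    return pop[np.argmin([cost(ind) for ind in pop])]

print("optimised [filtration, relaxation] times (min):", np.round(ga(), 2))
```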
Gaia DR2 documentation Chapter 3: Astrometry
NASA Astrophysics Data System (ADS)
Hobbs, D.; Lindegren, L.; Bastian, U.; Klioner, S.; Butkevich, A.; Stephenson, C.; Hernandez, J.; Lammers, U.; Bombrun, A.; Mignard, F.; Altmann, M.; Davidson, M.; de Bruijne, J. H. J.; Fernández-Hernández, J.; Siddiqui, H.; Utrilla Molina, E.
2018-04-01
This chapter of the Gaia DR2 documentation describes the models and processing steps used for the astrometric core solution, namely, the Astrometric Global Iterative Solution (AGIS). The inputs to this solution rely heavily on the basic observables (or astrometric elementaries) which have been pre-processed and discussed in Chapter 2, the results of which were published in Fabricius et al. (2016). The models consist of reference systems and time scales; assumed linear stellar motion and relativistic light deflection; in addition to fundamental constants and the transformation of coordinate systems. Higher-level inputs, such as planetary and solar system ephemerides, Gaia tracking and orbit information, initial quasar catalogues and BAM data, are all needed for the processing described here. The astrometric calibration models are outlined, followed by the detailed processing steps which give AGIS its name. We also present a basic quality assessment and validation of the scientific results (for details, see Lindegren et al. 2018).
NASA Astrophysics Data System (ADS)
Bobojć, Andrzej; Drożyner, Andrzej; Rzepecka, Zofia
2017-04-01
This work compares the performance of selected geopotential models in the dynamic orbit estimation of the satellite of the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) mission. This was realized by fitting estimated orbital arcs to the official centimeter-accuracy GOCE kinematic orbit, which is provided by the European Space Agency. The Cartesian coordinates of the kinematic orbit were treated as observations in the orbit estimation. The initial satellite state vector components were corrected in an iterative process with respect to the J2000.0 inertial reference frame using the given geopotential model, the models describing the remaining gravitational perturbations and the solar radiation pressure. Taking the obtained solutions into account, the RMS values of orbital residuals were computed. These residuals result from the difference between the determined orbit and the reference one, the GOCE kinematic orbit. The performance of the selected gravity models was also determined using various orbital arc lengths. Additionally, the RMS fit values were obtained for some gravity models truncated at a given degree and order of spherical harmonic coefficients. The advantage of using the kinematic orbit is its independence from any a priori dynamical models. For this research, GOCE-independent gravity models such as HUST-Grace2016s, ITU_GRACE16, ITSG-Grace2014s, ITSG-Grace2014k, GGM05S, Tongji-GRACE01, ULUX_CHAMP2013S, ITG-GRACE2010S, EIGEN-51C, EIGEN5S, EGM2008 and EGM96 were adopted.
NASA Astrophysics Data System (ADS)
Moro, M. V.; Bruckner, B.; Grande, P. L.; Tabacniks, M. H.; Bauer, P.; Primetzhofer, D.
2018-06-01
We have experimentally determined electronic stopping cross sections of vanadium for 50-2750 keV protons and for 250-6000 keV He ions by relative measurements in backscattering geometry. To check the consistency of the employed procedure, we investigated how to define adequate reference stopping cross section data and chose different reference materials. To prove the consistency of the different reference data sets, an intercomparison was performed to test the reliability of the evaluation procedure over a wide range of energies. This process yielded consistent results. The resulting stopping cross section data for V are compared to values from the IAEA database, to the most commonly employed semi-empirical program SRIM, and to calculations according to CasP. For helium, our results show a significant deviation of up to 10% with respect to the literature and to SRIM, but are in very good agreement with the CasP predictions, in particular when charge-exchange processes are included in the model.
NASA Astrophysics Data System (ADS)
Kang, Qian; Ru, Qingguo; Liu, Yan; Xu, Lingyan; Liu, Jia; Wang, Yifei; Zhang, Yewen; Li, Hui; Zhang, Qing; Wu, Qing
2016-01-01
An on-line near infrared (NIR) spectroscopy monitoring method with an appropriate multivariate calibration method was developed for the extraction process of Fu-fang Shuanghua oral solution (FSOS). On-line NIR spectra were collected through two fiber optic probes, which were designed to transmit NIR radiation through a 2 mm flange. Partial least squares (PLS), interval PLS (iPLS) and synergy interval PLS (siPLS) algorithms were compared for building the calibration regression models. During the extraction process, NIR spectroscopy was used to determine the concentrations of chlorogenic acid (CA), total phenolic acids contents (TPC), total flavonoids contents (TFC) and soluble solid contents (SSC). High performance liquid chromatography (HPLC), ultraviolet spectrophotometry (UV) and loss-on-drying methods were employed as reference methods. Experimental results showed that the siPLS model performs best compared with PLS and iPLS. The calibration models for CA, TPC, TFC and SSC had high determination coefficients (R²) (0.9948, 0.9992, 0.9950 and 0.9832) and low root mean square errors of cross validation (RMSECV) (0.0113, 0.0341, 0.1787 and 1.2158), which indicate a good correlation between reference values and NIR predicted values. The overall results show that the on-line detection method is feasible in real applications and would be of great value for monitoring the mixed decoction process of FSOS and other Chinese patent medicines.
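The sketch below illustrates the kind of cross-validated PLS calibration and RMSECV/R² reporting described above, using plain PLS (not the siPLS variant) and a synthetic spectral matrix. All data, channel counts, and latent-variable choices are assumptions for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in for the NIR data: 60 spectra x 200 wavelength channels,
# with "reference" chlorogenic-acid values generated from a few channels.
rng = np.random.default_rng(4)
X = rng.normal(size=(60, 200))
y = 2.0 * X[:, 10] - 1.5 * X[:, 55] + 0.8 * X[:, 120] + rng.normal(0, 0.2, 60)

def rmsecv_and_r2(X, y, n_components):
    """10-fold cross-validated RMSECV and R^2 for a PLS model."""
    y_cv = cross_val_predict(PLSRegression(n_components=n_components), X, y, cv=10).ravel()
    rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
    r2 = np.corrcoef(y, y_cv)[0, 1] ** 2
    return rmsecv, r2

# Pick the number of latent variables by minimising RMSECV, analogous to the
# cross-validation used to select and adjust the PLS/iPLS/siPLS models.
for k in (2, 4, 6, 8):
    rmsecv, r2 = rmsecv_and_r2(X, y, k)
    print(f"{k} latent variables: RMSECV={rmsecv:.3f}  R2(cv)={r2:.3f}")
```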
Efficient model learning methods for actor-critic control.
Grondman, Ivo; Vaandrager, Maarten; Buşoniu, Lucian; Babuska, Robert; Schuitema, Erik
2012-06-01
We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.
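The second algorithm's key idea (learn a local process model from data, then invert it to reach the state prescribed by a reference model) can be sketched compactly. The toy plant, memory size, neighbor count, and reference-model gain below are assumptions; this is not the paper's pendulum benchmark or exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy first-order plant (unknown to the controller): x_{k+1} = 0.9 x_k + 0.5 u_k + noise
def plant(x, u):
    return 0.9 * x + 0.5 * u + rng.normal(0, 0.01)

# Memory of observed transitions (x, u, x_next) used for local linear regression (LLR).
memory = np.array([(x, u, plant(x, u))
                   for x, u in zip(rng.uniform(-2, 2, 300), rng.uniform(-1, 1, 300))])

def local_linear_model(x, k=15):
    """Fit x_next ~ a*x + b*u + c on the k transitions nearest to x."""
    idx = np.argsort(np.abs(memory[:, 0] - x))[:k]
    X = np.column_stack([memory[idx, 0], memory[idx, 1], np.ones(k)])
    coef, *_ = np.linalg.lstsq(X, memory[idx, 2], rcond=None)
    return coef                                    # (a, b, c)

def inverse_model_control(x, x_desired):
    """Invert the local model to find the action that reaches x_desired."""
    a, b, c = local_linear_model(x)
    return np.clip((x_desired - a * x - c) / b, -1, 1)

# Reference model: first-order decay toward the setpoint (the "desired behavior").
x, setpoint = 1.5, 0.0
for k in range(10):
    x_desired = x + 0.4 * (setpoint - x)
    u = inverse_model_control(x, x_desired)
    x = plant(x, u)
    print(f"step {k}: x = {x:+.3f}")
```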
System analyses on advanced nuclear fuel cycle and waste management
NASA Astrophysics Data System (ADS)
Cheon, Myeongguk
To evaluate the impacts of the accelerator-driven transmutation of waste (ATW) fuel cycle on a geological repository, two mathematical models are developed: a reactor system analysis model and a high-level waste (HLW) conditioning model. With the former, fission products and residual trans-uranium (TRU) contained in HLW generated from reference ATW plant operations are quantified and the reduction of TRU inventory included in commercial spent-nuclear fuel (CSNF) is evaluated. With the latter, an optimized waste loading and composition in solidification of HLW are determined and the volume reduction of waste packages associated with CSNF is evaluated. WACOM, a reactor system analysis code developed in this study for burnup calculation, is validated against ORIGEN2.1 and MCNP. WACOM is used to perform multicycle analysis for the reference lead-bismuth eutectic (LBE) cooled transmuter. By applying the results of this analysis to the reference ATW deployment scenario considered in the ATW roadmap, the HLW generated from the ATW fuel cycle is quantified and the reduction of TRU inventory contained in CSNF is evaluated. A linear programming (LP) model has been developed for determination of an optimized waste loading and composition in solidification of HLW. The model has been applied to a US-defense HLW. The optimum waste loading evaluated by the LP model was compared with that estimated by the Defense Waste Processing Facility (DWPF) in the US and good agreement was observed. The LP model was then applied to the volume reduction of waste packages associated with CSNF. Based on the obtained reduction factors, the expansion of Yucca Mountain Repository (YMR) capacity is evaluated. It is found that with the reference ATW system, the TRU contained in CSNF could be reduced by a factor of ~170 in terms of inventory and by a factor of ~40 in terms of toxicity under the assumed scenario. The number of waste packages related to CSNF could be reduced by a factor of ~8 in terms of volume and by a factor of ~10 on the basis of electricity generation when a sufficient cooling time for discharged spent fuel and zero process chemicals in HLW are assumed. The expansion factor of Yucca Mountain Repository capacity is estimated to be a factor of 2.4, much smaller than the reduction factor of CSNF waste packages, due to the existence of DOE-owned spent fuel and HLW. The YMR, however, could support 10 times greater electricity generation as long as the statutory capacity of DOE-owned SNF and HLW remains unchanged. This study also showed that the reduction of the number of waste packages could depend strongly on the heat generation rate of HLW and the amount of process chemicals contained in HLW. For a greater reduction of the number of waste packages, a sufficient cooling time for discharged fuel and efforts to minimize the amount of process chemicals contained in HLW are crucial.
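A linear program for waste loading of the kind described above can be set up in a few lines. The sketch below maximizes the waste mass fraction in a waste-plus-frit glass subject to simple oxide-composition limits; the oxide contents and limits are invented placeholders, not the DWPF constraints used in the study.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: mass fractions of [waste, frit] in the glass product.
# Objective: maximise the waste loading fraction (linprog minimises, so negate).
c = np.array([-1.0, 0.0])

# Hypothetical oxide contents (mass fraction) of waste and frit, with glass limits:
#   SiO2 >= 0.45, B2O3 <= 0.12, Na2O <= 0.20  (all values are placeholders).
waste_frit = np.array([[0.05, 0.60],    # SiO2
                       [0.02, 0.14],    # B2O3
                       [0.25, 0.10]])   # Na2O

# Express the limits as A_ub x <= b_ub (flip the sign of the ">=" row).
A_ub = np.vstack([-waste_frit[0], waste_frit[1], waste_frit[2]])
b_ub = np.array([-0.45, 0.12, 0.20])

# Mass fractions must sum to one.
A_eq, b_eq = np.array([[1.0, 1.0]]), np.array([1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1), (0, 1)])
print("optimal waste loading (mass fraction):", round(res.x[0], 3))
```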
Signal processing system for electrotherapy applications
NASA Astrophysics Data System (ADS)
Płaza, Mirosław; Szcześniak, Zbigniew
2017-08-01
A signal processing system for electrotherapeutic applications is proposed in the paper. The system makes it possible to model the curve of threshold human sensitivity to current (Dalziel's curve) over the full medium-frequency range (1 kHz-100 kHz). Tests based on the proposed solution were conducted, and their results were compared with those obtained under the assumptions of the High Tone Power Therapy method and referred to optimum values. The proposed system has high dynamics and precision in mapping the curve of threshold human sensitivity to current, and it can be used in all methods where threshold curves are modelled.
Enhancing Teaching through Constructive Alignment.
ERIC Educational Resources Information Center
Biggs, John
1996-01-01
An approach to college-level instructional design that incorporates the principles of constructivism, termed "constructive alignment," is described. The process is then illustrated with reference to a professional development unit in educational psychology for teachers, but the model is viewed as generalizable to most units or programs in higher…
The Learning Cycle and College Science Teaching.
ERIC Educational Resources Information Center
Barman, Charles R.; Allard, David W.
Originally developed in an elementary science program called the Science Curriculum Improvement Study, the learning cycle (LC) teaching approach involves students in an active learning process modeled on four elements of Jean Piaget's theory of cognitive development: physical experience, referring to the biological growth of the central nervous…
Information Interaction: Providing a Framework for Information Architecture.
ERIC Educational Resources Information Center
Toms, Elaine G.
2002-01-01
Discussion of information architecture focuses on a model of information interaction that bridges the gap between human and computer and between information behavior and information retrieval. Illustrates how the process of information interaction is affected by the user, the system, and the content. (Contains 93 references.) (LRW)
Modeling Learning Processes in Lexical CALL.
ERIC Educational Resources Information Center
Goodfellow, Robin; Laurillard, Diana
1994-01-01
Studies the performance of a novice Spanish student using a Computer-assisted language learning (CALL) system designed for vocabulary enlargement. Results indicate that introspective evidence may be used to validate performance data within a theoretical framework that characterizes the learning approach as "surface" or "deep." (25 references)…
Data Assimilation - Advances and Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian J.
2014-07-30
This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
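The ensemble Kalman filter mentioned at the end of this abstract has a short, standard analysis step. The sketch below implements a stochastic EnKF update for a tiny two-state example; the ensemble size, observation operator, and error level are assumptions chosen only to make the example runnable.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_err_std, rng):
    """Stochastic ensemble Kalman filter analysis step.
    ensemble: (n_members, n_state); obs: (n_obs,); obs_operator H: (n_obs, n_state)."""
    H = obs_operator
    X = ensemble
    A = X - X.mean(axis=0)                           # state anomalies
    HX = X @ H.T
    HA = HX - HX.mean(axis=0)                        # observed-space anomalies
    n = X.shape[0]
    P_HT = A.T @ HA / (n - 1)                        # cross-covariance P H^T
    S = HA.T @ HA / (n - 1) + np.eye(len(obs)) * obs_err_std ** 2
    K = P_HT @ np.linalg.inv(S)                      # Kalman gain
    perturbed_obs = obs + rng.normal(0, obs_err_std, (n, len(obs)))
    return X + (perturbed_obs - HX) @ K.T            # analysis ensemble

# Tiny example: 2-state model, one observation of the first state component.
rng = np.random.default_rng(6)
prior = rng.normal([1.0, 0.0], [0.5, 0.5], size=(100, 2))
posterior = enkf_update(prior, obs=np.array([1.4]),
                        obs_operator=np.array([[1.0, 0.0]]), obs_err_std=0.2, rng=rng)
print("prior mean:", prior.mean(0).round(3), " posterior mean:", posterior.mean(0).round(3))
```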
Modeling Trait Anxiety: From Computational Processes to Personality
Raymond, James G.; Steele, J. Douglas; Seriès, Peggy
2017-01-01
Computational methods are increasingly being applied to the study of psychiatric disorders. Often, this involves fitting models to the behavior of individuals with subclinical character traits that are known vulnerability factors for the development of psychiatric conditions. Anxiety disorders can be examined with reference to the behavior of individuals high in “trait” anxiety, which is a known vulnerability factor for the development of anxiety and mood disorders. However, it is not clear how this self-report measure relates to neural and behavioral processes captured by computational models. This paper reviews emerging computational approaches to the study of trait anxiety, specifying how interacting processes susceptible to analysis using computational models could drive a tendency to experience frequent anxious states and promote vulnerability to the development of clinical disorders. Existing computational studies are described in the light of this perspective and appropriate targets for future studies are discussed. PMID:28167920
Piller, Nicolas; Decosterd, Isabelle; Suter, Marc R
2013-07-10
The reverse transcription quantitative real-time polymerase chain reaction (RT-qPCR) is a widely used, highly sensitive laboratory technique to rapidly and easily detect, identify and quantify gene expression. Reliable RT-qPCR data necessitates accurate normalization with validated control genes (reference genes) whose expression is constant in all studied conditions, and this stability has to be demonstrated. We performed a literature search for studies using quantitative or semi-quantitative PCR in the rat spared nerve injury (SNI) model of neuropathic pain to verify whether any reference genes had previously been validated. We then analyzed the stability over time of 7 commonly used reference genes in the nervous system - specifically in the spinal cord dorsal horn and the dorsal root ganglion (DRG). These were: Actin beta (Actb), Glyceraldehyde-3-phosphate dehydrogenase (GAPDH), ribosomal proteins 18S (18S), L13a (RPL13a) and L29 (RPL29), hypoxanthine phosphoribosyltransferase 1 (HPRT1) and hydroxymethylbilane synthase (HMBS). We compared the candidate genes and established a stability ranking using the geNorm algorithm. Finally, we assessed the number of reference genes necessary for accurate normalization in this neuropathic pain model. We found GAPDH, HMBS, Actb, HPRT1 and 18S cited as reference genes in the literature on studies using the SNI model. Only HPRT1 and 18S had previously been demonstrated as stable in RT-qPCR arrays, and only once. All the genes tested in this study, using the geNorm algorithm, presented gene stability values (M-values) acceptable enough to qualify as potential reference genes in both DRG and spinal cord. Using the coefficient of variation, 18S failed the 50% cut-off with a value of 61% in the DRG. The two most stable genes in the dorsal horn were RPL29 and RPL13a; in the DRG they were HPRT1 and Actb. Using a 0.15 cut-off for pairwise variations, we found that any pair of stable reference genes was sufficient for the normalization process. In the rat SNI model, we validated and ranked Actb, RPL29, RPL13a, HMBS, GAPDH, HPRT1 and 18S as good reference genes in the spinal cord. In the DRG, 18S did not fulfill stability criteria. The combination of any two stable reference genes was sufficient to provide an accurate normalization.
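The geNorm stability measure used in this study has a simple definition that can be sketched directly: for each candidate gene, M is the average standard deviation of the pairwise log2 expression ratios with every other candidate. The gene names and expression values below are synthetic placeholders, and this sketch omits geNorm's iterative exclusion and normalization-factor pairwise-variation (V) steps.

```python
import numpy as np

def genorm_m_values(expr, gene_names):
    """geNorm gene-stability measure M for each gene: the average standard
    deviation of its log2 expression ratios with every other candidate gene.
    expr: (n_samples, n_genes) matrix of relative expression quantities."""
    log_expr = np.log2(expr)
    n_genes = expr.shape[1]
    M = np.zeros(n_genes)
    for j in range(n_genes):
        vs = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
              for k in range(n_genes) if k != j]
        M[j] = np.mean(vs)
    return dict(zip(gene_names, M.round(3)))

# Synthetic relative quantities for 12 samples x 4 hypothetical candidates;
# 'geneD' is made deliberately less stable than the others.
rng = np.random.default_rng(7)
names = ["geneA", "geneB", "geneC", "geneD"]
base = rng.lognormal(mean=0.0, sigma=0.3, size=(12, 1))   # sample-specific loading
expr = base * rng.lognormal(0.0, [0.05, 0.06, 0.07, 0.4], size=(12, 4))
print(genorm_m_values(expr, names))   # lower M = more stable
```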
Nonlinear and Digital Man-machine Control Systems Modeling
NASA Technical Reports Server (NTRS)
Mekel, R.
1972-01-01
An adaptive modeling technique is examined by which controllers can be synthesized to provide corrective dynamics to a human operator's mathematical model in closed loop control systems. The technique utilizes a class of Liapunov functions formulated for this purpose, Liapunov's stability criterion and a model-reference system configuration. The Liapunov function is formulated to possess variable characteristics to take into consideration the identification dynamics. The time derivative of the Liapunov function generates the identification and control laws for the mathematical model system. These laws permit the realization of a controller which updates the human operator's mathematical model parameters so that model and human operator produce the same response when subjected to the same stimulus. A very useful feature is the development of a digital computer program which is easily implemented and modified concurrently with experimentation. The program permits the modeling process to interact with the experimentation process in a mutually beneficial way.
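The model-reference configuration with Lyapunov-derived adaptation laws can be illustrated with the textbook first-order case. The plant, reference model, and adaptation gain below are invented for the example; this is the standard first-order MRAC sketch, not the paper's man-machine formulation.

```python
import numpy as np

# Plant with unknown parameters (a stand-in for the operator model to be matched):
#   dx/dt = a_p*x + b_p*u, with a_p and b_p unknown to the controller, sign(b_p) known.
a_p, b_p = -1.0, 2.0

# Reference model (the desired closed-loop behavior): dxm/dt = a_m*xm + b_m*r
a_m, b_m = -4.0, 4.0

gamma = 2.0                       # adaptation gain
dt, T = 0.001, 10.0
t = np.arange(0, T, dt)
r = np.sign(np.sin(0.5 * t))      # square-wave command

x = xm = 0.0
theta_r = theta_x = 0.0           # adjustable controller parameters
for k in range(len(t)):
    u = theta_r * r[k] + theta_x * x              # adaptive control law
    e = x - xm                                    # model-following error
    # Lyapunov-based adaptation laws (drive the error dynamics toward zero):
    theta_r += dt * (-gamma * e * r[k] * np.sign(b_p))
    theta_x += dt * (-gamma * e * x * np.sign(b_p))
    # Euler integration of the plant and the reference model:
    x  += dt * (a_p * x + b_p * u)
    xm += dt * (a_m * xm + b_m * r[k])

print("adapted parameters:", round(theta_r, 2), round(theta_x, 2),
      " ideal values:", b_m / b_p, (a_m - a_p) / b_p)
```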
Multiple neural states of representation in short-term memory? It's a matter of attention.
Larocque, Joshua J; Lewis-Peacock, Jarrod A; Postle, Bradley R
2014-01-01
Short-term memory (STM) refers to the capacity-limited retention of information over a brief period of time, and working memory (WM) refers to the manipulation and use of that information to guide behavior. In recent years it has become apparent that STM and WM interact and overlap with other cognitive processes, including attention (the selection of a subset of information for further processing) and long-term memory (LTM-the encoding and retention of an effectively unlimited amount of information for a much longer period of time). Broadly speaking, there have been two classes of memory models: systems models, which posit distinct stores for STM and LTM (Atkinson and Shiffrin, 1968; Baddeley and Hitch, 1974); and state-based models, which posit a common store with different activation states corresponding to STM and LTM (Cowan, 1995; McElree, 1996; Oberauer, 2002). In this paper, we will focus on state-based accounts of STM. First, we will consider several theoretical models that postulate, based on considerable behavioral evidence, that information in STM can exist in multiple representational states. We will then consider how neural data from recent studies of STM can inform and constrain these theoretical models. In the process we will highlight the inferential advantage of multivariate, information-based analyses of neuroimaging data (fMRI and electroencephalography (EEG)) over conventional activation-based analysis approaches (Postle, in press). We will conclude by addressing lingering questions regarding the fractionation of STM, highlighting differences between the attention to information vs. the retention of information during brief memory delays.
Choi, Young-Seon; Lawler, Erin; Boenecke, Clayton A; Ponatoski, Edward R; Zimring, Craig M
2011-12-01
This paper reports a review that assessed the effectiveness and characteristics of fall prevention interventions implemented in hospitals. A multi-systemic fall prevention model that establishes a practical framework was developed from the evidence. Falls occur through complex interactions between patient-related and environmental risk factors, suggesting a need for multifaceted fall prevention approaches that address both factors. We searched Medline, CINAHL, PsycInfo and the Web of Science databases for references published between January 1990 and June 2009 and scrutinized secondary references from acquired papers. Due to the heterogeneity of interventions and populations, we conducted a quantitative systematic review without a meta-analysis and used a narrative summary to report findings. From the review, three distinct characteristics of fall prevention interventions emerged: (1) the physical environment, (2) the care process and culture and (3) technology. While clinically significant evidence shows the efficacy of environment-related interventions in reducing falls and fall-related injuries, the literature identified few hospitals that had introduced environment-related interventions in their multifaceted fall intervention strategies. Using the multi-systemic fall prevention model, hospitals should promote a practical strategy that benefits from the collective effects of the physical environment, the care process and culture and technology to prevent falls and fall-related injuries. By doing so, they can more effectively address the various risk factors for falling and therefore, prevent falls. Studies that test the proposed model need to be conducted to establish the efficacy of the model in practice. © 2011 The Authors. Journal of Advanced Nursing © 2011 Blackwell Publishing Ltd.
NASA Technical Reports Server (NTRS)
Bremmer, David M.; Hutcheson, Florence V.; Stead, Daniel J.
2005-01-01
A methodology to eliminate model reflection and system vibration effects from post processed particle image velocimetry data is presented. Reflection and vibration lead to loss of data, and biased velocity calculations in PIV processing. A series of algorithms were developed to alleviate these problems. Reflections emanating from the model surface caused by the laser light sheet are removed from the PIV images by subtracting an image in which only the reflections are visible from all of the images within a data acquisition set. The result is a set of PIV images where only the seeded particles are apparent. Fiduciary marks painted on the surface of the test model were used as reference points in the images. By locating the centroids of these marks it was possible to shift all of the images to a common reference frame. This image alignment procedure as well as the subtraction of model reflection are performed in a first algorithm. Once the images have been shifted, they are compared with a background image that was recorded under no flow conditions. The second and third algorithms find the coordinates of fiduciary marks in the acquisition set images and the background image and calculate the displacement between these images. The final algorithm shifts all of the images so that fiduciary mark centroids lie in the same location as the background image centroids. This methodology effectively eliminated the effects of vibration so that unbiased data could be used for PIV processing. The PIV data used for this work was generated at the NASA Langley Research Center Quiet Flow Facility. The experiment entailed flow visualization near the flap side edge region of an airfoil model. Commercial PIV software was used for data acquisition and processing. In this paper, the experiment and the PIV acquisition of the data are described. The methodology used to develop the algorithms for reflection and system vibration removal is stated, and the implementation, testing and validation of these algorithms are presented.
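The two core operations described above (reflection subtraction and shifting each image so its fiduciary-mark centroid matches the background image) can be sketched with standard array tools. The synthetic frames, threshold value, and mark position below are assumptions; the actual algorithms operated on the facility's PIV images and commercial-software outputs.

```python
import numpy as np
from scipy import ndimage

def remove_reflection(images, reflection_image):
    """Subtract an image containing only the laser-sheet reflections from every
    frame, clipping at zero so that only the seeded particles remain."""
    return [np.clip(img.astype(float) - reflection_image, 0, None) for img in images]

def fiduciary_centroid(image, threshold):
    """Centroid (row, col) of the bright fiduciary mark(s) above a threshold."""
    return np.array(ndimage.center_of_mass(image > threshold))

def align_to_background(image, background, threshold=200):
    """Shift an image so its fiduciary-mark centroid matches the background image."""
    shift = fiduciary_centroid(background, threshold) - fiduciary_centroid(image, threshold)
    return ndimage.shift(image, shift, order=1, mode="nearest")

# Synthetic demonstration: a bright mark near (40, 60), vibrated by a few pixels.
rng = np.random.default_rng(8)
background = np.zeros((128, 128)); background[38:43, 58:63] = 255
frame = np.zeros((128, 128)); frame[41:46, 55:60] = 255       # shifted by (+3, -3)
frame += rng.normal(0, 2, frame.shape)                         # sensor noise
aligned = align_to_background(frame, background)
print("centroid after alignment:", fiduciary_centroid(aligned, 200).round(1))
```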
NASA Astrophysics Data System (ADS)
Zhu, S.; Sartelet, K. N.; Seigneur, C.
2015-06-01
The Size-Composition Resolved Aerosol Model (SCRAM) for simulating the dynamics of externally mixed atmospheric particles is presented. This new model classifies aerosols by both composition and size, based on a comprehensive combination of all chemical species and their mass-fraction sections. All three main processes involved in aerosol dynamics (coagulation, condensation/evaporation and nucleation) are included. The model is first validated by comparison with a reference solution and with results of simulations using internally mixed particles. The degree of mixing of particles is investigated in a box model simulation using data representative of air pollution in Greater Paris. The relative influence on the mixing state of the different aerosol processes (condensation/evaporation, coagulation) and of the algorithm used to model condensation/evaporation (bulk equilibrium, dynamic) is studied.
Sacristan, C J; Dupont, T; Sicot, O; Leclaire, P; Verdière, K; Panneton, R; Gong, X L
2016-10-01
The acoustic properties of an air-saturated macroscopically inhomogeneous aluminum foam in the equivalent fluid approximation are studied. A reference sample built by forcing a highly compressible melamine foam with conical shape inside a constant-diameter rigid tube is studied first. In this process, a radial compression varying with depth is applied. With the help of an assumption on the compressed pore geometry, the properties of the reference sample can be modelled everywhere in the thickness, and it is possible to use the classical transfer matrix method as a theoretical reference. In the mixture approach, the material is viewed as a mixture of two known materials placed in a patchwork configuration, with proportions of each varying with depth. The properties are derived from the use of a mixing law. For the reference sample, the classical transfer matrix method is used to validate the experimental results. These results are used to validate the mixture approach. The mixture approach is then used to characterize a porous aluminum for which only the properties of the external faces are known. A porosity profile is needed and is obtained from a simulated annealing optimization process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, S; Politte, D; O’Sullivan, J
2016-06-15
Purpose: This work aims at reducing the uncertainty in proton stopping power (SP) estimation by a novel combination of a linear, separable basis vector model (BVM) for stopping power calculation (Med Phys 43:600) and a statistical, model-based dual-energy CT (DECT) image reconstruction algorithm (TMI 35:685). The method was applied to experimental data. Methods: BVM assumes the photon attenuation coefficients, electron densities, and mean excitation energies (I-values) of unknown materials can be approximated by a combination of the corresponding quantities of two reference materials. The DECT projection data for a phantom with 5 different known materials were collected on a Philips Brilliance scanner using two scans at 90 kVp and 140 kVp. The line integral alternating minimization (LIAM) algorithm was used to recover the two BVM coefficient images using the measured source spectra. The proton stopping powers are then estimated from the Bethe-Bloch equation using electron densities and I-values derived from the BVM coefficients. The proton stopping powers and proton ranges for the phantom materials estimated via our BVM-based DECT method are compared to ICRU reference values and a post-processing DECT analysis (Yang PMB 55:1343) applied to vendor-reconstructed images using the Torikoshi parametric fit model (tPFM). Results: For the phantom materials, the average stopping power estimations for 175 MeV protons derived from our method are within 1% of the ICRU reference values (except for Teflon with a 1.48% error), with an average standard deviation of 0.46% over pixels. The resultant proton ranges agree with the reference values within 2 mm. Conclusion: Our principled DECT iterative reconstruction algorithm, incorporating optimal beam hardening and scatter corrections, in conjunction with a simple linear BVM model, achieves more accurate and robust proton stopping power maps than the post-processing, nonlinear tPFM-based DECT analysis applied to conventional reconstructions of low and high energy scans. Funding Support: NIH R01CA 75371; NCI grant R01 CA 149305.
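The final step above, evaluating the Bethe equation from a voxel's electron density and I-value, is short enough to sketch. The constants, the assumed I-value of water, and the example voxel values are assumptions; the sketch omits the BVM reconstruction itself and all correction terms to the Bethe stopping number.

```python
import numpy as np

ME_C2 = 0.511e6     # electron rest energy (eV)
MP_C2 = 938.272e6   # proton rest energy (eV)
I_WATER = 75.0      # assumed mean excitation energy of water (eV)

def beta2_gamma2(kinetic_energy_ev):
    """Relativistic beta^2 and gamma^2 for a proton of given kinetic energy."""
    gamma = 1.0 + kinetic_energy_ev / MP_C2
    return 1.0 - 1.0 / gamma ** 2, gamma ** 2

def bethe_number(i_value_ev, beta2, gamma2):
    """Stopping number L = ln(2 m_e c^2 beta^2 gamma^2 / I) - beta^2 (no corrections)."""
    return np.log(2.0 * ME_C2 * beta2 * gamma2 / i_value_ev) - beta2

def stopping_power_ratio(rel_electron_density, i_value_ev, kinetic_energy_ev=175e6):
    """Proton stopping power relative to water from electron density and I-value."""
    beta2, gamma2 = beta2_gamma2(kinetic_energy_ev)
    return (rel_electron_density
            * bethe_number(i_value_ev, beta2, gamma2)
            / bethe_number(I_WATER, beta2, gamma2))

# Hypothetical per-voxel output of a BVM-like map: rho_e = 1.05 (relative to water), I = 80 eV.
print("stopping power ratio:", round(stopping_power_ratio(1.05, 80.0), 4))
```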
Effects of Ionization-Induced Smog on Air Chemistry.
1987-01-30
After leaving the monochromator, the beam was incident on the window of a HgCdTe detector (Infrared Assoc., Inc.), hereafter referred to as the...regulated power supply (Model 68735). The intensity of infrared light produced was measured by a thermopile detector (ORIEL Model 7102), hereafter...process measurements from the bolometer and detector and to control the stepper motor were located outside of the exposure room so as to reduce noise
2011-09-08
slab model Reference: Knutson, T.R., A.J. Broccoli, B.J. Soden, R. Gudgel, R. Hemler, S.A. Weber, and M. Winton, 2005: Equilibrium sensitivity of...Stouffer, R.J., A.J. Broccoli, T.L. Delworth, K.W. Dixon, R. Gudgel, I. Held, T. Knutson, H-C Lee, M.D. Schwarzkopf, B. Soden, M.J. Spelman, M. Winton
Estimating Air-Manganese Exposures in Two Ohio Towns ...
Manganese (Mn), a nutrient required for normal metabolic function, is also a persistent air pollutant and a known neurotoxin at high concentrations. Elevated exposures can result in a number of motor and cognitive deficits. Quantifying chronic personal exposures in residential populations studied by environmental epidemiologists can be time-consuming and expensive. We developed an approach for quantifying chronic exposures for two towns (Marietta and East Liverpool, Ohio) with elevated air Mn concentrations (air-Mn) related to ambient emissions from industrial processes. This was accomplished through the use of measured and modeled data in the communities studied. A novel approach was developed because one of the facilities lacked emissions data for the purposes of modeling. A unit emission rate was assumed over the surface area of both source facilities, and offsite concentrations at receptor residences and air monitoring sites were estimated with the American Meteorological Society/Environmental Protection Agency Regulatory Model (AERMOD). Ratios of all modeled receptor points were created, and a long-running air monitor was identified as a reference location. All ratios were normalized to the reference location. Long-term averages at all residential receptor points were calculated using the modeled ratios and data from the reference monitoring location. Modeled five-year average air-Mn exposures ranged from 0.03 to 1.61 µg/m3 in Marietta and 0.01 to 6.32 µg/m3 in East Liverpool.
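The ratio-based scaling described above reduces to a one-line calculation once the unit-emission dispersion surface and the reference monitor's long-term average are in hand. The receptor and monitor values below are invented placeholders, not the Marietta or East Liverpool model outputs.

```python
import numpy as np

def residential_exposures(modeled_receptors, modeled_reference, measured_reference_avg):
    """Scale a unit-emission dispersion-model surface to long-term exposures:
    exposure_i = (modeled_i / modeled_reference) * measured long-term average
    at the reference monitoring location."""
    ratios = np.asarray(modeled_receptors, dtype=float) / modeled_reference
    return ratios * measured_reference_avg

# Hypothetical values: AERMOD unit-emission concentrations at three residences and
# at the long-running reference monitor, plus its measured 5-year mean (ug/m3).
modeled_receptors = [0.8, 2.4, 5.1]    # arbitrary model units
modeled_reference = 2.0
measured_reference_avg = 0.25          # ug/m3

print(residential_exposures(modeled_receptors, modeled_reference, measured_reference_avg))
```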
Lu, Hai-yan; Wang, Shi-sheng; Cai, Rui; Meng, Yu; Xie, Xin; Zhao, Wei-jie
2012-02-05
With the application of near-infrared spectroscopy (NIRS), a convenient and rapid method for the determination of alkaloids in Corydalis Tuber extract and for the classification of samples from different locations has been developed. Five different samples were collected according to their geographical origin. 2-Der (second derivative) pre-treatment with 17 smoothing points was applied, the 1st-to-scaling range algorithm was selected as the optimal approach, and the classification model was constructed over the wavelength ranges of 4582-4270 cm⁻¹, 5562-4976 cm⁻¹ and 7000-7467 cm⁻¹ with a high recognition rate. For the prediction model, the partial least squares (PLS) algorithm was used with HPLC-UV as the reference method, and the optimum models were obtained after adjustment. The pre-processing methods of the calibration models were COE for protopine, min-max normalization for palmatine, and MSC for tetrahydropalmatine, respectively. The root mean square errors of cross-validation (RMSECV) for protopine, palmatine and tetrahydropalmatine were 0.884, 1.83 and 3.23 mg/g, respectively. The correlation coefficients (R²) were 99.75, 98.41 and 97.34%. A t-test was applied; for the tetrahydropalmatine model, there was no significant difference between the NIR prediction and the HPLC reference method at the 95% confidence interval (t=0.746).
Large-scale seismic waveform quality metric calculation using Hadoop
Magana-Zook, Steven; Gaylord, Jessie M.; Knapp, Douglas R.; ...
2016-05-27
In this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data, of which 5.1 TB were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of ~0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. We conducted these experiments multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster because the I/O performance cannot be made to scale. Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will likely require significant changes in other parts of our infrastructure. Nevertheless, we anticipate that as the technology matures and third-party tool vendors make it easier to manage and operate clusters, Hadoop (or a successor) will play a large role in our seismic data processing.
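To make the Spark-based processing pattern concrete, the sketch below maps a toy per-file quality metric over a list of waveform files. The file format (raw 32-bit floats), the metrics, and the `paths.txt` input are assumptions purely for illustration; a production job would read miniSEED with a seismology library and compute the project's actual metrics.

```python
import numpy as np
from pyspark import SparkContext

def quality_metrics(path):
    """Toy per-file metrics (RMS amplitude and fraction of zero samples).
    Assumes plain binary files of 32-bit floats, purely for illustration."""
    data = np.fromfile(path, dtype=np.float32)
    if data.size == 0:
        return path, None
    rms = float(np.sqrt(np.mean(data ** 2)))
    dropout_fraction = float(np.mean(data == 0.0))
    return path, {"rms": rms, "dropout_fraction": dropout_fraction}

if __name__ == "__main__":
    sc = SparkContext(appName="waveform-quality-metrics")
    # 'paths.txt' is a hypothetical list of waveform file paths, one per line,
    # readable from every worker node (e.g. on a shared filesystem).
    paths = sc.textFile("paths.txt")
    results = paths.map(quality_metrics).filter(lambda kv: kv[1] is not None)
    for path, metrics in results.take(5):
        print(path, metrics)
    sc.stop()
```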
Perceptual video quality assessment in H.264 video coding standard using objective modeling.
Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu
2014-01-01
Since usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute the perceptual video quality metric based on a no-reference method. Because of the subtle differences between the original video and the encoded video, the quality of the encoded picture is degraded; this quality difference is introduced by encoding processes such as intra- and inter-prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective modeling of these artifacts into a subjective quality estimate is proposed. The proposed model calculates the objective quality metric using the subjective impairments blockiness, blur and jerkiness, compared to the existing bitrate-only calculation defined in the ITU G.1070 model. The accuracy of the proposed perceptual video quality metrics is compared against popular full-reference objective methods as defined by VQEG.
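A no-reference score of this kind is typically a weighted mapping of impairment measures onto a MOS-like scale. The sketch below shows one plausible form with simple blockiness, blur, and jerkiness measures; the weights, thresholds, and mapping are invented and are not the paper's model.

```python
import numpy as np

def blockiness(frame, block=8):
    """Mean luminance discontinuity across 8x8 block boundaries (higher = blockier)."""
    v = np.abs(np.diff(frame, axis=1))[:, block - 1::block].mean()
    h = np.abs(np.diff(frame, axis=0))[block - 1::block, :].mean()
    return (v + h) / 2.0

def blur(frame):
    """Inverse of the average gradient magnitude (higher = blurrier)."""
    gy, gx = np.gradient(frame.astype(float))
    return 1.0 / (np.sqrt(gx ** 2 + gy ** 2).mean() + 1e-6)

def jerkiness(frames):
    """Fraction of frame pairs with almost no change (repeated/dropped frames)."""
    diffs = [np.abs(b - a).mean() for a, b in zip(frames[:-1], frames[1:])]
    return float(np.mean(np.array(diffs) < 0.5))

def quality_score(frames, w=(0.4, 0.4, 0.2)):
    """Map the three impairments onto a 1-5 MOS-like scale with assumed weights."""
    impairment = (w[0] * blockiness(frames[0]) / 10.0
                  + w[1] * blur(frames[0])
                  + w[2] * jerkiness(frames))
    return float(np.clip(5.0 - 4.0 * np.tanh(impairment), 1.0, 5.0))

# Synthetic 10-frame clip of random luminance, 64x64 pixels.
rng = np.random.default_rng(9)
frames = [rng.integers(0, 256, (64, 64)).astype(float) for _ in range(10)]
print("no-reference quality score:", round(quality_score(frames), 2))
```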
Modeling and closed-loop control of hypnosis by means of bispectral index (BIS) with isoflurane.
Gentilini, A; Rossoni-Gerosa, M; Frei, C W; Wymann, R; Morari, M; Zbinden, A M; Schnider, T W
2001-08-01
A model-based closed-loop control system is presented to regulate hypnosis with the volatile anesthetic isoflurane. Hypnosis is assessed by means of the bispectral index (BIS), a processed parameter derived from the electroencephalogram. Isoflurane is administered through a closed-circuit respiratory system. The model for control was identified on a population of 20 healthy volunteers. It consists of three parts: a model for the respiratory system, a pharmacokinetic model and a pharmacodynamic model to predict BIS at the effect compartment. A cascaded internal model controller is employed. The master controller compares the actual BIS and the reference value set by the anesthesiologist and provides expired isoflurane concentration references to the slave controller. The slave controller maneuvers the fresh gas anesthetic concentration entering the respiratory system. The controller is designed to adapt to different respiratory conditions. Anti-windup measures protect against performance degradation in the event of saturation of the input signal. Fault detection schemes in the controller cope with BIS and expired concentration measurement artifacts. The results of clinical studies on humans are presented.
Weiss, Michael
2017-06-01
Appropriate model selection is important in fitting oral concentration-time data due to the complex character of the absorption process. When IV reference data are available, the problem is the selection of an empirical input function (absorption model). In the present examples a weighted sum of inverse Gaussian density functions (IG) was found most useful. It is shown that alternative models (gamma and Weibull density) are only valid if the input function is log-concave. Furthermore, it is demonstrated for the first time that the sum-of-IGs model can also be applied to fit oral data directly (without IV data). In the present examples, a weighted sum of two or three IGs was sufficient. From the parameters of this function, the model-independent measures AUC and mean residence time can be calculated. It turned out that a good fit of the data in the terminal phase is essential to avoid biased parameter estimates. The time course of the fractional elimination rate and the concept of log-concavity have proved to be useful tools in model selection.
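A weighted sum of two inverse Gaussian densities is easy to write down and fit. The parameterization below uses the standard (mean, shape) form of the IG density, and the synthetic "data" fit the input shape alone rather than a full concentration-time profile; all parameter values are invented for illustration and do not reproduce the paper's examples.

```python
import numpy as np
from scipy.optimize import curve_fit

def ig_density(t, mean, lam):
    """Inverse Gaussian density with mean 'mean' and shape parameter 'lam'."""
    t = np.maximum(t, 1e-9)
    return np.sqrt(lam / (2 * np.pi * t ** 3)) * np.exp(-lam * (t - mean) ** 2 / (2 * mean ** 2 * t))

def two_ig_input(t, p, m1, l1, m2, l2):
    """Weighted sum of two inverse Gaussians as an empirical absorption input function."""
    return p * ig_density(t, m1, l1) + (1 - p) * ig_density(t, m2, l2)

# Synthetic "observations" of the input function with multiplicative noise.
t = np.linspace(0.1, 24, 60)
true = two_ig_input(t, 0.7, 1.5, 3.0, 6.0, 8.0)
obs = true * np.random.default_rng(10).lognormal(0, 0.05, t.size)

popt, _ = curve_fit(two_ig_input, t, obs, p0=[0.5, 1.0, 2.0, 5.0, 5.0],
                    bounds=([0, 0.1, 0.1, 0.1, 0.1], [1, 20, 50, 30, 50]))
print("fitted [p, m1, l1, m2, l2]:", np.round(popt, 2))
```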
2015-12-01
FINAL REPORT: Integrated spatial models of non-native plant invasion, fire risk, and wildlife habitat to support conservation of military and... Report type: Final; dates covered: 26/4/2010 – 25/10/2015.
Framework for assessing key variable dependencies in loose-abrasive grinding and polishing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, J.S.; Aikens, D.M.; Brown, N.J.
1995-12-01
This memo describes a framework for identifying all key variables that determine the figuring performance of loose-abrasive lapping and polishing machines. This framework is intended as a tool for prioritizing R&D issues, assessing the completeness of process models and experimental data, and for providing a mechanism to identify any assumptions in analytical models or experimental procedures. Future plans for preparing analytical models or performing experiments can refer to this framework in establishing the context of the work.
Fluet, Norman R.; Reis, Michael D.; Stern, Charles H.; Thompson, Alexander W.; Jolly, Gillian A.
2016-01-01
The integration of behavioral health services in primary care has been referred to in many ways, but ultimately refers to common structures and processes. Behavioral health is integrated into primary care because it increases the effectiveness and efficiency of providing care and reduces costs in the care of primary care patients. Reimbursement is one factor, if not the main factor, that determines the level of integration that can be achieved. The federal health reform agenda supports changes that will eventually permit behavioral health to be fully integrated and will allow the health of the population to be the primary target of intervention. In an effort to develop more integrated services at Baylor Scott and White Healthcare, models of integration are reviewed and the advantages and disadvantages of each model are discussed. Recommendations to increase integration include adopting a disease management model with care management, planned guideline-based stepped care, follow-up, and treatment monitoring. Population-based interventions can be completed at the pace of the development of alternative reimbursement methods. The program should be based upon patient-centered medical home standards, and research is needed throughout the program development process. PMID:27034543
Reynolds, L P; Borowicz, P P; Caton, J S; Vonnahme, K A; Luther, J S; Hammer, C J; Maddock Carlin, K R; Grazul-Bilska, A T; Redmer, D A
2010-04-01
Developmental programming refers to the programming of various bodily systems and processes by a stressor of the maternal system during pregnancy or during the neonatal period. Such stressors include nutritional stress, multiple pregnancy (i.e., increased numbers of fetuses in the gravid uterus), environmental stress (e.g., high environmental temperature, high altitude, prenatal steroid exposure), gynecological immaturity, and maternal or fetal genotype. Programming refers to impaired function of numerous bodily systems or processes, leading to poor growth, altered body composition, metabolic dysfunction, and poor productivity (e.g., poor growth, reproductive dysfunction) of the offspring throughout their lifespan and even across generations. A key component of developmental programming seems to be placental dysfunction, leading to altered fetal growth and development. We discuss various large animal models of developmental programming and how they have and will continue to contribute to our understanding of the mechanisms underlying altered placental function and developmental programming, and, further, how large animal models also will be critical to the identification and application of therapeutic strategies that will alleviate the negative consequences of developmental programming to improve offspring performance in livestock production and human medicine.
Language and vertical space: on the automaticity of language action interconnections.
Dudschig, Carolin; de la Vega, Irmgard; De Filippis, Monica; Kaup, Barbara
2014-09-01
Grounded models of language processing propose a strong connection between language and sensorimotor processes (Barsalou, 1999, 2008; Glenberg & Kaschak, 2002). However, it remains unclear how functional and automatic these connections are for understanding diverse sets of words (Ansorge, Kiefer, Khalid, Grassl, & König, 2010). Here, we investigate whether words referring to entities with a typical location in the upper or lower visual field (e.g., sun, ground) automatically influence subsequent motor responses even when language-processing levels are kept minimal. The results show that even subliminally presented words influence subsequent actions, as can be seen in a reversed compatibility effect. These findings have several implications for grounded language processing models. Specifically, these results suggest that language-action interconnections are not only the result of strategic language processes, but already play an important role during pre-attentional language processing stages. Copyright © 2014 Elsevier Ltd. All rights reserved.
Precomputing Process Noise Covariance for Onboard Sequential Filters
NASA Technical Reports Server (NTRS)
Olson, Corwin G.; Russell, Ryan P.; Carpenter, J. Russell
2017-01-01
Process noise is often used in estimation filters to account for unmodeled and mismodeled accelerations in the dynamics. The process noise covariance acts to inflate the state covariance over propagation intervals, increasing the uncertainty in the state. In scenarios where the acceleration errors change significantly over time, the standard process noise covariance approach can fail to provide effective representation of the state and its uncertainty. Consider covariance analysis techniques provide a method to precompute a process noise covariance profile along a reference trajectory using known model parameter uncertainties. The process noise covariance profile allows significantly improved state estimation and uncertainty representation over the traditional formulation. As a result, estimation performance on par with the consider filter is achieved for trajectories near the reference trajectory without the additional computational cost of the consider filter. The new formulation also has the potential to significantly reduce the trial-and-error tuning currently required of navigation analysts. A linear estimation problem as described in several previous consider covariance analysis studies is used to demonstrate the effectiveness of the precomputed process noise covariance, as well as a nonlinear descent scenario at the asteroid Bennu with optical navigation.
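A minimal sketch of how a precomputed process noise covariance profile might be consumed by a sequential (Kalman-type) filter is shown below: during each propagation interval the filter looks up Q for the current reference-trajectory time and adds it to the propagated covariance. The dynamics matrix, the Q profile, and the lookup scheme are hypothetical placeholders, not the formulation of the paper.

```python
import numpy as np

# Hypothetical precomputed profile: times (s) and 2x2 process noise covariances.
q_times = np.array([0.0, 60.0, 120.0])
q_profile = np.stack([np.diag([1e-6, 1e-8]),
                      np.diag([5e-6, 5e-8]),
                      np.diag([1e-5, 1e-7])])

def lookup_Q(t):
    """Piecewise-constant lookup of the precomputed process noise covariance."""
    i = np.searchsorted(q_times, t, side="right") - 1
    return q_profile[max(i, 0)]

def propagate(x, P, t, dt):
    F = np.array([[1.0, dt], [0.0, 1.0]])  # toy constant-velocity dynamics
    x = F @ x
    P = F @ P @ F.T + lookup_Q(t)          # Q inflates the state covariance over the interval
    return x, P

x, P = np.array([0.0, 1.0]), np.eye(2) * 1e-4
for k in range(120):
    x, P = propagate(x, P, t=k * 1.0, dt=1.0)
```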
On Meaningful Measurement: Concepts, Technology and Examples.
ERIC Educational Resources Information Center
Cheung, K. C.
This paper discusses how concepts and procedural skills in problem-solving tasks, as well as affects and emotions, can be subjected to meaningful measurement (MM), based on a multisource model of learning and a constructivist information-processing theory of knowing. MM refers to the quantitative measurement of conceptual and procedural knowledge…
Welsh Bilinguals' English Spelling: An Error Analysis.
ERIC Educational Resources Information Center
James, Carl; And Others
1993-01-01
The extent to which the second-language English spelling of young Welsh-English bilinguals is systematically idiosyncratic was examined from free compositions written by 10- to 11-year-old children. A model is presented of the second-language spelling process in the form of a "decision tree." (Contains 29 references.) (Author/LB)
Classroom Evaluation of a Rapid Prototyping System.
ERIC Educational Resources Information Center
Tennyson, Stephen A.; Krueger, Thomas J.
2001-01-01
Introduces rapid prototyping which creates virtual models through a variety of automated material additive processes. Relates experiences using JP System 5 in freshman and sophomore engineering design graphics courses. Analyzes strengths and limitations of the JP System 5 and discusses how to use it effectively. (Contains 15 references.)…
Engaging Students with Disabilities: Using Student Response Technology in Elementary Classrooms
ERIC Educational Resources Information Center
Watson, Tiffany
2017-01-01
Student engagement refers to the behaviors that suggest whether a student is interested in the learning process. Finn (1989) developed the participation-identification model to explain the correlation between student engagement and identification with school, suggesting that increased participation leads to an increased sense of belongingness in…
Definitions: Health, Fitness, and Physical Activity.
ERIC Educational Resources Information Center
Corbin, Charles B.; Pangrazi, Robert P.; Franks, B. Don
2000-01-01
This paper defines a variety of fitness components, using a simple multidimensional hierarchical model that is consistent with recent definitions in the literature. It groups the definitions into two broad categories: product and process. Products refer to states of being such as physical fitness, health, and wellness. They are commonly referred…
The next generation of training for Arabidopsis researchers: bioinformatics and quantitative biology
USDA-ARS?s Scientific Manuscript database
It has been more than 50 years since Arabidopsis (Arabidopsis thaliana) was first introduced as a model organism to understand basic processes in plant biology. A well-organized scientific community has used this small reference plant species to make numerous fundamental plant biology discoveries (P...
Conversational Coherency. Technical Report No. 95.
ERIC Educational Resources Information Center
Reichman, Rachel
To analyze the process involved in maintaining conversational coherency, the study described in this paper used a construct called a "context space" that grouped utterances referring to a single issue or episode. The paper defines the types of context spaces, parses individual conversations to identify the underlying model or structure,…
Decision Making: New Paradigm for Education.
ERIC Educational Resources Information Center
Wales, Charles E.; And Others
1986-01-01
Defines education's new paradigm as schooling based on decision making, the critical thinking skills serving it, and the knowledge base supporting it. Outlines a model decision-making process using a hypothetical breakfast problem; a late riser chooses goals, generates ideas, develops an action plan, and implements and evaluates it. (4 references)…
Developing Interdisciplinary Units: Strategies and Examples.
ERIC Educational Resources Information Center
McDonald, Jacqueline; Czerniak, Charlene
1994-01-01
A theme of sharks is used to illustrate the process of developing interdisciplinary units for middle school instruction, including a model for teams of teachers to follow. As activities evolve, a concept map is created to illustrate relationships and integration of ideas and activities for various disciplines. (Contains 10 references.) (MKR)
Metaphor, computing systems, and active learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carroll, J.M.; Mack, R.L.
1982-01-01
The authors discuss the learning process that is directed towards particular goals and is initiated by the learner, through which metaphors become relevant and effective in learning. This allows an analysis of metaphors that explains why metaphors are incomplete and open-ended, and how this stimulates the construction of mental models. 9 references.
An Investigation of the Relationship between Cognitive Reactivity and Rumination
ERIC Educational Resources Information Center
Moulds, Michelle L.; Kandris, Eva; Williams, Alishia D.; Lang, Tamara; Yap, Carol; Hoffmeister, Karolin
2008-01-01
Teasdale's (Teasdale, J.D. (1988). Cognitive vulnerability to persistent depression. "Cognition and Emotion," 2, 247-274) differential activation hypothesis refers to the ease with which maladaptive cognitive processes are triggered by mild dysphoria as "cognitive reactivity." Supporting this model is evidence of a differential association between…
Amy C. Ganguli; Johathan B. Haufler; Carolyn A. Mehl; Jimmie D. Chew
2011-01-01
Understanding historical ecosystem diversity and wildlife habitat quality can provide a useful reference for managing and restoring rangeland ecosystems. We characterized historical ecosystem diversity using available empirical data, expert opinion, and the spatially explicit vegetation dynamics model SIMPPLLE (SIMulating Vegetative Patterns and Processes at Landscape...
Requirements for data integration platforms in biomedical research networks: a reference model.
Ganzinger, Matthias; Knaup, Petra
2015-01-01
Biomedical research networks need to integrate research data among their members and with external partners. To support such data sharing activities, an adequate information technology infrastructure is necessary. To facilitate the establishment of such an infrastructure, we developed a reference model for the requirements. The reference model consists of five reference goals and 15 reference requirements. Using the Unified Modeling Language, the goals and requirements are set into relation to each other. In addition, all goals and requirements are described textually in tables. This reference model can be used by research networks as a basis for a resource efficient acquisition of their project specific requirements. Furthermore, a concrete instance of the reference model is described for a research network on liver cancer. The reference model is transferred into a requirements model of the specific network. Based on this concrete requirements model, a service-oriented information technology architecture is derived and also described in this paper.
Strategic Technology Selection and Classification in Multimodel Environments
2008-05-01
... the organization chose a blend of CMMI (portions of which were being implemented in engineering), ISO 12207, and the SA-CMM as reference models to accomplish goals such as whole-product support and improved product field performance. Other models and standards appearing in the multimodel environment include Lean Six Sigma, SOX, ISO/IEC 15288, ISO 9000, P-CMM, AS9100, IEEE 830, PSM, IDEAL, the Lockheed Martin Integrated Enterprise Process, and Agile methods. Figure and table fragments from the original report are omitted.
40 CFR 63.1322 - Batch process vents-reference control technology.
Code of Federal Regulations, 2013 CFR
2013-07-01
Title 40 (Protection of Environment): National Emission Standards for Hazardous Air Pollutant Emissions: Group IV Polymers and Resins, § 63.1322 Batch process vents—reference control technology. (a) Batch process vents. The owner or...
40 CFR 63.1322 - Batch process vents-reference control technology.
Code of Federal Regulations, 2012 CFR
2012-07-01
Title 40 (Protection of Environment), Environmental Protection Agency air programs: § 63.1322 Batch process vents—reference control technology. (a) Batch process vents. The owner or operator of a...
40 CFR 63.1322 - Batch process vents-reference control technology.
Code of Federal Regulations, 2014 CFR
2014-07-01
Title 40 (Protection of Environment), Environmental Protection Agency air programs: § 63.1322 Batch process vents—reference control technology. (a) Batch process vents. The owner or...
40 CFR 63.1322 - Batch process vents-reference control technology.
Code of Federal Regulations, 2011 CFR
2011-07-01
Title 40 (Protection of Environment), Environmental Protection Agency air programs: § 63.1322 Batch process vents—reference control technology. (a) Batch process vents. The owner or operator of a...
40 CFR 63.1322 - Batch process vents-reference control technology.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 40 (Protection of Environment), Environmental Protection Agency air programs: § 63.1322 Batch process vents—reference control technology. (a) Batch process vents. The owner or operator of a...
Sapunar, Damir; Grković, Ivica; Lukšić, Davor; Marušić, Matko
2016-05-01
Our aim was to describe a comprehensive model of internal quality management (QM) at a medical school, founded on a business process analysis (BPA) software tool. The BPA software tool was used as the core element for describing all working processes in our medical school, and the resulting system subsequently served as the comprehensive model of internal QM. The quality management system at the University of Split School of Medicine included the documentation and analysis of all business processes within the School. The analysis revealed 80 weak points related to one or several business processes. A precise analysis of medical school business processes allows identification of unfinished, unclear and inadequate points in these processes, and subsequently the respective improvements, an increase in the QM level and, ultimately, a rationalization of the institution's work. Our approach offers a potential reference model for the development of a common QM framework allowing continuous quality control, i.e. adjustment and adaptation to the contemporary educational needs of medical students. Copyright © 2016 by Academy of Sciences and Arts of Bosnia and Herzegovina.
The unrealized promise of infant statistical word-referent learning
Smith, Linda B.; Suanda, Sumarga H.; Yu, Chen
2014-01-01
Recent theory and experiments offer a new solution as to how infant learners may break into word learning, by using cross-situational statistics to find the underlying word-referent mappings. Computational models demonstrate the in-principle plausibility of this statistical learning solution and experimental evidence shows that infants can aggregate and make statistically appropriate decisions from word-referent co-occurrence data. We review these contributions and then identify the gaps in current knowledge that prevent a confident conclusion about whether cross-situational learning is the mechanism through which infants break into word learning. We propose an agenda to address that gap that focuses on detailing the statistics in the learning environment and the cognitive processes that make use of those statistics. PMID:24637154
Application of 3d Model of Cultural Relics in Virtual Restoration
NASA Astrophysics Data System (ADS)
Zhao, S.; Hou, M.; Hu, Y.; Zhao, Q.
2018-04-01
In the traditional process of reassembling cultural relics, experts must manually fit the existing fragments together in order to identify their correct spatial locations. The repeated contact between fragments can easily cause secondary damage to the relics. In this paper, a workflow for applying 3D models of cultural relics in virtual restoration is put forward, and the relevant processes and ideas are verified using Terracotta Warriors data as an example. By combining traditional restoration methods with computer virtual reality technology, virtual restoration based on high-precision 3D models of cultural relics can provide a scientific reference for restoration and avoid the secondary damage caused by improper physical handling. The efficiency and safety of the preservation and restoration of cultural relics are thereby improved.
Integrative change model in psychotherapy: Perspectives from Indian thought.
Manickam, L S S
2013-01-01
Different psychotherapeutic approaches claim positive changes in patients as a result of therapy. Explanations related to the change process have led to different change models. Some of the change models are experimentally oriented whereas some are theoretical. Apart from the core behavioral, psychodynamic, humanistic, cognitive and spiritually oriented models, there are specific models within psychotherapy that explain the change process. The integrative theory of a person as depicted in Indian thought provides a common ground for the integration of various therapies. An integrative model of change based on Indian thought, with specific reference to psychological concepts in the Upanishads, Ayurveda, the Bhagavad Gita and Yoga, is presented. Appropriate psychological tools may be developed in order to help clinicians choose the techniques that match the problem and the origin of the dimension. Explorations have to be conducted to develop more techniques that are culturally appropriate and clinically useful. Research has to be initiated to validate the identified concepts.
Direct-to-digital holography reduction of reference hologram noise and Fourier space smearing
Voelkl, Edgar
2006-06-27
Systems and methods are described for reduction of reference hologram noise and reduction of Fourier space smearing, especially in the context of direct-to-digital holography (off-axis interferometry). A method of reducing reference hologram noise includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference image waves; and transforming the corresponding plurality of reference image waves into a reduced noise reference image wave. A method of reducing smearing in Fourier space includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference complex image waves; transforming the corresponding plurality of reference image waves into a reduced noise reference complex image wave; recording a hologram of an object; processing the hologram of the object into an object complex image wave; and dividing the complex image wave of the object by the reduced noise reference complex image wave to obtain a reduced smearing object complex image wave.
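The sketch below illustrates the general idea of combining several reference image waves into a lower-noise reference and then dividing an object wave by it; complex averaging is used here as the combination step, and the sideband-extraction window is a toy placeholder, so neither should be read as the specific transform claimed in the patent.

```python
import numpy as np

def to_image_wave(hologram):
    """Toy sideband extraction: FFT, keep a window around an assumed carrier, inverse FFT."""
    F = np.fft.fftshift(np.fft.fft2(hologram))
    n = F.shape[0]
    win = np.zeros_like(F)
    win[n//2 + n//8 : n//2 + 3*n//8, n//2 + n//8 : n//2 + 3*n//8] = 1.0
    return np.fft.ifft2(np.fft.ifftshift(F * win))

rng = np.random.default_rng(0)
refs = [rng.normal(size=(128, 128)) for _ in range(8)]   # stand-ins for recorded reference holograms
ref_waves = [to_image_wave(h) for h in refs]
ref_reduced = np.mean(ref_waves, axis=0)                  # reduced-noise reference image wave

obj_wave = to_image_wave(rng.normal(size=(128, 128)))     # stand-in for the object hologram
corrected = obj_wave / (ref_reduced + 1e-12)              # divide out the reference wave
```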
Nielsen, J D; Dean, C B
2008-09-01
A flexible semiparametric model for analyzing longitudinal panel count data arising from mixtures is presented. Panel count data refers here to count data on recurrent events collected as the number of events that have occurred within specific follow-up periods. The model assumes that the counts for each subject are generated by mixtures of nonhomogeneous Poisson processes with smooth intensity functions modeled with penalized splines. Time-dependent covariate effects are also incorporated into the process intensity using splines. Discrete mixtures of these nonhomogeneous Poisson process spline models extract functional information from underlying clusters representing hidden subpopulations. The motivating application is an experiment to test the effectiveness of pheromones in disrupting the mating pattern of the cherry bark tortrix moth. Mature moths arise from hidden, but distinct, subpopulations and monitoring the subpopulation responses was of interest. Within-cluster random effects are used to account for correlation structures and heterogeneity common to this type of data. An estimating equation approach to inference requiring only low moment assumptions is developed and the finite sample properties of the proposed estimating functions are investigated empirically by simulation.
A brief introduction to mixed effects modelling and multi-model inference in ecology.
Harrison, Xavier A; Donaldson, Lynda; Correa-Cano, Maria Eugenia; Evans, Julian; Fisher, David N; Goodwin, Cecily E D; Robinson, Beth S; Hodgson, David J; Inger, Richard
2018-01-01
The use of linear mixed effects models (LMMs) is increasingly common in the analysis of biological data. Whilst LMMs offer a flexible approach to modelling a broad range of data types, ecological data are often complex and require complex model structures, and the fitting and interpretation of such models is not always straightforward. The ability to achieve robust biological inference requires that practitioners know how and when to apply these tools. Here, we provide a general overview of current methods for the application of LMMs to biological data, and highlight the typical pitfalls that can be encountered in the statistical modelling process. We tackle several issues regarding methods of model selection, with particular reference to the use of information theory and multi-model inference in ecology. We offer practical solutions and direct the reader to key references that provide further technical detail for those seeking a deeper understanding. This overview should serve as a widely accessible code of best practice for applying LMMs to complex biological problems and model structures, and in doing so improve the robustness of conclusions drawn from studies investigating ecological and evolutionary questions.
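As a minimal illustration of fitting an LMM of the kind discussed, the sketch below uses statsmodels to fit a random-intercept model; the variable names, grouping factor, and simulated data are hypothetical, and the model-selection and information-theoretic steps discussed in the paper are not shown.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_sites, n_per = 12, 20
site = np.repeat(np.arange(n_sites), n_per)
site_effect = rng.normal(0.0, 1.0, n_sites)[site]        # random intercept per site
x = rng.normal(size=n_sites * n_per)
y = 2.0 + 0.5 * x + site_effect + rng.normal(0.0, 0.5, n_sites * n_per)
df = pd.DataFrame({"y": y, "x": x, "site": site})

# Random-intercept LMM: y ~ x with a random intercept for each site.
model = smf.mixedlm("y ~ x", df, groups=df["site"])
fit = model.fit()
print(fit.summary())
```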
Availability Control for Means of Transport in Decisive Semi-Markov Models of Exploitation Process
NASA Astrophysics Data System (ADS)
Migawa, Klaudiusz
2012-12-01
The issues presented in this research paper refer to problems connected with controlling the exploitation process implemented in complex systems of exploitation for technical objects. The article describes a method for controlling the availability of technical objects (means of transport) on the basis of a mathematical model of the exploitation process, implemented with semi-Markov decision processes. The presented method consists in first preparing the decision (semi-Markov) model of the exploitation process for the technical objects and then specifying the best (optimal) control strategy from among the possible decision variants, in accordance with the approved criterion (or criteria) for evaluating the operation of the exploitation system. In this method, specifying the optimal strategy for controlling the availability of the technical objects means choosing a sequence of control decisions, made in the individual states of the modelled exploitation process, for which the function constituting the evaluation criterion reaches its extreme value. A genetic algorithm was chosen to search for the optimal control strategy. The results are presented on the example of the exploitation process of means of transport operating in a real municipal bus transport system. The model of the exploitation process was prepared on the basis of data from this real transport system and was built under the assumption that the process is a homogeneous semi-Markov process.
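To make the strategy-search idea concrete, the sketch below runs a tiny genetic algorithm over strategies, where a strategy assigns one of a few control decisions to each state of the modelled process; the state count, decision set, and the stub evaluation function standing in for the semi-Markov availability criterion are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
N_STATES, N_DECISIONS = 6, 3   # hypothetical model size
reward = rng.uniform(0.0, 1.0, size=(N_STATES, N_DECISIONS))  # stub per-state decision payoffs

def evaluate(strategy):
    """Stub criterion: stands in for availability computed from the semi-Markov model."""
    return reward[np.arange(N_STATES), strategy].sum()

def genetic_search(pop_size=30, generations=100, p_mut=0.1):
    pop = rng.integers(N_DECISIONS, size=(pop_size, N_STATES))
    for _ in range(generations):
        fitness = np.array([evaluate(s) for s in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]   # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, N_STATES)                    # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            mask = rng.random(N_STATES) < p_mut                # mutation
            child[mask] = rng.integers(N_DECISIONS, size=mask.sum())
            children.append(child)
        pop = np.array(children)
    best = max(pop, key=evaluate)
    return best, evaluate(best)

best_strategy, best_value = genetic_search()
```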
NASA Astrophysics Data System (ADS)
Tanimoto, Jun
2016-11-01
Inspired by the commonly observed real-world fact that people tend to behave in a somewhat random manner after facing interim equilibrium to break a stalemate situation whilst seeking a higher output, we established two models of the spatial prisoner's dilemma. One presumes that an agent commits action errors, while the other assumes that an agent refers to a payoff matrix with an added random noise instead of an original payoff matrix. A numerical simulation revealed that mechanisms based on the annealing of randomness due to either the action error or the payoff noise could significantly enhance the cooperation fraction. In this study, we explain the detailed enhancement mechanism behind the two models by referring to the concepts that we previously presented with respect to evolutionary dynamic processes under the names of enduring and expanding periods.
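The sketch below is a toy spatial prisoner's dilemma on a lattice in which each agent evaluates its payoff with an added noise term whose amplitude is annealed toward zero over the generations, loosely mirroring the second model described above; the lattice size, payoff values, imitate-the-best-neighbour update rule, and annealing schedule are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)
L, GENERATIONS = 50, 200
T, R, P, S = 1.5, 1.0, 0.0, 0.0           # hypothetical weak-dilemma payoffs
grid = rng.integers(2, size=(L, L))        # 1 = cooperate, 0 = defect

def payoffs(grid, noise_amp):
    pay = np.zeros_like(grid, dtype=float)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = np.roll(grid, shift, axis=(0, 1))
        pay += np.where(grid == 1, np.where(nb == 1, R, S), np.where(nb == 1, T, P))
    return pay + noise_amp * rng.normal(size=grid.shape)   # annealed payoff noise

for g in range(GENERATIONS):
    noise_amp = 0.5 * (1.0 - g / GENERATIONS)              # linear annealing to zero
    pay = payoffs(grid, noise_amp)
    new, best = grid.copy(), pay.copy()
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):        # imitate the best-scoring neighbour
        nb_pay = np.roll(pay, shift, axis=(0, 1))
        nb_strat = np.roll(grid, shift, axis=(0, 1))
        better = nb_pay > best
        new = np.where(better, nb_strat, new)
        best = np.where(better, nb_pay, best)
    grid = new

print("final cooperation fraction:", grid.mean())
```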
NASA Astrophysics Data System (ADS)
Kunkel, D.; Hoor, P.; Wirth, V.
2015-08-01
Recent studies on the formation of a quasi-permanent layer of enhanced static stability above the thermal tropopause have revealed the contributions of dynamical and radiative processes. Dry dynamics lead to the evolution of a tropopause inversion layer (TIL) which is, however, too weak compared to observations, so diabatic contributions are required. In this study we aim to assess the importance of diabatic as well as mixing processes for understanding TIL formation at midlatitudes. The non-hydrostatic model COSMO is applied in an idealized mid-latitude channel configuration to simulate baroclinic life cycles. The effects of individual diabatic processes (i.e., those related to humidity and radiation) and of turbulent processes are studied first, to estimate their additional contribution beyond dry dynamics. In a second step these processes are included stepwise in the model to increase the complexity and finally estimate the relative importance of each process. The results suggest that including turbulence leads to a weaker TIL than in the dry reference simulation. In contrast, the TIL evolves more strongly when radiation is included, although its temporal occurrence remains comparable to the reference. Using various cloud schemes in the model shows that latent heat release and the consequent increase in vertical motion foster an earlier and stronger appearance of the TIL than in all other life cycles. Furthermore, updrafts moisten the upper troposphere and thereby increase the radiative effect of water vapor; this process becomes particularly relevant for maintaining the TIL during later stages of the life cycles. Increased convergence of the vertical wind, induced by updrafts and by propagating and potentially dissipating inertia-gravity waves, further contributes to the enhanced stability of the lower stratosphere. Finally, the radiative feedback of ice clouds reaching up to the tropopause is identified as potentially further affecting the strength of the TIL in the region of the cloud.
Terry, Rebecca L; Wells, Dominic J
2016-12-01
The muscular dystrophies are a diverse group of degenerative diseases for which many mouse models are available. These models are frequently used to assess potential therapeutic interventions and histological evaluation of multiple muscles is an important part of this assessment. Histological evaluation is especially useful when combined with tests of muscle function. This unit describes a protocol for necropsy, processing, cryosectioning, and histopathological evaluation of murine skeletal muscles, which is applicable to both models of muscular dystrophy and other neuromuscular conditions. Key histopathological features of dystrophic muscle are discussed using the mdx mouse (a model of Duchenne muscular dystrophy) as an example. Optimal handling during dissection, processing and sectioning is vital to avoid artifacts that can confound or prevent future analyses. Muscles carefully processed using this protocol are suitable for further evaluation using immunohistochemistry, immunofluorescence, special histochemical stains, and immunoblotting. Copyright © 2016 John Wiley & Sons, Inc.
Mears, Lisa; Stocks, Stuart M; Albaek, Mads O; Cassells, Benny; Sin, Gürkan; Gernaey, Krist V
2017-07-01
A novel model-based control strategy has been developed for filamentous fungal fed-batch fermentation processes. The system of interest is a pilot scale (550 L) filamentous fungus process operating at Novozymes A/S. In such processes, it is desirable to maximize the total product achieved in a batch in a defined process time. In order to achieve this goal, it is important to maximize both the product concentration and the total final mass in the fed-batch system. To this end, we describe the development of a control strategy which aims to achieve maximum tank fill while avoiding oxygen-limited conditions. This requires a two-stage approach: (i) calculation of the tank start fill; and (ii) on-line control in order to maximize fill subject to oxygen transfer limitations. First, a mechanistic model was applied off-line in order to determine the appropriate start fill for processes with four different sets of process operating conditions for the stirrer speed, headspace pressure, and aeration rate. The start fills were tested with eight pilot scale experiments using a reference process operation. An on-line control strategy was then developed, utilizing the mechanistic model, which is recursively updated using on-line measurements. The model was applied in order to predict the current system states, including the biomass concentration, and to simulate the expected future trajectory of the system until a specified end time. In this way, the desired feed rate is updated along the progress of the batch taking into account the oxygen mass transfer conditions and the expected future trajectory of the mass. The final results show that the target fill was achieved to within 5% under the maximum fill when tested using eight pilot scale batches, and overfilling was avoided. The results were reproducible, unlike the reference experiments, which show over 10% variation in the final tank fill, including overfilling. The variance of the final tank fill is reduced by over 74%, meaning that it is possible to target the final maximum fill reproducibly. The product concentration achieved at a given set of process conditions was unaffected by the control strategy. Biotechnol. Bioeng. 2017;114: 1459-1468. © 2017 Wiley Periodicals, Inc.
Glacial isostatic adjustment using GNSS permanent stations and GIA modelling tools
NASA Astrophysics Data System (ADS)
Kollo, Karin; Spada, Giorgio; Vermeer, Martin
2013-04-01
Glacial Isostatic Adjustment (GIA) affects the Earth's mantle in areas that were once ice covered, and the process is still ongoing. In this contribution we focus on GIA processes in the Fennoscandian and North American uplift regions, using horizontal and vertical uplift rates from Global Navigation Satellite System (GNSS) permanent stations: the BIFROST dataset (Lidberg et al., 2010) for Fennoscandia and the dataset of Sella et al. (2007) for North America. We perform GIA modelling with the SELEN program (Spada and Stocchi, 2007) and vary ice model parameters in space in order to find the ice model that best matches the uplift values obtained from GNSS time series analysis. In the GIA modelling, the ice model ICE-5G (Peltier, 2004) and the ice model denoted ANU05 (Fleming and Lambeck, 2004, and references therein) were used. As reference, the velocity field from GNSS permanent station time series was used for both target areas. First, the sensitivity to the maximum harmonic degree was tested in order to reduce the computation time; in this test, nominal viscosity values and pre-defined lithosphere thickness models were used while the maximum harmonic degree was varied. The main criterion for choosing a suitable harmonic degree was the chi-square fit: if the error measure differs by less than 10%, the lower harmonic degree may be used. From this test, a maximum harmonic degree of 72 was chosen for the calculations, as larger values did not significantly modify the results while the computational time was kept reasonable. Second, the GIA computations were performed to find the model that best fits the GNSS-based velocity field in the target areas. In order to find the best fitting Earth viscosity parameters, different viscosity profiles were tested and their impact on the horizontal and vertical velocity rates from GIA modelling was studied. For every tested model the chi-square misfit of the horizontal, vertical and three-dimensional velocity rates with respect to the reference model was computed (Milne et al., 2001). Finally, the best fitting models from GIA modelling were compared with the rates obtained from GNSS data. Keywords: Fennoscandia, North America, land uplift, glacial isostatic adjustment, visco-elastic modelling, BIFROST. References: Lidberg, M., Johansson, J., Scherneck, H.-G. and Milne, G. (2010). Recent results based on continuous GPS observations of the GIA process in Fennoscandia from BIFROST. Journal of Geodynamics, 50, pp. 8-18. Sella, G. F., Stein, S., Dixon, T. H., Craymer, M., James, T. S., Mazzotti, S. and Dokka, R. K. (2007). Observations of glacial isostatic adjustment in "stable" North America with GPS. Geophysical Research Letters, 34, L02306. Spada, G. and Stocchi, P. (2007). SELEN: A Fortran 90 program for solving the "sea-level equation". Computers & Geosciences, 33, pp. 538-562. Peltier, W. R. (2004). Global glacial isostasy and the surface of the ice-age Earth: The ICE-5G (VM2) model and GRACE. Annual Review of Earth and Planetary Sciences, 32, pp. 111-149. Fleming, K. and Lambeck, K. (2004). Constraints on the Greenland Ice Sheet since the Last Glacial Maximum from sea-level observations and glacial-rebound models. Quaternary Science Reviews, 23, pp. 1053-1077. Milne, G. A., Davis, J. L., Mitrovica, J. X., Scherneck, H.-G., Johansson, J. M., Vermeer, M. and Koivula, H. (2001). Space-geodetic constraints on glacial isostatic adjustment in Fennoscandia. Science, 291, pp. 2381-2385.
The Role of Metarepresentation in the Production and Resolution of Referring Expressions.
Horton, William S; Brennan, Susan E
2016-01-01
In this paper we consider the potential role of metarepresentation (the representation of another representation or, as it is commonly considered within cognitive science, the mental representation of another individual's knowledge and beliefs) in mediating definite reference and common ground in conversation. Using dialogues from a referential communication study in which speakers conversed in succession with two different addressees, we highlight ways in which interlocutors work together to successfully refer to objects, and achieve shared conceptualizations. We briefly review accounts of how such shared conceptualizations could be represented in memory, from simple associations between label and referent, to "triple co-presence" representations that track interlocutors in an episode of referring, to more elaborate metarepresentations that invoke theory of mind, mutual knowledge, or a model of a conversational partner. We consider how some forms of metarepresentation, once created and activated, could account for definite reference in conversation by appealing to ordinary processes in memory. We conclude that any representations that capture information about others' perspectives are likely to be relatively simple and subject to the same kinds of constraints on attention and memory that influence other kinds of cognitive representations.
Space Generic Open Avionics Architecture (SGOAA) reference model technical guide
NASA Technical Reports Server (NTRS)
Wray, Richard B.; Stovall, John R.
1993-01-01
This report presents a full description of the Space Generic Open Avionics Architecture (SGOAA). The SGOAA consists of a generic system architecture for the entities in spacecraft avionics, a generic processing architecture, and a six class model of interfaces in a hardware/software system. The purpose of the SGOAA is to provide an umbrella set of requirements for applying the generic architecture interface model to the design of specific avionics hardware/software systems. The SGOAA defines a generic set of system interface points to facilitate identification of critical interfaces and establishes the requirements for applying appropriate low level detailed implementation standards to those interface points. The generic core avionics system and processing architecture models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.
Cancer growth and metastasis as a metaphor of Go gaming: An Ising model approach.
Barradas-Bautista, Didier; Alvarado-Mentado, Matias; Agostino, Mark; Cocho, Germinal
2018-01-01
This work aims to model and simulate the metastasis of cancer via an analogy between the cancer process and the board game Go. In the game of Go, the black stones, which play first, could serve as a metaphor for the birth, growth, and metastasis of cancer; playing the white stones on the second turn could correspond to the inhibition of cancer invasion. Mathematical modeling and algorithmic simulation of Go may therefore support efforts to deploy therapies against cancer by providing insight into cellular growth and expansion over a tissue area. We use the Ising Hamiltonian, which models the energy exchange among interacting particles, to model the cancer dynamics. Parameters in the energy function refer to the biochemical elements that induce cancer birth, growth, and metastasis, as well as the biochemical processes of the immune system's defense.
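The sketch below shows a generic two-state Ising-style lattice simulation with Metropolis updates, of the kind the energy-function language above alludes to; the coupling and field parameters, and their reading as growth-promoting versus inhibiting terms, are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
L, STEPS = 64, 200_000
J, H, BETA = 1.0, 0.2, 0.8   # hypothetical coupling, external field, and inverse "temperature"
spins = rng.choice([-1, 1], size=(L, L))  # +1: tumour-occupied site, -1: healthy tissue (assumed reading)

def delta_energy(s, i, j):
    nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
    return 2.0 * s[i, j] * (J * nb + H)   # energy change if the spin at (i, j) is flipped

for _ in range(STEPS):                     # Metropolis updates
    i, j = rng.integers(L, size=2)
    dE = delta_energy(spins, i, j)
    if dE <= 0 or rng.random() < np.exp(-BETA * dE):
        spins[i, j] *= -1

print("occupied fraction:", (spins == 1).mean())
```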
NASA Astrophysics Data System (ADS)
Prudden, R.; Arribas, A.; Tomlinson, J.; Robinson, N.
2017-12-01
The Unified Model is a numerical model of the atmosphere used at the UK Met Office (and numerous partner organisations, including the Korean Meteorological Agency, the Australian Bureau of Meteorology and the US Air Force) for both weather and climate applications. Specifically, dynamical models such as the Unified Model are now a central part of weather forecasting. Starting from basic physical laws, these models make it possible to predict events such as storms before they have even begun to form. The Unified Model can be simply described as having two components: one component solves the Navier-Stokes equations (usually referred to as the "dynamics"); the other solves relevant sub-grid physical processes (usually referred to as the "physics"). Running weather forecasts requires substantial computing resources (for example, the UK Met Office operates the largest operational high performance computer in Europe), and roughly 50% of the cost of a typical simulation is spent in the "dynamics" and 50% in the "physics". There is therefore a strong incentive to reduce the cost of weather forecasts, and Machine Learning is a possible option because, once a machine learning model has been trained, it is often much faster to run than a full simulation. This is the motivation for a technique called model emulation: the idea is to build a fast statistical model which closely approximates a far more expensive simulation. In this paper we discuss the use of Machine Learning as an emulator to replace the "physics" component of the Unified Model. Various approaches and options will be presented, and the implications for further model development, operational running of forecasting systems, development of data assimilation schemes, and development of ensemble prediction techniques will be discussed.
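As a minimal sketch of the emulation idea, the code below trains a small neural-network regressor to map column inputs to the tendencies produced by a stand-in physics scheme; the synthetic "physics" function, the feature layout, and the network size are hypothetical and unrelated to the Unified Model's actual parameterizations.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
W = rng.standard_normal((40, 20)) * 0.1

def toy_physics(cols):
    """Stand-in for an expensive sub-grid physics scheme: column state in, tendency profile out."""
    return np.tanh(cols @ W)

X = rng.standard_normal((20_000, 40))   # hypothetical column state vectors
Y = toy_physics(X)                      # "truth" tendencies from the expensive scheme

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=200, random_state=0)
emulator.fit(X_tr, Y_tr)                # cheap surrogate for the physics component
print("emulator R^2 on held-out columns:", emulator.score(X_te, Y_te))
```

Once trained, the emulator would be called in place of the expensive scheme at each model step, trading some accuracy for speed.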
40 CFR 63.487 - Batch front-end process vents-reference control technology.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 40 (Protection of Environment), Environmental Protection Agency: § 63.487 Batch front-end process vents—reference control technology. (a) Batch front-end process vents...
A simplified 137Cs transport model for estimating erosion rates in undisturbed soil.
Zhang, Xinbao; Long, Yi; He, Xiubin; Fu, Jiexiong; Zhang, Yunqi
2008-08-01
(137)Cs is an artificial radionuclide with a half-life of 30.12 years that was released into the environment as a result of atmospheric testing of thermonuclear weapons, primarily during the 1950s-1970s, with the maximum rate of (137)Cs fallout from the atmosphere in 1963. (137)Cs fallout, which reaches the ground mostly with precipitation, is strongly and rapidly adsorbed by fine particles in the surface horizons of the soil. Its subsequent redistribution is associated with movements of the soil or sediment particles. The (137)Cs nuclide tracing technique has been used for assessment of soil losses for both undisturbed and cultivated soils. For undisturbed soils, a simple profile-shape model was developed in 1990 to describe the (137)Cs depth distribution in the profile, where the maximum (137)Cs occurs in the surface horizon and it decreases exponentially with depth. The model implied that the total (137)Cs fallout amount deposited on the earth surface in 1963 and the (137)Cs profile shape have not changed with time. The model has been widely used for assessment of soil losses on undisturbed land. However, temporal variations of the (137)Cs depth distribution in undisturbed soils after its deposition on the ground, due to downward transport processes, are not considered in the previous simple profile-shape model. Thus, the soil losses are overestimated by that model. On the basis of the erosion assessment model developed by Walling and He [Walling, D.E., He, Q., 1999. Improved models for estimating soil erosion rates from cesium-137 measurements. Journal of Environmental Quality 28, 611-622], we discuss the (137)Cs transport process in the eroded soil profile, make some simplifications to the model, and develop a method to estimate the soil erosion rate more expediently. To compare the soil erosion rates calculated by the simple profile-shape model and the simple transport model, the soil losses related to different (137)Cs loss proportions of the reference inventory at the Kaixian site of the Three Gorges Region, China are estimated by the two models. The over-estimation of the soil loss by the previous simple profile-shape model clearly increases with the time elapsed between the sampling year and 1963 and with the (137)Cs loss proportion of the reference inventory. For 20-80% (137)Cs loss proportions of the reference inventory at the Kaixian site in 2004, the annual soil loss depths estimated by the new simplified transport process model are only 57.90-56.24% of the values estimated by the previous model.
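For orientation, the sketch below implements the simple exponential profile-shape idea: the cumulative (137)Cs inventory above depth x is assumed to follow A(x) = A_ref * (1 - exp(-x/h0)), so the eroded mass depth is recovered by inverting that relation from the measured inventory loss and dividing by the years since 1963. The reference inventory, the shape factor h0, and the sampling year are hypothetical values, and this is the profile-shape approach only, not the transport-process refinement proposed in the paper.

```python
import numpy as np

A_REF = 2200.0   # hypothetical reference inventory (Bq m^-2)
H0 = 40.0        # hypothetical profile shape factor (kg m^-2)
YEAR = 2004      # hypothetical sampling year

def erosion_from_inventory(A_measured, a_ref=A_REF, h0=H0, year=YEAR):
    """Profile-shape estimate: invert A(x) = a_ref * (1 - exp(-x / h0)).

    Returns the mean annual soil loss (kg m^-2 yr^-1) since 1963.
    """
    loss_fraction = 1.0 - A_measured / a_ref          # fraction of the reference inventory lost
    eroded_mass_depth = -h0 * np.log(1.0 - loss_fraction)
    return eroded_mass_depth / (year - 1963)

for frac_lost in (0.2, 0.5, 0.8):
    y = erosion_from_inventory(A_REF * (1.0 - frac_lost))
    print(f"{frac_lost:.0%} inventory loss -> {y:.2f} kg m^-2 yr^-1")
```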
Streamflow Bias Correction for Climate Change Impact Studies: Harmless Correction or Wrecking Ball?
NASA Astrophysics Data System (ADS)
Nijssen, B.; Chegwidden, O.
2017-12-01
Projections of the hydrologic impacts of climate change rely on a modeling chain that includes estimates of future greenhouse gas emissions, global climate models, and hydrologic models. The resulting streamflow time series are used in turn as input to impact studies. While these flows can sometimes be used directly in these impact studies, many applications require additional post-processing to remove model errors. Water resources models and regulation studies are a prime example of this type of application. These models rely on specific flows and reservoir levels to trigger reservoir releases and diversions and do not function well if the unregulated streamflow inputs are significantly biased in time and/or amount. This post-processing step is typically referred to as bias-correction, even though this step corrects not just the mean but the entire distribution of flows. Various quantile-mapping approaches have been developed that adjust the modeled flows to match a reference distribution for some historic period. Simulations of future flows are then post-processed using this same mapping to remove hydrologic model errors. These streamflow bias-correction methods have received far less scrutiny than the downscaling and bias-correction methods that are used for climate model output, mostly because they are less widely used. However, some of these methods introduce large artifacts in the resulting flow series, in some cases severely distorting the climate change signal that is present in future flows. In this presentation, we discuss our experience with streamflow bias-correction methods as part of a climate change impact study in the Columbia River basin in the Pacific Northwest region of the United States. To support this discussion, we present a novel way to assess whether a streamflow bias-correction method is merely a harmless correction or is more akin to taking a wrecking ball to the climate change signal.
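A bare-bones empirical quantile-mapping sketch is shown below: modeled historical flows are mapped onto observed flows quantile by quantile, and the same mapping is then applied to future simulated flows. The linear interpolation, the clamping of values outside the training range, and the synthetic gamma-distributed data are simplifying assumptions, not the specific methods evaluated in the study.

```python
import numpy as np

rng = np.random.default_rng(5)
obs_hist = rng.gamma(shape=2.0, scale=120.0, size=5000)   # stand-in observed historical flows
mod_hist = rng.gamma(shape=2.0, scale=150.0, size=5000)   # biased modeled historical flows
mod_fut  = rng.gamma(shape=2.0, scale=180.0, size=5000)   # modeled future flows to correct

def quantile_map(x, train_mod, train_obs, n_q=100):
    """Empirical quantile mapping: replace each value's modeled quantile with the observed one."""
    q = np.linspace(0.0, 1.0, n_q)
    mod_q = np.quantile(train_mod, q)
    obs_q = np.quantile(train_obs, q)
    return np.interp(x, mod_q, obs_q)   # values beyond the training range are clamped to the ends

corrected_fut = quantile_map(mod_fut, mod_hist, obs_hist)
```

The clamping behaviour at the tails is exactly the kind of design choice that can distort a climate change signal, which is the concern raised in the abstract.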
Nuclear Criticality Safety Data Book
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollenbach, D. F.
The objective of this document is to support the revision of criticality safety process studies (CSPSs) for the Uranium Processing Facility (UPF) at the Y-12 National Security Complex (Y-12). This design analysis and calculation (DAC) document contains development and justification for generic inputs typically used in Nuclear Criticality Safety (NCS) DACs to model both normal and abnormal conditions of processes at UPF to support CSPSs. This will provide consistency between NCS DACs and efficiency in preparation and review of DACs, as frequently used data are provided in one reference source.
nmsBuilder: Freeware to create subject-specific musculoskeletal models for OpenSim.
Valente, Giordano; Crimi, Gianluigi; Vanella, Nicola; Schileo, Enrico; Taddei, Fulvia
2017-12-01
Musculoskeletal modeling and simulations of movement have been increasingly used in orthopedic and neurological scenarios, with increased attention to subject-specific applications. In general, musculoskeletal modeling applications have been facilitated by the development of dedicated software tools; however, subject-specific studies have also been limited by time-consuming modeling workflows and the highly skilled expertise required. In addition, no reference tools exist to standardize the process of musculoskeletal model creation and make it more efficient. Here we present a freely available software application, nmsBuilder 2.0, to create musculoskeletal models in the file format of OpenSim, a widely used open-source platform for musculoskeletal modeling and simulation. nmsBuilder 2.0 is the result of a major refactoring of a previous implementation that took a first step toward an efficient workflow for subject-specific model creation. nmsBuilder includes a graphical user interface that provides access to all functionalities, based on a framework for computer-aided medicine written in C++. The operations implemented can be used in a workflow to create OpenSim musculoskeletal models from 3D surfaces. A first step includes data processing to create the supporting objects necessary for model creation, e.g. surfaces, anatomical landmarks, reference systems; a second step includes the creation of OpenSim objects, e.g. bodies, joints, muscles, and the corresponding model. We present a case study using nmsBuilder 2.0: the creation of an MRI-based musculoskeletal model of the lower limb. The model included four rigid bodies, five degrees of freedom and 43 musculotendon actuators, and was created from 3D surfaces of the segmented images of a healthy subject through the modeling workflow implemented in the software application. We have presented nmsBuilder 2.0 for the creation of musculoskeletal OpenSim models from image-based data, and made it freely available via nmsbuilder.org. This application provides an efficient workflow for model creation and helps standardize the process. We hope this will help promote personalized applications in musculoskeletal biomechanics, including larger sample size studies, and might also represent a basis for future developments for specific applications. Copyright © 2017 Elsevier B.V. All rights reserved.
Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua
2011-07-01
In this paper, a digital redesign methodology for an iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory, even one not represented by the analytic reference model initially. To overcome the interference between sub-systems and simplify the controller design, the proposed model reference decentralized adaptive control scheme first constructs a decoupled, well-designed reference model. Then, based on this reference model, the paper develops a digital decentralized adaptive tracker using optimal analog control and a prediction-based digital redesign technique for the sampled-data large-scale coupled system. In order to enhance the tracking performance of the digital tracker at the specified sampling instants, iterative learning control (ILC) is applied to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has a robust closed-loop decoupling property but also achieves good tracking performance in both the transient and the steady state. In addition, evolutionary programming is applied to search for a good learning gain to speed up the ILC learning process. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
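The core iterative-learning step referred to above can be illustrated with the classic P-type update, in which the next trial's input equals the previous input plus a learning gain times the previous tracking error; the toy first-order plant, reference trajectory, and gain below are hypothetical and stand in for one subsystem rather than the full decentralized scheme.

```python
import numpy as np

T, dt = 100, 0.05
t = np.arange(T) * dt
ref = np.sin(t)                        # desired output trajectory (hypothetical)

def plant(u):
    """Toy discrete first-order plant standing in for one subsystem."""
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = 0.9 * y[k - 1] + 0.1 * u[k - 1]
    return y

u = np.zeros(T)
L_GAIN = 0.8                           # ILC learning gain (hypothetical)
for trial in range(50):                # repeat the task, learning between trials
    y = plant(u)
    e = ref - y
    u[:-1] = u[:-1] + L_GAIN * e[1:]   # P-type ILC update: u_{k+1}(t) = u_k(t) + L * e_k(t+1)

print("final max tracking error:", np.abs(ref - plant(u)).max())
```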
Determination of reference ranges for elements in human scalp hair.
Druyan, M E; Bass, D; Puchyr, R; Urek, K; Quig, D; Harmon, E; Marquardt, W
1998-06-01
Expected values, reference ranges, or reference limits are necessary to enable clinicians to apply analytical chemical data in the delivery of health care. Determination of reference ranges is not straightforward in terms of either selecting a reference population or performing statistical analysis. In light of logistical, scientific, and economic obstacles, it is understandable that clinical laboratories often combine approaches in developing health-associated reference values. A laboratory may choose to: 1. Validate either reference ranges of other laboratories or published data from clinical research, or both, through comparison with patients' test data. 2. Base the laboratory's reference values on statistical analysis of results from specimens assayed by the clinical reference laboratory itself. 3. Adopt standards or recommendations of regulatory agencies and governmental bodies. 4. Initiate population studies to validate transferred reference ranges or to determine them anew. Effects of external contamination and anecdotal information from clinicians may also be considered. The clinical utility of hair analysis is well accepted for some elements. For others, it remains in the realm of clinical investigation. This article elucidates an approach for the establishment of reference ranges for elements in human scalp hair. Observed levels of analytes from hair specimens from both our laboratory's total patient population and a physician-defined healthy American population have been evaluated. Examination of levels of elements often associated with toxicity serves to exemplify the process of determining reference ranges in hair. In addition, the approach serves as a model for setting reference ranges for analytes in a variety of matrices.
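Where a laboratory derives ranges from its own specimen results (options 2 and 4 above), a common nonparametric choice is the central 95% interval, i.e. the 2.5th to 97.5th percentiles of the observed levels. The sketch below illustrates that calculation on simulated hair-element concentrations; the data and distribution are assumptions for illustration, not values from this study.

```python
# Minimal sketch: nonparametric 95% reference interval for a hair element.
# Simulated, roughly log-normal concentrations stand in for real specimens.
import numpy as np

rng = np.random.default_rng(0)
conc_ug_g = rng.lognormal(mean=-1.0, sigma=0.6, size=500)  # e.g. µg/g hair

lower, upper = np.percentile(conc_ug_g, [2.5, 97.5])
print(f"reference range: {lower:.3f} - {upper:.3f} µg/g  (n = {conc_ug_g.size})")
```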
CERT Resilience Management Model Capability Appraisal Method (CAM) Version 1.1
2011-10-01
the CERT-RMM CAM V1.1 method is that satisfaction of goals can be determined only upon detailed investigation of the extent to which each...achievement of a specific maturity level or the satisfaction of a process area must mean the same thing for different appraised organizations. The...rate the satisfaction of the goals, based on the extent of practice implementation, for the appraisal reference model and organizational scope
Kim, Oh Seok; Newell, Joshua P
2015-10-01
This paper proposes a new land-change model, the Geographic Emission Benchmark (GEB), as an approach to quantify land-cover changes associated with deforestation and forest degradation. The GEB is designed to determine 'baseline' activity data for reference levels. Unlike other models that forecast business-as-usual future deforestation, the GEB internally (1) characterizes 'forest' and 'deforestation' with minimal processing and ground-truthing and (2) identifies 'deforestation hotspots' using open-source spatial methods to estimate regional rates of deforestation. The GEB also characterizes forest degradation and identifies leakage belts. This paper compares the accuracy of GEB with GEOMOD, a popular land-change model used in the UN-REDD (Reducing Emissions from Deforestation and Forest Degradation) Program. Using a case study of the Chinese tropics for comparison, GEB's projection is more accurate than GEOMOD's, as measured by Figure of Merit. Thus, the GEB produces baseline activity data that are moderately accurate for the setting of reference levels.
Liang, Ningjian; Lu, Xiaonan; Hu, Yaxi; Kitts, David D
2016-01-27
The chlorogenic acid isomer profile and antioxidant activity of both green and roasted coffee beans are reported herein using ATR-FTIR spectroscopy combined with chemometric analyses. High-performance liquid chromatography (HPLC) quantified different chlorogenic acid isomer contents for reference, whereas ORAC, ABTS, and DPPH were used to determine the antioxidant activity of the same coffee bean extracts. FTIR spectral data and reference data of 42 coffee bean samples were processed to build optimized PLSR models, and 18 samples were used for external validation of constructed PLSR models. In total, six PLSR models were constructed for six chlorogenic acid isomers to predict content, with three PLSR models constructed to forecast the free radical scavenging activities, obtained using different chemical assays. In conclusion, FTIR spectroscopy, coupled with PLSR, serves as a reliable, nondestructive, and rapid analytical method to quantify chlorogenic acids and to assess different free radical-scavenging capacities in coffee beans.
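A minimal sketch of the spectra-to-reference calibration step described above, using scikit-learn's PLSRegression in place of the authors' chemometric software; the simulated spectra, the number of latent components, and the reuse of the study's 42/18 calibration/validation split sizes are placeholder assumptions.

```python
# Minimal sketch: PLSR calibration of FTIR spectra against an HPLC reference
# value (e.g. one chlorogenic acid isomer), followed by external validation.
# Data are simulated; only the sample counts mirror the study design.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X_cal, X_val = rng.normal(size=(42, 600)), rng.normal(size=(18, 600))  # absorbance
y_cal = X_cal[:, 100] * 2.0 + rng.normal(scale=0.1, size=42)           # reference values
y_val = X_val[:, 100] * 2.0 + rng.normal(scale=0.1, size=18)

pls = PLSRegression(n_components=5)
pls.fit(X_cal, y_cal)
y_pred = pls.predict(X_val).ravel()
print("external validation R^2:", round(r2_score(y_val, y_pred), 3))
```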
Verification of ARES transport code system with TAKEDA benchmarks
NASA Astrophysics Data System (ADS)
Zhang, Liang; Zhang, Bin; Zhang, Penghe; Chen, Mengteng; Zhao, Jingchang; Zhang, Shun; Chen, Yixue
2015-10-01
Neutron transport modeling and simulation are central to many areas of nuclear technology, including reactor core analysis, radiation shielding and radiation detection. In this paper the series of TAKEDA benchmarks is modeled to verify the criticality calculation capability of ARES, a discrete ordinates neutral particle transport code system. The SALOME platform is coupled with ARES to provide geometry modeling and mesh generation functions. The Koch-Baker-Alcouffe parallel sweep algorithm is applied to accelerate the traditional transport calculation process. The results show that the eigenvalues calculated by ARES are in excellent agreement with the reference values presented in NEACRP-L-330, with differences of less than 30 pcm except for the first case of model 3. Additionally, ARES provides accurate flux distributions compared to the reference values, with deviations of less than 2% for region-averaged fluxes in all cases. All of these results confirm the feasibility of the ARES-SALOME coupling and demonstrate that ARES performs well in criticality calculations.
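For reference, the eigenvalue agreement quoted above can be expressed as a reactivity difference in pcm. The sketch below assumes the common Δk/k convention (1 pcm = 10⁻⁵) and uses made-up eigenvalues and fluxes, not the actual ARES or NEACRP-L-330 values.

```python
# Minimal sketch: eigenvalue difference in pcm and region-averaged flux deviation.
# Numbers are illustrative, not the published ARES / NEACRP-L-330 results.
import numpy as np

k_ares, k_ref = 0.97780, 0.97755          # made-up effective multiplication factors
pcm = (k_ares - k_ref) / k_ref * 1e5      # delta-k/k expressed in pcm
print(f"eigenvalue difference: {pcm:.1f} pcm")

flux_ares = np.array([1.02, 0.88, 0.45])  # made-up region-averaged fluxes
flux_ref  = np.array([1.01, 0.89, 0.45])
dev = np.abs(flux_ares - flux_ref) / flux_ref * 100
print("region-averaged flux deviation (%):", np.round(dev, 2))
```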
Requirements for data integration platforms in biomedical research networks: a reference model
Knaup, Petra
2015-01-01
Biomedical research networks need to integrate research data among their members and with external partners. To support such data sharing activities, an adequate information technology infrastructure is necessary. To facilitate the establishment of such an infrastructure, we developed a reference model for the requirements. The reference model consists of five reference goals and 15 reference requirements. Using the Unified Modeling Language, the goals and requirements are set into relation to each other. In addition, all goals and requirements are described textually in tables. This reference model can be used by research networks as a basis for a resource efficient acquisition of their project specific requirements. Furthermore, a concrete instance of the reference model is described for a research network on liver cancer. The reference model is transferred into a requirements model of the specific network. Based on this concrete requirements model, a service-oriented information technology architecture is derived and also described in this paper. PMID:25699205
NASA Technical Reports Server (NTRS)
Johnston, John D.; Howard, Joseph M.; Mosier, Gary E.; Parrish, Keith A.; McGinnis, Mark A.; Bluth, Marcel; Kim, Kevin; Ha, Kong Q.
2004-01-01
The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2011. This paper is a continuation of a series of papers on modeling activities for JWST. The structural-thermal-optical (STOP) analysis process is used to predict the effect of thermal distortion on optical performance. The benchmark STOP analysis for JWST assesses the effect of an observatory slew on wavefront error. Temperatures predicted using geometric and thermal math models are mapped to a structural finite element model in order to predict thermally induced deformations. Motions and deformations at optical surfaces are then input to optical models, and optical performance is predicted using either an optical ray trace or a linear optical analysis tool. In addition to baseline performance predictions, a process for performing sensitivity studies to assess modeling uncertainties is described.
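The last link of the STOP chain, feeding optical-surface motions into a linear optical analysis tool, can be pictured as a matrix multiply of surface motions by a sensitivity matrix. The sketch below is a generic illustration of that idea with made-up numbers and a simple linear combination of contributions; it is not the JWST optical model or its actual sensitivities.

```python
# Minimal sketch of a linear optical sensitivity model: wavefront-error
# change = sensitivity matrix @ optical-surface motions.
# All values are illustrative placeholders, not JWST data.
import numpy as np

# Three optical surfaces x two rigid-body motions each (e.g. decenter, tilt),
# as predicted by the thermal-structural part of the STOP analysis.
motions_um = np.array([0.8, 0.1, -0.3, 0.05, 0.2, -0.02])

# Sensitivity of RMS wavefront error (nm) per unit motion (µm), made up,
# combined linearly here purely for illustration.
S = np.array([[12.0, 3.0, 8.0, 1.5, 5.0, 0.8]])

wfe_rms_nm = S @ motions_um
print(f"predicted RMS wavefront error change: {wfe_rms_nm[0]:.1f} nm")
```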
Keane, Robert E.; Burgan, Robert E.; Van Wagtendonk, Jan W.
2001-01-01
Fuel maps are essential for computing spatial fire hazard and risk and simulating fire growth and intensity across a landscape. However, fuel mapping is an extremely difficult and complex process requiring expertise in remotely sensed image classification, fire behavior, fuels modeling, ecology, and geographical information systems (GIS). This paper first presents the challenges of mapping fuels: canopy concealment, fuelbed complexity, fuel type diversity, fuel variability, and fuel model generalization. Then, four approaches to mapping fuels are discussed, with examples provided from the literature: (1) field reconnaissance; (2) direct mapping methods; (3) indirect mapping methods; and (4) gradient modeling. A fuel mapping method is proposed that uses current remote sensing and image processing technology. Future fuel mapping needs are also discussed, which include better field data and fuel models, accurate GIS reference layers, improved satellite imagery, and comprehensive ecosystem models.
Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering
NASA Astrophysics Data System (ADS)
Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.
2016-06-01
This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of the manual editing of the images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck, and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of the images, followed by the creation of three improved models, and c) the assessment of the results by comparing the visual and geometric quality of the improved models against the reference one. Finally, the selected technique is tested on two other data sets in order to examine its appropriateness for different depths (10 m and 14 m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.
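For the point-cloud-filtering step, one widely used operation is statistical outlier removal. The sketch below shows it with the Open3D library, which is an assumption of convenience, since the paper's processing was done in 3D modeling software; the neighbour count and threshold are placeholder parameters.

```python
# Minimal sketch: statistical outlier removal on an underwater point cloud.
# Open3D is used here only for illustration; parameters are placeholders.
import numpy as np
import open3d as o3d

points = np.random.rand(10000, 3)                 # stand-in for a dense cloud
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)

# Drop points whose mean distance to their neighbours exceeds std_ratio sigmas
filtered, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
print(f"kept {len(kept_idx)} of {len(points)} points")
```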
Beyond the rhetoric: what do we mean by a 'model of care'?
Davidson, Patricia; Halcomb, Elizabeth; Hickman, L; Phillips, J; Graham, B
2006-01-01
Contemporary health care systems are constantly challenged to revise traditional methods of health care delivery. These challenges are multifaceted and stem from: (1) novel pharmacological and non-pharmacological treatments; (2) changes in consumer demands and expectations; (3) fiscal and resource constraints; (4) changes in societal demographics, in particular the ageing of society; (5) an increasing burden of chronic disease; (6) documentation of limitations in traditional health care delivery; (7) increased emphasis on transparency, accountability, evidence-based practice (EBP) and clinical governance structures; and (8) the increasing cultural diversity of the community. These challenges provoke discussion of potential alternative models of care, with scant reference to defining what constitutes a model of care. This paper aims to define what is meant by the term 'model of care' and document the pragmatic systems and processes necessary to develop, plan, implement and evaluate novel models of care delivery. Searches of electronic databases, the reference lists of published materials, policy documents and the Internet were conducted using key words including 'model*', 'framework*', 'models, theoretical' and 'nursing models, theoretical'. The collated material was then analysed and synthesised into this review. This review determined that, in addition to key conceptual and theoretical perspectives, quality improvement theory (e.g. collaborative methodology), project management methods and change management theory inform both the pragmatic and conceptual elements of a model of care. Crucial elements in changing health care delivery through the development of innovative models of care include the planning, development, implementation, evaluation and assessment of the sustainability of the new model. Regardless of whether change in health care delivery is attempted on a micro basis (e.g. ward level) or a macro basis (e.g. national or state system), in order to achieve sustainable, effective and efficient changes a well-planned, systematic process is essential.
A stress-free model for residual stress assessment using thermoelastic stress analysis
NASA Astrophysics Data System (ADS)
Howell, Geoffrey; Dulieu-Barton, Janice M.; Achintha, Mithila; Robinson, Andrew F.
2015-03-01
Thermoelastic Stress Analysis (TSA) has been proposed as a method of obtaining residual stresses. The results of a preliminary study demonstrated that, for an Al-2024 plate containing holes that had been plastically deformed by the cold expansion process to 2% and 4% strain, the thermoelastic response in the material around the hole was different from that obtained from a plate that had not experienced any plastic cold expansion (i.e. a reference specimen). This observation provides an opportunity for obtaining residual stresses based on TSA data. In many applications a reference specimen (i.e. a residual-stress-free specimen) may not be available for comparison, so a synthetic, digital bitmap has been proposed as an alternative. An elastic finite element model is created using the commercially available software Abaqus/Standard and the resultant stress field is extracted. The simulated stress field from the model is mapped onto a grid that matches the TSA pixel data from a physical reference specimen. This stress field is then converted to a ΔT/T field that can be compared to the full-field TSA data. When the reference experimental data are subtracted from the bitmap dataset, the resultant ΔT/T field is approximately zero. Further work proposes replacing the experimental reference data with data from specimens that have undergone cold expansion, with the aim of revealing the regions affected by residual stress through a departure from zero in the resultant stress field. The paper demonstrates the first steps necessary for deriving the residual stresses from a general specimen using TSA.
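The comparison described above, converting a simulated elastic stress field to a ΔT/T map and subtracting the experimental reference, rests on the standard thermoelastic relation ΔT/T = -K·Δ(σ1+σ2). The sketch below illustrates that subtraction on synthetic arrays; the thermoelastic constant, stress levels and noise are placeholders, not the Al-2024 data of the study.

```python
# Minimal sketch: build a synthetic Delta-T/T bitmap from an FE stress field
# (Delta-T/T = -K * Delta(sigma1 + sigma2)) and subtract the TSA reference map.
# K, stress fields and noise levels are placeholder values.
import numpy as np

K = 8.8e-12          # thermoelastic constant, 1/Pa (typical order for aluminium)
shape = (128, 128)

# Change of the first stress invariant (Pa) from the elastic FE model, per pixel
dsigma_fe = np.full(shape, 40e6) + np.random.normal(0, 1e6, shape)
dTT_bitmap = -K * dsigma_fe

# Experimental Delta-T/T from the residual-stress-free reference specimen
dTT_reference = -K * np.full(shape, 40e6) + np.random.normal(0, 2e-6, shape)

residual_field = dTT_bitmap - dTT_reference
print("mean of residual Delta-T/T field:", residual_field.mean())  # ~ 0 expected
```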
Supply Chain Engineering and the Use of a Supporting Knowledge Management Application
NASA Astrophysics Data System (ADS)
Laakmann, Frank
Future competition in markets will take place between logistics networks rather than between individual enterprises. A new approach for supporting the engineering of logistics networks is developed by this research as part of the Collaborative Research Centre (SFB) 559: "Modeling of Large Networks in Logistics" at the University of Dortmund, together with the Fraunhofer-Institute of Material Flow and Logistics, funded by the Deutsche Forschungsgemeinschaft (DFG). Based on a reference model for logistics processes, the process chain model, a guideline for logistics engineers is developed to manage the different types of design tasks of logistics networks. The technical background of this solution is a collaborative knowledge management application. This paper will introduce how new Internet-based technologies support supply chain design projects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anastasia M. Gribik; Ronald E. Mizia; Harry Gatley
This project addresses both the technical and economic feasibility of replacing industrial gas in lime kilns with synthesis gas from the gasification of hog fuel. The technical assessment includes a materials evaluation, processing equipment needs, and suitability of the heat content of the synthesis gas as a replacement for industrial gas. The economic assessment includes estimations for capital, construction, operating, maintenance, and management costs for the reference plant. To perform these assessments, detailed models of the gasification and lime kiln processes were developed using Aspen Plus. The material and energy balance outputs from the Aspen Plus model were used as inputs to both the material and economic evaluations.
Integrated Survey Procedures for the Virtual Reading and Fruition of Historical Buildings
NASA Astrophysics Data System (ADS)
Scandurra, S.; Pulcrano, M.; Cirillo, V.; Campi, M.; di Luggo, A.; Zerlenga, O.
2018-05-01
This paper presents the developments of research related to the integration of digital survey methodologies with reference to image-based and range-based technologies. Starting from the acquired point clouds, the data were processed both for the geometric interpretation of the space and for the production of three-dimensional models that describe the constitutive and morphological relationships. The subject of the study was the church of San Carlo all'Arena in Naples (Italy), for which an HBIM model was produced that is semantically consistent with the real building. Starting from the data acquired, a visualization system was created for the virtual exploration of the building.
The flight planning - flight management connection
NASA Technical Reports Server (NTRS)
Sorensen, J. A.
1984-01-01
Airborne flight management systems are currently being implemented to minimize direct operating costs when flying over a fixed route between a given city pair. Inherent in the design of these systems is that the horizontal flight path and wind and temperature models be defined and input into the airborne computer before flight. The wind/temperature model and horizontal path are products of the flight planning process. Flight planning consists of generating 3-D reference trajectories through a forecast wind field subject to certain ATC and transport operator constraints. The interrelationships between flight management and flight planning are reviewed, and the steps taken during the flight planning process are summarized.
Modeling of outpatient prescribing process in iran: a gateway toward electronic prescribing system.
Ahmadi, Maryam; Samadbeik, Mahnaz; Sadoughi, Farahnaz
2014-01-01
Implementation of electronic prescribing system can overcome many problems of the paper prescribing system, and provide numerous opportunities of more effective and advantageous prescribing. Successful implementation of such a system requires complete and deep understanding of work content, human force, and workflow of paper prescribing. The current study was designed in order to model the current business process of outpatient prescribing in Iran and clarify different actions during this process. In order to describe the prescribing process and the system features in Iran, the methodology of business process modeling and analysis was used in the present study. The results of the process documentation were analyzed using a conceptual model of workflow elements and the technique of modeling "As-Is" business processes. Analysis of the current (as-is) prescribing process demonstrated that Iran stood at the first levels of sophistication in graduated levels of electronic prescribing, namely electronic prescription reference, and that there were problematic areas including bottlenecks, redundant and duplicated work, concentration of decision nodes, and communicative weaknesses among stakeholders of the process. Using information technology in some activities of medication prescription in Iran has not eliminated the dependence of the stakeholders on paper-based documents and prescriptions. Therefore, it is necessary to implement proper system programming in order to support change management and solve the problems in the existing prescribing process. To this end, a suitable basis should be provided for reorganization and improvement of the prescribing process for the future electronic systems.
Multi input single output model predictive control of non-linear bio-polymerization process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arumugasamy, Senthil Kumar; Ahmad, Z.
This paper focuses on Multi Input Single Output (MISO) Model Predictive Control of a bio-polymerization process, in which a mechanistic model is developed and linked with a feedforward neural network model to obtain a hybrid model (Mechanistic-FANN) of the lipase-catalyzed ring-opening polymerization of ε-caprolactone (ε-CL) for poly(ε-caprolactone) production. In this research, a state space model was used, in which the inputs to the model were the reactor temperatures and reactor impeller speeds and the outputs were the molecular weight of the polymer (Mn) and the polymer polydispersity index. The MISO state space model was created using the System Identification Toolbox of MATLAB™. This state space model is used in the MISO MPC. Model predictive control (MPC) has been applied to predict the molecular weight of the biopolymer and consequently control it. The results show that the MPC is able to track the reference trajectory and give optimum movement of the manipulated variable.
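A stripped-down illustration of the receding-horizon idea behind MPC: at each step, stack the state-space prediction equations over a horizon and pick the input sequence that best tracks the reference, applying only the first move. The sketch below uses a toy single-state, single-input model and an unconstrained least-squares solution; it is not the hybrid Mechanistic-FANN model or the MATLAB implementation of the paper, and all numbers are placeholders.

```python
# Minimal sketch of unconstrained MPC for a toy state-space model
# x[t+1] = A x[t] + B u[t], y = C x, tracking a constant reference.
# The model, horizon and reference are placeholders.
import numpy as np

A, B, C = 0.95, 0.10, 1.0       # toy scalar state-space model
H = 10                          # prediction horizon
ref = 1.0                       # reference output (e.g. target molecular weight, scaled)

def mpc_step(x0):
    # Build the horizon prediction y = F*x0 + G@u, then least-squares fit u.
    F = np.array([C * A ** (k + 1) for k in range(H)])
    G = np.zeros((H, H))
    for i in range(H):
        for j in range(i + 1):
            G[i, j] = C * A ** (i - j) * B
    u, *_ = np.linalg.lstsq(G, ref - F * x0, rcond=None)
    return u[0]                 # receding horizon: apply only the first input

x = 0.0
for t in range(40):
    u0 = mpc_step(x)
    x = A * x + B * u0
print("output after 40 steps:", round(C * x, 3))
```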
Reference Models for Structural Technology Assessment and Weight Estimation
NASA Technical Reports Server (NTRS)
Cerro, Jeff; Martinovic, Zoran; Eldred, Lloyd
2005-01-01
Previously, the Exploration Concepts Branch of NASA Langley Research Center developed techniques for automating the preliminary design level of launch vehicle airframe structural analysis for the purpose of enhancing historical regression-based mass estimating relationships. This past work was useful and greatly reduced design time; however, its application area was very narrow in terms of the variety of structural and vehicle general arrangement alternatives it could handle. Implementation of the analysis approach presented herein also incorporates some newly developed computer programs. Loft is a program developed to create analysis meshes and simultaneously define structural element design regions. A simple component-defining ASCII file is read by Loft to begin the design process. HSLoad is a Visual Basic implementation of the HyperSizer Application Programming Interface, which automates the structural element design process. Details of these two programs and their use are explained in this paper. A feature which falls naturally out of the above analysis paradigm is the concept of "reference models". The flexibility of the FEA-based JAVA processing procedures and associated process control classes, coupled with the general utility of Loft and HSLoad, makes it possible to create generic program template files for the analysis of components ranging from something as simple as a stiffened flat panel, to curved panels, fuselage and cryogenic tank components, flight control surfaces, and wings, through to full air and space vehicle general arrangements.
The new conversion model MODERN to derive erosion rates from inventories of fallout radionuclides
NASA Astrophysics Data System (ADS)
Arata, Laura; Meusburger, Katrin; Frenkel, Elena; A'Campo-Neuen, Annette; Iurian, Andra-Rada; Ketterer, Michael E.; Mabit, Lionel; Alewell, Christine
2016-04-01
The measurement of fallout radionuclides (FRNs) has become one of the most commonly used methods to quantify soil erosion and depositional processes. FRNs include anthropogenic radionuclides (e.g. 137Cs, 239+240Pu) released into the atmosphere during nuclear bomb tests and power plant accidents (e.g. Chernobyl, Fukushima-Daiichi), as well as natural radiotracers such as 210Pbex and 7Be. FRNs reach the land surface by dry and wet fallout from the atmosphere. Once deposited, FRNs are tightly adsorbed by fine soil particles and their subsequent redistribution is mostly associated with soil erosion processes. FRN methods are based on a qualitative comparison: the inventory (total radionuclide activity per unit area) at a given sampling site is compared to that of a so-called reference site. The conversion of FRN inventories into soil erosion and deposition rates is done with a variety of models, whose suitability depends on the selected FRN, soil cultivation (ploughed or unploughed) and movement (erosion or deposition). The authors propose a new conversion model which can be easily and comprehensively used for different FRNs, land uses and soil redistribution processes. This new model, MODERN (MOdelling Deposition and Erosion rates with RadioNuclides), considers the precise depth distribution of a given FRN at a reference site, and allows adapting it to any specific site conditions. MODERN's adaptability and performance have been tested on two published case studies: (i) a 137Cs study in an alpine, unploughed area in the Aosta valley (Italy) and (ii) a 210Pbex study of a ploughed area located in Romania. The results show a good agreement and a significant correlation (r = 0.91, p < 0.0001) between the results of MODERN and the published models currently used by the FRN scientific community (i.e. the Profile Distribution Model and the Mass Balance Model). The open-access code and cost-free availability of MODERN will ensure the promotion of a wider application of FRNs for investigating soil erosion and sedimentation processes.
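For context, the simplest of the conversion models mentioned above for cultivated sites is the proportional model, which turns the percentage reduction in FRN inventory at a sampling point into an erosion rate. The sketch below assumes its commonly quoted form Y = 10·d·B·X/(100·T); it illustrates the general conversion idea only, not the MODERN algorithm, and all input values are placeholders.

```python
# Minimal sketch: the simple proportional model for converting a 137Cs
# inventory loss into an erosion rate on a ploughed site (not MODERN itself).
# All inputs are placeholder values.
A_ref = 2500.0   # reference-site inventory, Bq/m^2
A     = 1900.0   # sampling-point inventory, Bq/m^2
d     = 0.25     # plough depth, m
B     = 1300.0   # soil bulk density, kg/m^3
T     = 55.0     # years since the main fallout period

X = (A_ref - A) / A_ref * 100.0          # percentage inventory reduction
Y = 10.0 * d * B * X / (100.0 * T)       # erosion rate, t/ha/yr
print(f"inventory reduction X = {X:.1f} %,  erosion rate Y = {Y:.1f} t/ha/yr")
```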
Some aspects of the analysis of geodetic strain observations in kinematic models
NASA Astrophysics Data System (ADS)
Welsch, W. M.
1986-11-01
Frequently, deformation processes are analyzed in static models. In many cases this procedure is justified, in particular if the deformation occurring is a singular event. If, however, the deformation is a continuous process, as is the case, for instance, with recent crustal movements, analysis in kinematic models is more commensurate with the problem, because the factor "time" is considered an essential part of the model. Some special aspects have to be considered when analyzing geodetic strain observations in kinematic models. They are dealt with in this paper. After a brief derivation of the basic kinematic model and the kinematic strain model, the following subjects are treated: the adjustment of the pointwise velocity field and the derivation of strain-rate parameters; the fixing of the kinematic reference system as part of the geodetic datum; statistical tests of models by testing linear hypotheses; the invariance of kinematic strain-rate parameters with respect to transformations of the coordinate system and the geodetic datum; and the interpolation of strain rates by finite-element methods. After the presentation of some advanced models for the description of secular and episodic kinematic processes, data analysis in dynamic models is regarded as a further generalization of deformation analysis.
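As orientation for the terms used above, a minimal form of the kinematic (velocity-field) model and the strain-rate parameters derived from it can be written as follows; this is the standard textbook linearization, sketched here for reference rather than taken from the paper.

```latex
% Pointwise velocity field linearized about a reference point x_0:
%   v(x) = v_0 + L\,(x - x_0), \qquad L = \dot{E} + \dot{\Omega},
% with the symmetric strain-rate tensor, the antisymmetric rotation-rate tensor,
% and the kinematic point model in time:
\dot{E}_{ij} = \tfrac{1}{2}\left(\frac{\partial v_i}{\partial x_j}
             + \frac{\partial v_j}{\partial x_i}\right), \qquad
\dot{\Omega}_{ij} = \tfrac{1}{2}\left(\frac{\partial v_i}{\partial x_j}
             - \frac{\partial v_j}{\partial x_i}\right), \qquad
x_i(t) = x_i(t_0) + v_i\,(t - t_0).
```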
ERIC Educational Resources Information Center
Debska, Agnieszka; Raczaszek-Leonardi, Joanna
2018-01-01
The perspective-adjustment model of language interpretation assumes an initial egocentric stage in comprehension that is only later adjusted to the interlocutor's perspective. Moreover, substantial processing resources are involved in perspective-taking. However, many experiments in the perspective-adjustment framework do not control for visual…
ERIC Educational Resources Information Center
Shaheen, Amer N.
2011-01-01
This research investigated Electronic Service Quality (E-SQ) features that contribute to customer satisfaction in an online environment. The aim was to develop an approach which improves E-CRM processes and enhances online customer satisfaction. The research design adopted mixed methods involving qualitative and quantitative methods to…
Towards Model-Driven End-User Development in CALL
ERIC Educational Resources Information Center
Farmer, Rod; Gruba, Paul
2006-01-01
The purpose of this article is to introduce end-user development (EUD) processes to the CALL software development community. EUD refers to the active participation of end-users, as non-professional developers, in the software development life cycle. Unlike formal software engineering approaches, the focus in EUD on means/ends development is…
ERIC Educational Resources Information Center
Lever, Anne G.; Ridderinkhof, K. Richard; Marsman, Maarten; Geurts, Hilde M.
2017-01-01
As a large heterogeneity is observed across studies on interference control in autism spectrum disorder (ASD), research may benefit from the use of a cognitive framework that models specific processes underlying reactive and proactive control of interference. Reactive control refers to the expression and suppression of responses and proactive…
Processes of Discourse Integration: Evidence from Event-Related Brain Potentials
ERIC Educational Resources Information Center
Ferretti, Todd R.; Singer, Murray; Harwood, Jenna
2013-01-01
We used ERP methodology to investigate how readers validate discourse concepts and update situation models when those concepts followed factive (e.g., knew) and nonfactive (e.g., "guessed") verbs, and also when they were true, false, or indeterminate with reference to previous discourse. Following factive verbs, early (P2) and later brain…
ERIC Educational Resources Information Center
Kultur, Can; Oytun, Erden; Cagiltay, Kursat; Ozden, M. Yasar; Kucuk, Mehmet Emin
2004-01-01
The Shareable Content Object Reference Model (SCORM) aims to standardize electronic course content, its packaging and delivery. Instructional designers and e-learning material producer organizations accept SCORM's significant impact on the instructional design/delivery process; however, not much is known about how such standards will be implemented to…
USDA-ARS?s Scientific Manuscript database
Pasta is a simple food made from water and durum wheat (Triticum turgidum subsp. durum) semolina. As pasta increases in popularity, studies have endeavored to analyze the attributes that contribute to high quality pasta. Despite being a simple food, the laboratory scale analysis of pasta quality is ...
ERIC Educational Resources Information Center
Thompson, Julia D.; Jesiek, Brent K.
2017-01-01
This paper examines how the structural features of engineering engagement programs (EEPs) are related to the nature of their service-learning partnerships. "Structure" refers to formal and informal models, processes, and operations adopted or used to describe engagement programs, while "nature" signifies the quality of…
Unifying Psychology and Experiential Education: Toward an Integrated Understanding of "Why" It Works
ERIC Educational Resources Information Center
Houge Mackenzie, Susan; Son, Julie S.; Hollenhorst, Steve
2014-01-01
This article examines the significance of psychology to experiential education (EE) and critiques EE models that have developed in isolation from larger psychological theories and developments. Following a review of literature and current issues, select areas of psychology are explored with reference to experiential learning processes. The state…
Teaching Only the Essentials--The Thirty-Minute Stand.
ERIC Educational Resources Information Center
Engeldinger, Eugene A.
1988-01-01
Describes an instructional model for teaching library skills which consists of a 30-minute lecture followed by a 20-minute exercise. Assumptions about learning and the educational process are discussed as well as goal-setting for the class and exercise. It is suggested that this format could be applied to other disciplines. (10 references) (MES)
USDA-ARS?s Scientific Manuscript database
Accurate estimates of daily crop evapotranspiration (ET) are needed for efficient irrigation management, especially in arid and semi-arid irrigated regions where crop water demand exceeds rainfall. The impact of inaccurate ET estimates can be tremendous in both irrigation cost and the increased dema...
Genetic Diseases and Genetic Determinism Models in French Secondary School Biology Textbooks
ERIC Educational Resources Information Center
Castera, Jeremy; Bruguiere, Catherine; Clement, Pierre
2008-01-01
The presentation of genetic diseases in French secondary school biology textbooks is analysed to determine the major conceptions taught in the field of human genetics. References to genetic diseases, and the processes by which they are explained (monogeny, polygeny, chromosomal anomaly and environmental influence) are studied in recent French…
Consolidation of Long-Term Memory: Evidence and Alternatives
ERIC Educational Resources Information Center
Meeter, Martijn; Murre, Jaap M. J.
2004-01-01
Memory loss in retrograde amnesia has long been held to be larger for recent periods than for remote periods, a pattern usually referred to as the Ribot gradient. One explanation for this gradient is consolidation of long-term memories. Several computational models of such a process have shown how consolidation can explain characteristics of…
Model Comparison in Subsurface Science: The DECOVALEX and Sim-SEQ Initiatives (Invited)
NASA Astrophysics Data System (ADS)
Birkholzer, J. T.; Mukhopadhyay, S.; Rutqvist, J.; Tsang, C.
2013-12-01
Building predictive model for flow and transport processes in the subsurface is a challenging task, even more so if these processes are coupled to geomechanical and/or geochemical effects. Modelers must take into consideration a multiplicity of length scales, a wide range of time scales, the coupling between processes, different model components, and the spatial variability in the value of most model input parameters (and often limited knowledge about them). Consequently, modelers have to make choices while developing their conceptual models. Such model choices may cause a wide range in the predictions made by different models and different modeling groups, even if each of the underlying simulators has been perfectly verified against appropriate benchmarks. In other words, the modeling activity itself is prone to uncertainty and bias. This uncertainty, referred to here as model selection uncertainty, forms one of the greatest sources of uncertainty for predictive modeling. In this paper, we discuss two examples of model intercomparison exercises that are currently undertaken to better understand model selection uncertainty, elucidate system behavior, inform needs for data collection and better physics parameterizations, and enhance community understanding of capabilities. The first example is the international DECOVALEX project, which was launched in 1992 by a group of countries dealing with modeling issues related to geologic disposal of radioactive waste. DECOVALEX is an acronym for DEvelopment of COupled THM models and their VALidation against Experiments. To date, the project has progressed successfully through five stages, each of which featuring a small number of test cases for model comparison related to coupled thermo-hydro-mechanical (THM) processes in geologic systems. The test cases are proposed and developed by the organizations participating in DECOVALEX; they typically involve results from major field and laboratory experiments. Over the past decades, the DECOVALEX project has played a major role in improving our understanding of coupled THM processes in fractured rock and buffer/backfill materials, a subject of importance to performance assessment of a radioactive waste geologic repository. The second example is the Sim-SEQ project, a relatively recent model comparison initiative addressing multi-phase processes relevant in geologic carbon sequestration. Like DECOVALEX, Sim-SEQ is not about benchmarking, but rather about evaluating model building efforts in a broad and comprehensive sense. In Sim-SEQ, sixteen international modeling teams are building their own models for a specific carbon sequestration site referred to as the Sim-SEQ Study site (the S-3 site). The S-3 site is patterned after the ongoing SECARB Phase III Early Test site in southwestern Mississippi, where CO2 is injected into a fluvial sandstone unit with high vertical and lateral heterogeneity. The complex geology of the S-3 site, its location in the water leg of a CO2-EOR field with a strong water drive, and the presence of methane in the reservoir brine make this a challenging task, requiring the modelers to use their best judgment in making a large number of choices about how to model various processes and properties of the system.
Stage line diagram: an age-conditional reference diagram for tracking development.
van Buuren, Stef; Ooms, Jeroen C L
2009-05-15
This paper presents a method for calculating stage line diagrams, a novel type of reference diagram useful for tracking developmental processes over time. Potential fields of application include: dentistry (tooth eruption), oncology (tumor grading, cancer staging), virology (HIV infection and disease staging), psychology (stages of cognitive development), human development (pubertal stages) and chronic diseases (stages of dementia). Transition probabilities between successive stages are modeled as smoothly varying functions of age. Age-conditional references are calculated from the modeled probabilities by the mid-P value. It is possible to eliminate the influence of age by calculating standard deviation scores (SDS). The method is applied to empirical data to produce reference charts on secondary sexual maturation. The mean of the empirical SDS in the reference population is close to zero, whereas the variance depends on age. The stage line diagram provides quick insight into both the status (in SDS) and tempo (in SDS/year) of development of an individual child. Other measures (e.g. height SDS, body mass index SDS) from the same child can be added to the chart. Diagrams for sexual maturation are available as a web application at http://vps.stefvanbuuren.nl/puberty. The stage line diagram expresses status and tempo of discrete changes on a continuous scale. Wider application of these scores opens up new analytic possibilities. (c) 2009 John Wiley & Sons, Ltd.
Using incident response trees as a tool for risk management of online financial services.
Gorton, Dan
2014-09-01
The article introduces the use of probabilistic risk assessment for modeling the incident response process of online financial services. The main contribution is the creation of incident response trees, using event tree analysis, which provides us with a visual tool and a systematic way to estimate the probability of a successful incident response process against the currently known risk landscape, making it possible to measure the balance between front-end and back-end security measures. The model is presented using an illustrative example, and is then applied to the incident response process of a Swedish bank. Access to relevant data is verified and the applicability and usability of the proposed model is verified using one year of historical data. Potential advantages and possible shortcomings are discussed, referring to both the design phase and the operational phase, and future work is presented. © 2014 Society for Risk Analysis.
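The incident response tree concept can be illustrated with a small numeric sketch: each branch of the event tree is a detection or containment step with a probability, and the probability of a successful response is read off the favourable paths. The stages and probabilities below are invented for illustration and are not those of the Swedish bank case.

```python
# Minimal sketch of an incident response tree as a chain of branch points.
# At each stage the incident is either stopped or the attack proceeds.
# Probabilities are illustrative placeholders.
stages = [
    ("fraud detection at login",     0.60),
    ("transaction monitoring alert", 0.70),
    ("manual back-office review",    0.50),
]

p_attack_reaches_end = 1.0
p_success_by_stage = []
for name, p_stop in stages:
    p_success_by_stage.append((name, p_attack_reaches_end * p_stop))
    p_attack_reaches_end *= (1.0 - p_stop)

for name, p in p_success_by_stage:
    print(f"incident stopped at '{name}': {p:.3f}")
print(f"successful incident response overall: {1 - p_attack_reaches_end:.3f}")
```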
NASA Astrophysics Data System (ADS)
Pesaresi, Martino; Ouzounis, Georgios K.; Gueguen, Lionel
2012-06-01
A new compact representation of differential morphological profile (DMP) vector fields is presented. It is referred to as the CSL model and is conceived to radically reduce the dimensionality of the DMP descriptors. The model maps three characteristic parameters, namely scale, saliency and level, into the RGB space through a HSV transform. The result is a medium-abstraction semantic layer used for visual exploration, image information mining and pattern classification. Fused with the PANTEX built-up presence index, the CSL model converges to an approximate building footprint representation layer in which color represents building class labels. This process is demonstrated on the first high resolution (HR) global human settlement layer (GHSL) computed from multi-modal HR and VHR satellite images. Results of the first massive processing exercise involving several thousands of scenes around the globe are reported along with validation figures.
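The core of the CSL mapping, three characteristic parameters pushed through an HSV-to-RGB transform, can be sketched as below; the normalization ranges and the assignment of scale, saliency and level to hue, saturation and value are assumptions made for illustration, not the exact encoding of the paper.

```python
# Minimal sketch: map per-pixel (scale, saliency, level) triplets to RGB
# through an HSV transform, in the spirit of the CSL representation.
# Normalization and channel assignment are illustrative assumptions.
import colorsys
import numpy as np

rng = np.random.default_rng(2)
scale    = rng.uniform(1, 32,  size=(4, 4))   # characteristic DMP scale
saliency = rng.uniform(0, 1,   size=(4, 4))   # DMP contrast / saliency
level    = rng.uniform(0, 255, size=(4, 4))   # grey level at the response

rgb = np.zeros((4, 4, 3))
for i in range(4):
    for j in range(4):
        h = scale[i, j] / 32.0                # hue encodes scale
        s = saliency[i, j]                    # saturation encodes saliency
        v = level[i, j] / 255.0               # value encodes level
        rgb[i, j] = colorsys.hsv_to_rgb(h, s, v)
print(rgb[0, 0])   # one RGB triplet of the compact CSL-style layer
```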
Stryjewska, Agnieszka; Kiepura, Katarzyna; Librowski, Tadeusz; Lochyński, Stanisław
2013-01-01
Industrial biotechnology has been defined as the use and application of biotechnology for the sustainable processing and production of chemicals, materials and fuels. It makes use of biocatalysts such as microbial communities, whole-cell microorganisms or purified enzymes. In this review these processes are described. Drug design is an iterative process which begins when a chemist identifies a compound that displays an interesting biological profile and ends when both the activity profile and the chemical synthesis of the new chemical entity are optimized. Traditional approaches to drug discovery rely on a stepwise synthesis and screening program for large numbers of compounds to optimize activity profiles. Over the past ten to twenty years, scientists have used computer models of new chemical entities to help define activity profiles, geometries and reactivities. This article introduces, inter alia, the concepts of molecular modelling and contains references for further reading.
Analyzing Single-Molecule Protein Transportation Experiments via Hierarchical Hidden Markov Models
Chen, Yang; Shen, Kuang
2017-01-01
To maintain proper cellular functions, over 50% of proteins encoded in the genome need to be transported to cellular membranes. The molecular mechanism behind such a process, often referred to as protein targeting, is not well understood. Single-molecule experiments are designed to unveil the detailed mechanisms and reveal the functions of different molecular machineries involved in the process. The experimental data consist of hundreds of stochastic time traces from the fluorescence recordings of the experimental system. We introduce a Bayesian hierarchical model on top of hidden Markov models (HMMs) to analyze these data and use the statistical results to answer the biological questions. In addition to resolving the biological puzzles and delineating the regulating roles of different molecular complexes, our statistical results enable us to propose a more detailed mechanism for the late stages of the protein targeting process. PMID:28943680
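The base layer of the analysis, a hidden Markov model fitted to fluorescence time traces, can be sketched with the hmmlearn package as below; the hierarchical Bayesian layer the authors place on top is not reproduced here, and the number of states and the simulated traces are placeholders.

```python
# Minimal sketch: fit a Gaussian HMM to a set of simulated fluorescence
# (FRET-like) time traces, i.e. the base layer under the hierarchical model.
# hmmlearn usage and the simulated data are illustrative assumptions.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(3)
traces = []
for _ in range(50):                          # 50 single-molecule traces
    states = rng.integers(0, 2, size=200)    # crude stand-in for a state path
    traces.append(0.3 + 0.5 * states + rng.normal(0, 0.05, 200))

X = np.concatenate(traces).reshape(-1, 1)    # stack traces for hmmlearn
lengths = [len(t) for t in traces]

model = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=100)
model.fit(X, lengths)
print("estimated state means:", model.means_.ravel())
```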
NASA Astrophysics Data System (ADS)
Lachowicz, Mirosław
2016-03-01
The very stimulating paper [6] discusses an approach to perception and learning in a large population of living agents. The approach is based on a generalization of kinetic theory methods in which the interactions between agents are described in terms of game theory. Such an approach was already discussed in Refs. [2-4] (see also references therein) in various contexts. The processes of perception and learning are based on interactions between agents, and therefore general kinetic theory is a suitable tool for modeling them. However, the main question that arises is how the perception and learning processes may be treated in mathematical modeling. How may we precisely deliver suitable mathematical structures that are able to capture the various aspects of perception and learning?
Computational Modeling for Language Acquisition: A Tutorial With Syntactic Islands.
Pearl, Lisa S; Sprouse, Jon
2015-06-01
Given the growing prominence of computational modeling in the acquisition research community, we present a tutorial on how to use computational modeling to investigate learning strategies that underlie the acquisition process. This is useful for understanding both typical and atypical linguistic development. We provide a general overview of why modeling can be a particularly informative tool and some general considerations when creating a computational acquisition model. We then review a concrete example of a computational acquisition model for complex structural knowledge referred to as syntactic islands. This includes an overview of syntactic islands knowledge, a precise definition of the acquisition task being modeled, the modeling results, and how to meaningfully interpret those results in a way that is relevant for questions about knowledge representation and the learning process. Computational modeling is a powerful tool that can be used to understand linguistic development. The general approach presented here can be used to investigate any acquisition task and any learning strategy, provided both are precisely defined.
Adaptive Parameter Estimation of Person Recognition Model in a Stochastic Human Tracking Process
NASA Astrophysics Data System (ADS)
Nakanishi, W.; Fuse, T.; Ishikawa, T.
2015-05-01
This paper addresses the estimation of the parameters of person recognition models using a sequential Bayesian filtering method. In many human tracking methods, the parameters of the models used to recognize the same person in successive frames are set in advance of the tracking process. In real situations these parameters may change according to the observation conditions and the difficulty of predicting a person's position. Thus, in this paper we formulate adaptive parameter estimation using a general state space model. First, we explain how to formulate human tracking in a general state space model and describe its components. Then, referring to previous research, we use the Bhattacharyya coefficient to formulate the observation model of the general state space model, which corresponds to the person recognition model. The observation model in this paper is a function of the Bhattacharyya coefficient with one unknown parameter. Finally, we sequentially estimate this parameter on a real dataset under several settings. The results show that the sequential parameter estimation succeeded and was consistent with observation conditions such as occlusions.
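The observation model is built around the Bhattacharyya coefficient between appearance histograms; a minimal sketch of that computation is given below, with random histograms standing in for the colour histograms of a tracked person and a candidate region, and the single unknown likelihood parameter of the paper left out.

```python
# Minimal sketch: Bhattacharyya coefficient between two normalized histograms,
# the similarity measure underlying the person recognition (observation) model.
# The histograms here are random placeholders.
import numpy as np

rng = np.random.default_rng(4)
h_target    = rng.random(64)                  # appearance histogram of the target
h_candidate = rng.random(64)                  # histogram of a candidate region
h_target    /= h_target.sum()
h_candidate /= h_candidate.sum()

bc = np.sum(np.sqrt(h_target * h_candidate))  # 1.0 = identical, 0.0 = disjoint
print(f"Bhattacharyya coefficient: {bc:.3f}")
```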
Hwang, Jee-In; Cimino, James J; Bakken, Suzanne
2003-01-01
The purposes of the study were (1) to evaluate the usefulness of the International Standards Organization (ISO) Reference Terminology Model for Nursing Diagnoses as a terminology model for defining nursing diagnostic concepts in the Medical Entities Dictionary (MED) and (2) to create the additional hierarchical structures required for integration of nursing diagnostic concepts into the MED. The authors dissected nursing diagnostic terms from two source terminologies (Home Health Care Classification and the Omaha System) into the semantic categories of the ISO model. Consistent with the ISO model, they selected Focus and Judgment as required semantic categories for creating intensional definitions of nursing diagnostic concepts in the MED. Because the MED does not include Focus and Judgment hierarchies, the authors developed them to define the nursing diagnostic concepts. The ISO model was sufficient for dissecting the source terminologies into atomic terms. The authors identified 162 unique focus concepts from the 266 nursing diagnosis terms for inclusion in the Focus hierarchy. For the Judgment hierarchy, the authors precoordinated Judgment and Potentiality instead of using Potentiality as a qualifier of Judgment as in the ISO model. Impairment and Alteration were the most frequently occurring judgments. Nursing care represents a large proportion of health care activities; thus, it is vital that terms used by nurses are integrated into concept-oriented terminologies that provide broad coverage for the domain of health care. This study supports the utility of the ISO Reference Terminology Model for Nursing Diagnoses as a facilitator for the integration process.
Continuum-Kinetic Models and Numerical Methods for Multiphase Applications
NASA Astrophysics Data System (ADS)
Nault, Isaac Michael
This thesis presents a continuum-kinetic approach for modeling general problems in multiphase solid mechanics. In this context, a continuum model refers to any model, typically on the macro-scale, in which continuous state variables are used to capture the most important physics: conservation of mass, momentum, and energy. A kinetic model refers to any model, typically on the meso-scale, which captures the statistical motion and evolution of microscopic entities. Multiphase phenomena usually involve non-negligible micro- or meso-scopic effects at the interfaces between phases. The approach developed in the thesis attempts to combine the computational performance benefits of a continuum model with the physical accuracy of a kinetic model when applied to a multiphase problem. The approach is applied to modeling a single particle impact in Cold Spray, an engineering process that intimately involves the interaction of crystal grains with high-magnitude elastic waves. Such a situation can be classified as a multiphase application due to the discrete nature of grains on the spatial scale of the problem. For this application, a hyper elasto-plastic model is solved by a finite volume method with an approximate Riemann solver. The results of this model are compared for two types of plastic closure: a phenomenological macro-scale constitutive law, and a physics-based meso-scale Crystal Plasticity model.
System Engineering Issues for Avionics Survival in the Space Environment
NASA Technical Reports Server (NTRS)
Pavelitz, Steven
1999-01-01
This paper examines how the system engineering process influences the design of a spacecraft's avionics by considering the space environment. Avionics are susceptible to the thermal, radiation, plasma, and meteoroid/orbital debris environments. The environment definitions for various spacecraft mission orbits (LEO/low inclination, LEO/Polar, MEO, HEO, GTO, GEO and High Apogee Elliptical) are discussed. NASA models and commercial software used for environment analysis are reviewed. The applicability of technical references, such as NASA TM-4527, "Natural Orbital Environment Guidelines for Use in Aerospace Vehicle Development", is discussed. System engineering references, such as the MSFC System Engineering Handbook, are reviewed to determine how the environments are accounted for in the system engineering process. Tools and databases to assist the system engineer and avionics designer in addressing space environment effects on avionics are described and their usefulness assessed.
A reference model for scientific information interchange
NASA Technical Reports Server (NTRS)
Reich, Lou; Sawyer, Don; Davis, Randy
1993-01-01
This paper presents an overview of an Information Interchange Reference Model (IIRM) currently being developed by individuals participating in the Consultative Committee for Space Data Systems (CCSDS) Panel 2, the Planetary Data Systems (PDS), and the Committee on Earth Observing Satellites (CEOS). This is an ongoing research activity and is not an official position by these bodies. This reference model provides a framework for describing and assessing current and proposed methodologies for information interchange within and among the space agencies. It is hoped that this model will improve interoperability between the various methodologies. As such, this model attempts to address key information interchange issues as seen by the producers and users of space-related data and to put them into a coherent framework. Information is understood as the knowledge (e.g., the scientific content) represented by data. Therefore, concern is not primarily on mechanisms for transferring data from user to user (e.g., compact disk read-only memory (CD-ROM), wide-area networks, optical tape, and so forth) but on how information is encoded as data and how the information content is maintained with minimal loss or distortion during transmittal. The model assumes open systems, which means that the protocols or methods used should be fully described and the descriptions publicly available. Ideally these protocols are promoted by recognized standards organizations using processes that permit involvement by those most likely to be affected, thereby enhancing the protocol's stability and the likelihood of wide support.
Modeling and Characterization of Damage Processes in Metallic Materials
NASA Technical Reports Server (NTRS)
Glaessgen, E. H.; Saether, E.; Smith, S. W.; Hochhalter, J. D.; Yamakov, V. I.; Gupta, V.
2011-01-01
This paper describes a broad effort that is aimed at understanding the fundamental mechanisms of crack growth and using that understanding as a basis for designing materials and enabling predictions of fracture in materials and structures that have small characteristic dimensions. This area of research, herein referred to as Damage Science, emphasizes the length scale regimes of the nanoscale and the microscale for which analysis and characterization tools are being developed to predict the formation, propagation, and interaction of fundamental damage mechanisms. Examination of nanoscale processes requires atomistic and discrete dislocation plasticity simulations, while microscale processes can be examined using strain gradient plasticity, crystal plasticity and microstructure modeling methods. Concurrent and sequential multiscale modeling methods are being developed to analytically bridge between these length scales. Experimental methods for characterization and quantification of near-crack tip damage are also being developed. This paper focuses on several new methodologies in these areas and their application to understanding damage processes in polycrystalline metals. On-going and potential applications are also discussed.
The FoReVer Methodology: A MBSE Framework for Formal Verification
NASA Astrophysics Data System (ADS)
Baracchi, Laura; Mazzini, Silvia; Cimatti, Alessandro; Tonetta, Stefano; Garcia, Gerald
2013-08-01
The need for high level of confidence and operational integrity in critical space (software) systems is well recognized in the Space industry and has been addressed so far through rigorous System and Software Development Processes and stringent Verification and Validation regimes. The Model Based Space System Engineering process (MBSSE) derived in the System and Software Functional Requirement Techniques study (SSFRT) focused on the application of model based engineering technologies to support the space system and software development processes, from mission level requirements to software implementation through model refinements and translations. In this paper we report on our work in the ESA-funded FoReVer project where we aim at developing methodological, theoretical and technological support for a systematic approach to the space avionics system development, in phases 0/A/B/C. FoReVer enriches the MBSSE process with contract-based formal verification of properties, at different stages from system to software, through a step-wise refinement approach, with the support for a Software Reference Architecture.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, Varun; Upadhyay, Piyush; Fifield, Leonard S.
Friction stir welding (FSW) is a popular technique for joining dissimilar materials in numerous applications. The solid-state nature of the process enables joining materials with strikingly different physical properties. For welds in the lap configuration, an enhancement to this technology is made by introducing a short hard insert, referred to as a cutting scribe, at the bottom of the tool pin. The cutting scribe induces deformation in the bottom plate, which leads to the formation of mechanical interlocks or hook-like structures at the interface of the two materials. A thermo-mechanically coupled computational model employing a coupled Eulerian-Lagrangian approach is developed to quantitatively capture the morphology of these interlocks during the FSW process. Simulations using the developed model are validated against experimental observations. The identified interface morphology, coupled with the predicted temperature field from this process-structure model, can then be used to estimate the post-weld microstructure and joint strength.
NASA Astrophysics Data System (ADS)
Hidy, Dóra; Barcza, Zoltán; Marjanović, Hrvoje; Zorana Ostrogović Sever, Maša; Dobor, Laura; Gelybó, Györgyi; Fodor, Nándor; Pintér, Krisztina; Churkina, Galina; Running, Steven; Thornton, Peter; Bellocchi, Gianni; Haszpra, László; Horváth, Ferenc; Suyker, Andrew; Nagy, Zoltán
2016-12-01
The process-based biogeochemical model Biome-BGC was enhanced to improve its ability to simulate carbon, nitrogen, and water cycles of various terrestrial ecosystems under contrasting management activities. Biome-BGC version 4.1.1 was used as a base model. Improvements included addition of new modules such as the multilayer soil module, implementation of processes related to soil moisture and nitrogen balance, soil-moisture-related plant senescence, and phenological development. Vegetation management modules with annually varying options were also implemented to simulate management practices of grasslands (mowing, grazing), croplands (ploughing, fertilizer application, planting, harvesting), and forests (thinning). New carbon and nitrogen pools have been defined to simulate yield and soft stem development of herbaceous ecosystems. The model version containing all developments is referred to as Biome-BGCMuSo (Biome-BGC with multilayer soil module; in this paper, Biome-BGCMuSo v4.0 is documented). Case studies on a managed forest, cropland, and grassland are presented to demonstrate the effect of model developments on the simulation of plant growth as well as on carbon and water balance.
Goce and Its Role in Combined Global High Resolution Gravity Field Determination
NASA Astrophysics Data System (ADS)
Fecher, T.; Pail, R.; Gruber, T.
2013-12-01
Combined high-resolution gravity field models serve as a mandatory basis for describing static and dynamic processes in the Earth system. Ocean dynamics can be modeled with reference to a highly accurate geoid as reference surface, and solid earth processes are initiated by the gravity field. Geodetic disciplines such as height system determination also depend on highly precise gravity field information. To fulfill the various requirements concerning resolution and accuracy, all kinds of gravity field information, that is, satellite as well as terrestrial and altimetric gravity field observations, have to be included in one combination process. A key role is reserved here for GOCE observations, which contribute their optimal signal content in the long to medium wavelength part and enable a more accurate gravity field determination than ever before, especially in areas where no highly accurate terrestrial gravity field observations are available, such as South America, Asia or Africa. For our contribution we prepare a combined high-resolution gravity field model up to d/o 720 based on full normal equations, including recent GOCE, GRACE and terrestrial/altimetric data. For all data sets, normal equations are set up separately, weighted relative to each other in the combination step, and solved. This procedure is computationally challenging and can only be performed using supercomputers. We put special emphasis on the combination process, for which we specifically modified our procedure to include GOCE data optimally. Furthermore, we modified our terrestrial/altimetric data sets, which should result in an improved outcome. With our model, which includes the newest GOCE TIM4 gradiometry results, we can show how GOCE contributes to a combined gravity field solution, especially in areas of poor terrestrial data coverage. The model is validated against independent GPS leveling data in selected regions as well as by computation of the mean dynamic topography over the oceans. Further, we analyze the statistical error estimates derived from full covariance propagation and compare them with the absolute validation against independent data sets.
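The combination step described above can be illustrated with a minimal numerical sketch: separate normal-equation systems are formed, weighted relative to each other, stacked, and solved. The sizes, weights, and data below are purely illustrative stand-ins, not the actual GOCE/GRACE/terrestrial systems, which are far larger and require supercomputers.

```python
import numpy as np

# Hypothetical sketch: combining separately assembled normal equations
# N_i x = b_i (e.g. satellite, terrestrial and altimetric data sets) with
# relative weights w_i, then solving the stacked system for the parameters x.
rng = np.random.default_rng(0)
n_params = 50                      # stand-in for spherical-harmonic coefficients

def make_normals(noise):
    """Build a toy normal-equation system A^T A, A^T y for one data set."""
    A = rng.normal(size=(200, n_params))
    y = A @ np.ones(n_params) + rng.normal(scale=noise, size=200)  # true parameters are all 1
    return A.T @ A, A.T @ y

systems = [make_normals(0.1), make_normals(1.0), make_normals(5.0)]
weights = [1.0, 0.5, 0.05]         # relative weights, e.g. from variance component estimation

N = sum(w * Ni for w, (Ni, _) in zip(weights, systems))
b = sum(w * bi for w, (_, bi) in zip(weights, systems))
x = np.linalg.solve(N, b)          # combined solution
print("RMS error of combined solution:", float(np.sqrt(np.mean((x - 1.0) ** 2))))
```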
Ligand placement based on prior structures: the guided ligand-replacement method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klei, Herbert E.; Bristol-Myers Squibb, Princeton, NJ 08543-4000; Moriarty, Nigel W., E-mail: nwmoriarty@lbl.gov
2014-01-01
The process of iterative structure-based drug design involves the X-ray crystal structure determination of upwards of 100 ligands with the same general scaffold (i.e. chemotype) complexed with very similar, if not identical, protein targets. In conjunction with insights from computational models and assays, this collection of crystal structures is analyzed to improve potency, to achieve better selectivity and to reduce liabilities such as absorption, distribution, metabolism, excretion and toxicology. Current methods for modeling ligands into electron-density maps typically do not utilize information on how similar ligands bound in related structures. Even if the electron density is of sufficient quality and resolution to allow de novo placement, the process can take considerable time as the size, complexity and torsional degrees of freedom of the ligands increase. A new module, Guided Ligand Replacement (GLR), was developed in Phenix to increase the ease and success rate of ligand placement when prior protein–ligand complexes are available. At the heart of GLR is an algorithm based on graph theory that associates atoms in the target ligand with analogous atoms in the reference ligand. Based on this correspondence, a set of coordinates is generated for the target ligand. GLR is especially useful in two situations: (i) modeling a series of large, flexible, complicated or macrocyclic ligands in successive structures and (ii) modeling ligands as part of a refinement pipeline that can automatically select a reference structure. Even in those cases for which no reference structure is available, if there are multiple copies of the bound ligand per asymmetric unit GLR offers an efficient way to complete the model after the first ligand has been placed. In all of these applications, GLR leverages prior knowledge from earlier structures to facilitate ligand placement in the current structure.
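The graph-theoretic correspondence at the heart of GLR can be illustrated, in spirit only, with a small sketch that labels atoms by element and searches for a subgraph match between target and reference ligands. The toy molecules and the use of networkx's VF2 matcher are assumptions made for illustration and do not reproduce the actual Phenix/GLR algorithm.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Hypothetical sketch of the core idea: represent ligands as graphs (atoms as
# nodes labelled by element, bonds as edges) and look for a correspondence
# between target and reference atoms, from which target coordinates could be seeded.
def ligand_graph(atoms, bonds):
    g = nx.Graph()
    for idx, element in atoms:
        g.add_node(idx, element=element)
    g.add_edges_from(bonds)
    return g

# toy reference ligand (an ethanol-like fragment) and a target sharing its scaffold
reference = ligand_graph([(0, "C"), (1, "C"), (2, "O")], [(0, 1), (1, 2)])
target = ligand_graph([(0, "C"), (1, "C"), (2, "O"), (3, "N")], [(0, 1), (1, 2), (2, 3)])

matcher = isomorphism.GraphMatcher(
    target, reference, node_match=lambda a, b: a["element"] == b["element"]
)
# each mapping associates target atoms with analogous reference atoms
for mapping in matcher.subgraph_isomorphisms_iter():
    print(mapping)   # e.g. {0: 0, 1: 1, 2: 2}
    break
```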
Thermally Cross-Linkable Hole Transport Materials for Solution Processed Phosphorescent OLEDs
NASA Astrophysics Data System (ADS)
Kim, Beom Seok; Kim, Ohyoung; Chin, Byung Doo; Lee, Chil Won
2018-04-01
Materials for the fabrication of a solution-processed, multi-layered organic light-emitting diode (OLED) were developed. Preparation of a hole transport layer with a thermally cross-linkable chemical structure, which can be processed to form a thin film and then transformed into an insoluble film through an amine-alcohol condensation reaction with heat treatment, was investigated. Functional groups such as triphenylamine linked with phenylcarbazole or biphenyl were employed in the chemical structure of the hole transport layer in order to maintain high triplet-energy properties. When phenylcarbazole or biphenyl compounds continuously react with triphenylamine under acid catalysis, a chemically stable thin-film material with desirable energy-level properties for a blue OLED could be obtained. The prepared hole transport materials showed excellent surface roughness and thermal stability in comparison with the commercial reference material. On the solution-processed model hole transport layer, we fabricated a blue phosphorescent OLED device by sequential vacuum deposition. The maximum external quantum efficiency, 19.3%, was improved by more than 40% over devices with the commercial reference material (11.4%).
Horizontal stress in planetary lithospheres from vertical processes
NASA Technical Reports Server (NTRS)
Banerdt, W. B.
1991-01-01
Understanding the stress states in a lithosphere is of fundamental importance for planetary geophysics. It is closely linked to the processes which form and modify tectonic features on the surface and reflects the behavior of the planet's interior, providing a constraint for the difficult problem of determining interior structure and processes. The tectonics on many extraterrestrial bodies (the Moon, Mars, and most of the outer planet satellites) appears to be mostly vertical, and the horizontal stresses induced by vertical motions and loads are expected to dominate the deformation of their lithospheres. Herein, only changes in the state of stress induced by processes such as sedimentary and volcanic deposition, erosional denudation, and changes in the thermal gradient that induce uplift or subsidence are examined. This analysis is important both for evaluating stresses in specific regions in which the vertical stress history can be estimated and for applying the proper loading conditions to global stress models. All references to the lithosphere herein should be understood to refer to the elastic lithosphere, that layer which deforms elastically or in a brittle manner when subjected to geologically scaled stresses.
Khuda, Sefat; Slate, Andrew; Pereira, Marion; Al-Taher, Fadwa; Jackson, Lauren; Diaz-Amigo, Carmen; Bigley, Elmer C; Whitaker, Thomas; Williams, Kristina M
2012-05-02
Among the major food allergies, peanut, egg, and milk are the most common. The immunochemical detection of food allergens depends on various factors, such as the food matrix and processing method, which can affect allergen conformation and extractability. This study aimed to (1) develop matrix-specific incurred reference materials for allergen testing, (2) determine whether multiple allergens in the same model food can be simultaneously detected, and (3) establish the effect of processing on reference material stability and allergen detection. Defatted peanut flour, whole egg powder, and spray-dried milk were added to cookie dough at seven incurred levels before baking. Allergens were measured using five commercial enzyme-linked immunosorbent assay (ELISA) kits. All kits showed decreased recovery of all allergens after baking. Analytical coefficients of variation for most kits increased with baking time, but decreased with incurred allergen level. Thus, food processing negatively affects the recovery and variability of peanut, egg, and milk detection in a sugar cookie matrix when using immunochemical methods.
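A minimal sketch of the two quantities discussed above, percent recovery and the analytical coefficient of variation, is given below; the incurred level and replicate readings are made-up numbers, not data from the study.

```python
import statistics

# Hypothetical sketch: percent recovery and coefficient of variation (CV) of
# replicate ELISA measurements against the incurred (spiked-in) allergen level.
incurred_ppm = 25.0                        # known incurred peanut level (illustrative)
replicates_ppm = [18.2, 17.5, 19.1, 16.8]  # replicate kit readings after baking (illustrative)

mean_measured = statistics.mean(replicates_ppm)
recovery_pct = 100.0 * mean_measured / incurred_ppm
cv_pct = 100.0 * statistics.stdev(replicates_ppm) / mean_measured

print(f"recovery: {recovery_pct:.1f}%  CV: {cv_pct:.1f}%")
```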
40 CFR 63.487 - Batch front-end process vents-reference control technology.
Code of Federal Regulations, 2012 CFR
2012-07-01
...-reference control technology. 63.487 Section 63.487 Protection of Environment ENVIRONMENTAL PROTECTION... SOURCE CATEGORIES National Emission Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.487 Batch front-end process vents—reference control technology. (a) Batch front-end process...
40 CFR 63.487 - Batch front-end process vents-reference control technology.
Code of Federal Regulations, 2013 CFR
2013-07-01
...-reference control technology. 63.487 Section 63.487 Protection of Environment ENVIRONMENTAL PROTECTION... SOURCE CATEGORIES National Emission Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.487 Batch front-end process vents—reference control technology. (a) Batch front-end process...
40 CFR 63.487 - Batch front-end process vents-reference control technology.
Code of Federal Regulations, 2014 CFR
2014-07-01
...-reference control technology. 63.487 Section 63.487 Protection of Environment ENVIRONMENTAL PROTECTION... SOURCE CATEGORIES National Emission Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.487 Batch front-end process vents—reference control technology. (a) Batch front-end process...
40 CFR 63.113 - Process vent provisions-reference control technology.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 10 2012-07-01 2012-07-01 false Process vent provisions-reference control technology. 63.113 Section 63.113 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... § 63.113 Process vent provisions—reference control technology. (a) The owner or operator of a Group 1...
40 CFR 63.113 - Process vent provisions-reference control technology.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 10 2013-07-01 2013-07-01 false Process vent provisions-reference control technology. 63.113 Section 63.113 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... § 63.113 Process vent provisions—reference control technology. (a) The owner or operator of a Group 1...
40 CFR 63.113 - Process vent provisions-reference control technology.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 9 2011-07-01 2011-07-01 false Process vent provisions-reference control technology. 63.113 Section 63.113 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... § 63.113 Process vent provisions—reference control technology. (a) The owner or operator of a Group 1...
40 CFR 63.113 - Process vent provisions-reference control technology.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 10 2014-07-01 2014-07-01 false Process vent provisions-reference control technology. 63.113 Section 63.113 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... § 63.113 Process vent provisions—reference control technology. (a) The owner or operator of a Group 1...
40 CFR 63.113 - Process vent provisions-reference control technology.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 9 2010-07-01 2010-07-01 false Process vent provisions-reference control technology. 63.113 Section 63.113 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... § 63.113 Process vent provisions—reference control technology. (a) The owner or operator of a Group 1...
NASA Astrophysics Data System (ADS)
Theodorsen, A.; Garcia, O. E.; Rypdal, M.
2017-05-01
Filtered Poisson processes are often used as reference models for intermittent fluctuations in physical systems. Such a process is here extended by adding a noise term, either as a purely additive term to the process or as a dynamical term in a stochastic differential equation. The lowest order moments, probability density function, auto-correlation function and power spectral density are derived and used to identify and compare the effects of the two different noise terms. Monte-Carlo studies of synthetic time series are used to investigate the accuracy of model parameter estimation and to identify methods for distinguishing the noise types. It is shown that the probability density function and the three lowest order moments provide accurate estimations of the model parameters, but are unable to separate the noise types. The auto-correlation function and the power spectral density also provide methods for estimating the model parameters, as well as being capable of identifying the noise type. The number of times the signal crosses a prescribed threshold level in the positive direction also promises to be able to differentiate the noise type.
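A minimal simulation sketch of such a process is shown below, assuming one-sided exponential pulses, exponentially distributed amplitudes, and a purely additive Gaussian noise term; all parameter values are illustrative and the dynamical-noise variant is not simulated here.

```python
import numpy as np

# Minimal sketch: a filtered Poisson process built from exponential pulses
# arriving at Poisson-distributed times, plus an additive Gaussian noise term.
rng = np.random.default_rng(1)
dt, n = 0.01, 200_000
t = np.arange(n) * dt
rate, tau_d, mean_amp, noise_sd = 0.5, 1.0, 1.0, 0.2   # assumed parameters

signal = np.zeros(n)
n_pulses = rng.poisson(rate * t[-1])
arrivals = rng.uniform(0.0, t[-1], n_pulses)
amps = rng.exponential(mean_amp, n_pulses)
for t_k, a_k in zip(arrivals, amps):
    mask = t >= t_k
    signal[mask] += a_k * np.exp(-(t[mask] - t_k) / tau_d)   # one-sided exponential pulse

noisy = signal + rng.normal(0.0, noise_sd, n)                # purely additive noise variant
moments = [noisy.mean(), noisy.var(), float(np.mean((noisy - noisy.mean()) ** 3))]
print(moments)   # lowest-order moments, usable for parameter estimation
```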
Sasakura, D; Nakayama, K; Sakamoto, T; Chikuma, T
2015-05-01
The use of transmission near-infrared spectroscopy (TNIRS) is of particular interest in the pharmaceutical industry because TNIRS does not require sample preparation and can analyze several tens of tablet samples in an hour. It has the capability to measure all relevant information from a tablet while it is still on the production line. However, TNIRS has a narrow spectral range and overtone vibrations often overlap. To perform content uniformity testing of tablets by TNIRS, various properties in the tableting process need to be analyzed with a multivariate prediction model, such as Partial Least Squares Regression modeling. One issue is that typical approaches require several hundred reference samples to act as the basis of the method rather than a strategically designed method. This means that many batches are needed to prepare the reference samples, which requires time and is not cost effective. Our group investigated the concentration dependence of the calibration model with a strategic design. Consequently, we developed a more effective approach to the TNIRS calibration model than the existing methodology.
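As a hedged sketch of the kind of multivariate calibration mentioned above, the snippet below fits a PLS regression to synthetic tablet spectra and evaluates it on held-out samples; the spectra, concentration range, and number of components are stand-ins and do not reflect the authors' strategic design.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Hypothetical sketch: calibrating a PLS model that predicts API content from
# transmission NIR spectra; the spectra here are synthetic stand-ins.
rng = np.random.default_rng(2)
n_tablets, n_wavelengths = 120, 300
content = rng.uniform(90.0, 110.0, n_tablets)          # % of label claim (assumed range)
pure_component = rng.normal(size=n_wavelengths)
spectra = np.outer(content, pure_component) + rng.normal(scale=5.0, size=(n_tablets, n_wavelengths))

X_train, X_test, y_train, y_test = train_test_split(spectra, content, random_state=0)
model = PLSRegression(n_components=5).fit(X_train, y_train)
print("R^2 on held-out tablets:", round(model.score(X_test, y_test), 3))
```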
Clinical modeling--a critical analysis.
Blobel, Bernd; Goossen, William; Brochhausen, Mathias
2014-01-01
Modeling clinical processes (and their informational representation) is a prerequisite for optimally enabling and supporting high-quality and safe care through information and communication technology and meaningful use of gathered information. The paper investigates existing approaches to clinical modeling, systematically analyzing the underlying principles, the consistency with and opportunities for integration into other existing or emerging projects, as well as the correctness of representing the reality of health and health services. The analysis is performed using an architectural framework for modeling real-world systems. In addition, fundamental work on the representation of facts, relations, and processes in the clinical domain by ontologies is applied, thereby including the integration of advanced methodologies such as translational and systems medicine. The paper demonstrates fundamental weaknesses and differing maturity as well as evolutionary potential in the approaches considered. It offers a development process starting with the business domain and its ontologies, continuing with the Reference Model-Open Distributed Processing (RM-ODP) related conceptual models in the ICT ontology space, the information and the computational view, and concluding with the implementation details represented as engineering and technology views, respectively. The existing approaches reflect the clinical domain at different levels, place their main focus on different phases of the development process instead of first establishing a representation of the real business process, and therefore enable the involvement of domain experts to differing and partly limited degrees. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Baumann, Stefan; Schumacher, Petra B
2012-09-01
The paper reports on a perception experiment in German that investigated the neuro-cognitive processing of information-structural concepts and their prosodic marking using event-related brain potentials (ERPs). Experimental conditions controlled the information status (given vs. new) of referring and non-referring target expressions (nouns vs. adjectives) and were elicited via context sentences which, unlike most previous ERP studies in the field, did not trigger an explicit focus expectation. Target utterances displayed prosodic realizations of the critical words which differed in accent position and accent type. Electrophysiological results showed an effect of information status, maximally distributed over posterior sites, displaying a biphasic N400-Late Positivity pattern for new information. We claim that this pattern reflects increased processing demands associated with new information, with the N400 indicating enhanced costs from linking information with the previous discourse and the Late Positivity indicating the listener's effort to update his/her discourse model. The prosodic manipulation registered more pronounced effects over anterior regions and revealed an enhanced negativity followed by a Late Positivity for deaccentuation, probably also reflecting costs from discourse linking and updating, respectively. The data further lend indirect support to the idea that givenness applies not only to referents but also to non-referential expressions ('lexical givenness').
Health care managers' views on and approaches to implementing models for improving care processes.
Andreasson, Jörgen; Eriksson, Andrea; Dellve, Lotta
2016-03-01
To develop a deeper understanding of health-care managers' views on and approaches to the implementation of models for improving care processes. In health care, there are difficulties in implementing models for improving care processes that have been decided on by upper management. Leadership approaches to this implementation can affect the outcome. In-depth interviews with first- and second-line managers in Swedish hospitals were conducted and analysed using grounded theory. 'Coaching for participation' emerged as a central theme for managers in handling top-down initiated process development. The vertical approach in this coaching addresses how managers attempt to sustain unit integrity through adapting and translating orders from top management. The horizontal approach in the coaching refers to managers' strategies for motivating and engaging their employees in implementation work. Implementing models for improving care processes requires a coaching leadership built on close manager-employee interaction, mindfulness regarding the pace of change at the unit level, managers with the competence to share responsibility with their teams, engaged employees with the competence to share responsibility for improving the care processes, and organisational structures that support process-oriented work. An implication for nursing management is the importance of giving nurse managers knowledge of change management. © 2015 John Wiley & Sons Ltd.
A template-based approach for responsibility management in executable business processes
NASA Astrophysics Data System (ADS)
Cabanillas, Cristina; Resinas, Manuel; Ruiz-Cortés, Antonio
2018-05-01
Process-oriented organisations need to manage the different types of responsibilities their employees may have w.r.t. the activities involved in their business processes. Although several approaches provide support for responsibility modelling, in current Business Process Management Systems (BPMS) the only responsibility considered at runtime is the one related to performing the work required for activity completion. Others, like accountability or consultation, must be implemented by manually adding activities to the executable process model, which is time-consuming and error-prone. In this paper, we address this limitation by enabling current BPMS to execute processes in which people with different responsibilities interact to complete the activities. We introduce a metamodel based on Responsibility Assignment Matrices (RAM) to model the responsibility assignment for each activity, and a flexible template-based mechanism that automatically transforms such information into BPMN elements, which can be interpreted and executed by a BPMS. Thus, our approach does not enforce any specific behaviour for the different responsibilities; instead, new templates can be modelled to specify the interaction that best suits the activity requirements. Furthermore, libraries of templates can be created and reused in different processes. We provide a reference implementation and build a library of templates for a well-known set of responsibilities.
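A toy sketch of the idea, under assumptions that simplify the paper's metamodel considerably, is shown below: a RAM-style assignment per activity plus simple templates that expand each responsibility into extra activity-like steps. The role names, template bodies, and data structures are all hypothetical and do not correspond to the authors' BPMN transformation.

```python
# Hypothetical sketch: a RAM-style assignment per activity and simple templates
# that expand each responsibility into extra (BPMN-like) steps at run time.
ram = {
    "Approve purchase order": {
        "responsible": "clerk",
        "accountable": "department head",
        "consulted": ["legal officer"],
    }
}

templates = {
    # each template says which auxiliary activities a responsibility adds
    "accountable": lambda who, act: [f"'{who}' reviews and signs off on '{act}'"],
    "consulted":  lambda who, act: [f"request input from '{who}' before '{act}'",
                                    f"incorporate feedback from '{who}' into '{act}'"],
}

def expand(activity, assignment):
    steps = [f"'{assignment['responsible']}' performs '{activity}'"]
    for role, template in templates.items():
        people = assignment.get(role, [])
        people = [people] if isinstance(people, str) else people
        for person in people:
            steps.extend(template(person, activity))
    return steps

for step in expand("Approve purchase order", ram["Approve purchase order"]):
    print(step)
```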
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuldna, Piret, E-mail: piret.kuldna@seit.ee; Peterson, Kaja; Kuhi-Thalfeldt, Reeli
Strategic Environmental Assessment (SEA) serves as a platform for bringing together researchers, policy developers and other stakeholders to evaluate and communicate significant environmental and socio-economic effects of policies, plans and programmes. Quantitative computer models can facilitate knowledge exchange between the various parties that strive to use scientific findings to guide policy-making decisions. The process of facilitating knowledge generation and exchange, i.e. knowledge brokerage, has been increasingly explored, but there is not much evidence in the literature on how knowledge brokerage activities are used in full cycles of SEAs which employ quantitative models. We report on the SEA process of the national energy plan, with reflections on where and how the Long-range Energy Alternatives Planning (LEAP) model was used for knowledge brokerage on emissions modelling between researchers and policy developers. Our main suggestion is that applying a quantitative model not only in ex ante but also in ex post scenario modelling and associated impact assessment can facilitate a systematic and inspiring knowledge exchange process on a policy problem and capacity building of participating actors. Highlights: • We examine knowledge brokerage on emissions modelling between researchers and policy developers in a full cycle of SEA. • The knowledge exchange process can evolve at any modelling stage within SEA. • Ex post scenario modelling enables systematic knowledge exchange and learning on a policy problem.
A mathematical model for foreign body reactions in 2D.
Su, Jianzhong; Gonzales, Humberto Perez; Todorov, Michail; Kojouharov, Hristo; Tang, Liping
2011-02-01
Foreign body reactions commonly refer to the network of immune and inflammatory reactions of humans or animals to foreign objects placed in tissues. They are basic biological processes and are also highly relevant to bioengineering applications in implants, as fibrotic tissue formations surrounding medical implants have been found to substantially reduce the effectiveness of devices. Despite intensive research on determining the mechanisms governing such complex responses, few mechanistic mathematical models have been developed to study such foreign body reactions. This study focuses on a kinetics-based predictive tool in order to analyze the outcomes of multiple interactive complex reactions of various cells/proteins and biochemical processes and to understand transient behavior during the entire period (up to several months). A computational model in two spatial dimensions is constructed to investigate the time dynamics as well as the spatial variation of foreign body reaction kinetics. The simulation results are consistent with experimental data, and the model can facilitate quantitative insights for the study of the foreign body reaction process in general.
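To make the kinetics-based idea concrete, a minimal and purely illustrative ODE sketch is given below, with three interacting species standing in for inflammatory cells, fibroblasts, and fibrotic tissue; the rate constants and terms are assumptions, and the spatial (2D) aspect of the actual model is omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal kinetics sketch (not the authors' 2-D model): three interacting
# species loosely standing in for inflammatory cells, fibroblasts and fibrotic tissue.
def rhs(t, y, k_infl=1.0, k_decay=0.4, k_fib=0.3, k_dep=0.2):
    inflam, fibro, collagen = y
    d_inflam = k_infl * np.exp(-0.1 * t) - k_decay * inflam   # implant-driven recruitment, then clearance
    d_fibro = k_fib * inflam - 0.05 * fibro                   # fibroblasts recruited by inflammation
    d_collagen = k_dep * fibro                                 # fibrotic tissue deposited by fibroblasts
    return [d_inflam, d_fibro, d_collagen]

sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0, 0.0])   # roughly two months of simulated time
print(sol.y[:, -1])   # end-state levels of the three species
```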
Crowdsourcing Based 3d Modeling
NASA Astrophysics Data System (ADS)
Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.
2016-06-01
Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them, e.g. in location-based applications on social networks. Our paper discusses a procedure that collects open-access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanning and DSLR as well as smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived by applying photogrammetric processing software simply to images of the community, without visiting the site.
Cancer growth and metastasis as a metaphor of Go gaming: An Ising model approach
Barradas-Bautista, Didier; Agostino, Mark; Cocho, Germinal
2018-01-01
This work aims to model and simulate the metastasis of cancer via an analogy between the cancer process and the board game Go. In the game of Go, the black stones, which are played first, can serve as a metaphor for the birth, growth, and metastasis of cancer. Playing white stones on the second turn could then correspond to the inhibition of cancer invasion. Mathematical modeling and algorithmic simulation of Go may therefore benefit efforts to deploy therapies against cancer by providing insight into cellular growth and expansion over a tissue area. We use the Ising Hamiltonian, which models the energy exchange among interacting particles, to model the cancer dynamics. Parameters in the energy function refer to the biochemical elements that induce cancer birth, growth, and metastasis, as well as to the biochemical defense processes of the immune system. PMID:29718932
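A minimal sketch of the Ising ingredient is given below: lattice sites take values +1 or -1 (a metaphor for invading versus healthy cells, or black versus white stones) and the Hamiltonian sums nearest-neighbour couplings plus a field term. The coupling constants, lattice size, and random configuration are illustrative assumptions, not the parameters of the paper.

```python
import numpy as np

# Minimal sketch of the Ising ingredient: lattice sites hold +1 ("invading cell"
# / black stone) or -1 ("healthy tissue" / white stone); the Hamiltonian sums
# nearest-neighbour couplings plus an external-field term (constants assumed).
rng = np.random.default_rng(3)
J, h = 1.0, 0.1
lattice = rng.choice([-1, 1], size=(19, 19))   # Go-board-sized grid

def ising_energy(s, J=J, h=h):
    # periodic boundaries; rolling down and right counts each bond exactly once
    neighbours = np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1)
    return float(-J * np.sum(s * neighbours) - h * np.sum(s))

print(ising_energy(lattice))
```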
Fish tracking by combining motion based segmentation and particle filtering
NASA Astrophysics Data System (ADS)
Bichot, E.; Mascarilla, L.; Courtellemont, P.
2006-01-01
In this paper, we suggest a new importance sampling scheme to improve a particle-filtering-based tracking process. This scheme relies on the exploitation of motion segmentation. More precisely, we propagate particle filtering hypotheses toward blobs whose motion is similar to that of the target. Hence, the search is driven toward regions of interest in the state space and prediction is more accurate. We also propose to exploit segmentation to update the target model. Once the moving target has been identified, a representative model is learnt from its spatial support. We refer to this model in the correction step of the tracking process. The importance sampling scheme and the strategy for updating the target model improve the performance of particle filtering in complex occlusion situations compared with a simple bootstrap approach, as shown by our experiments on real fish-tank sequences.
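The flavour of such a segmentation-guided importance sampling scheme can be sketched as follows, under strong simplifications: a 2-D bootstrap particle filter re-seeds a fraction of its particles around motion-blob centres before weighting and resampling. The blob centres, likelihood model, and parameter values are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

# Hypothetical sketch: a 2-D bootstrap particle filter whose importance sampling
# draws part of the particle set around motion-segmented blob centres (the blob
# centres and the likelihood model here are stand-ins, not the paper's method).
rng = np.random.default_rng(4)
n_particles = 500

def propagate(particles, blob_centres, guided_fraction=0.3, sigma=2.0):
    """Random-walk prediction, with a fraction of particles re-seeded near blobs."""
    particles = particles + rng.normal(0.0, sigma, particles.shape)
    if blob_centres:
        n_guided = int(guided_fraction * len(particles))
        centres = np.array(blob_centres)[rng.integers(0, len(blob_centres), n_guided)]
        particles[:n_guided] = centres + rng.normal(0.0, sigma, (n_guided, 2))
    return particles

def update(particles, observation, obs_sigma=5.0):
    """Weight by likelihood of the observed target position, then resample."""
    d2 = np.sum((particles - observation) ** 2, axis=1)
    weights = np.exp(-0.5 * d2 / obs_sigma**2)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

particles = rng.uniform(0, 100, (n_particles, 2))
particles = propagate(particles, blob_centres=[(40.0, 55.0)])
particles = update(particles, observation=np.array([42.0, 53.0]))
print(particles.mean(axis=0))   # state estimate
```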
Kirk, David G; Palonen, Eveliina; Korkeala, Hannu; Lindström, Miia
2014-04-01
Heat-resistant spores of Clostridium botulinum can withstand the pasteurization processes used in modern food processing. This poses a risk to food safety, as spores may germinate into botulinum neurotoxin-producing vegetative cells. Sporulation in Bacillus subtilis, the model organism for sporulation, is regulated by the transcription factor Spo0A and four alternative sigma factors, SigF, SigE, SigG, and SigK. While the corresponding regulators are found in available genomes of C. botulinum, little is known about their expression. To accurately measure the expression of these genes using quantitative reverse-transcriptase PCR (RT-qPCR) during the exponential and stationary growth phases, a suitable normalization reference gene is required. 16S rrn, adK, alaS, era, gluD, gyrA, rpoC, and rpsJ were selected as the candidate reference genes. The most stable candidate reference gene was the 16S ribosomal RNA gene (rrn), based on its low coefficient of variation (1.81%) measured during the 18-h study period. Using 16S rrn as the normalization reference gene, the relative expression levels of spo0A, sigF, sigE, sigG, and sigK were measured over 18 h. The expression pattern showed spo0A expression during the logarithmic growth phase, followed by a drop in expression upon entry into the stationary phase. Expression levels of sigF, sigE, and sigG peaked simultaneously at the end of the exponential growth phase. Peak expression of sigK occurred at 18 h; however, low levels of expression were detected during the exponential phase. These findings suggest that these sigma factors play a role in C. botulinum sporulation that is similar, but not identical, to their role in the B. subtilis model. Copyright © 2013 Elsevier Ltd. All rights reserved.
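For readers unfamiliar with reference-gene normalization, a small sketch of relative quantification against 16S rrn using the 2^(-ΔΔCt) convention is shown below; the Ct values are invented, and the study's own quantification procedure may differ.

```python
# Hypothetical sketch: relative expression of a sporulation gene normalised to the
# 16S rrn reference gene using the 2^(-ΔΔCt) method; all Ct values are illustrative.
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    d_ct_sample = ct_target - ct_ref              # ΔCt at the time point of interest
    d_ct_calibrator = ct_target_cal - ct_ref_cal  # ΔCt at the calibrator (e.g. early exponential phase)
    return 2.0 ** -(d_ct_sample - d_ct_calibrator)

# spo0A at late exponential phase vs. an early-exponential calibrator (made-up Ct values)
print(relative_expression(ct_target=22.1, ct_ref=10.3, ct_target_cal=25.4, ct_ref_cal=10.5))
```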
User's guide to resin infusion simulation program in the FORTRAN language
NASA Technical Reports Server (NTRS)
Weideman, Mark H.; Hammond, Vince H.; Loos, Alfred C.
1992-01-01
RTMCL is a user-friendly computer code which simulates the manufacture of fabric composites by the resin infusion process. The computer code is based on the process simulation model described in reference 1. Included in the user's guide is a detailed step-by-step description of how to run the program and how to enter and modify the input data set. Sample input and output files are included along with an explanation of the results. Finally, a complete listing of the program is provided.
NASA Astrophysics Data System (ADS)
Buryan, Yu. A.; Babichev, D. O.; Silkov, M. V.; Shtripling, L. O.; Kalashnikov, B. A.
2017-08-01
This research addresses the protection of processing equipment from vibration. Theoretical issues of vibration isolation for vibroactive objects such as engines, pumps, compressors, fans, and piping are considered. A design for a promising air spring with a parallel-mounted mechanical inertial motion converter is proposed. A mathematical model of the suspension is obtained, allowing the selection of parameters that reduce the force transmission factor to the base over a certain frequency range.
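As a hedged illustration of the force-transmission factor being shaped over a frequency range, the sketch below evaluates the classical transmissibility of a plain linear spring-damper isolator; the natural frequency and damping ratio are assumed values, and the parallel inertial motion converter of the proposed design is not modelled.

```python
import numpy as np

# Minimal sketch: force transmissibility of a linear spring-damper isolator,
# |T| = sqrt((1 + (2*zeta*r)^2) / ((1 - r^2)^2 + (2*zeta*r)^2)) with r = f/f_n.
# Illustrates tuning the transmission factor over a frequency range; the inertial
# motion converter of the paper is not modelled here.
def transmissibility(freq_hz, f_n=5.0, zeta=0.1):
    r = np.asarray(freq_hz) / f_n
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r**2) ** 2 + (2.0 * zeta * r) ** 2
    return np.sqrt(num / den)

freqs = np.array([1.0, 5.0, 7.07, 20.0, 50.0])
print(dict(zip(freqs.tolist(), np.round(transmissibility(freqs), 3).tolist())))
# isolation (|T| < 1) occurs only above sqrt(2) times the natural frequency
```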
NASA Astrophysics Data System (ADS)
Mantegna, Rosario N.; Stanley, H. Eugene
2007-08-01
Preface; 1. Introduction; 2. Efficient market hypothesis; 3. Random walk; 4. Lévy stochastic processes and limit theorems; 5. Scales in financial data; 6. Stationarity and time correlation; 7. Time correlation in financial time series; 8. Stochastic models of price dynamics; 9. Scaling and its breakdown; 10. ARCH and GARCH processes; 11. Financial markets and turbulence; 12. Correlation and anti-correlation between stocks; 13. Taxonomy of a stock portfolio; 14. Options in idealized markets; 15. Options in real markets; Appendix A: notation guide; Appendix B: martingales; References; Index.
Developing Land Use Land Cover Maps for the Lower Mekong Basin to Aid SWAT Hydrologic Modeling
NASA Astrophysics Data System (ADS)
Spruce, J.; Bolten, J. D.; Srinivasan, R.
2017-12-01
This presentation discusses research to develop Land Use Land Cover (LULC) maps for the Lower Mekong Basin (LMB). Funded by a NASA ROSES Disasters grant, the project's main objective was to produce updated LULC maps to aid the Mekong River Commission's (MRC's) Soil and Water Assessment Tool (SWAT) hydrologic model. In producing the needed LULC maps, temporally processed MODIS monthly NDVI data for 2010 were used as the primary data source for classifying regionally prominent forest and agricultural types. The MODIS NDVI data were derived by processing MOD09 and MYD09 8-day reflectance data with the Time Series Product Tool, a custom software package. Circa 2010 Landsat multispectral data from the dry season were processed into top-of-atmosphere reflectance mosaics and then classified to derive certain locally common LULC types, such as urban areas and industrial forest plantations. Unsupervised ISODATA clustering was used to derive most LULC classifications. GIS techniques were used to merge the MODIS and Landsat classifications into final LULC maps for Sub-Basins (SBs) 1-8 of the LMB. The final LULC maps were produced at 250-meter resolution and delivered to the MRC for use in SWAT modeling for the LMB. A map accuracy assessment was performed for the SB 7 LULC map with 14 classes. This assessment was performed by comparing random locations for sampled LULC types to geospatial reference data such as Landsat RGBs, MODIS NDVI phenologic profiles, high-resolution satellite data from Google Map/Earth, and other reference data from the MRC (e.g., crop calendars). LULC accuracy assessment results for SB 7 indicated an overall agreement with reference data of 81% at full scheme specificity. However, by grouping 3 deciduous forest classes into 1 class, the overall agreement improved to 87%. The project produced updated LULC maps and classified more specific rice types than the previous LULC maps. The LULC maps from this project should improve the use of SWAT for modeling hydrology in the LMB and should improve water and disaster management in a region vulnerable to flooding, droughts, and anthropogenic change (e.g., from dam building and other LULC change).
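As an illustrative stand-in for the unsupervised classification step, the sketch below clusters synthetic per-pixel monthly NDVI profiles with KMeans (used here in place of ISODATA, which additionally splits and merges clusters); the profiles and class shapes are invented and do not come from the MODIS data described above.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical sketch: unsupervised clustering of per-pixel monthly NDVI profiles,
# using KMeans as a simple stand-in for ISODATA; the profiles are synthetic.
rng = np.random.default_rng(5)
months = np.arange(12)
evergreen = 0.8 + 0.02 * rng.normal(size=(200, 12))                                   # flat, high NDVI
deciduous = 0.4 + 0.4 * np.sin(np.pi * months / 12) + 0.05 * rng.normal(size=(200, 12))
rice = 0.2 + 0.5 * (np.sin(2 * np.pi * months / 12) > 0) + 0.05 * rng.normal(size=(200, 12))

profiles = np.vstack([evergreen, deciduous, rice])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)
print(np.bincount(labels))   # cluster sizes; clusters would then be labelled against reference data
```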