Sample records for task analytic models

  1. Mixed Initiative Visual Analytics Using Task-Driven Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Kristin A.; Cramer, Nicholas O.; Israel, David

    2015-12-07

    Visual data analysis is composed of a collection of cognitive actions and tasks to decompose, internalize, and recombine data to produce knowledge and insight. Visual analytic tools provide interactive visual interfaces to data to support tasks involved in discovery and sensemaking, including forming hypotheses, asking questions, and evaluating and organizing evidence. Myriad analytic models can be incorporated into visual analytic systems, at the cost of increasing complexity in the analytic discourse between user and system. Techniques exist to increase the usability of interacting with such analytic models, such as inferring data models from user interactions to steer the underlying models of the system via semantic interaction, shielding users from having to do so explicitly. Such approaches are often also referred to as mixed-initiative systems. Researchers studying the sensemaking process have called for development of tools that facilitate analytic sensemaking through a combination of human and automated activities. However, design guidelines do not exist for mixed-initiative visual analytic systems to support iterative sensemaking. In this paper, we present a candidate set of design guidelines and introduce the Active Data Environment (ADE) prototype, a spatial workspace supporting the analytic process via task recommendations invoked by inferences on user interactions within the workspace. ADE recommends data and relationships based on a task model, enabling users to co-reason with the system about their data in a single, spatial workspace. This paper provides an illustrative use case, a technical description of ADE, and a discussion of the strengths and limitations of the approach.

  2. Task Analytic Models to Guide Analysis and Design: Use of the Operator Function Model to Represent Pilot-Autoflight System Mode Problems

    NASA Technical Reports Server (NTRS)

    Degani, Asaf; Mitchell, Christine M.; Chappell, Alan R.; Shafto, Mike (Technical Monitor)

    1995-01-01

    Task-analytic models structure essential information about operator interaction with complex systems, in this case pilot interaction with the autoflight system. Such models serve two purposes: (1) they allow researchers and practitioners to understand pilots' actions; and (2) they provide a compact, computational representation needed to design 'intelligent' aids, e.g., displays, assistants, and training systems. This paper demonstrates the use of the operator function model to trace the process of mode engagements while a pilot is controlling an aircraft via the autoflight system. The operator function model is a normative and nondeterministic model of how a well-trained, well-motivated operator manages multiple concurrent activities for effective real-time control. For each function, the model links the pilot's actions with the required information. Using the operator function model, this paper describes several mode engagement scenarios. These scenarios were observed and documented during a field study that focused on mode engagements and mode transitions during normal line operations. Data, including time, ATC clearances, altitude, system states, active modes and sub-modes, and mode engagements, were recorded during sixty-six flights. Using these data, seven prototypical mode engagement scenarios were extracted. One scenario details the decision of the crew to disengage a fully automatic mode in favor of a semi-automatic mode, and the consequences of this action. Another describes a mode error involving updating aircraft speed following the engagement of a speed submode. Other scenarios detail mode confusion at various phases of the flight. This analysis uses the operator function model to identify three aspects of mode engagement: (1) the progress of pilot-aircraft-autoflight system interaction; (2) control/display information required to perform mode management activities; and (3) the potential cause(s) of mode confusion. The goal of this paper is twofold
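
    The record stops short of showing what a "compact, computational representation" might look like. As a loose illustration only (not the operator function model formalism itself; the function, actions, and information items are hypothetical), one could encode a function's actions together with the information each requires and check a scenario trace for missing information:

    ```python
    # Illustrative encoding (not the OFM formalism itself): one operator
    # function decomposed into actions, each linked to the information it
    # requires, so a scenario trace can be checked for missing information.
    MODE_ENGAGEMENT = {
        "function": "manage vertical navigation",
        "actions": [
            {"do": "select flight level change mode",
             "needs": ["ATC clearance", "current altitude", "target altitude"]},
            {"do": "verify mode annunciation",
             "needs": ["flight mode annunciator state"]},
        ],
    }

    def missing_information(available: set) -> list:
        """Return required-but-unavailable information items per action."""
        return [(a["do"], [n for n in a["needs"] if n not in available])
                for a in MODE_ENGAGEMENT["actions"]]

    # A scenario trace in which the annunciator state was never checked:
    print(missing_information({"ATC clearance", "current altitude",
                               "target altitude"}))
    ```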

  3. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1993-01-01

    Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.
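
    One well-known member of this family is the GOMS Keystroke-Level Model, which predicts expert task time by summing per-operator time constants. A minimal sketch follows; the operator times are the commonly cited textbook approximations, and the sample task decomposition is hypothetical:

    ```python
    # Sketch: GOMS Keystroke-Level Model estimate of expert execution time.
    # Operator times are commonly cited textbook approximations (seconds);
    # the sample task decomposition below is hypothetical.
    KLM_OPERATORS = {
        "K": 0.28,  # keystroke or button press (average skilled typist)
        "P": 1.10,  # point with mouse to a target
        "H": 0.40,  # home hands between keyboard and mouse
        "M": 1.35,  # mental preparation
    }

    def klm_time(sequence: str) -> float:
        """Total predicted time for a string of operator codes, e.g. 'MPKK'."""
        return sum(KLM_OPERATORS[op] for op in sequence)

    # Hypothetical task: think, point at a field, home to keyboard, type 5 keys
    print(f"predicted time: {klm_time('MPH' + 'K' * 5):.2f} s")
    ```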

  4. Analytic Guided-Search Model of Human Performance Accuracy in Target-Localization Search Tasks

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.

    2000-01-01

    Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes quantitatively fitting the model's performance to human data more computationally time-consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
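
    The analytic expressions themselves are not reproduced in this record. As a hedged sketch of the signal-detection side of such accuracy models, the standard maximum-response rule for localizing a target among N candidate locations can be evaluated directly; the d' values and set size below are illustrative:

    ```python
    # Illustrative sketch: M-alternative localization accuracy under
    # signal detection theory (not the paper's exact equations).
    # Target response ~ N(d', 1); each of the N-1 distractors ~ N(0, 1);
    # the observer picks the location with the largest response.
    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import quad

    def p_correct_localization(d_prime: float, n_locations: int) -> float:
        """P(target location yields the maximum response)."""
        integrand = lambda x: norm.pdf(x - d_prime) * norm.cdf(x) ** (n_locations - 1)
        pc, _ = quad(integrand, -np.inf, np.inf)
        return pc

    if __name__ == "__main__":
        for d in (0.5, 1.0, 2.0):
            print(f"d'={d}: P(correct) = {p_correct_localization(d, 8):.3f}")
    ```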

  5. Lessons Learned from Deploying an Analytical Task Management Database

    NASA Technical Reports Server (NTRS)

    O'Neil, Daniel A.; Welch, Clara; Arceneaux, Joshua; Bulgatz, Dennis; Hunt, Mitch; Young, Stephen

    2007-01-01

    Defining requirements, missions, technologies, and concepts for space exploration involves multiple levels of organizations, teams of people with complementary skills, and analytical models and simulations. Analytical activities range from filling a To-Be-Determined (TBD) in a requirement to creating animations and simulations of exploration missions. In a program as large as returning to the Moon, there are hundreds of simultaneous analysis activities. A way to manage and integrate efforts of this magnitude is to deploy a centralized database that provides the capability to define tasks, identify resources, describe products, schedule deliveries, and generate a variety of reports. This paper describes a web-accessible task management system and explains the lessons learned during the development and deployment of the database. Through the database, managers and team leaders can define tasks, establish review schedules, assign teams, link tasks to specific requirements, identify products, and link the task data records to external repositories that contain the products. Data filters and spreadsheet export utilities provide a powerful capability to create custom reports. Import utilities provide a means to populate the database from previously filled form files. Within a four-month period, a small team analyzed requirements, developed a prototype, conducted multiple system demonstrations, and deployed a working system supporting hundreds of users across the aerospace community. Open-source technologies and agile software development techniques, applied by a skilled team, enabled this impressive achievement. Topics in the paper cover the web application technologies, agile software development, an overview of the system's functions and features, dealing with increasing scope, and deploying new versions of the system.

  6. Slushy weightings for the optimal pilot model. [considering visual tracking task]

    NASA Technical Reports Server (NTRS)

    Dillow, J. D.; Picha, D. G.; Anderson, R. O.

    1975-01-01

    A pilot model is described which accounts for the effect of motion cues in a well-defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Second, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results, and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.
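
    The quadratic cost function mentioned here is not written out in the record. In the standard optimal-control pilot model it takes roughly the following form; this is a hedged sketch under common conventions, with Q, r, and g the weightings that the "slushy" adjustment would modify:

    ```latex
    % Sketch of the optimal-control pilot model cost (common conventions;
    % Q, r, g are weightings adjusted to reflect the pilot's priorities).
    J = E\!\left\{ \lim_{T\to\infty} \frac{1}{T}
          \int_0^T \left( \mathbf{y}^{\mathsf T} Q\,\mathbf{y}
          + r\,u^2 + g\,\dot{u}^2 \right) dt \right\}
    ```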

  7. Automatic-heuristic and executive-analytic processing during reasoning: Chronometric and dual-task considerations.

    PubMed

    De Neys, Wim

    2006-06-01

    Human reasoning has been shown to overly rely on intuitive, heuristic processing instead of a more demanding analytic inference process. Four experiments tested the central claim of current dual-process theories that analytic operations involve time-consuming executive processing whereas the heuristic system would operate automatically. Participants solved conjunction fallacy problems and indicative and deontic selection tasks. Experiment 1 established that making correct analytic inferences demanded more processing time than did making heuristic inferences. Experiment 2 showed that burdening the executive resources with an attention-demanding secondary task decreased correct, analytic responding and boosted the rate of conjunction fallacies and indicative matching card selections. Results were replicated in Experiments 3 and 4 with a different secondary-task procedure. Involvement of executive resources for the deontic selection task was less clear. Findings validate basic processing assumptions of the dual-process framework and complete the correlational research programme of K. E. Stanovich and R. F. West (2000).

  8. Student Writing Accepted as High-Quality Responses to Analytic Text-Based Writing Tasks

    ERIC Educational Resources Information Center

    Wang, Elaine; Matsumura, Lindsay Clare; Correnti, Richard

    2018-01-01

    Literacy standards increasingly emphasize the importance of analytic text-based writing. Little consensus exists, however, around what high-quality student responses should look like in this genre. In this study, we investigated fifth-grade students' writing in response to analytic text-based writing tasks (15 teachers, 44 writing tasks, 88 pieces…

  9. Analytic cognitive style, not delusional ideation, predicts data gathering in a large beads task study.

    PubMed

    Ross, Robert M; Pennycook, Gordon; McKay, Ryan; Gervais, Will M; Langdon, Robyn; Coltheart, Max

    2016-07-01

    It has been proposed that deluded and delusion-prone individuals gather less evidence before forming beliefs than those who are not deluded or delusion-prone. The primary source of evidence for this "jumping to conclusions" (JTC) bias is provided by research that utilises the "beads task" data-gathering paradigm. However, the cognitive mechanisms subserving data gathering in this task are poorly understood. In the largest published beads task study to date (n = 558), we examined data gathering in the context of influential dual-process theories of reasoning. Analytic cognitive style (the willingness or disposition to critically evaluate outputs from intuitive processing and engage in effortful analytic processing) predicted data gathering in a non-clinical sample, but delusional ideation did not. The relationship between data gathering and analytic cognitive style suggests that dual-process theories of reasoning can contribute to our understanding of the beads task. It is not clear why delusional ideation was not found to be associated with data gathering or analytic cognitive style.
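
    For readers unfamiliar with the paradigm: in the beads task, beads are drawn from one of two jars with complementary colour ratios, and the normative posterior depends only on the net colour difference of the draws. A small sketch follows; the 85:15 ratio is a common choice, used here purely for illustration:

    ```python
    # Sketch: normative Bayesian posterior in the beads task. Two jars with
    # complementary colour ratios q and 1-q; after n_a beads of colour A and
    # n_b of colour B, posterior odds for jar A are (q/(1-q))**(n_a - n_b).
    def p_jar_a(n_a: int, n_b: int, q: float = 0.85) -> float:
        odds = (q / (1.0 - q)) ** (n_a - n_b)
        return odds / (1.0 + odds)

    # "Draws to decision" = beads seen before confidence crosses a threshold.
    for draws in range(1, 6):
        print(f"{draws} consecutive A-beads: P(jar A) = {p_jar_a(draws, 0):.3f}")
    ```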

  10. fMRI activation patterns in an analytic reasoning task: consistency with EEG source localization

    NASA Astrophysics Data System (ADS)

    Li, Bian; Vasanta, Kalyana C.; O'Boyle, Michael; Baker, Mary C.; Nutter, Brian; Mitra, Sunanda

    2010-03-01

    Functional magnetic resonance imaging (fMRI) is used to model brain activation patterns associated with various perceptual and cognitive processes as reflected by the hemodynamic (BOLD) response. While many sensory and motor tasks are associated with relatively simple activation patterns in localized regions, higher-order cognitive tasks may produce activity in many different brain areas involving complex neural circuitry. We applied a recently proposed probabilistic independent component analysis technique (PICA) to determine the true dimensionality of the fMRI data and used EEG localization to identify the common activated patterns (mapped as Brodmann areas) associated with a complex cognitive task like analytic reasoning. Our preliminary study suggests that a hybrid GLM/PICA analysis may reveal additional regions of activation (beyond simple GLM) that are consistent with electroencephalography (EEG) source localization patterns.

  11. Introduction to the IWA task group on biofilm modeling.

    PubMed

    Noguera, D R; Morgenroth, E

    2004-01-01

    An International Water Association (IWA) Task Group on Biofilm Modeling was created with the purpose of comparatively evaluating different biofilm modeling approaches. The task group developed three benchmark problems for this comparison, and used a diversity of modeling techniques that included analytical, pseudo-analytical, and numerical solutions to the biofilm problems. Models in one, two, and three dimensional domains were also compared. The first benchmark problem (BM1) described a monospecies biofilm growing in a completely mixed reactor environment and had the purpose of comparing the ability of the models to predict substrate fluxes and concentrations for a biofilm system of fixed total biomass and fixed biomass density. The second problem (BM2) represented a situation in which substrate mass transport by convection was influenced by the hydrodynamic conditions of the liquid in contact with the biofilm. The third problem (BM3) was designed to compare the ability of the models to simulate multispecies and multisubstrate biofilms. These three benchmark problems allowed identification of the specific advantages and disadvantages of each modeling approach. A detailed presentation of the comparative analyses for each problem is provided elsewhere in these proceedings.
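
    For a flavour of what the analytical and pseudo-analytical solutions in such a comparison look like, here is a minimal sketch of the classical closed-form flux into a flat biofilm with first-order kinetics; this is a textbook special case, not one of the benchmark problems verbatim, and the parameter values are illustrative:

    ```python
    # Sketch: steady-state substrate flux into a flat biofilm with
    # first-order kinetics (d2S/dz2 = (k1/Df) * S, no-flux substratum).
    import math

    def biofilm_flux(s_surface: float, k1: float, diff: float,
                     thickness: float) -> float:
        """J = S_s * sqrt(k1*Df) * tanh(Lf * sqrt(k1/Df))."""
        alpha = math.sqrt(k1 / diff)
        return s_surface * math.sqrt(k1 * diff) * math.tanh(alpha * thickness)

    # Illustrative values: S_s = 5 g/m^3, k1 = 1e4 1/d,
    # Df = 1e-4 m^2/d, Lf = 300e-6 m
    print(biofilm_flux(5.0, 1.0e4, 1.0e-4, 300e-6))  # flux in g/(m^2*d)
    ```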

  12. Analytical reasoning task reveals limits of social learning in networks

    PubMed Central

    Rahwan, Iyad; Krasnoshtan, Dmytro; Shariff, Azim; Bonnefon, Jean-François

    2014-01-01

    Social learning—by observing and copying others—is a highly successful cultural mechanism for adaptation, outperforming individual information acquisition and experience. Here, we investigate social learning in the context of the uniquely human capacity for reflective, analytical reasoning. A hallmark of the human mind is its ability to engage analytical reasoning, and suppress false associative intuitions. Through a set of laboratory-based network experiments, we find that social learning fails to propagate this cognitive strategy. When people make false intuitive conclusions and are exposed to the analytic output of their peers, they recognize and adopt this correct output. But they fail to engage analytical reasoning in similar subsequent tasks. Thus, humans exhibit an ‘unreflective copying bias’, which limits their social learning to the output, rather than the process, of their peers’ reasoning—even when doing so requires minimal effort and no technical skill. In contrast to much recent work on observation-based social learning, which emphasizes the propagation of successful behaviour through copying, our findings identify a limit on the power of social networks in situations that require analytical reasoning. PMID:24501275

  13. Analytical reasoning task reveals limits of social learning in networks.

    PubMed

    Rahwan, Iyad; Krasnoshtan, Dmytro; Shariff, Azim; Bonnefon, Jean-François

    2014-04-06

    Social learning-by observing and copying others-is a highly successful cultural mechanism for adaptation, outperforming individual information acquisition and experience. Here, we investigate social learning in the context of the uniquely human capacity for reflective, analytical reasoning. A hallmark of the human mind is its ability to engage analytical reasoning, and suppress false associative intuitions. Through a set of laboratory-based network experiments, we find that social learning fails to propagate this cognitive strategy. When people make false intuitive conclusions and are exposed to the analytic output of their peers, they recognize and adopt this correct output. But they fail to engage analytical reasoning in similar subsequent tasks. Thus, humans exhibit an 'unreflective copying bias', which limits their social learning to the output, rather than the process, of their peers' reasoning-even when doing so requires minimal effort and no technical skill. In contrast to much recent work on observation-based social learning, which emphasizes the propagation of successful behaviour through copying, our findings identify a limit on the power of social networks in situations that require analytical reasoning.

  14. Building analytical three-field cosmological models

    NASA Astrophysics Data System (ADS)

    Santos, J. R. L.; Moraes, P. H. R. S.; Ferreira, D. A.; Neta, D. C. Vilar

    2018-02-01

    A difficult task to deal with is the analytical treatment of models composed of three real scalar fields, as their equations of motion are in general coupled and hard to integrate. In order to overcome this problem we introduce a methodology to construct three-field models based on the so-called "extension method". The fundamental idea of the procedure is to combine three one-field systems in a non-trivial way, to construct an effective three scalar field model. An interesting scenario where the method can be implemented is with inflationary models, where the Einstein-Hilbert Lagrangian is coupled with the scalar field Lagrangian. We exemplify how a new model constructed from our method can lead to non-trivial behaviors for cosmological parameters.
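
    The record does not reproduce the first-order machinery the extension method builds on. As a hedged sketch under standard conventions, the superpotential reduction for static multi-field solutions, the one-field building block that the extension method combines, reads:

    ```latex
    % Sketch: first-order (superpotential) reduction for static solutions.
    % For a potential built from a superpotential W(\phi_1,\phi_2,\phi_3),
    V = \frac{1}{2}\sum_{i=1}^{3} W_{\phi_i}^2 ,
    \qquad
    \frac{d\phi_i}{dx} = W_{\phi_i} ,
    % any solution of the first-order equations solves the second-order
    % equations of motion, since
    \frac{d^2\phi_i}{dx^2}
      = \sum_{j} W_{\phi_i\phi_j}\,\frac{d\phi_j}{dx}
      = \sum_{j} W_{\phi_i\phi_j}\,W_{\phi_j}
      = \frac{\partial V}{\partial \phi_i} .
    ```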

  15. Analytic and subjective assessments of operator workload imposed by communications tasks in transport aircraft

    NASA Technical Reports Server (NTRS)

    Eckel, J. S.; Crabtree, M. S.

    1984-01-01

    Analytical and subjective techniques that are sensitive to the information transmission and processing requirements of individual communications-related tasks are used to assess workload imposed on the aircrew by A-10 communications requirements for civilian transport category aircraft. Communications-related tasks are defined to consist of the verbal exchanges between crews and controllers. Three workload estimating techniques are proposed. The first, an information theoretic analysis, is used to calculate bit values for perceptual, manual, and verbal demands in each communication task. The second, a paired-comparisons technique, obtains subjective estimates of the information processing and memory requirements for specific messages. By combining the results of the first two techniques, a hybrid analytical scale is created. The third, a subjective rank ordering of sequences of communications tasks, provides an overall scaling of communications workload. Recommendations for future research include an examination of communications-induced workload among the air crew and the development of simulation scenarios.
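
    As a rough illustration of the first technique, an information-theoretic tally assigns each perceptual, manual, or verbal demand log2(alternatives) bits and sums them per task. The decomposition and alternative counts below are hypothetical, not values from the study:

    ```python
    # Sketch: information-theoretic workload tally for one communications
    # task, scoring each demand as log2(number of alternatives). The task
    # decomposition and alternative counts are hypothetical.
    import math

    def bits(n_alternatives: int) -> float:
        """Information content of one choice among equally likely alternatives."""
        return math.log2(n_alternatives)

    demands = {  # hypothetical task: "change to assigned radio frequency"
        "perceptual: recognize own callsign": bits(16),
        "verbal: read back the frequency": bits(360),
        "manual: dial the frequency": bits(360),
    }
    for name, b in demands.items():
        print(f"{name}: {b:.1f} bits")
    print(f"total task load: {sum(demands.values()):.1f} bits")
    ```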

  16. Semantic Interaction for Sensemaking: Inferring Analytical Reasoning for Model Steering.

    PubMed

    Endert, A; Fiaux, P; North, C

    2012-12-01

    Visual analytic tools aim to support the cognitively demanding task of sensemaking. Their success often depends on the ability to leverage capabilities of mathematical models, visualization, and human intuition through flexible, usable, and expressive interactions. Spatially clustering data is one effective metaphor for users to explore similarity and relationships between information, adjusting the weighting of dimensions or characteristics of the dataset to observe the change in the spatial layout. Semantic interaction is an approach to user interaction in such spatializations that couples these parametric modifications of the clustering model with users' analytic operations on the data (e.g., direct document movement in the spatialization, highlighting text, search, etc.). In this paper, we present results of a user study exploring the ability of semantic interaction in a visual analytic prototype, ForceSPIRE, to support sensemaking. We found that semantic interaction captures the analytical reasoning of the user through keyword weighting, and aids the user in co-creating a spatialization based on the user's reasoning and intuition.
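
    A minimal sketch of the semantic-interaction idea follows; this is an illustrative reduction, not ForceSPIRE's published algorithm. When the user drags two documents together, the terms they share gain weight, which in turn changes the weighted similarity the spatial layout is computed from:

    ```python
    # Illustrative sketch of semantic interaction via keyword re-weighting
    # (not ForceSPIRE's actual algorithm): dragging two documents together
    # up-weights their shared terms, changing the weighted similarity that
    # drives the spatialization.
    from collections import Counter

    def weighted_cosine(a: Counter, b: Counter, w: dict) -> float:
        terms = set(a) | set(b)
        dot = sum(w.get(t, 1.0) * a[t] * b[t] for t in terms)
        na = sum(w.get(t, 1.0) * a[t] ** 2 for t in terms) ** 0.5
        nb = sum(w.get(t, 1.0) * b[t] ** 2 for t in terms) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    def on_drag_together(doc_a: Counter, doc_b: Counter, w: dict,
                         boost: float = 1.2) -> None:
        """User moved doc_a next to doc_b: infer their shared terms matter."""
        for term in set(doc_a) & set(doc_b):
            w[term] = w.get(term, 1.0) * boost

    docs = [Counter("bank loan fraud report".split()),
            Counter("bank fraud wire transfer".split()),
            Counter("river bank erosion study".split())]
    weights: dict = {}
    on_drag_together(docs[0], docs[1], weights)
    print(weighted_cosine(docs[0], docs[1], weights))
    print(weighted_cosine(docs[0], docs[2], weights))
    ```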

  17. A Task Analytic Process to Define Future Concepts in Aviation

    NASA Technical Reports Server (NTRS)

    Gore, Brian Francis; Wolter, Cynthia A.

    2014-01-01

    A necessary step when developing next generation systems is to understand the tasks that operators will perform. One NextGen concept under evaluation termed Single Pilot Operations (SPO) is designed to improve the efficiency of airline operations. One SPO concept includes a Pilot on Board (PoB), a Ground Station Operator (GSO), and automation. A number of procedural changes are likely to result when such changes in roles and responsibilities are undertaken. Automation is expected to relieve the PoB and GSO of some tasks (e.g. radio frequency changes, loading expected arrival information). A major difference in the SPO environment is the shift to communication-cued crosschecks (verbal / automated) rather than movement-cued crosschecks that occur in a shared cockpit. The current article highlights a task analytic process of the roles and responsibilities between a PoB, an approach-phase GSO, and automation.

  18. Use of evidence in a categorization task: analytic and holistic processing modes.

    PubMed

    Greco, Alberto; Moretti, Stefania

    2017-11-01

    Category learning performance can be influenced by many contextual factors, but the effects of these factors are not the same for all learners. The present study suggests that these differences can be due to the different ways evidence is used, according to two main basic modalities of processing information, analytically or holistically. In order to test the impact of the information provided, an inductive rule-based task was designed, in which feature salience and comparison informativeness between examples of two categories were manipulated during the learning phases, by introducing and progressively reducing some perceptual biases. To gather data on processing modalities, we devised the Active Feature Composition task, a production task that does not require classifying new items but reproducing them by combining features. At the end, an explicit rating task was performed, which entailed assessing the accuracy of a set of possible categorization rules. A combined analysis of the data collected with these two different tests enabled profiling participants in regard to the kind of processing modality, the structure of representations and the quality of categorial judgments. Results showed that despite the fact that the information provided was the same for all participants, those who adopted analytic processing better exploited evidence and performed more accurately, whereas with holistic processing categorization is perfectly possible but inaccurate. Finally, the cognitive implications of the proposed procedure, with regard to involved processes and representations, are discussed.

  19. A workflow learning model to improve geovisual analytics utility

    PubMed Central

    Roth, Robert E; MacEachren, Alan M; McCabe, Craig A

    2011-01-01

    Introduction: This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. Objectives: The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use) and utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important, if not more so, than iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. Methodology: The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application. To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on the

  20. A workflow learning model to improve geovisual analytics utility.

    PubMed

    Roth, Robert E; MacEachren, Alan M; McCabe, Craig A

    2009-01-01

    INTRODUCTION: This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. OBJECTIVES: The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use) and utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important, if not more so, than iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. METHODOLOGY: The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application. To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on

  21. Digital forensics: an analytical crime scene procedure model (ACSPM).

    PubMed

    Bulbul, Halil Ibrahim; Yavuzcan, H Guclu; Ozel, Mesut

    2013-12-10

    In order to ensure that digital evidence is collected, preserved, examined, or transferred in a manner safeguarding the accuracy and reliability of the evidence, law enforcement and digital forensic units must establish and maintain an effective quality assurance system. The very first part of this system is standard operating procedures (SOPs) and/or models that conform to chain-of-custody requirements and rely on the digital forensics "process-phase-procedure-task-subtask" sequence. An acceptable and thorough Digital Forensics (DF) process depends on sequential DF phases, each phase depends on sequential DF procedures, and each procedure in turn depends on tasks and subtasks. Numerous DF process models that define DF phases exist in the literature, but no DF model has been identified that defines the phase-based sequential procedures for the crime scene. The analytical crime scene procedure model (ACSPM) that we suggest in this paper is intended to fill this gap. The proposed analytical procedure model for digital investigations at a crime scene is developed and defined for crime scene practitioners, with its main focus on crime scene digital forensic procedures rather than on the whole digital investigation process and phases that end up in a court. In reviewing the relevant literature and consulting law enforcement agencies, only device-based charts specific to a particular device and/or more general approaches to digital evidence management from crime scene to court were found. After analyzing the needs of law enforcement organizations and recognizing the absence of a digital investigation procedure model for crime scene activities, we inspected the relevant literature in an analytical way. The outcome of this inspection is the model explained here, which is intended to provide guidance for thorough and secure implementation of digital forensic procedures at a crime scene. In digital forensic

  22. PARAMO: A Parallel Predictive Modeling Platform for Healthcare Analytic Research using Electronic Health Records

    PubMed Central

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R.; Stewart, Walter F.; Malin, Bradley; Sun, Jimeng

    2014-01-01

    Objective: Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: 1) cohort construction, 2) feature construction, 3) cross-validation, 4) feature selection, and 5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. Methods: To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which 1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, 2) schedules the tasks in a topological ordering of the graph, and 3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. Results: We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000 patient data set in 3 hours in parallel compared to 9 days if running sequentially. Conclusion: This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed-up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate

  23. PARAMO: a PARAllel predictive MOdeling platform for healthcare analytic research using electronic health records.

    PubMed

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R; Stewart, Walter F; Malin, Bradley; Sun, Jimeng

    2014-04-01

    Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000 patient data set in 3 hours in parallel compared to 9 days if running sequentially. This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed-up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines
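
    A minimal sketch of the scheduling pattern both PARAMO records describe, with a thread pool standing in for the Map-Reduce backend and dummy pipeline steps in place of real modeling tasks:

    ```python
    # Sketch of the described pattern: build a task dependency graph,
    # take tasks in topological order, and run independent ones in
    # parallel. (Thread pool stands in for the Map-Reduce backend.)
    from concurrent.futures import ThreadPoolExecutor

    def run_pipeline(tasks: dict, deps: dict, workers: int = 4) -> None:
        """tasks: name -> callable; deps: name -> set of prerequisites."""
        done: set = set()
        pending = dict(deps)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            while pending:
                ready = [t for t, d in pending.items() if d <= done]
                if not ready:
                    raise ValueError("cycle in dependency graph")
                futures = {t: pool.submit(tasks[t]) for t in ready}
                for t, f in futures.items():
                    f.result()
                    done.add(t)
                    del pending[t]

    steps = ["cohort", "features", "cv_split", "feature_select", "classify"]
    tasks = {s: (lambda s=s: print(f"running {s}")) for s in steps}
    deps = {"cohort": set(), "features": {"cohort"}, "cv_split": {"cohort"},
            "feature_select": {"features", "cv_split"},
            "classify": {"feature_select"}}
    run_pipeline(tasks, deps)
    ```

    The level-by-level loop is the simplest correct version; a production scheduler would launch each task as soon as its own prerequisites finish rather than waiting for the whole level.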

  24. An Analytical Comparison of the Fidelity of "Large Motion" Versus "Small Motion" Flight Simulators in a Rotorcraft Side-Step Task

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1999-01-01

    This paper presents an analytical and experimental methodology for studying flight simulator fidelity. In earlier work, the task was a rotorcraft bob-up/down maneuver in which vertical acceleration constituted the motion cue. The task considered here is a side-step maneuver that differs from the bob-up in one important way: both roll and lateral acceleration cues are available to the pilot. It has been communicated to the author that in some Vertical Motion Simulator (VMS) studies, the lateral acceleration cue has been found to be the most important. It is of some interest to hypothesize how this motion cue associated with "outer-loop" lateral translation fits into the modeling procedure where only "inner-loop" motion cues were considered. This Note is an attempt at formulating such a hypothesis and analytically comparing a large-motion simulator, e.g., the VMS, with a small-motion simulator, e.g., a hexapod.

  25. Complexity Measures, Task Type, and Analytic Evaluations of Speaking Proficiency in a School-Based Assessment Context

    ERIC Educational Resources Information Center

    Gan, Zhengdong

    2012-01-01

    This study, which is part of a large-scale study of using objective measures to validate assessment rating scales and assessment tasks in a high-profile school-based assessment initiative in Hong Kong, examined how grammatical complexity measures relate to task type and analytic evaluations of students' speaking proficiency in a classroom-based…

  26. Event-related potentials during individual, cooperative, and competitive task performance differ in subjects with analytic vs. holistic thinking.

    PubMed

    Apanovich, V V; Bezdenezhnykh, B N; Sams, M; Jääskeläinen, I P; Alexandrov, YuI

    2018-01-01

    It has been suggested that Western cultures (USA, Western Europe) are mostly characterized by competitive forms of social interaction, whereas Eastern cultures (Japan, China, Russia) are mostly characterized by cooperative forms. It has also been stated that thinking in Eastern countries is predominantly holistic and in Western countries analytic. Based on this, we hypothesized that subjects with analytic vs. holistic thinking styles show differences in decision making in different types of social interaction conditions. We investigated behavioural and brain-activity differences between subjects with analytic and holistic thinking during a choice reaction time (ChRT) task, wherein the subjects either cooperated, competed (in pairs), or performed the task without interaction with other participants. Healthy Russian subjects (N=78) were divided into two groups based on having analytic or holistic thinking as determined with an established questionnaire. We measured reaction times as well as event-related brain potentials. There were significant differences between the interaction conditions in task performance between subjects with analytic and holistic thinking. Both behavioral performance and physiological measures exhibited higher variance in holistic than in analytic subjects. Differences in P300 amplitude and latency suggest that decision making was easier for the holistic subjects in the cooperation condition, in contrast to analytic subjects for whom decision making based on these measures seemed to be easier in the competition condition. The P300 amplitude was higher in the individual condition as compared with the collective conditions. Overall, our results support the notion that the brains of analytic and holistic subjects work differently in different types of social interaction conditions.

  27. Developing Analytic Rating Guides for "TOEFL iBT"® Integrated Speaking Tasks. "TOEFL iBT"® Research Report, TOEFL iBT-20. ETS Research Report. RR-13-13

    ERIC Educational Resources Information Center

    Jamieson, Joan; Poonpon, Kornwipa

    2013-01-01

    Research and development of a new type of scoring rubric for the integrated speaking tasks of "TOEFL iBT"® are described. These "analytic rating guides" could be helpful if tasks modeled after those in TOEFL iBT were used for formative assessment, a purpose which is different from TOEFL iBT's primary use for admission…

  28. Human performance modeling for system of systems analytics: combat performance-shaping factors.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawton, Craig R.; Miller, Dwight Peter

    The US military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives. To support this goal, Sandia National Laboratories (SNL) has undertaken a program of HPM as an integral augmentation to its system-of-systems (SoS) analytics capabilities. The previous effort, reported in SAND2005-6569, evaluated the effects of soldier cognitive fatigue on SoS performance. The current effort began with a very broad survey of any performance-shaping factors (PSFs) that might affect soldiers' performance in combat situations. The work included consideration of three different approaches to cognition modeling and how appropriate they would be for application to SoS analytics. The bulk of this report categorizes 47 PSFs into three groups (internal, external, and task-related) and provides brief descriptions of how each affects combat performance, according to the literature. The PSFs were then assembled into a matrix with 22 representative military tasks and assigned one of four levels of estimated negative impact on task performance, based on the literature. Blank versions of the matrix were then sent to two ex-military subject-matter experts to be filled out based on their personal experiences. Data analysis was performed to identify the PSFs judged most influential by consensus. Results indicate that combat-related injury, cognitive fatigue, inadequate training, physical fatigue, thirst, stress, poor perceptual processing, and presence of chemical agents are among the PSFs with the most negative impact on combat performance.
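
    As a loose illustration of the matrix-aggregation step (the PSF names, task counts, and ratings below are invented stand-ins, not the report's data), consensus rankings can be computed by averaging the experts' impact matrices:

    ```python
    # Sketch: aggregating expert-filled PSF-by-task impact matrices
    # (levels 0-3 = none..severe) into a consensus ranking. All names
    # and values here are illustrative, not the report's data.
    import numpy as np

    psfs = ["combat injury", "cognitive fatigue", "inadequate training", "thirst"]
    expert_a = np.array([[3, 3, 2], [2, 3, 2], [2, 2, 1], [1, 2, 1]])  # 4 PSFs x 3 tasks
    expert_b = np.array([[3, 2, 3], [3, 2, 2], [1, 2, 2], [2, 1, 1]])

    consensus = (expert_a + expert_b) / 2.0   # mean impact per PSF/task cell
    overall = consensus.mean(axis=1)          # average impact across tasks
    for name, score in sorted(zip(psfs, overall), key=lambda p: -p[1]):
        print(f"{name}: {score:.2f}")
    ```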

  29. Prediction of pilot opinion ratings using an optimal pilot model. [of aircraft handling qualities in multiaxis tasks]

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1977-01-01

    A brief review of some of the more pertinent applications of analytical pilot models to the prediction of aircraft handling qualities is undertaken. The relative ease with which multiloop piloting tasks can be modeled via the optimal control formulation makes the use of optimal pilot models particularly attractive for handling qualities research. To this end, a rating hypothesis is introduced which relates the numerical pilot opinion rating assigned to a particular vehicle and task to the numerical value of the index of performance resulting from an optimal pilot modeling procedure as applied to that vehicle and task. This hypothesis is tested using data from piloted simulations and is shown to be reasonable. An example concerning a helicopter landing approach is introduced to outline the predictive capability of the rating hypothesis in multiaxis piloting tasks.

  30. Verification of Decision-Analytic Models for Health Economic Evaluations: An Overview.

    PubMed

    Dasbach, Erik J; Elbasha, Elamin H

    2017-07-01

    Decision-analytic models for cost-effectiveness analysis are developed in a variety of software packages where the accuracy of the computer code is seldom verified. Although modeling guidelines recommend using state-of-the-art quality assurance and control methods for software engineering to verify models, the fields of pharmacoeconomics and health technology assessment (HTA) have yet to establish and adopt guidance on how to verify health and economic models. The objective of this paper is to introduce to our field the variety of methods the software engineering field uses to verify that software performs as expected. We identify how many of these methods can be incorporated in the development process of decision-analytic models in order to reduce errors and increase transparency. Given the breadth of methods used in software engineering, we recommend a more in-depth initiative to be undertaken (e.g., by an ISPOR-SMDM Task Force) to define the best practices for model verification in our field and to accelerate adoption. Establishing a general guidance for verifying models will benefit the pharmacoeconomics and HTA communities by increasing accuracy of computer programming, transparency, accessibility, sharing, understandability, and trust of models.
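
    The paper surveys verification practices rather than prescribing code, but one practice it points to from software engineering, unit testing, is easy to picture for a decision-analytic model. A hypothetical structural check on a Markov cohort model's transition matrix might look like this:

    ```python
    # Hypothetical unit test for a decision-analytic (Markov cohort) model:
    # verify structural invariants of the transition matrix before any
    # cost-effectiveness results are trusted. Names/values are illustrative.
    import numpy as np

    def check_transition_matrix(P: np.ndarray, atol: float = 1e-9) -> None:
        assert P.ndim == 2 and P.shape[0] == P.shape[1], "must be square"
        assert (P >= -atol).all(), "negative transition probability"
        assert np.allclose(P.sum(axis=1), 1.0, atol=atol), "rows must sum to 1"

    # States: healthy, sick, dead (dead is absorbing).
    P = np.array([[0.90, 0.08, 0.02],
                  [0.10, 0.75, 0.15],
                  [0.00, 0.00, 1.00]])
    check_transition_matrix(P)
    print("transition matrix passed structural checks")
    ```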

  31. Examining the Use of a Visual Analytics System for Sensemaking Tasks: Case Studies with Domain Experts.

    PubMed

    Kang, Youn-Ah; Stasko, J

    2012-12-01

    While the formal evaluation of systems in visual analytics is still relatively uncommon, particularly rare are case studies of prolonged system use by domain analysts working with their own data. Conducting case studies can be challenging, but it can be a particularly effective way to examine whether visual analytics systems are truly helping expert users to accomplish their goals. We studied the use of a visual analytics system for sensemaking tasks on documents by six analysts from a variety of domains. We describe their application of the system along with the benefits, issues, and problems that we uncovered. Findings from the studies identify features that visual analytics systems should emphasize as well as missing capabilities that should be addressed. These findings inform design implications for future systems.

  32. Task Models in the Digital Ocean

    ERIC Educational Resources Information Center

    DiCerbo, Kristen E.

    2014-01-01

    The Task Model is a description of each task in a workflow. It defines attributes associated with that task. The creation of task models becomes increasingly important as the assessment tasks become more complex. Explicitly delineating the impact of task variables on the ability to collect evidence and make inferences demands thoughtfulness from…

  33. Task-focused modeling in automated agriculture

    NASA Astrophysics Data System (ADS)

    Vriesenga, Mark R.; Peleg, K.; Sklansky, Jack

    1993-01-01

    Machine vision systems analyze image data to carry out automation tasks. Our interest is in machine vision systems that rely on models to achieve their designed task. When the model is interrogated from an a priori menu of questions, the model need not be complete. Instead, the machine vision system can use a partial model that contains a large amount of information in regions of interest and less information elsewhere. We propose an adaptive modeling scheme for machine vision, called task-focused modeling, which constructs a model having just sufficient detail to carry out the specified task. The model is detailed in regions of interest to the task and is less detailed elsewhere. This focusing effect saves time and reduces the computational effort expended by the machine vision system. We illustrate task-focused modeling by an example involving real-time micropropagation of plants in automated agriculture.

  34. Assessment of Matrix Multiplication Learning with a Rule-Based Analytical Model: "A Bayesian Network Representation"

    ERIC Educational Resources Information Center

    Zhang, Zhidong

    2016-01-01

    This study explored an alternative assessment procedure to examine learning trajectories of matrix multiplication. It applied rule-based analytical and cognitive task analysis methods to break down the operation rules for a given matrix multiplication. Based on the analysis results, a hierarchical Bayesian network, an assessment model,…

  35. Modeling of the Global Water Cycle - Analytical Models

    Treesearch

    Yongqiang Liu; Roni Avissar

    2005-01-01

    Both numerical and analytical models of the coupled atmosphere and its underlying ground components (land, ocean, ice) are useful tools for modeling the global and regional water cycle. Unlike complex three-dimensional climate models, which need very large computing resources and involve a large number of complicated interactions often difficult to interpret, analytical...

  36. Task-Driven Comparison of Topic Models.

    PubMed

    Alexander, Eric; Gleicher, Michael

    2016-01-01

    Topic modeling, a method of statistically extracting thematic content from a large collection of texts, is used for a wide variety of tasks within text analysis. Though there are a growing number of tools and techniques for exploring single models, comparisons between models are generally reduced to a small set of numerical metrics. These metrics may or may not reflect a model's performance on the analyst's intended task, and can therefore be insufficient to diagnose what causes differences between models. In this paper, we explore task-centric topic model comparison, considering how we can both provide detail for a more nuanced understanding of differences and address the wealth of tasks for which topic models are used. We derive comparison tasks from single-model uses of topic models, which predominantly fall into the categories of understanding topics, understanding similarity, and understanding change. Finally, we provide several visualization techniques that facilitate these tasks, including buddy plots, which combine color and position encodings to allow analysts to readily view changes in document similarity.
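
    One concrete form such a comparison can take, sketched here with toy arrays standing in for fitted models, is aligning topics across two models by the cosine similarity of their topic-word distributions; the resulting matrix is the kind of quantity a visualization such as the paper's buddy plots could then encode:

    ```python
    # Sketch: aligning topics between two fitted topic models by cosine
    # similarity of their topic-word distributions. Toy arrays stand in
    # for real model output (rows = topics, columns = vocabulary terms).
    import numpy as np

    def topic_similarity(model_a: np.ndarray, model_b: np.ndarray) -> np.ndarray:
        a = model_a / np.linalg.norm(model_a, axis=1, keepdims=True)
        b = model_b / np.linalg.norm(model_b, axis=1, keepdims=True)
        return a @ b.T   # [i, j] = similarity of topic i (A) to topic j (B)

    rng = np.random.default_rng(0)
    A = rng.dirichlet(np.ones(50), size=5)   # 5 topics over 50 terms
    B = rng.dirichlet(np.ones(50), size=5)
    S = topic_similarity(A, B)
    print("best match in B for each topic of A:", S.argmax(axis=1))
    ```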

  37. Research on safety evaluation model for in-vehicle secondary task driving.

    PubMed

    Jin, Lisheng; Xian, Huacai; Niu, Qingning; Bie, Jing

    2015-08-01

    This paper presents a new method for evaluating in-vehicle secondary-task driving safety. There are five in-vehicle distracter tasks: tuning the radio to a local station, navigating the touch-screen telephone menu to a certain song, talking with a laboratory assistant, answering a telephone via a Bluetooth headset, and finding the navigation system on an iPad4 computer. Forty young drivers completed the driving experiment on a driving simulator. Measures of fixations, saccades, and blinks are collected and analyzed. The evaluation index system is built from the eye-movement measures that differ significantly between the baseline and secondary-task driving conditions. Analytic Network Process (ANP) theory is applied to determine the importance weight of each evaluation index in a fuzzy environment. On the basis of these importance weights, the Fuzzy Comprehensive Evaluation (FCE) method is used to evaluate secondary-task driving safety. Results show that driving with secondary tasks greatly distracts the driver's attention from the road and that the evaluation model built in this study can estimate driving safety effectively under different driving conditions.
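
    A minimal sketch of the FCE step described above, using the weighted-average operator with an illustrative ANP-style weight vector and membership matrix (none of these values are from the paper):

    ```python
    # Sketch: Fuzzy Comprehensive Evaluation with the weighted-average
    # operator. The index weights (as ANP would supply) and membership
    # degrees below are illustrative, not the paper's values.
    import numpy as np

    weights = np.array([0.35, 0.25, 0.40])      # fixations, saccades, blinks
    grades = ["safe", "moderate", "unsafe"]
    # R[i, j]: degree to which index i supports grade j
    R = np.array([[0.2, 0.5, 0.3],
                  [0.4, 0.4, 0.2],
                  [0.1, 0.3, 0.6]])

    B = weights @ R                              # fuzzy evaluation vector
    print(dict(zip(grades, B.round(3))))
    print("assessment:", grades[int(B.argmax())])
    ```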

  38. Aspirating Seal Development: Analytical Modeling and Seal Test Rig

    NASA Technical Reports Server (NTRS)

    Bagepalli, Bharat

    1996-01-01

    This effort is to develop large-diameter (22-36 inch) aspirating seals for application in aircraft engines. Stein Seal Co. will be fabricating the 36-inch seal(s) for testing. GE's task is to establish a thorough understanding of the operation of aspirating seals through analytical modeling and full-scale testing. The primary objectives of this project are to develop analytical models of the aspirating seal system; to upgrade, using GE's funds, GE's 50-inch seal test rig for testing the aspirating seal (back-to-back with a corresponding brush seal); to test the aspirating seal(s) for seal closure, tracking, and maneuver transients (tilt) at operating pressures and temperatures; and to validate the analytical model. The objective of the analytical model development is to evaluate the transient and steady-state dynamic performance characteristics of the seal designed by Stein. The transient dynamic model uses a multi-body system approach: the stator, seal face, and rotor are treated as individual bodies with relative degrees of freedom. Initially, the thirty-six springs are represented as a single equivalent spring that holds the aspirating face open. Stops (contact elements) are provided between the stator and the seal (to compensate for the preload in the fully-open position) and between the rotor face and seal face (to detect rub). The secondary seal is considered part of the stator. The film's load, damping, and stiffness characteristics as functions of pressure and clearance are evaluated using a separate NASA code, GFACE. Initially, laminar flow theory is used. Special two-dimensional interpolation routines are written to establish exact film load and damping values at each integration time step. Additionally, other user routines are written to read in actual pressure, rpm, stator-growth, and rotor-growth data and, later, to transfer these as appropriate loads/motions in the system-dynamic model. The transient dynamic model evaluates the various motions, clearances

  39. Analysing task design and students' responses to context-based problems through different analytical frameworks

    NASA Astrophysics Data System (ADS)

    Broman, Karolina; Bernholt, Sascha; Parchmann, Ilka

    2015-05-01

    Background: Context-based learning approaches are used to enhance students' interest in, and knowledge about, science. According to different empirical studies, students' interest is improved by applying these more non-conventional approaches, while effects on learning outcomes are less coherent. Hence, further insights are needed into the structure of context-based problems in comparison to traditional problems, and into students' problem-solving strategies. Therefore, a suitable framework is necessary, both for the analysis of tasks and strategies. Purpose: The aim of this paper is to explore traditional and context-based tasks as well as students' responses to exemplary tasks to identify a suitable framework for future design and analyses of context-based problems. The paper discusses different established frameworks and applies the Higher-Order Cognitive Skills/Lower-Order Cognitive Skills (HOCS/LOCS) taxonomy and the Model of Hierarchical Complexity in Chemistry (MHC-C) to analyse traditional tasks and students' responses. Sample: Upper secondary students (n=236) at the Natural Science Programme, i.e. possible future scientists, are investigated to explore learning outcomes when they solve chemistry tasks, both more conventional as well as context-based chemistry problems. Design and methods: A typical chemistry examination test has been analysed, first the test items themselves (n=36), and thereafter 236 students' responses to one representative context-based problem. Content analysis using HOCS/LOCS and MHC-C frameworks has been applied to analyse both quantitative and qualitative data, allowing us to describe different problem-solving strategies. Results: The empirical results show that both frameworks are suitable to identify students' strategies, mainly focusing on recall of memorized facts when solving chemistry test items. Almost all test items were also assessing lower order thinking. The combination of frameworks with the chemistry syllabus has been

  40. Recruitment of intuitive versus analytic thinking strategies affects the role of working memory in a gambling task.

    PubMed

    Gozzi, Marta; Cherubini, Paolo; Papagno, Costanza; Bricolo, Emanuela

    2011-05-01

    Previous studies found mixed results concerning the role of working memory (WM) in the gambling task (GT). Here, we aimed at reconciling inconsistencies by showing that the standard version of the task can be solved using intuitive strategies operating automatically, while more complex versions require analytic strategies drawing on executive functions. In Study 1, where good performance on the GT could be achieved using intuitive strategies, participants performed well both with and without a concurrent WM load. In Study 2, where analytical strategies were required to solve a more complex version of the GT, participants without WM load performed well, while participants with WM load performed poorly. In Study 3, where the complexity of the GT was further increased, participants in both conditions performed poorly. In addition to the standard performance measure, we used participants' subjective expected utility, showing that it differs from the standard measure in some important aspects.

  1. Human-centric predictive model of task difficulty for human-in-the-loop control tasks

    PubMed Central

    Majewicz Fey, Ann

    2018-01-01

    Quantitatively measuring the difficulty of a manipulation task in human-in-the-loop control systems is an ill-defined problem. Currently, systems are typically evaluated through task-specific performance measures and post-experiment user surveys; however, these methods do not capture the real-time experience of human users. In this study, we propose to analyze and predict the difficulty of a bivariate pointing task, with a haptic device interface, using human-centric measurement data in terms of cognition, physical effort, and motion kinematics. Noninvasive sensors were used to record the multimodal responses of 14 human subjects performing the task. A data-driven approach for predicting task difficulty was implemented based on several task-independent metrics. We compare four possible models for predicting task difficulty to evaluate the roles of the various types of metrics: (I) a movement time model, (II) a fusion model using both physiological and kinematic metrics, (III) a model with only kinematic metrics, and (IV) a model with only physiological metrics. The results show significant correlation between task difficulty and the user sensorimotor response. The fusion model, integrating user physiology and motion kinematics, provided the best estimate of task difficulty (R2 = 0.927), followed by the model using only kinematic metrics (R2 = 0.921). Both models were better predictors of task difficulty than the movement time model (R2 = 0.847), derived from Fitts' law, a well-studied difficulty model for human psychomotor control. PMID:29621301
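
    As a rough illustration of the four-model comparison (assumed linear models on synthetic data; the metric names are invented, not the study's features):

```python
# Illustrative sketch: compare a movement-time-only model against kinematic,
# physiological, and fusion models of task difficulty by R^2 (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 14 * 20                                   # 14 subjects x 20 trials (made up)
kin = rng.normal(size=(n, 3))                 # e.g. path length, jerk, speed
phys = rng.normal(size=(n, 2))                # e.g. heart rate, skin conductance
movement_time = rng.normal(size=(n, 1))
difficulty = (1.2 * kin[:, 0] - 0.8 * kin[:, 2] + 0.5 * phys[:, 1]
              + 0.4 * movement_time[:, 0] + rng.normal(scale=0.3, size=n))

models = {
    "I: movement time": movement_time,
    "II: fusion (phys + kin)": np.hstack([phys, kin]),
    "III: kinematic only": kin,
    "IV: physiological only": phys,
}
for name, X in models.items():
    pred = LinearRegression().fit(X, difficulty).predict(X)
    print(f"{name:28s} R^2 = {r2_score(difficulty, pred):.3f}")
```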

  2. Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Y; Glascoe, L

    The computational modeling of the biodegradation of contaminated groundwater systems, accounting for biochemical reactions coupled to contaminant transport, is a valuable tool both for the field engineer/planner with limited computational resources and for the expert computational researcher less constrained by time and computer power. Several analytical and numerical computer models have been and are being developed to cover the practical needs put forth by users across this spectrum of computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information, with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.
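
    As a concrete example of the screening role analytical models play here, the sketch below evaluates a classical closed-form solution for 1-D advection-dispersion with first-order biodegradation under a continuous source (the van Genuchten & Alves form); parameter values are arbitrary:

```python
# Hedged sketch of a screening-level analytical model: 1-D advection-dispersion
# with first-order decay, continuous source (van Genuchten & Alves, 1982).
import numpy as np
from scipy.special import erfc

def c_analytical(x, t, v=0.5, D=0.05, lam=0.01, c0=1.0):
    """Concentration at distance x (m) and time t (d); v in m/d, D in m^2/d,
    lam is the first-order decay rate (1/d)."""
    u = v * np.sqrt(1.0 + 4.0 * lam * D / v**2)
    a = np.exp((v - u) * x / (2 * D)) * erfc((x - u * t) / (2 * np.sqrt(D * t)))
    b = np.exp((v + u) * x / (2 * D)) * erfc((x + u * t) / (2 * np.sqrt(D * t)))
    return 0.5 * c0 * (a + b)

x = np.linspace(0.1, 20, 5)
print(np.round(c_analytical(x, t=30.0), 4))   # rapid screening, no grid needed
```

    A numerical code would solve the same problem on a grid with far more flexibility; the closed form answers the screening question almost instantly.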

  3. Medical Data Analytics Is Not a Simple Task.

    PubMed

    Babič, František; Vadovský, Michal; Paralič, Ján

    2018-01-01

    Data analytics represents a new chance to make medical diagnosis and treatment more effective and successful. This expectation is not as easy to achieve as it may look at first glance. Medical experts, doctors, and general practitioners have their own vocabulary; they use specific terms and ways of speaking. On the other side, data analysts have to understand the task and select the right algorithms. The applicability of the results depends on the effectiveness of the interactions between these two worlds. This paper presents our experiences with various medical data samples in the form of a SWOT analysis. We identified the most important input attributes for the target diagnosis or extracted decision rules, and analysed their interestingness with the cooperating doctors, looking for promising new cut-off values or possible important relations hidden in the data samples. In general, this type of knowledge can be used for clinical decision support, but it has to be evaluated on different samples, under different conditions, and ideally in long-term studies. Sometimes the interaction needed much more time than we expected at the beginning, but our experiences are mostly positive.
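
    A minimal sketch of the rule-and-cut-off extraction step described above, using a shallow decision tree on synthetic data (the attribute names and the toy diagnosis rule are invented):

```python
# Toy illustration: extract readable decision rules and candidate cut-off
# values that could then be discussed with clinicians.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 400
glucose = rng.normal(100, 25, n)              # hypothetical lab attributes
bmi = rng.normal(27, 5, n)
age = rng.integers(20, 80, n)
X = np.column_stack([glucose, bmi, age])
y = ((glucose > 126) | ((bmi > 32) & (age > 50))).astype(int)  # toy diagnosis

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20).fit(X, y)
# The printed thresholds (e.g. "glucose <= 126.4") are the learned cut-offs
# that would be reviewed for clinical plausibility with cooperating doctors.
print(export_text(tree, feature_names=["glucose", "bmi", "age"]))
```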

  4. A chain-retrieval model for voluntary task switching.

    PubMed

    Vandierendonck, André; Demanet, Jelle; Liefooghe, Baptist; Verbruggen, Frederick

    2012-09-01

    To account for the findings obtained in voluntary task switching, this article describes and tests the chain-retrieval model. This model postulates that voluntary task selection involves retrieval of task information from long-term memory, which is then used to guide task selection and task execution. The model assumes that the retrieved information consists of acquired sequences (or chains) of tasks, that selection may be biased towards chains containing more task repetitions and that bottom-up triggered repetitions may overrule the intended task. To test this model, four experiments are reported. In Studies 1 and 2, sequences of task choices and the corresponding transition sequences (task repetitions or switches) were analyzed with the help of dependency statistics. The free parameters of the chain-retrieval model were estimated on the observed task sequences and these estimates were used to predict autocorrelations of tasks and transitions. In Studies 3 and 4, sequences of hand choices and their transitions were analyzed similarly. In all studies, the chain-retrieval model yielded better fits and predictions than statistical models of event choice. In applications to voluntary task switching (Studies 1 and 2), all three parameters of the model were needed to account for the data. When no task switching was required (Studies 3 and 4), the chain-retrieval model could account for the data with one or two parameters clamped to a neutral value. Implications for our understanding of voluntary task selection and broader theoretical implications are discussed. Copyright © 2012 Elsevier Inc. All rights reserved.
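
    The following is a loose illustrative simulation, not the authors' fitted model: task sequences are generated by retrieving stored chains, with a bias parameter toward repetition-rich chains and a probability that a bottom-up triggered repetition overrules the intended task, echoing the model's three assumptions:

```python
# Loose illustration of chain retrieval: `beta` biases retrieval toward
# repetition-rich chains; `gamma` is the chance a bottom-up triggered
# repetition overrules the intended task. Values are hypothetical.
import random

random.seed(0)
chains = [list("AABB"), list("ABAB"), list("AAAB"), list("ABBA")]
beta, gamma = 1.5, 0.15

def repetitions(chain):
    return sum(a == b for a, b in zip(chain, chain[1:]))

weights = [(1 + repetitions(c)) ** beta for c in chains]
seq = []
for _ in range(500):                           # retrieve and execute chains
    chain = random.choices(chains, weights=weights)[0]
    for task in chain:
        if seq and random.random() < gamma:
            seq.append(seq[-1])                # bottom-up triggered repetition
        else:
            seq.append(task)

rep_rate = sum(a == b for a, b in zip(seq, seq[1:])) / (len(seq) - 1)
print(f"observed repetition rate: {rep_rate:.3f}")  # compare against choice data
```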

  5. Overlooking the obvious: a meta-analytic comparison of digit symbol coding tasks and other cognitive measures in schizophrenia.

    PubMed

    Dickinson, Dwight; Ramsey, Mary E; Gold, James M

    2007-05-01

    In focusing on potentially localizable cognitive impairments, the schizophrenia meta-analytic literature has overlooked the largest single impairment: on digit symbol coding tasks. To compare the magnitude of the schizophrenia impairment on coding tasks with impairments on other traditional neuropsychological instruments. MEDLINE and PsycINFO electronic databases and reference lists from identified articles. English-language studies from 1990 to present, comparing performance of patients with schizophrenia and healthy controls on coding tasks and cognitive measures representing at least 2 other cognitive domains. Of 182 studies identified, 40 met all criteria for inclusion in the meta-analysis. Means, standard deviations, and sample sizes were extracted for digit symbol coding and 36 other cognitive variables. In addition, we recorded potential clinical moderator variables, including chronicity/severity, medication status, age, and education, and potential study design moderators, including coding task variant, matching, and study publication date. Main analyses synthesized data from 37 studies comprising 1961 patients with schizophrenia and 1444 comparison subjects. Combination of mean effect sizes across studies by means of a random effects model yielded a weighted mean effect for digit symbol coding of g = -1.57 (95% confidence interval, -1.66 to -1.48). This effect compared with a grand mean effect of g = -0.98 and was significantly larger than effects for widely used measures of episodic memory, executive functioning, and working memory. Moderator variable analyses indicated that clinical and study design differences between studies had little effect on the coding task effect. Comparison with previous meta-analyses suggested that current results were representative of the broader literature. Subsidiary analysis of data from relatives of patients with schizophrenia also suggested prominent coding task impairments in this group. The 5-minute digit symbol coding
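
    The random-effects synthesis reported above can be reproduced in miniature with the DerSimonian-Laird estimator; the per-study effect sizes and variances below are made up for illustration:

```python
# Sketch of a random-effects meta-analysis (DerSimonian-Laird) with
# hypothetical per-study effects g_i and sampling variances v_i.
import numpy as np

g = np.array([-1.4, -1.7, -1.5, -1.8, -1.3])   # hypothetical study effects
v = np.array([0.04, 0.06, 0.05, 0.08, 0.05])   # their sampling variances

w = 1.0 / v                                    # fixed-effect weights
q = np.sum(w * (g - np.sum(w * g) / np.sum(w)) ** 2)
tau2 = max(0.0, (q - (len(g) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_star = 1.0 / (v + tau2)                      # random-effects weights
g_bar = np.sum(w_star * g) / np.sum(w_star)
se = 1.0 / np.sqrt(np.sum(w_star))
print(f"pooled g = {g_bar:.2f} "
      f"(95% CI {g_bar - 1.96*se:.2f} to {g_bar + 1.96*se:.2f})")
```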

  6. A Chain-Retrieval Model for Voluntary Task Switching

    ERIC Educational Resources Information Center

    Vandierendonck, Andre; Demanet, Jelle; Liefooghe, Baptist; Verbruggen, Frederick

    2012-01-01

    To account for the findings obtained in voluntary task switching, this article describes and tests the chain-retrieval model. This model postulates that voluntary task selection involves retrieval of task information from long-term memory, which is then used to guide task selection and task execution. The model assumes that the retrieved…

  7. Analytical display design for flight tasks conducted under instrument meteorological conditions. [human factors engineering of pilot performance for display device design in instrument landing systems

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1976-01-01

    Paramount to proper utilization of electronic displays is a method for determining pilot-centered display requirements. Display design should be viewed fundamentally as a guidance and control problem which has interactions with the designer's knowledge of human psychomotor activity. From this standpoint, reliable analytical models of human pilots as information processors and controllers can provide valuable insight into the display design process. A relatively straightforward, nearly algorithmic procedure for deriving model-based, pilot-centered display requirements was developed and is presented. The optimal or control theoretic pilot model serves as the backbone of the design methodology, which is specifically directed toward the synthesis of head-down, electronic, cockpit display formats. Some novel applications of the optimal pilot model are discussed. An analytical design example is offered which defines a format for the electronic display to be used in a UH-1H helicopter in a landing approach task involving longitudinal and lateral degrees of freedom.

  8. What makes us think? A three-stage dual-process model of analytic engagement.

    PubMed

    Pennycook, Gordon; Fugelsang, Jonathan A; Koehler, Derek J

    2015-08-01

    The distinction between intuitive and analytic thinking is common in psychology. However, while often being quite clear on the characteristics of the two processes ('Type 1' processes are fast, autonomous, intuitive, etc. and 'Type 2' processes are slow, deliberative, analytic, etc.), dual-process theorists have been heavily criticized for being unclear on the factors that determine when an individual will think analytically or rely on their intuition. We address this issue by introducing a three-stage model that elucidates the bottom-up factors that cause individuals to engage Type 2 processing. According to the model, multiple Type 1 processes may be cued by a stimulus (Stage 1), leading to the potential for conflict detection (Stage 2). If successful, conflict detection leads to Type 2 processing (Stage 3), which may take the form of rationalization (i.e., the Type 1 output is verified post hoc) or decoupling (i.e., the Type 1 output is falsified). We tested key aspects of the model using a novel base-rate task where stereotypes and base-rate probabilities cued the same (non-conflict problems) or different (conflict problems) responses about group membership. Our results support two key predictions derived from the model: (1) conflict detection and decoupling are dissociable sources of Type 2 processing and (2) conflict detection sometimes fails. We argue that considering the potential stages of reasoning allows us to distinguish early (conflict detection) and late (decoupling) sources of analytic thought. Errors may occur at both stages and, as a consequence, bias arises from both conflict monitoring and decoupling failures. Copyright © 2015 Elsevier Inc. All rights reserved.
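
    A toy Monte Carlo reading of the three stages (our own illustration, not the authors' code) makes the two predictions concrete: conflict detection and decoupling are separate stochastic steps, and detection sometimes fails:

```python
# Stage 1 cues competing Type 1 outputs; Stage 2 detects conflict with some
# probability; Stage 3 either rationalizes or decouples. Probabilities invented.
import random

random.seed(2)

def trial(conflict, p_detect=0.6, p_decouple=0.5):
    # Stage 1: stereotype-based and base-rate-based Type 1 outputs
    stereotype, base_rate = "A", ("B" if conflict else "A")
    if not conflict:
        return stereotype                      # no conflict, default answer stands
    # Stage 2: conflict detection sometimes fails
    if random.random() > p_detect:
        return stereotype                      # undetected conflict -> biased answer
    # Stage 3: Type 2 processing, rationalization vs. decoupling
    return base_rate if random.random() < p_decouple else stereotype

acc = sum(trial(conflict=True) == "B" for _ in range(10000)) / 10000
print(f"base-rate responses on conflict problems: {acc:.2%}")  # ~ p_detect*p_decouple
```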

  9. Maximally Expressive Task Modeling

    NASA Technical Reports Server (NTRS)

    Japp, John; Davis, Elizabeth; Maxwell, Theresa G. (Technical Monitor)

    2002-01-01

    Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiment activities for the Space Station. The equipment used in these experiments is some of the most complex hardware ever developed by mankind, the information sought by these experiments is at the cutting edge of scientific endeavor, and the procedures for executing the experiments are intricate and exacting. Scheduling is made more difficult by a scarcity of space station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling space station experiment operations calls for a "maximally expressive" modeling schema. Modeling even the simplest of activities cannot be automated; no sensor can be attached to a piece of equipment that can discern how to use that piece of equipment; no camera can quantify how to operate a piece of equipment. Modeling is a human enterprise, both an art and a science. The modeling schema should allow the models to flow from the keyboard of the user as easily as works of literature flowed from the pen of Shakespeare. The Ground Systems Department at the Marshall Space Flight Center has embarked on an effort to develop a new scheduling engine that is highlighted by a maximally expressive modeling schema. This schema, presented in this paper, is a synergy of technological advances and domain-specific innovations.

  10. Modeling Simple Driving Tasks with a One-Boundary Diffusion Model

    PubMed Central

    Ratcliff, Roger; Strayer, David

    2014-01-01

    A one-boundary diffusion model was applied to the data from two experiments in which subjects performed a simple simulated driving task. In the first experiment, the same subjects were tested on two driving tasks using a PC-based driving simulator and on the psychomotor vigilance test (PVT). The diffusion model fit the response time (RT) distributions for each task and individual subject well. Model parameters were found to correlate across tasks, which suggests that common component processes were being tapped in the three tasks. The model was also fit to a distracted driving experiment of Cooper and Strayer (2008). Results showed that distraction altered performance by affecting the rate of evidence accumulation (drift rate) and/or increasing the boundary settings. This provides an interpretation of cognitive distraction whereby conversing on a cell phone diverts attention from the normal accumulation of information in the driving environment. PMID:24297620
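
    A one-boundary diffusion model is easy to sketch by simulation: evidence accumulates with constant drift toward a single absorbing boundary, and the first-passage time plus a nondecision time gives the predicted RT. The parameter values below are arbitrary, chosen only to show the characteristic right-skewed RT distribution:

```python
# Minimal one-boundary diffusion (Wiener) sketch: drift `nu` toward absorbing
# boundary `a`; first-passage time plus nondecision time `t0` is the RT.
import numpy as np

rng = np.random.default_rng(3)
nu, a, t0, dt, sigma = 1.2, 1.0, 0.3, 0.001, 1.0

def one_rt():
    x, t = 0.0, 0.0
    while x < a:                               # single absorbing boundary
        x += nu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t0 + t

rts = np.array([one_rt() for _ in range(2000)])
print(f"mean RT {rts.mean():.3f}s, median {np.median(rts):.3f}s, "
      f"right skew (mean > median): {rts.mean() > np.median(rts)}")
```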

  11. Blade loss transient dynamics analysis, volume 2. Task 2: Theoretical and analytical development. Task 3: Experimental verification

    NASA Technical Reports Server (NTRS)

    Gallardo, V. C.; Storace, A. S.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.

    1981-01-01

    The component element method was used to develop a transient dynamic analysis computer program which is essentially based on modal synthesis combined with a central finite-difference numerical integration scheme. The methodology leads to a modular, or building-block, technique that is amenable to computer programming. To verify the analytical method, the turbine engine transient response analysis (TETRA) program was applied to two blade-out test vehicles that had been previously instrumented and tested. Comparison of the time-dependent test data with those predicted by TETRA led to recommendations for refinement or extension of the analytical method to improve its accuracy and overcome its shortcomings. The development of the working equations, their discretization, the numerical solution scheme, the modular concept of engine modeling, the program's logical structure, and some illustrative results are discussed. The blade-loss test vehicles (rig and full engine), the type of measured data, and the engine structural model are described.
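
    The integration scheme named above can be illustrated generically. The sketch below applies the explicit central-difference recursion to a made-up two-mode system (it is not the TETRA code):

```python
# Explicit central difference for M x'' + C x' + K x = F:
#   (M/dt^2 + C/2dt) x_{n+1} = F_n - (K - 2M/dt^2) x_n - (M/dt^2 - C/2dt) x_{n-1}
import numpy as np

M = np.diag([1.0, 2.0])                        # modal masses (made up)
K = np.array([[400.0, -100.0], [-100.0, 300.0]])
C = 0.02 * K                                   # light proportional damping
dt, steps = 1e-3, 3000
F = np.array([10.0, 0.0])                      # step load, e.g. a blade-out force

A = M / dt**2 + C / (2 * dt)
x_prev = np.zeros(2)
x = np.zeros(2)                                # starts at rest
for n in range(steps):
    rhs = F - (K - 2 * M / dt**2) @ x - (M / dt**2 - C / (2 * dt)) @ x_prev
    x_prev, x = x, np.linalg.solve(A, rhs)
print("displacement after 3 s:", np.round(x, 4))   # approaches static K^-1 F
```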

  12. WHAEM: PROGRAM DOCUMENTATION FOR THE WELLHEAD ANALYTIC ELEMENT MODEL

    EPA Science Inventory

    The Wellhead Analytic Element Model (WhAEM) demonstrates a new technique for the definition of time-of-travel capture zones in relatively simple geohydrologic settings. The WhAEM package includes an analytic element model that uses superposition of (many) analytic solutions to gen...
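
    The superposition idea behind analytic element models can be shown in a few lines; the sketch below adds the heads of a uniform flow and two well elements at invented locations (sign conventions simplified):

```python
# Bare-bones analytic element illustration: superpose a uniform flow and two
# pumping-well solutions; heads are evaluated anywhere without a grid solver.
import numpy as np

T = 200.0                                       # transmissivity, m^2/d (invented)
q0 = 0.5                                        # uniform flow, m^2/d per unit width
wells = [((0.0, 0.0), 300.0), ((150.0, 40.0), 150.0)]  # (location, pumping m^3/d)

def head_deficit(x, y):
    """Head relative to a reference, by superposition of elements."""
    phi = -q0 * x / T                           # uniform background flow
    for (xw, yw), Q in wells:
        r = np.hypot(x - xw, y - yw)
        phi -= Q / (2 * np.pi * T) * np.log(np.maximum(r, 0.1))  # well element
    return phi

xs, ys = np.meshgrid(np.linspace(-200, 300, 6), np.linspace(-150, 150, 5))
print(np.round(head_deficit(xs, ys), 3))
```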

  13. Analytical model for screening potential CO2 repositories

    USGS Publications Warehouse

    Okwen, R.T.; Stewart, M.T.; Cunningham, J.A.

    2011-01-01

    Assessing potential repositories for geologic sequestration of carbon dioxide using numerical models can be complicated, costly, and time-consuming, especially when faced with the challenge of selecting a repository from a multitude of potential repositories. This paper presents a set of simple analytical equations (model), based on the work of previous researchers, that could be used to evaluate the suitability of candidate repositories for subsurface sequestration of carbon dioxide. We considered the injection of carbon dioxide at a constant rate into a confined saline aquifer via a fully perforated vertical injection well. The validity of the analytical model was assessed via comparison with the TOUGH2 numerical model. The metrics used in comparing the two models include (1) spatial variations in formation pressure and (2) vertically integrated brine saturation profile. The analytical model and TOUGH2 show excellent agreement in their results when similar input conditions and assumptions are applied in both. The analytical model neglects capillary pressure and the pressure dependence of fluid properties. However, simulations in TOUGH2 indicate that little error is introduced by these simplifications. Sensitivity studies indicate that the agreement between the analytical model and TOUGH2 depends strongly on (1) the residual brine saturation, (2) the difference in density between carbon dioxide and resident brine (buoyancy), and (3) the relationship between relative permeability and brine saturation. The results achieved suggest that the analytical model is valid when the relationship between relative permeability and brine saturation is linear or quasi-linear and when the irreducible saturation of brine is zero or very small. © 2011 Springer Science+Business Media B.V.

  14. Predictive performance models and multiple task performance

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher D.; Larish, Inge; Contorer, Aaron

    1989-01-01

    Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are characterized in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.

  15. Analytical Modeling of Groundwater Seepages to St. Lucie Estuary

    NASA Astrophysics Data System (ADS)

    Lee, J.; Yeh, G.; Hu, G.

    2008-12-01

    In this paper, six analytical models describing the hydraulic interaction of stream-aquifer systems were applied to the St. Lucie Estuary (SLE). These are analytical solutions for: (1) flow from a finite aquifer to a canal, (2) flow from an infinite aquifer to a canal, (3) the linearized Laplace system in a seepage surface, (4) wave propagation in the aquifer, (5) potential flow through stratified unconfined aquifers, and (6) flow through stratified confined aquifers. Input data for the analytical solutions were obtained from monitoring wells and river stages at seepage-meter sites. Four transects in the study area are available: Club Med, Harbour Ridge, Lutz/MacMillan, and Pendarvis Cove, located in the St. Lucie River. The analytical models were first calibrated with seepage meter measurements and then used to estimate groundwater discharges into the St. Lucie River. From this process, analytical relationships between the seepage rate and river stages and/or groundwater tables were established to predict the seasonal and monthly variation in groundwater seepage into the SLE. It was found that the seepage rates estimated by the analytical models agreed well with measured data in some cases but only fairly in others. This is not unexpected because analytical solutions have some inherently simplified assumptions, which may be more valid for some cases than for others. From analytical calculations, it is possible to predict approximate seepage rates in the study domain when the assumptions underlying these analytical models are valid. The finite and infinite aquifer models and the linearized Laplace method are good for the Pendarvis Cove and Lutz/MacMillan sites, but fair for the other two. The wave propagation model gave very good agreement in phase but only fair agreement in magnitude for all four sites. The stratified unconfined and confined aquifer models gave similarly good agreement with measurements at three sites but poor agreement at the Club Med site. None of

  16. An analytical framework to assist decision makers in the use of forest ecosystem model predictions

    USGS Publications Warehouse

    Larocque, Guy R.; Bhatti, Jagtar S.; Ascough, J.C.; Liu, J.; Luckai, N.; Mailly, D.; Archambault, L.; Gordon, Andrew M.

    2011-01-01

    The predictions from most forest ecosystem models originate from deterministic simulations. However, few evaluation exercises for model outputs are performed by either model developers or users. This issue has important consequences for decision makers using these models to develop natural resource management policies, as they cannot evaluate the extent to which predictions stemming from the simulation of alternative management scenarios may result in significant environmental or economic differences. Various numerical methods, such as sensitivity/uncertainty analyses, or bootstrap methods, may be used to evaluate models and the errors associated with their outputs. However, the application of each of these methods carries unique challenges which decision makers do not necessarily understand; guidance is required when interpreting the output generated from each model. This paper proposes a decision flow chart in the form of an analytical framework to help decision makers apply, in an orderly fashion, different steps involved in examining the model outputs. The analytical framework is discussed with regard to the definition of problems and objectives and includes the following topics: model selection, identification of alternatives, modelling tasks and selecting alternatives for developing policy or implementing management scenarios. Its application is illustrated using an on-going exercise in developing silvicultural guidelines for a forest management enterprise in Ontario, Canada.

  17. An analytic performance model of disk arrays and its application

    NASA Technical Reports Server (NTRS)

    Lee, Edward K.; Katz, Randy H.

    1991-01-01

    As disk arrays become widely used, tools for understanding and analyzing their performance become increasingly important. In particular, performance models can be invaluable in both configuring and designing disk arrays. Accurate analytic performance models are desirable over other types of models because they can be quickly evaluated, are applicable under a wide range of system and workload parameters, and can be manipulated by a range of mathematical techniques. Unfortunately, analytical performance models of disk arrays are difficult to formulate due to the presence of queuing and fork-join synchronization; a disk array request is broken up into independent disk requests which must all complete to satisfy the original request. We develop, validate, and apply an analytic performance model for disk arrays. We derive simple equations for approximating their utilization, response time, and throughput. We then validate the analytic model via simulation and investigate the accuracy of each approximation used in deriving the analytical model. Finally, we apply the analytical model to derive an equation for the optimal unit of data striping in disk arrays.
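
    A simplified sketch in the spirit of this model (not the authors' equations): per-disk utilization and response time from an M/M/1 approximation, with the fork-join request response roughly approximated by scaling with the harmonic number H_k, which is exact for the maximum of k i.i.d. exponentials and only a crude surrogate otherwise:

```python
# Simplified analytic sketch of a disk array: per-disk M/M/1 metrics plus a
# crude harmonic-number fork-join approximation (not the paper's equations).
import numpy as np

def disk_array_metrics(arrival_rate, service_time, n_disks, stripe_width):
    lam_disk = arrival_rate * stripe_width / n_disks   # per-disk request rate
    rho = lam_disk * service_time                      # utilization
    if rho >= 1.0:
        raise ValueError("unstable: utilization >= 1")
    r_disk = service_time / (1.0 - rho)                # M/M/1 response time
    h_k = np.sum(1.0 / np.arange(1, stripe_width + 1)) # harmonic number H_k
    r_request = h_k * r_disk                           # fork-join approximation
    return rho, r_disk, r_request

rho, r_disk, r_req = disk_array_metrics(
    arrival_rate=500.0, service_time=0.005, n_disks=16, stripe_width=4)
print(f"per-disk utilization {rho:.2f}, disk response {r_disk*1e3:.1f} ms, "
      f"striped request response ~ {r_req*1e3:.1f} ms")
```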

  18. The Case for Adopting Server-side Analytics

    NASA Astrophysics Data System (ADS)

    Tino, C.; Holmes, C. P.; Feigelson, E.; Hurlburt, N. E.

    2017-12-01

    The standard method for accessing Earth and space science data relies on a scheme developed decades ago: data residing in one or many data stores must be parsed out and shipped via internet lines or physical transport to the researcher, who in turn locally stores the data for analysis. The analysis tasks are varied and include visualization, parameterization, and comparison with or assimilation into physics models. In many cases this process is inefficient and unwieldy as the data sets become larger and the demands on the analysis tasks become more sophisticated and complex. For about a decade, several groups have explored a new paradigm. The names applied to the paradigm include "data analytics", "climate analytics", and "server-side analytics". The general concept is that, in close network proximity to the data store, there will be a tailored processing capability appropriate to the type and use of the data served. The user of the server-side analytics will operate on the data with numerical procedures. The procedures can be accessed via canned code, a scripting processor, or an analysis package such as Matlab, IDL or R. Results of the analytics processes will then be relayed via the internet to the user. In practice, these results will be at a much lower volume, easier to transport to and store locally by the user, and easier for the user to interoperate with data sets from other remote data stores. The user can also iterate on the processing call to tailor the results as needed. A major component of server-side analytics could be to provide sets of tailored results to end users in order to eliminate the repetitive preconditioning that is often required with these data sets and that drives much of the throughput challenge. NASA's Big Data Task Force studied this issue. This paper will present the results of this study, including examples of SSAs that are being developed and demonstrated, and suggestions for architectures that might be developed for
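
    The pattern is easy to demonstrate with a toy endpoint: the numerical procedure runs next to the data and only a small summary crosses the network. The endpoint path and dataset below are invented for illustration:

```python
# Toy server-side analytics endpoint: the computation runs beside the data
# store and only a few bytes of JSON summary cross the network.
import json
import numpy as np
from http.server import BaseHTTPRequestHandler, HTTPServer

DATA = np.random.default_rng(0).normal(size=10_000_000)  # "large" local store

class AnalyticsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/mean":
            result = {"n": int(DATA.size), "mean": float(DATA.mean()),
                      "std": float(DATA.std())}
            body = json.dumps(result).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)            # bytes returned, not ~80 MB of data
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), AnalyticsHandler).serve_forever()
```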

  19. Simple analytical model of a thermal diode

    NASA Astrophysics Data System (ADS)

    Kaushik, Saurabh; Kaushik, Sachin; Marathe, Rahul

    2018-05-01

    Recently, much attention has been given to the manipulation of heat by constructing thermal devices such as thermal diodes, transistors, and logic gates. Many of the proposed models have an asymmetry which leads to the desired effect, and the presence of non-linear interactions among the particles is also essential; however, such models lack analytical understanding. Here we propose a simple, analytically solvable model of a thermal diode. Our model consists of classical spins in contact with multiple heat baths and constant external magnetic fields. Interestingly, the magnetic field is the only parameter required to get the effect of heat rectification.

  20. Automated dynamic analytical model improvement for damped structures

    NASA Technical Reports Server (NTRS)

    Fuh, J. S.; Berman, A.

    1985-01-01

    A method is described to improve a linear, nonproportionally damped analytical model of a structure. The procedure finds the smallest changes in the analytical model such that the improved model matches the measured modal parameters. Features of the method are: (1) the ability to properly treat the complex-valued modal parameters of a damped system; (2) applicability to realistically large structural models; and (3) computational efficiency, achieved without eigensolutions or inversion of a large matrix.

  1. Maximally Expressive Modeling of Operations Tasks

    NASA Technical Reports Server (NTRS)

    Jaap, John; Richardson, Lea; Davis, Elizabeth

    2002-01-01

    Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed, the information sought is at the cutting edge of scientific endeavor, and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a "maximally expressive" modeling schema.

  2. Assessing Measurement Invariance for Spanish Sentence Repetition and Morphology Elicitation Tasks.

    PubMed

    Kapantzoglou, Maria; Thompson, Marilyn S; Gray, Shelley; Restrepo, M Adelaida

    2016-04-01

    The purpose of this study was to evaluate evidence supporting the construct validity of two grammatical tasks (sentence repetition, morphology elicitation) included in the Spanish Screener for Language Impairment in Children (Restrepo, Gorin, & Gray, 2013). We evaluated if the tasks measured the targeted grammatical skills in the same way across predominantly Spanish-speaking children with typical language development and those with primary language impairment. A multiple-group, confirmatory factor analytic approach was applied to examine factorial invariance in a sample of 307 predominantly Spanish-speaking children (177 with typical language development; 130 with primary language impairment). The 2 newly developed grammatical tasks were modeled as measures in a unidimensional confirmatory factor analytic model along with 3 well-established grammatical measures from the Clinical Evaluation of Language Fundamentals-Fourth Edition, Spanish (Wiig, Semel, & Secord, 2006). Results suggest that both new tasks measured the construct of grammatical skills for both language-ability groups in an equivalent manner. There was no evidence of bias related to children's language status for the Spanish Screener for Language Impairment in Children Sentence Repetition or Morphology Elicitation tasks. Results provide support for the validity of the new tasks as measures of grammatical skills.

  3. L-shaped piezoelectric motor--part II: analytical modeling.

    PubMed

    Avirovik, Dragan; Karami, M Amin; Inman, Daniel; Priya, Shashank

    2012-01-01

    This paper develops an analytical model for an L-shaped piezoelectric motor. The motor structure has been described in detail in Part I of this study. The coupling of the bending vibration mode of the bimorphs results in an elliptical motion at the tip. The emphasis of this paper is on the development of a precise analytical model which can predict the dynamic behavior of the motor based on its geometry. The motor was first modeled mechanically to identify the natural frequencies and mode shapes of the structure. Next, an electromechanical model of the motor was developed to take into account the piezoelectric effect, and the dynamics of the L-shaped piezoelectric motor were obtained as a function of voltage and frequency. Finally, the analytical model was validated by comparing it to experimental results and the finite element method (FEM). © 2012 IEEE

  4. Linking normative models of natural tasks to descriptive models of neural response.

    PubMed

    Jaini, Priyank; Burge, Johannes

    2017-10-01

    Understanding how nervous systems exploit task-relevant properties of sensory stimuli to perform natural tasks is fundamental to the study of perceptual systems. However, there are few formal methods for determining which stimulus properties are most useful for a given natural task. As a consequence, it is difficult to develop principled models for how to compute task-relevant latent variables from natural signals, and it is difficult to evaluate descriptive models fit to neural response. Accuracy maximization analysis (AMA) is a recently developed Bayesian method for finding the optimal task-specific filters (receptive fields). Here, we introduce AMA-Gauss, a new faster form of AMA that incorporates the assumption that the class-conditional filter responses are Gaussian distributed. Then, we use AMA-Gauss to show that its assumptions are justified for two fundamental visual tasks: retinal speed estimation and binocular disparity estimation. Next, we show that AMA-Gauss has striking formal similarities to popular quadratic models of neural response: the energy model and the generalized quadratic model (GQM). Together, these developments deepen our understanding of why the energy model of neural response has proven useful, improve our ability to evaluate results from subunit model fits to neural data, and should help accelerate psychophysics and neuroscience research with natural stimuli.

  5. ANALYTICAL ELEMENT MODELING OF COASTAL AQUIFERS

    EPA Science Inventory

    Four topics were studied concerning the modeling of groundwater flow in coastal aquifers with analytic elements: (1) practical experience was obtained by constructing a groundwater model of the shallow aquifers below the Delmarva Peninsula USA using the commercial program MVAEM; ...

  6. A non-grey analytical model for irradiated atmospheres. II. Analytical vs. numerical solutions

    NASA Astrophysics Data System (ADS)

    Parmentier, Vivien; Guillot, Tristan; Fortney, Jonathan J.; Marley, Mark S.

    2015-02-01

    Context. The recent discovery and characterization of the diversity of the atmospheres of exoplanets and brown dwarfs calls for the development of fast and accurate analytical models. Aims: We wish to assess the goodness of the different approximations used to solve the radiative transfer problem in irradiated atmospheres analytically, and we aim to provide a useful tool for a fast computation of analytical temperature profiles that remains correct over a wide range of atmospheric characteristics. Methods: We quantify the accuracy of the analytical solution derived in paper I for an irradiated, non-grey atmosphere by comparing it to a state-of-the-art radiative transfer model. Then, using a grid of numerical models, we calibrate the different coefficients of our analytical model for irradiated solar-composition atmospheres of giant exoplanets and brown dwarfs. Results: We show that the so-called Eddington approximation used to solve the angular dependency of the radiation field leads to relative errors of up to ~5% on the temperature profile. For grey or semi-grey atmospheres (i.e., when the visible and thermal opacities, respectively, can be considered independent of wavelength), we show that the presence of a convective zone has a limited effect on the radiative atmosphere above it and leads to modifications of the radiative temperature profile of approximately 2%. However, for realistic non-grey planetary atmospheres, the presence of a convective zone that extends to optical depths smaller than unity can lead to changes in the radiative temperature profile on the order of 20% or more. When the convective zone is located at deeper levels (such as for strongly irradiated hot Jupiters), its effect on the radiative atmosphere is again on the same order (~2%) as in the semi-grey case. We show that the temperature inversion induced by a strong absorber in the optical, such as TiO or VO is mainly due to non-grey thermal effects reducing the ability of the upper

  7. Analytic Networks in Music Task Definition.

    ERIC Educational Resources Information Center

    Piper, Richard M.

    For a student to acquire the conceptual systems of a discipline, the designer must reflect that structure or analytic network in his curriculum. The four networks identified for music and used in the development of the Southwest Regional Laboratory (SWRL) Music Program are the variable-value, the whole-part, the process-stage, and the class-member…

  8. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1991-01-01

    Some of the many analytical models in human-computer interface design that are currently being developed are described. The usefulness of analytical models for human-computer interface design is evaluated. Can the use of analytical models be recommended to interface designers? The answer, based on the empirical research summarized here, is: not at this time. There are too many unanswered questions concerning the validity of models and their ability to meet the practical needs of design organizations.

  9. Atrial Model Development and Prototype Simulations: CRADA Final Report on Tasks 3 and 4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Hara, T.; Zhang, X.; Villongco, C.

    2016-10-28

    The goal of this CRADA was to develop essential tools needed to simulate human atrial electrophysiology in 3 dimensions using an anatomical image-based anatomy and a physiologically detailed human cellular model. The atria were modeled as anisotropic, representing the preferentially longitudinal electrical coupling between myocytes. Across the entire anatomy, cellular electrophysiology was heterogeneous, with left and right atrial myocytes defined differently. Left and right cell types for the "control" case of sinus rhythm (SR) were compared with the remodeled electrophysiology and calcium cycling characteristics of chronic atrial fibrillation (cAF). The effects of isoproterenol (ISO), a beta-adrenergic agonist that represents the functional consequences of PKA phosphorylation of various ion channels and transporters, were also simulated in SR and cAF to represent atrial activity under physical or emotional stress. Results and findings from Tasks 3 and 4 are described. Tasks 3 and 4 are, respectively: input parameters prepared for a Cardioid simulation; a report including recommendations for additional scenario development and post-processing analytic strategy.

  10. Analytic modeling of aerosol size distributions

    NASA Technical Reports Server (NTRS)

    Deepack, A.; Box, G. P.

    1979-01-01

    Mathematical functions commonly used for representing aerosol size distributions are studied parametrically. Methods for obtaining best fit estimates of the parameters are described. A catalog of graphical plots depicting the parametric behavior of the functions is presented along with procedures for obtaining analytical representations of size distribution data by visual matching of the data with one of the plots. Examples of fitting the same data with equal accuracy by more than one analytic model are also given.
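
    The fitting task described here can be sketched directly; below, a lognormal number distribution is fit to synthetic size-distribution data by nonlinear least squares (the functional form is standard; the data and starting values are invented):

```python
# Hedged sketch: fit a lognormal size distribution dN/dlnr to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def lognormal_nr(r, n0, rg, sg):
    """dN/dlnr for total number n0, median radius rg, geometric std sg."""
    return n0 / (np.sqrt(2 * np.pi) * np.log(sg)) * np.exp(
        -0.5 * (np.log(r / rg) / np.log(sg)) ** 2)

rng = np.random.default_rng(4)
r = np.logspace(-2, 1, 30)                      # radii, um
truth = lognormal_nr(r, n0=1000.0, rg=0.2, sg=1.8)
data = truth * rng.lognormal(sigma=0.05, size=r.size)   # noisy "measurements"

popt, _ = curve_fit(lognormal_nr, r, data, p0=[500.0, 0.1, 2.0],
                    bounds=([1.0, 1e-3, 1.01], [1e6, 10.0, 5.0]))
print("fitted n0, rg, sg:", np.round(popt, 3))
```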

  11. WELLHEAD ANALYTIC ELEMENT MODEL FOR WINDOWS

    EPA Science Inventory

    WhAEM2000 (wellhead analytic element model for Win 98/00/NT/XP) is a public domain, ground-water flow model designed to facilitate capture zone delineation and protection area mapping in support of the State's and Tribe's Wellhead Protection Programs (WHPP) and Source Water Asses...

  12. ESTIMATING UNCERTAINITIES IN FACTOR ANALYTIC MODELS

    EPA Science Inventory

    When interpreting results from factor analytic models as used in receptor modeling, it is important to quantify the uncertainties in those results. For example, if the presence of a species on one of the factors is necessary to interpret the factor as originating from a certain ...

  13. The heuristic-analytic theory of reasoning: extension and evaluation.

    PubMed

    Evans, Jonathan St B T

    2006-06-01

    An extensively revised heuristic-analytic theory of reasoning is presented incorporating three principles of hypothetical thinking. The theory assumes that reasoning and judgment are facilitated by the formation of epistemic mental models that are generated one at a time (singularity principle) by preconscious heuristic processes that contextualize problems in such a way as to maximize relevance to current goals (relevance principle). Analytic processes evaluate these models but tend to accept them unless there is good reason to reject them (satisficing principle). At a minimum, analytic processing of models is required so as to generate inferences or judgments relevant to the task instructions, but more active intervention may result in modification or replacement of default models generated by the heuristic system. Evidence for this theory is provided by a review of a wide range of literature on thinking and reasoning.

  14. Stakeholder perspectives on decision-analytic modeling frameworks to assess genetic services policy.

    PubMed

    Guzauskas, Gregory F; Garrison, Louis P; Stock, Jacquie; Au, Sylvia; Doyle, Debra Lochner; Veenstra, David L

    2013-01-01

    Genetic services policymakers and insurers often make coverage decisions in the absence of complete evidence of clinical utility and under budget constraints. We evaluated genetic services stakeholder opinions on the potential usefulness of decision-analytic modeling to inform coverage decisions, and asked them to identify genetic tests for decision-analytic modeling studies. We presented an overview of decision-analytic modeling to members of the Western States Genetic Services Collaborative Reimbursement Work Group and state Medicaid representatives and conducted directed content analysis and an anonymous survey to gauge their attitudes toward decision-analytic modeling. Participants also identified and prioritized genetic services for prospective decision-analytic evaluation. Participants expressed dissatisfaction with current processes for evaluating insurance coverage of genetic services. Some participants expressed uncertainty about their comprehension of decision-analytic modeling techniques. All stakeholders reported openness to using decision-analytic modeling for genetic services assessments. Participants were most interested in application of decision-analytic concepts to multiple-disorder testing platforms, such as next-generation sequencing and chromosomal microarray. Decision-analytic modeling approaches may provide a useful decision tool to genetic services stakeholders and Medicaid decision-makers.

  15. Test of a potential link between analytic and nonanalytic category learning and automatic, effortful processing.

    PubMed

    Tracy, J I; Pinsk, M; Helverson, J; Urban, G; Dietz, T; Smith, D J

    2001-08-01

    The link between automatic and effortful processing and nonanalytic and analytic category learning was evaluated in a sample of 29 college undergraduates using declarative memory, semantic category search, and pseudoword categorization tasks. Automatic and effortful processing measures were hypothesized to be associated with nonanalytic and analytic categorization, respectively. Results suggested that contrary to prediction strong criterion-attribute (analytic) responding on the pseudoword categorization task was associated with strong automatic, implicit memory encoding of frequency-of-occurrence information. Data are discussed in terms of the possibility that criterion-attribute category knowledge, once established, may be expressed with few attentional resources. The data indicate that attention resource requirements, even for the same stimuli and task, vary depending on the category rule system utilized. Also, the automaticity emerging from familiarity with analytic category exemplars is very different from the automaticity arising from extensive practice on a semantic category search task. The data do not support any simple mapping of analytic and nonanalytic forms of category learning onto the automatic and effortful processing dichotomy and challenge simple models of brain asymmetries for such procedures. Copyright 2001 Academic Press.

  16. Modeling Working Memory Tasks on the Item Level

    ERIC Educational Resources Information Center

    Luo, Dasen; Chen, Guopeng; Zen, Fanlin; Murray, Bronwyn

    2010-01-01

    Item responses to Digit Span and Letter-Number Sequencing were analyzed to develop a better-refined model of the two working memory tasks using the finite mixture (FM) modeling method. Models with ordinal latent traits were found to better account for the independent sources of the variability in the tasks than those with continuous traits, and…

  17. A genetic algorithm-based job scheduling model for big data analytics.

    PubMed

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. The existing work mainly focuses on executing jobs in sequence, which is often inefficient and energy-intensive. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve scheduling efficiency. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
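
    A compact sketch of such a genetic algorithm (our own illustration; the paper's performance-estimation module is stood in for by random placeholder runtimes): chromosomes assign jobs to nodes, and fitness is the makespan to be minimized:

```python
# GA sketch for scheduling analytics jobs onto nodes (fitness = makespan).
import random

random.seed(5)
N_JOBS, N_NODES, POP, GENS = 30, 4, 60, 200
runtime = [random.uniform(1, 10) for _ in range(N_JOBS)]   # estimated durations

def makespan(assign):                       # assign[i] = node running job i
    load = [0.0] * N_NODES
    for job, node in enumerate(assign):
        load[node] += runtime[job]
    return max(load)

pop = [[random.randrange(N_NODES) for _ in range(N_JOBS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=makespan)                  # elitist selection: keep best half
    survivors = pop[: POP // 2]
    children = []
    while len(children) < POP - len(survivors):
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, N_JOBS)   # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:           # mutation: move one job
            child[random.randrange(N_JOBS)] = random.randrange(N_NODES)
        children.append(child)
    pop = survivors + children
print(f"best makespan: {makespan(min(pop, key=makespan)):.2f} "
      f"(ideal {sum(runtime)/N_NODES:.2f})")
```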

  18. Elliptic-cylindrical analytical flux-rope model for ICMEs

    NASA Astrophysics Data System (ADS)

    Nieves-Chinchilla, T.; Linton, M.; Hidalgo, M. A. U.; Vourlidas, A.

    2016-12-01

    We present an analytical flux-rope model for realistic magnetic structures embedded in interplanetary coronal mass ejections. The framework of this model was established by Nieves-Chinchilla et al. (2016) with the circular-cylindrical analytical flux-rope model, under the concept developed by Hidalgo et al. (2002). The elliptic-cylindrical geometry establishes the first grade of complexity in a series of models. The model attempts to describe the magnetic flux-rope topology with a distorted cross-section as a possible consequence of the interaction with the solar wind. In this model, the flux rope is completely described in non-Euclidean geometry. The Maxwell equations are solved using tensor calculus, consistently with the chosen geometry, invariance along the axial component, and the single assumption of no radial current density. The model is generalized in terms of the radial dependence of the poloidal and axial current density components. The misalignment between current density and magnetic field is studied in detail for individual cases of different pairs of indices for the axial and poloidal current density components. This theoretical analysis provides a map of the force distribution inside the flux rope. The reconstruction technique has been adapted to the model and compared with a set of in situ ICME events with different in situ signatures. Successful results are limited to cases with clear in situ signatures of distortion. However, the model adds a piece to the puzzle of the physical-analytical representation of these magnetic structures. Other effects such as axial curvature, expansion, and/or interaction could be incorporated in the future to fully understand the magnetic structure. Finally, the mathematical formulation of this model opens the door to the next model: a toroidal flux-rope analytical model.

  19. Decomposing task-switching costs with the diffusion model.

    PubMed

    Schmitz, Florian; Voss, Andreas

    2012-02-01

    In four experiments, task-switching processes were investigated with variants of the alternating runs paradigm and the explicit cueing paradigm. The classical diffusion model for binary decisions (Ratcliff, 1978) was used to dissociate different components of task-switching costs. Findings can be reconciled with the view that task-switching processes take place in successive phases as postulated by multiple-components models of task switching (e.g., Mayr & Kliegl, 2003; Ruthruff, Remington, & Johnston, 2001). At an earlier phase, task-set reconfiguration (Rogers & Monsell, 1995) or cue-encoding (Schneider & Logan, 2005) takes place, at a later phase, the response is selected in accord with constraints set in the first phase. Inertia effects (Allport, Styles, & Hsieh, 1994; Allport & Wylie, 2000) were shown to affect this later stage. Additionally, findings support the notion that response caution contributes to both global as well as to local switching costs when task switches are predictable.

  20. Semi-analytical model of cross-borehole flow experiments for fractured medium characterization

    NASA Astrophysics Data System (ADS)

    Roubinet, D.; Irving, J.; Day-Lewis, F. D.

    2014-12-01

    The study of fractured rocks is extremely important in a wide variety of research fields where the fractures and faults can represent either rapid access to some resource of interest or potential pathways for the migration of contaminants in the subsurface. Identification of their presence and determination of their properties are critical and challenging tasks that have led to numerous fracture characterization methods. Among these methods, cross-borehole flowmeter analysis aims to evaluate fracture connections and hydraulic properties from vertical-flow-velocity measurements conducted in one or more observation boreholes under forced hydraulic conditions. Previous studies have demonstrated that analysis of these data can provide important information on fracture connectivity, transmissivity, and storativity. Estimating these properties requires the development of analytical and/or numerical modeling tools that are well adapted to the complexity of the problem. Quantitative analysis of cross-borehole flowmeter experiments, in particular, requires modeling formulations that: (i) can be adapted to a variety of fracture and experimental configurations; (ii) can take into account interactions between the boreholes because their radii of influence may overlap; and (iii) can be readily cast into an inversion framework that allows for not only the estimation of fracture hydraulic properties, but also an assessment of estimation error. To this end, we present a new semi-analytical formulation for cross-borehole flow in fractured media that links transient vertical-flow velocities measured in one or a series of observation wells during hydraulic forcing to the transmissivity and storativity of the fractures intersected by these wells. Our model addresses the above needs and provides a flexible and computationally efficient semi-analytical framework having strong potential for future adaptation to more complex configurations. The proposed modeling approach is demonstrated

  1. Cancel and rethink in the Wason selection task: further evidence for the heuristic-analytic dual process theory.

    PubMed

    Wada, Kazushige; Nittono, Hiroshi

    2004-06-01

    The reasoning process in the Wason selection task was examined by measuring card inspection times in the letter-number and drinking-age problems. 24 students were asked to solve the problems presented on a computer screen. Only the card touched with a mouse pointer was visible, and the total exposure time of each card was measured. Participants were allowed to cancel their previous selections at any time. Although rethinking was encouraged, the cards once selected were rarely cancelled (10% of the total selections). Moreover, most of the cancelled cards were reselected (89% of the total cancellations). Consistent with previous findings, inspection times were longer for selected cards than for nonselected cards. These results suggest that card selections are determined largely by initial heuristic processes and rarely reversed by subsequent analytic processes. The present study gives further support for the heuristic-analytic dual process theory.

  2. University Macro Analytic Simulation Model.

    ERIC Educational Resources Information Center

    Baron, Robert; Gulko, Warren

    The University Macro Analytic Simulation System (UMASS) has been designed as a forecasting tool to help university administrators make budgeting decisions. Alternative budgeting strategies can be tested on a computer model, and an operational alternative can then be selected on the basis of the most desirable projected outcome. UMASS uses readily…

  3. Oral Motor Abilities Are Task Dependent: A Factor Analytic Approach to Performance Rate.

    PubMed

    Staiger, Anja; Schölderle, Theresa; Brendel, Bettina; Bötzel, Kai; Ziegler, Wolfram

    2017-01-01

    Measures of performance rates in speech-like or volitional nonspeech oral motor tasks are frequently used to draw inferences about articulation rate abnormalities in patients with neurologic movement disorders. The study objective was to investigate the structural relationship between rate measures of speech and of oral motor behaviors different from speech. A total of 130 patients with neurologic movement disorders and 130 healthy subjects participated in the study. Rate data was collected for oral reading (speech), rapid syllable repetition (speech-like), and rapid single articulator movements (nonspeech). The authors used factor analysis to determine whether the different rate variables reflect the same or distinct constructs. The behavioral data were most appropriately captured by a measurement model in which the different task types loaded onto separate latent variables. The data on oral motor performance rates show that speech tasks and oral motor tasks such as rapid syllable repetition or repetitive single articulator movements measure separate traits.

  4. 33 CFR 385.33 - Revisions to models and analytical tools.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Management District, and other non-Federal sponsors shall rely on the best available science including models..., and assessment of projects. The selection of models and analytical tools shall be done in consultation... system-wide simulation models and analytical tools used in the evaluation and assessment of projects, and...

  5. A simple, analytical, axisymmetric microburst model for downdraft estimation

    NASA Technical Reports Server (NTRS)

    Vicroy, Dan D.

    1991-01-01

    A simple analytical microburst model was developed for use in estimating vertical winds from horizontal wind measurements. It is an axisymmetric, steady-state model that uses shaping functions to satisfy the mass continuity equation and simulate boundary layer effects. The model is defined through four model variables: the radius and altitude of the maximum horizontal wind, a shaping function variable, and a scale factor. The model closely agrees with a high-fidelity analytical model and with measured data, particularly in the radial direction and at lower altitudes. At higher altitudes, the model tends to overestimate the wind magnitude relative to the measured data.

  6. Decomposing Task-Switching Costs with the Diffusion Model

    ERIC Educational Resources Information Center

    Schmitz, Florian; Voss, Andreas

    2012-01-01

    In four experiments, task-switching processes were investigated with variants of the alternating runs paradigm and the explicit cueing paradigm. The classical diffusion model for binary decisions (Ratcliff, 1978) was used to dissociate different components of task-switching costs. Findings can be reconciled with the view that task-switching…

  7. The Analytical Limits of Modeling Short Diffusion Timescales

    NASA Astrophysics Data System (ADS)

    Bradshaw, R. W.; Kent, A. J.

    2016-12-01

    Chemical and isotopic zoning in minerals is widely used to constrain the timescales of magmatic processes such as magma mixing and crystal residence via diffusion modeling. Forward modeling of diffusion relies on fitting diffusion profiles to measured compositional gradients. However, an individual measurement is essentially an average composition for a segment of the gradient defined by the spatial resolution of the analysis. Thus there is the potential for the analytical spatial resolution to limit the timescales that can be determined for an element of given diffusivity, particularly where the scale of the gradient approaches that of the measurement. Here we use a probabilistic modeling approach to investigate the effect of analytical spatial resolution on estimated timescales from diffusion modeling. Our method investigates how accurately the age of a synthetic diffusion profile can be obtained by modeling an "unknown" profile derived from discrete sampling of the synthetic compositional gradient at a given spatial resolution. We also include the effects of analytical uncertainty and the position of measurements relative to the diffusion gradient. We apply this method to the spatial resolutions of common microanalytical techniques (LA-ICP-MS, SIMS, EMP, NanoSIMS). Our results confirm that for a given diffusivity, higher spatial resolution gives access to shorter timescales, and that each analytical spacing has a minimum timescale, below which it overestimates the timescale. For example, for Ba diffusion in plagioclase at 750 °C timescales are accurate (within 20%) above 10, 100, 2,600, and 71,000 years at 0.3, 1, 5, and 25 μm spatial resolution, respectively. For Sr diffusion in plagioclase at 750 °C, timescales are accurate above 0.02, 0.2, 4, and 120 years at the same spatial resolutions. Our results highlight the importance of selecting appropriate analytical techniques to estimate accurate diffusion-based timescales.
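
    The resolution limit is easy to reproduce in miniature. The sketch below uses a placeholder diffusivity and a 50-year synthetic profile (not the study's values): the true profile is replaced by segment averages at a given analytical spacing, and the timescale is re-fit to those averages, degrading as the spacing approaches the profile width.

      import numpy as np
      from scipy.special import erfc
      from scipy.optimize import curve_fit

      D = 1e-19                       # m^2/s, hypothetical diffusivity
      YEAR = 3.15e7                   # s
      true_t = 50 * YEAR

      def profile(x, t):
          return 0.5 * erfc(x / (2.0 * np.sqrt(D * t)))

      def sampled(spacing, t, half_width=100e-6, sub=50):
          centers = np.arange(-half_width, half_width, spacing)
          # each "measurement" averages the profile over one beam-sized segment
          y = [profile(np.linspace(c - spacing / 2, c + spacing / 2, sub), t).mean()
               for c in centers]
          return centers, np.array(y)

      for spacing in (0.3e-6, 1e-6, 5e-6, 25e-6):   # m; cf. the techniques above
          x, y = sampled(spacing, true_t)
          fit, _ = curve_fit(profile, x, y, p0=[1e9])
          print(f"{spacing * 1e6:5.1f} um spacing -> {fit[0] / YEAR:7.1f} yr (true 50)")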

  8. Note: Model identification and analysis of bivalent analyte surface plasmon resonance data.

    PubMed

    Tiwari, Purushottam Babu; Üren, Aykut; He, Jin; Darici, Yesim; Wang, Xuewen

    2015-10-01

    Surface plasmon resonance (SPR) is a widely used, affinity-based, label-free biophysical technique to investigate biomolecular interactions. The extraction of rate constants requires accurate identification of the particular binding model. The bivalent analyte model involves coupled non-linear differential equations. No clear procedure to identify the bivalent analyte mechanism has been established. In this report, we propose a unique signature for the bivalent analyte model. This signature can be used to distinguish the bivalent analyte model from other biphasic models. The proposed method is demonstrated using experimentally measured SPR sensorgrams.
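
    For reference, the bivalent analyte mechanism, A + B <-> AB followed by AB + B <-> AB2, reduces to two coupled rate equations for the singly and doubly bound species. A minimal integration with illustrative rate constants (not values fitted in the note) produces a synthetic biphasic sensorgram:

      import numpy as np
      from scipy.integrate import solve_ivp

      ka1, kd1 = 1e5, 1e-2        # 1/(M*s), 1/s  (assumed)
      ka2, kd2 = 1e-4, 1e-3       # 1/(RU*s), 1/s (assumed)
      Bmax, conc = 100.0, 50e-9   # surface capacity (RU), analyte (M)

      def rhs(t, y):
          ab, ab2 = y
          b_free = Bmax - ab - 2.0 * ab2          # each AB2 occupies two sites
          c = conc if t < 120.0 else 0.0          # association, then buffer wash
          dab = ka1 * c * b_free - kd1 * ab - ka2 * ab * b_free + kd2 * ab2
          dab2 = ka2 * ab * b_free - kd2 * ab2
          return [dab, dab2]

      sol = solve_ivp(rhs, (0.0, 300.0), [0.0, 0.0], max_step=0.5)
      response = sol.y[0] + sol.y[1]              # total bound signal
      print(f"peak response ~ {response.max():.1f} RU")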

  9. Interactive Visual Analytics Approach for Exploration of Geochemical Model Simulations with Different Parameter Sets

    NASA Astrophysics Data System (ADS)

    Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris

    2015-04-01

    Many geoscience applications can benefit from testing many combinations of input parameters for geochemical simulation models. It is, however, a challenge to screen the input and output data from the model to identify the significant relationships between input parameters and output variables. To address this problem, we propose a Visual Analytics approach that has been developed in an ongoing collaboration between computer science and geoscience researchers. Our Visual Analytics approach uses visualization methods of hierarchical horizontal axis, multi-factor stacked bar charts and interactive semi-automated filtering for input and output data together with automatic sensitivity analysis. This guides the users towards significant relationships. We implement our approach as an interactive data exploration tool. It is designed with flexibility in mind, so that a diverse set of tasks such as inverse modeling, sensitivity analysis and model parameter refinement can be supported. Here we demonstrate the capabilities of our approach with two examples for gas storage applications. For the first example our Visual Analytics approach enabled the analyst to observe how the element concentrations change around previously established baselines in response to thousands of different combinations of mineral phases. This supported combinatorial inverse modeling for interpreting observations about the chemical composition of the formation fluids at the Ketzin pilot site for CO2 storage. The results indicate that, within the experimental error range, the formation fluid cannot be considered at local thermodynamic equilibrium with the mineral assemblage of the reservoir rock. This is a valuable insight from the predictive geochemical modeling for the Ketzin site. For the second example our approach supports sensitivity analysis for a reaction involving the reductive dissolution of pyrite with formation of pyrrhotite in the presence of gaseous hydrogen. We determine that this reaction

  10. Let's Go Off the Grid: Subsurface Flow Modeling With Analytic Elements

    NASA Astrophysics Data System (ADS)

    Bakker, M.

    2017-12-01

    Subsurface flow modeling with analytic elements has the major advantage that no grid or time stepping is needed. Analytic element formulations exist for steady state and transient flow in layered aquifers and unsaturated flow in the vadose zone. Analytic element models are vector-based and consist of points, lines and curves that represent specific features in the subsurface. Recent advances allow for the simulation of partially penetrating wells and multi-aquifer wells, including skin effect and wellbore storage, horizontal wells of poly-line shape including skin effect, sharp changes in subsurface properties, and surface water features with leaky beds. Input files for analytic element models are simple, short and readable, and can easily be generated from, for example, GIS databases. Future plans include the incorporation of analytic elements into parts of grid-based models where additional detail is needed. This presentation will give an overview of advanced flow features that can be modeled, many of which are implemented in free and open-source software.
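
    The gridless character of the method is easy to demonstrate: a model is a superposition of closed-form elements, and the head can be evaluated directly at any point. A toy confined-aquifer sketch with one uniform-flow element and two wells, all values hypothetical:

      import numpy as np

      T = 100.0                        # transmissivity, m^2/d (assumed)
      Q0 = 0.5                         # uniform flow, m^2/d per unit width
      wells = [((0.0, 0.0), 500.0),    # ((x, y), pumping rate in m^3/d)
               ((300.0, 100.0), 250.0)]

      def head(x, y, ref_head=50.0):
          phi = -Q0 * x                # uniform-flow discharge potential
          for (xw, yw), Q in wells:
              r = np.hypot(x - xw, y - yw)
              phi -= Q / (2.0 * np.pi) * np.log(np.maximum(r, 0.1))  # well element
          return ref_head + phi / T

      print(f"head at (100, 50): {head(100.0, 50.0):.2f} m")  # no grid, no time steps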

  11. Analytical Chemistry in Russia.

    PubMed

    Zolotov, Yuri

    2016-09-06

    Research in Russian analytical chemistry (AC) is carried out on a significant scale, and the analytical service solves practical tasks in geological survey, environmental protection, medicine, industry, agriculture, etc. The education system trains highly skilled professionals in AC. The development, and especially the manufacturing, of analytical instruments should be improved; even so, there are several good domestic instruments, and others satisfy some requirements. Russian AC has rather good historical roots.

  12. Empirical testing of an analytical model predicting electrical isolation of photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Garcia, A., III; Minning, C. P.; Cuddihy, E. F.

    A major design requirement for photovoltaic modules is that the encapsulation system be capable of withstanding large DC potentials without electrical breakdown. Presented is a simple analytical model which can be used to estimate material thickness to meet this requirement for a candidate encapsulation system or to predict the breakdown voltage of an existing module design. A series of electrical tests to verify the model are described in detail. The results of these verification tests confirmed the utility of the analytical model for preliminary design of photovoltaic modules.

  13. NCI-FDA Interagency Oncology Task Force Workshop Provides Guidance for Analytical Validation of Protein-based Multiplex Assays | Office of Cancer Clinical Proteomics Research

    Cancer.gov

    An NCI-FDA Interagency Oncology Task Force (IOTF) Molecular Diagnostics Workshop was held on October 30, 2008 in Cambridge, MA, to discuss requirements for analytical validation of protein-based multiplex technologies in the context of their intended use. This workshop, developed through NCI's Clinical Proteomic Technologies for Cancer initiative and the FDA, focused on technology-specific analytical validation processes to be addressed prior to use in clinical settings. To make the workshop unique, a case study approach was used to discuss issues related to

  14. Analytical formulation of cellular automata rules using data models

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger M.; Handley, James W.

    2009-05-01

    We present a unique method for converting traditional cellular automata (CA) rules into analytical function form. CA rules have been successfully used for morphological image processing and volumetric shape recognition and classification. Further, the use of CA rules as analog models to the physical and biological sciences can be significantly extended if analytical (as opposed to discrete) models could be formulated. We show that such transformations are possible. We use as our example John Horton Conway's famous "Game of Life" rule set. We show that using Data Modeling, we are able to derive both polynomial and bi-spectrum models of the IF-THEN rules that yield equivalent results. Further, we demonstrate that the "Game of Life" rule set can be modeled using the multi-fluxion, yielding a closed form nth order derivative and integral. All of the demonstrated analytical forms of the CA rule are general and applicable to real-time use.
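
    For comparison with the closed-form models the record describes, the discrete IF-THEN version of Conway's rule set fits in a few lines when the neighbor count is written as a convolution; the polynomial and bi-spectrum forms themselves are not reproduced here.

      import numpy as np
      from scipy.signal import convolve2d

      KERNEL = np.array([[1, 1, 1],
                         [1, 0, 1],
                         [1, 1, 1]])

      def step(grid):
          n = convolve2d(grid, KERNEL, mode="same", boundary="wrap")
          # B3/S23: a cell is born with 3 neighbors, survives with 2 or 3
          return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

      glider = np.zeros((10, 10), dtype=int)
      for i, j in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:
          glider[i, j] = 1
      for _ in range(4):
          glider = step(glider)
      print(glider.sum())    # still 5 live cells: the glider has simply moved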

  15. Analytic barrage attack model. Final report, January 1986-January 1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    St Ledger, J.W.; Naegeli, R.E.; Dowden, N.A.

    An analytic model is developed for a nuclear barrage attack, assuming weapons with no aiming error and a cookie-cutter damage function. The model is then extended with approximations for the effects of aiming error and distance damage sigma. The final result is a fast-running model which calculates probability of damage for a barrage attack. The probability of damage is accurate to within seven percent or better, for weapon reliabilities of 50 to 100 percent, distance damage sigmas of 0.5 or less, and zero to very large circular error probabilities. FORTRAN 77 coding is included in the report for the analytic model and for a numerical model used to check the analytic results.
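
    The structure of the calculation is simple to cross-check with a Monte Carlo stand-in: a cookie-cutter damage function, a regular barrage pattern, and aiming error introduced through the CEP. All numbers below are illustrative, not the report's FORTRAN 77 model.

      import numpy as np

      rng = np.random.default_rng(1)
      lethal_r = 1.0                            # cookie-cutter damage radius
      sigma = 0.5 / 1.1774                      # CEP of 0.5 -> Rayleigh sigma
      aim = np.array([(i, j) for i in (-1.5, 0.0, 1.5) for j in (-1.5, 0.0, 1.5)])

      trials, hits = 20000, 0
      for _ in range(trials):
          target = rng.uniform(-2.5, 2.5, size=2)            # target in barrage box
          impacts = aim + rng.normal(0.0, sigma, size=aim.shape)
          if (np.linalg.norm(impacts - target, axis=1) < lethal_r).any():
              hits += 1
      print(f"P(damage) ~ {hits / trials:.3f}")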

  16. Analytical methodology for determination of helicopter IFR precision approach requirements. [pilot workload and acceptance level

    NASA Technical Reports Server (NTRS)

    Phatak, A. V.

    1980-01-01

    A systematic analytical approach to the determination of helicopter IFR precision approach requirements is formulated. The approach is based upon the hypothesis that pilot acceptance level or opinion rating of a given system is inversely related to the degree of pilot involvement in the control task. A nonlinear simulation of the helicopter approach to landing task incorporating appropriate models for UH-1H aircraft, the environmental disturbances and the human pilot was developed as a tool for evaluating the pilot acceptance hypothesis. The simulated pilot model is generic in nature and includes analytical representation of the human information acquisition, processing, and control strategies. Simulation analyses in the flight director mode indicate that the pilot model used is reasonable. Results of the simulation are used to identify candidate pilot workload metrics and to test the well-known performance-workload relationship. A pilot acceptance analytical methodology is formulated as a basis for further investigation, development and validation.

  17. Significance Testing in Confirmatory Factor Analytic Models.

    ERIC Educational Resources Information Center

    Khattab, Ali-Maher; Hocevar, Dennis

    Traditionally, confirmatory factor analytic models are tested against a null model of total independence. Using randomly generated factors in a matrix of 46 aptitude tests, this approach is shown to be unlikely to reject even random factors. An alternative null model, based on a single general factor, is suggested. In addition, an index of model…

  18. Role of optimization in the human dynamics of task execution

    NASA Astrophysics Data System (ADS)

    Cajueiro, Daniel O.; Maldonado, Wilfredo L.

    2008-03-01

    In order to explain the empirical evidence that the dynamics of human activity may not be well modeled by Poisson processes, a model based on queuing processes was built in the literature [A. L. Barabasi, Nature (London) 435, 207 (2005)]. The main assumption behind that model is that people execute their tasks based on a protocol that first executes the high priority item. In this context, the purpose of this paper is to analyze the validity of that hypothesis assuming that people are rational agents that make their decisions in order to minimize the cost of keeping nonexecuted tasks on the list. Therefore, we build and analytically solve a dynamic programming model with two priority types of tasks and show that the validity of this hypothesis depends strongly on the structure of the instantaneous costs that a person has to face if a given task is kept on the list for more than one period. Moreover, one interesting finding is that in one of the situations the protocol used to execute the tasks generates complex one-dimensional dynamics.
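
    The protocol under discussion takes only a few lines to simulate. In the sketch below (parameters invented), the highest-priority task on a fixed-length list is executed with probability p and a random one otherwise; as p approaches 1, most tasks are served almost immediately while a few wait very long, giving the heavy-tailed waiting times at issue.

      import numpy as np

      rng = np.random.default_rng(0)
      L, steps, p = 10, 100000, 0.999
      priority = rng.random(L)
      age = np.zeros(L, dtype=int)
      waits = []
      for _ in range(steps):
          k = int(np.argmax(priority)) if rng.random() < p else int(rng.integers(L))
          waits.append(age[k])
          priority[k] = rng.random()     # executed task replaced by a fresh one
          age += 1
          age[k] = 0
      waits = np.asarray(waits)
      print("mean wait:", waits.mean(), " 99.9th pct:", np.percentile(waits, 99.9))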

  19. Modeling task-specific neuronal ensembles improves decoding of grasp

    NASA Astrophysics Data System (ADS)

    Smith, Ryan J.; Soares, Alcimar B.; Rouse, Adam G.; Schieber, Marc H.; Thakor, Nitish V.

    2018-06-01

    Objective. Dexterous movement involves the activation and coordination of networks of neuronal populations across multiple cortical regions. Attempts to model firing of individual neurons commonly treat the firing rate as directly modulating with motor behavior. However, motor behavior may additionally be associated with modulations in the activity and functional connectivity of neurons in a broader ensemble. Accounting for variations in neural ensemble connectivity may provide additional information about the behavior being performed. Approach. In this study, we examined neural ensemble activity in primary motor cortex (M1) and premotor cortex (PM) of two male rhesus monkeys during performance of a center-out reach, grasp and manipulate task. We constructed point process encoding models of neuronal firing that incorporated task-specific variations in the baseline firing rate as well as variations in functional connectivity with the neural ensemble. Models were evaluated both in terms of their encoding capabilities and their ability to properly classify the grasp being performed. Main results. Task-specific ensemble models correctly predicted the performed grasp with over 95% accuracy and were shown to outperform models of neuronal activity that assume only a variable baseline firing rate. Task-specific ensemble models exhibited superior decoding performance in 82% of units in both monkeys (p < 0.01). Inclusion of ensemble activity also broadly improved the ability of models to describe observed spiking. Encoding performance of task-specific ensemble models, measured by spike timing predictability, improved upon baseline models in 62% of units. Significance. These results suggest that additional discriminative information about motor behavior found in the variations in functional connectivity of neuronal ensembles located in motor-related cortical regions is relevant to decode complex tasks such as grasping objects, and may serve as the basis for more

  20. On the Modeling and Management of Cloud Data Analytics

    NASA Astrophysics Data System (ADS)

    Castillo, Claris; Tantawi, Asser; Steinder, Malgorzata; Pacifici, Giovanni

    A new era is dawning where vast amounts of data are subjected to intensive analysis in a cloud computing environment. Over the years, data about a myriad of things, ranging from user clicks to galaxies, have been accumulated, and continue to be collected, on storage media. The increasing availability of such data, along with the abundant supply of compute power and the urge to create useful knowledge, gave rise to a new data analytics paradigm in which data is subjected to intensive analysis, and additional data is created in the process. Meanwhile, a new cloud computing environment has emerged where seemingly limitless compute and storage resources are being provided to host computation and data for multiple users through virtualization technologies. Such a cloud environment is becoming the home for data analytics. Consequently, providing good performance at run-time to data analytics workload is an important issue for cloud management. In this paper, we provide an overview of the data analytics and cloud environment landscapes, and investigate the performance management issues related to running data analytics in the cloud. In particular, we focus on topics such as workload characterization, profiling analytics applications and their pattern of data usage, cloud resource allocation, placement of computation and data and their dynamic migration in the cloud, and performance prediction. In solving such management problems one relies on various run-time analytic models. We discuss approaches for modeling and optimizing the dynamic data analytics workload in the cloud environment. All along, we use the Map-Reduce paradigm as an illustration of data analytics.
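
    Since Map-Reduce serves as the paper's running illustration, a toy rendering of the paradigm makes the workload structure concrete: a word count whose map and reduce stages are explicit.

      from collections import Counter
      from functools import reduce

      docs = ["cloud data analytics", "data analytics in the cloud"]

      mapped = [Counter(doc.split()) for doc in docs]         # map: doc -> counts
      total = reduce(lambda a, b: a + b, mapped, Counter())   # reduce: merge counts
      print(total.most_common(3))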

  1. Strategy generalization across orientation tasks: testing a computational cognitive model.

    PubMed

    Gunzelmann, Glenn

    2008-07-08

    Humans use their spatial information processing abilities flexibly to facilitate problem solving and decision making in a variety of tasks. This article explores the question of whether a general strategy can be adapted for performing two different spatial orientation tasks by testing the predictions of a computational cognitive model. Human performance was measured on an orientation task requiring participants to identify the location of a target either on a map (find-on-map) or within an egocentric view of a space (find-in-scene). A general strategy instantiated in a computational cognitive model of the find-on-map task, based on the results from Gunzelmann and Anderson (2006), was adapted to perform both tasks and used to generate performance predictions for a new study. The qualitative fit of the model to the human data supports the view that participants were able to tailor a general strategy to the requirements of particular spatial tasks. The quantitative differences between the predictions of the model and the performance of human participants in the new experiment expose individual differences in sample populations. The model provides a means of accounting for those differences and a framework for understanding how human spatial abilities are applied to naturalistic spatial tasks that involve reasoning with maps. 2008 Cognitive Science Society, Inc.

  2. An Integrated Model of Cognitive Control in Task Switching

    ERIC Educational Resources Information Center

    Altmann, Erik M.; Gray, Wayne D.

    2008-01-01

    A model of cognitive control in task switching is developed in which controlled performance depends on the system maintaining access to a code in episodic memory representing the most recently cued task. The main constraint on access to the current task code is proactive interference from old task codes. This interference and the mechanisms that…

  3. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control.

    PubMed

    Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob

    2017-02-08

    Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant's intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms.
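
    The hybrid scheme itself can be sketched in a few lines: predict with the analytical model, learn only the residual between measurement and prediction, and sum the two at run time. The plant, its unmodeled friction term, and the data below are synthetic stand-ins, not the paper's robots.

      import numpy as np

      rng = np.random.default_rng(0)
      q = rng.uniform(-2.0, 2.0, 200)                 # joint velocity samples

      def analytical_torque(q):
          return 3.0 * q                              # idealized viscous model

      # "measurements" include an unmodeled friction term plus noise
      measured = 3.0 * q + 0.8 * np.tanh(5.0 * q) + 0.05 * rng.normal(size=q.size)
      residual = measured - analytical_torque(q)

      coef = np.polyfit(q, residual, deg=7)           # learned error model

      def hybrid_torque(q):
          return analytical_torque(q) + np.polyval(coef, q)

      test = np.linspace(-2.0, 2.0, 5)
      true = 3.0 * test + 0.8 * np.tanh(5.0 * test)
      print(np.round(hybrid_torque(test) - true, 3))  # small residual errors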

  4. Task Delegation Based Access Control Models for Workflow Systems

    NASA Astrophysics Data System (ADS)

    Gaaloul, Khaled; Charoy, François

    e-Government organisations are facilitated and conducted using workflow management systems. Role-based access control (RBAC) is recognised as an efficient access control model for large organisations. The application of RBAC in workflow systems cannot, however, grant permissions to users dynamically while business processes are being executed. We currently observe a move away from predefined strict workflow modelling towards approaches supporting flexibility on the organisational level. One specific approach is that of task delegation. Task delegation is a mechanism that supports organisational flexibility, and ensures delegation of authority in access control systems. In this paper, we propose a Task-oriented Access Control (TAC) model based on RBAC to address these requirements. We aim to reason about tasks from organisational and resource perspectives to analyse and specify authorisation constraints. Moreover, we present a fine-grained access control protocol to support delegation based on the TAC model.
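
    A toy rendering of the idea, with class and method names invented for illustration: permissions normally derive from roles, but a task's owner may delegate a task-scoped permission to another user, which plain RBAC cannot express.

      class TACModel:
          def __init__(self, role_perms, user_roles):
              self.role_perms = role_perms     # role -> set of permissions
              self.user_roles = user_roles     # user -> set of roles
              self.delegations = {}            # (user, task) -> permission

          def can(self, user, perm, task=None):
              via_role = any(perm in self.role_perms.get(r, set())
                             for r in self.user_roles.get(user, set()))
              return via_role or self.delegations.get((user, task)) == perm

          def delegate(self, owner, delegatee, task, perm):
              assert self.can(owner, perm, task), "owner lacks the permission"
              self.delegations[(delegatee, task)] = perm   # task-scoped grant

      m = TACModel({"clerk": {"review"}}, {"alice": {"clerk"}, "bob": set()})
      m.delegate("alice", "bob", task="claim-42", perm="review")
      print(m.can("bob", "review", task="claim-42"))       # True, via delegation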

  5. Quantum decay model with exact explicit analytical solution

    NASA Astrophysics Data System (ADS)

    Marchewka, Avi; Granot, Er'El

    2009-01-01

    A simple decay model is introduced. The model comprises a point potential well, which experiences an abrupt change. Due to the temporal variation, the initial quantum state can either escape from the well or stay localized as a new bound state. The model allows for an exact analytical solution while having the necessary features of a decay process. The results show that the decay is never exponential, as classical dynamics predicts. Moreover, at short times the decay has a fractional power law, which differs from perturbation quantum method predictions. At long times the decay includes oscillations with an envelope that decays algebraically. This is a model where the final state can be either continuous or localized, and that has an exact analytical solution.

  6. Visual Analytics 101

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholtz, Jean; Burtner, Edwin R.; Cook, Kristin A.

    This course will introduce the field of Visual Analytics to HCI researchers and practitioners, highlighting the contributions they can make to this field. Topics will include a definition of visual analytics along with examples of current systems, types of tasks and end users, issues in defining user requirements, design of visualizations and interactions, guidelines and heuristics, the current state of user-centered evaluations, and metrics for evaluation. We encourage designers, HCI researchers, and HCI practitioners to attend to learn how their skills can contribute to advancing the state of the art of visual analytics.

  7. Analytical halo model of galactic conformity

    NASA Astrophysics Data System (ADS)

    Pahwa, Isha; Paranjape, Aseem

    2017-09-01

    We present a fully analytical halo model of colour-dependent clustering that incorporates the effects of galactic conformity in a halo occupation distribution framework. The model, based on our previous numerical work, describes conformity through a correlation between the colour of a galaxy and the concentration of its parent halo, leading to a correlation between central and satellite galaxy colours at fixed halo mass. The strength of the correlation is set by a tunable 'group quenching efficiency', and the model can separately describe group-level correlations between galaxy colour (1-halo conformity) and large-scale correlations induced by assembly bias (2-halo conformity). We validate our analytical results using clustering measurements in mock galaxy catalogues, finding that the model is accurate at the 10-20 per cent level for a wide range of luminosities and length-scales. We apply the formalism to interpret the colour-dependent clustering of galaxies in the Sloan Digital Sky Survey (SDSS). We find good overall agreement between the data and a model that has 1-halo conformity at a level consistent with previous results based on an SDSS group catalogue, although the clustering data require satellites to be redder than suggested by the group catalogue. Within our modelling uncertainties, however, we do not find strong evidence of 2-halo conformity driven by assembly bias in SDSS clustering.

  8. Improved partition equilibrium model for predicting analyte response in electrospray ionization mass spectrometry.

    PubMed

    Du, Lihong; White, Robert L

    2009-02-01

    A previously proposed partition equilibrium model for quantitative prediction of analyte response in electrospray ionization mass spectrometry is modified to yield an improved linear relationship. Analyte mass spectrometer response is modeled by a competition mechanism between analyte and background electrolytes that is based on partition equilibrium considerations. The correlation between analyte response and solution composition is described by the linear model over a wide concentration range and the improved model is shown to be valid for a wide range of experimental conditions. The behavior of an analyte in a salt solution, which could not be explained by the original model, is correctly predicted. The ion suppression effects of 16:0 lysophosphatidylcholine (LPC) on analyte signals are attributed to a combination of competition for excess charge and reduction of total charge due to surface tension effects. In contrast to the complicated mathematical forms that comprise the original model, the simplified model described here can more easily be employed to predict analyte mass spectrometer responses for solutions containing multiple components. Copyright (c) 2008 John Wiley & Sons, Ltd.

  9. Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex.

    PubMed

    Ulloa, Antonio; Horwitz, Barry

    2016-01-01

    A number of recent efforts have used large-scale, biologically realistic, neural models to help understand the neural basis for the patterns of activity observed in both resting state and task-related functional neural imaging data. An example of the former is The Virtual Brain (TVB) software platform, which allows one to apply large-scale neural modeling in a whole brain framework. TVB provides a set of structural connectomes of the human cerebral cortex, a collection of neural processing units for each connectome node, and various forward models that can convert simulated neural activity into a variety of functional brain imaging signals. In this paper, we demonstrate how to embed a previously or newly constructed task-based large-scale neural model into the TVB platform. We tested our method on a previously constructed large-scale neural model (LSNM) of visual object processing that consisted of interconnected neural populations that represent, primary and secondary visual, inferotemporal, and prefrontal cortex. Some neural elements in the original model were "non-task-specific" (NS) neurons that served as noise generators to "task-specific" neurons that processed shapes during a delayed match-to-sample (DMS) task. We replaced the NS neurons with an anatomical TVB connectome model of the cerebral cortex comprising 998 regions of interest interconnected by white matter fiber tract weights. We embedded our LSNM of visual object processing into corresponding nodes within the TVB connectome. Reciprocal connections between TVB nodes and our task-based modules were included in this framework. We ran visual object processing simulations and showed that the TVB simulator successfully replaced the noise generation originally provided by NS neurons; i.e., the DMS tasks performed with the hybrid LSNM/TVB simulator generated equivalent neural and fMRI activity to that of the original task-based models. Additionally, we found partial agreement between the functional

  10. Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex

    PubMed Central

    Ulloa, Antonio; Horwitz, Barry

    2016-01-01

    A number of recent efforts have used large-scale, biologically realistic, neural models to help understand the neural basis for the patterns of activity observed in both resting state and task-related functional neural imaging data. An example of the former is The Virtual Brain (TVB) software platform, which allows one to apply large-scale neural modeling in a whole brain framework. TVB provides a set of structural connectomes of the human cerebral cortex, a collection of neural processing units for each connectome node, and various forward models that can convert simulated neural activity into a variety of functional brain imaging signals. In this paper, we demonstrate how to embed a previously or newly constructed task-based large-scale neural model into the TVB platform. We tested our method on a previously constructed large-scale neural model (LSNM) of visual object processing that consisted of interconnected neural populations that represent, primary and secondary visual, inferotemporal, and prefrontal cortex. Some neural elements in the original model were “non-task-specific” (NS) neurons that served as noise generators to “task-specific” neurons that processed shapes during a delayed match-to-sample (DMS) task. We replaced the NS neurons with an anatomical TVB connectome model of the cerebral cortex comprising 998 regions of interest interconnected by white matter fiber tract weights. We embedded our LSNM of visual object processing into corresponding nodes within the TVB connectome. Reciprocal connections between TVB nodes and our task-based modules were included in this framework. We ran visual object processing simulations and showed that the TVB simulator successfully replaced the noise generation originally provided by NS neurons; i.e., the DMS tasks performed with the hybrid LSNM/TVB simulator generated equivalent neural and fMRI activity to that of the original task-based models. Additionally, we found partial agreement between the functional

  11. Evaluating child welfare policies with decision-analytic simulation models.

    PubMed

    Goldhaber-Fiebert, Jeremy D; Bailey, Stephanie L; Hurlburt, Michael S; Zhang, Jinjin; Snowden, Lonnie R; Wulczyn, Fred; Landsverk, John; Horwitz, Sarah M

    2012-11-01

    The objective was to demonstrate decision-analytic modeling in support of Child Welfare policymakers considering implementing evidence-based interventions. Outcomes included permanency (e.g., adoptions) and stability (e.g., foster placement changes). Analyses of a randomized trial of KEEP, a foster parenting intervention, and of NSCAW-1 estimated placement change rates and KEEP's effects. A microsimulation model generalized these findings to other Child Welfare systems. The model projected that KEEP could increase permanency and stability, identifying strategies targeting higher-risk children and geographical regions that achieve benefits efficiently. Decision-analytic models enable planners to gauge the value of potential implementations.

  12. Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity

    PubMed Central

    Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.

    2010-01-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183

  13. Piezoresistive Cantilever Performance-Part I: Analytical Model for Sensitivity.

    PubMed

    Park, Sung-Jin; Doll, Joseph C; Pruitt, Beth L

    2010-02-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors.

  14. VAST Challenge 2016: Streaming Visual Analytics

    DTIC Science & Technology

    2016-10-25

    understand rapidly evolving situations. To support such tasks, visual analytics solutions must move well beyond systems that simply provide real-time...received. Mini-Challenge 1: Design Challenge Mini-Challenge 1 focused on systems to support security and operational analytics at the Euybia...Challenge 1 was to solicit novel approaches for streaming visual analytics that push the boundaries for what constitutes a visual analytics system, and to

  15. Improving a complex finite-difference ground water flow model through the use of an analytic element screening model

    USGS Publications Warehouse

    Hunt, R.J.; Anderson, M.P.; Kelson, V.A.

    1998-01-01

    This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.

  16. Determining passive cooling limits in CPV using an analytical thermal model

    NASA Astrophysics Data System (ADS)

    Gualdi, Federico; Arenas, Osvaldo; Vossier, Alexis; Dollet, Alain; Aimez, Vincent; Arès, Richard

    2013-09-01

    We propose an original thermal analytical model aiming to predict the practical limits of passive cooling systems for high concentration photovoltaic modules. The analytical model is described and validated by comparison with a commercial 3D finite element model. The limiting performances of flat plate cooling systems in natural convection are then derived and discussed.
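
    A back-of-envelope version of the question shows why such limits arise: with conduction and convection resistances in series, the cell temperature rise is q(R_cond + R_conv), and natural convection dominates. The numbers below are assumed round values, not the paper's model.

      q = 40.0                       # W of waste heat per cell (assumed)
      t, k = 3e-3, 200.0             # spreader thickness (m), conductivity (W/m/K)
      A_cond, A_conv = 1e-4, 4e-2    # cell area, plate area exposed to air (m^2)
      h = 8.0                        # W/m^2/K, assumed natural-convection value

      R_cond = t / (k * A_cond)      # conduction through the spreader
      R_conv = 1.0 / (h * A_conv)    # natural convection to ambient
      dT = q * (R_cond + R_conv)
      print(f"cell runs ~{dT:.0f} K above ambient")   # convection term dominates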

  17. Climate Analytics as a Service. Chapter 11

    NASA Technical Reports Server (NTRS)

    Schnase, John L.

    2016-01-01

    Exascale computing, big data, and cloud computing are driving the evolution of large-scale information systems toward a model of data-proximal analysis. In response, we are developing a concept of climate analytics as a service (CAaaS) that represents a convergence of data analytics and archive management. With this approach, high-performance compute-storage implemented as an analytic system is part of a dynamic archive comprising both static and computationally realized objects. It is a system whose capabilities are framed as behaviors over a static data collection, but where queries cause results to be created, not found and retrieved. Those results can be the product of a complex analysis, but, importantly, they also can be tailored responses to the simplest of requests. NASA's MERRA Analytic Service and associated Climate Data Services API provide a real-world example of climate analytics delivered as a service in this way. Our experiences reveal several advantages to this approach, not the least of which is orders-of-magnitude time reduction in the data assembly task common to many scientific workflows.

  18. SINGLE PHASE ANALYTICAL MODELS FOR TERRY TURBINE NOZZLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Haihua; Zhang, Hongbin; Zou, Ling

    All BWR RCIC (Reactor Core Isolation Cooling) systems and PWR AFW (Auxiliary Feed Water) systems use the Terry turbine, which is composed of the wheel with turbine buckets and several groups of fixed nozzles and reversing chambers inside the turbine casing. The inlet steam is accelerated through the turbine nozzle and impacts on the wheel buckets, generating work to drive the RCIC pump. As part of the efforts to understand the unexpected “self-regulating” mode of the RCIC systems in the Fukushima accidents and to extend the BWR RCIC and PWR AFW operational range and flexibility, mechanistic models for the Terry turbine, based on Sandia National Laboratories' original work, have been developed and implemented in the RELAP-7 code to simulate the RCIC system. RELAP-7 is a new reactor system code currently under development with funding support from the U.S. Department of Energy. The RELAP-7 code is a fully implicit code and the preconditioned Jacobian-free Newton-Krylov (JFNK) method is used to solve the discretized nonlinear system. This paper presents a set of analytical models for simulating the flow through the Terry turbine nozzles when the inlet fluid is pure steam. The implementation of the models into RELAP-7 will be briefly discussed. In the Sandia model, the turbine bucket inlet velocity is provided according to a reduced-order model, which was obtained from a large number of CFD simulations. In this work, we propose an alternative method, using an under-expanded jet model to obtain the velocity and thermodynamic conditions for the turbine bucket inlet. The models include both the adiabatic expansion process inside the nozzle and the free expansion process out of the nozzle to reach the ambient pressure. The combined models are able to predict the steam mass flow rate and supersonic velocity to the Terry turbine bucket entrance, which are the necessary input conditions for the Terry turbine rotor model. The nozzle analytical models were validated with experimental data
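
    The flavor of such nozzle relations can be sketched with ideal-gas isentropic formulas: a choked throat fixes the mass flow, and expansion to ambient pressure fixes the jet velocity delivered to the buckets. Treating steam as an ideal gas with round-number properties and geometry is an assumption of this sketch, not the RELAP-7 implementation.

      import numpy as np

      gamma, R = 1.3, 461.5          # steam as ideal gas (assumed)
      p0, T0 = 7.0e6, 559.0          # stagnation pressure (Pa) and temperature (K)
      A_throat = 1.0e-4              # throat area, m^2 (assumed)
      p_amb = 1.0e5                  # ambient pressure, Pa

      # choked (sonic) mass flow at the throat
      mdot = (A_throat * p0 * np.sqrt(gamma / (R * T0))
              * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))))

      # velocity after isentropic expansion down to ambient pressure
      cp = gamma * R / (gamma - 1.0)
      v_exit = np.sqrt(2.0 * cp * T0 * (1.0 - (p_amb / p0) ** ((gamma - 1.0) / gamma)))
      print(f"mdot ~ {mdot:.2f} kg/s, jet velocity ~ {v_exit:.0f} m/s (supersonic)")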

  19. A simple analytical aerodynamic model of Langley Winged-Cone Aerospace Plane concept

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.

    1994-01-01

    A simple three-DOF analytical aerodynamic model of the Langley Winged-Cone Aerospace Plane concept is presented in a form suitable for simulation, trajectory optimization, and guidance and control studies. The analytical model is especially suitable for methods based on variational calculus. Analytical expressions are presented for lift, drag, and pitching moment coefficients from subsonic to hypersonic Mach numbers and angles of attack up to +/- 20 deg. This analytical model has break points at Mach numbers of 1.0, 1.4, 4.0, and 6.0. Across these Mach number break points, the lift, drag, and pitching moment coefficients are made continuous but their derivatives are not. There are no break points in angle of attack. The effect of control surface deflection is not considered. The present analytical model compares well with the APAS calculations and wind tunnel test data for most angles of attack and Mach numbers.
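
    Coefficients that are continuous in Mach number but have derivative breaks at M = 1.0, 1.4, 4.0, and 6.0 are exactly what piecewise-linear interpolation through anchor points produces. The anchor values below are invented placeholders, not the APAS data.

      import numpy as np

      mach_pts = np.array([0.3, 1.0, 1.4, 4.0, 6.0, 10.0])
      cl_alpha_pts = np.array([2.8, 3.6, 3.1, 1.6, 1.2, 1.0])  # per rad, hypothetical

      def cl_alpha(mach):
          # continuous everywhere; the slope jumps at the break points
          return np.interp(mach, mach_pts, cl_alpha_pts)

      for m in (0.8, 1.0, 1.2, 2.0, 5.0, 8.0):
          print(m, round(float(cl_alpha(m)), 3))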

  20. Generalized model of electromigration with 1:1 (analyte:selector) complexation stoichiometry: part I. Theory.

    PubMed

    Dubský, Pavel; Müllerová, Ludmila; Dvořák, Martin; Gaš, Bohuslav

    2015-03-06

    The model of electromigration of a multivalent weak acidic/basic/amphoteric analyte that undergoes complexation with a mixture of selectors is introduced. The model provides an extension of the series of models starting with the single-selector model without dissociation by Wren and Rowe in 1992, continuing with the monovalent weak analyte/single-selector model by Rawjee, Williams and Vigh in 1993 and that by Lelièvre in 1994, and ending with the multi-selector overall model without dissociation developed by our group in 2008. The new multivalent analyte multi-selector model shows that the effective mobility of the analyte obeys the original Wren and Rowe formula. The overall complexation constant, mobility of the free analyte and mobility of the complex can be measured and used in a standard way. The mathematical expressions for the overall parameters are provided. We further demonstrate mathematically that the pH-dependent parameters for weak analytes can be simply used as an input into the multi-selector overall model and, in reverse, the multi-selector overall parameters can serve as an input into the pH-dependent models for the weak analytes. These findings can greatly simplify rational method development in analytical electrophoresis, specifically enantioseparations. Copyright © 2015 Elsevier B.V. All rights reserved.
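
    The single-selector relation those overall parameters feed into (the Wren and Rowe formula) is short enough to state directly. The mobilities and complexation constant below are invented for illustration:

      def effective_mobility(mu_free, mu_complex, K, s):
          """mu_eff = (mu_free + mu_complex*K*[S]) / (1 + K*[S])."""
          return (mu_free + mu_complex * K * s) / (1.0 + K * s)

      # mobility shifts from the free value toward the complex value as [S] grows
      for s in (0.0, 1e-4, 1e-3, 1e-2):          # selector concentration, M
          mu = effective_mobility(20e-9, 5e-9, K=500.0, s=s)
          print(f"[S] = {s:6.0e} M -> mu_eff = {mu:.2e} m^2/V/s")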

  1. Project Summary. ANALYTICAL ELEMENT MODELING OF COASTAL AQUIFERS

    EPA Science Inventory

    Four topics were studied concerning the modeling of groundwater flow in coastal aquifers with analytic elements: (1) practical experience was obtained by constructing a groundwater model of the shallow aquifers below the Delmarva Peninsula USA using the commercial program MVAEM; ...

  2. Modeling Analyte Transport and Capture in Porous Bead Sensors

    PubMed Central

    Chou, Jie; Lennart, Alexis; Wong, Jorge; Ali, Mehnaaz F.; Floriano, Pierre N.; Christodoulides, Nicolaos; Camp, James; McDevitt, John T.

    2013-01-01

    Porous agarose microbeads, with high surface to volume ratios and high binding densities, are attracting attention as highly sensitive, affordable sensor elements for a variety of high performance bioassays. While such polymer microspheres have been extensively studied and reported on previously and are now moving into real-world clinical practice, very little work has been completed to date to model the convection, diffusion, and binding kinetics of soluble reagents captured within such fibrous networks. Here, we report the development of a three-dimensional computational model and provide the initial evidence for its agreement with experimental outcomes derived from the capture and detection of representative protein and genetic biomolecules in 290 μm porous beads. We compare this model to antibody-mediated capture of C-reactive protein and bovine serum albumin, along with hybridization of oligonucleotide sequences to DNA probes. These results suggest that due to the porous interior of the agarose bead, internal analyte transport is both diffusion- and convection-based, and regardless of the nature of analyte, the bead interiors reveal an interesting trickle of convection-driven internal flow. Based on this model, the internal to external flow rate ratio is found to be in the range of 1:3100 to 1:170 for beads with agarose concentration ranging from 0.5% to 8% for the sensor ensembles here studied. Further, both model and experimental evidence suggest that binding kinetics strongly affect analyte distribution of captured reagents within the beads. These findings reveal that high association constants create a steep moving boundary in which unbound analytes are held back at the periphery of the bead sensor. Low association constants create a more shallow moving boundary in which unbound analytes diffuse further into the bead before binding. These models agree with experimental evidence and thus serve as a new tool set for the study of bio-agent transport processes

  3. Development of task network models of human performance in microgravity

    NASA Technical Reports Server (NTRS)

    Diaz, Manuel F.; Adam, Susan

    1992-01-01

    This paper discusses the utility of task-network modeling for quantifying human performance variability in microgravity. The data are gathered for: (1) improving current methodologies for assessing human performance and workload in the operational space environment; (2) developing tools for assessing alternative system designs; and (3) developing an integrated set of methodologies for the evaluation of performance degradation during extended duration spaceflight. The evaluation entailed an analysis of the Remote Manipulator System payload-grapple task performed on many shuttle missions. Task-network modeling can be used as a tool for assessing and enhancing human performance in man-machine systems, particularly for modeling long-duration manned spaceflight. Task-network modeling can be directed toward improving system efficiency by increasing the understanding of basic capabilities of the human component in the system and the factors that influence these capabilities.

  4. Modeling of Depth Cue Integration in Manual Control Tasks

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara T.; Kaiser, Mary K.; Davis, Wendy

    2003-01-01

    Psychophysical research has demonstrated that human observers utilize a variety of visual cues to form a perception of three-dimensional depth. However, most of these studies have utilized a passive judgement paradigm, and failed to consider depth-cue integration as a dynamic and task-specific process. In the current study, we developed and experimentally validated a model of manual control of depth that examines how two potential cues (stereo disparity and relative size) are utilized in both first- and second-order active depth control tasks. We found that stereo disparity plays the dominant role for determining depth position, while relative size dominates perception of depth velocity. Stereo disparity also plays a reduced role when made less salient (i.e., when viewing distance is increased). Manual control models predict that position information is sufficient for first-order control tasks, while velocity information is required to perform a second-order control task. Thus, the rules for depth-cue integration in active control tasks are dependent on both task demands and cue quality.

  5. Rethinking of the heuristic-analytic dual process theory: a comment on Wada and Nittono (2004) and the reasoning process in the Wason selection task.

    PubMed

    Cardaci, Maurizio; Misuraca, Raffaella

    2005-08-01

    This paper raises some methodological problems in the dual process explanation provided by Wada and Nittono for their 2004 results using the Wason selection task. We maintain that the Wada and Nittono rethinking approach is weak and that it should be refined to better capture the evidence of analytic processes.

  6. Holistic rubric vs. analytic rubric for measuring clinical performance levels in medical students.

    PubMed

    Yune, So Jung; Lee, Sang Yeoup; Im, Sun Ju; Kam, Bee Sung; Baek, Sun Yong

    2018-06-05

    Task-specific checklists, holistic rubrics, and analytic rubrics are often used for performance assessments. We examined what factors evaluators consider important in holistic scoring of clinical performance assessment, and compared the usefulness of applying holistic and analytic rubrics, respectively, as well as of applying analytic rubrics in addition to task-specific checklists based on traditional standards. We compared the usefulness of a holistic rubric versus an analytic rubric in effectively measuring the clinical skill performances of 126 third-year medical students who participated in a clinical performance assessment conducted by Pusan National University School of Medicine. We conducted a questionnaire survey of 37 evaluators who used all three evaluation methods, holistic rubric, analytic rubric, and task-specific checklist, for each student. The relationship between the scores on the three evaluation methods was analyzed using Pearson's correlation. Inter-rater agreement was analyzed by Kappa index. The effect of holistic and analytic rubric scores on the task-specific checklist score was analyzed using multiple regression analysis. Evaluators perceived accuracy and proficiency to be major factors in objective structured clinical examinations evaluation, and history taking and physical examination to be major factors in clinical performance examinations evaluation. Holistic rubric scores were highly related to the scores of the task-specific checklist and analytic rubric. Relatively low agreement was found in clinical performance examinations compared to objective structured clinical examinations. Meanwhile, the holistic and analytic rubric scores explained 59.1% of the task-specific checklist score in objective structured clinical examinations and 51.6% in clinical performance examinations. The results show the usefulness of holistic and analytic rubrics in clinical performance assessment, which can be used in conjunction with task-specific checklists for more efficient

  7. Model and Analytic Processes for Export License Assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Sandra E.; Whitney, Paul D.; Weimar, Mark R.

    2011-09-29

    This paper represents the Department of Energy Office of Nonproliferation Research and Development (NA-22) Simulations, Algorithms and Modeling (SAM) Program's first effort to identify and frame analytical methods and tools to aid export control professionals in effectively predicting proliferation intent; a complex, multi-step and multi-agency process. The report focuses on analytical modeling methodologies that alone, or combined, may improve the proliferation export control license approval process. It is a follow-up to an earlier paper describing information sources and environments related to international nuclear technology transfer. This report describes the decision criteria used to evaluate modeling techniques and tools to determine which approaches will be investigated during the final 2 years of the project. The report also details the motivation for why new modeling techniques and tools are needed. The analytical modeling methodologies will enable analysts to evaluate the information environment for relevance to detecting proliferation intent, with specific focus on assessing risks associated with transferring dual-use technologies. Dual-use technologies can be used in both weapons and commercial enterprises. A decision framework was developed to evaluate which of the different analytical modeling methodologies would be most appropriate conditional on the uniqueness of the approach, data availability, laboratory capabilities, relevance to NA-22 and Office of Arms Control and Nonproliferation (NA-24) research needs and the impact if successful. Modeling methodologies were divided into whether they could help micro-level assessments (e.g., help improve individual license assessments) or macro-level assessments. Macro-level assessment focuses on suppliers, technology, consumers, economies, and proliferation context. Macro-level assessment technologies scored higher in the area of uniqueness because less work has been done at the macro level. An

  8. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control †

    PubMed Central

    Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob

    2017-01-01

    Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant’s intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms. PMID:28208697

  9. Optimizing Spectral CT Parameters for Material Classification Tasks

    PubMed Central

    Rigie, D. S.; La Rivière, P. J.

    2017-01-01

    In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way with direct implications for radiation dose reduction. In light of this, we adapt Hotelling Observer formalisms typically applied to signal detection tasks to the spectral CT, material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POC’s) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POC’s predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies. PMID:27227430

  10. Optimizing spectral CT parameters for material classification tasks

    NASA Astrophysics Data System (ADS)

    Rigie, D. S.; La Rivière, P. J.

    2016-06-01

    In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way with direct implications for radiation dose reduction. In light of this, we adapt Hotelling Observer formalisms typically applied to signal detection tasks to the spectral CT, material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POC’s) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POC’s predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies.
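
    For a two-class task, the Hotelling Observer machinery used in both records above reduces to SNR^2 = (mu1 - mu2)^T K^-1 (mu1 - mu2). The sketch below shows how such a rapidly computable metric supports sweeping a system parameter; how class separation and noise depend on the kVp-like knob is invented here for illustration and is not taken from the paper.

        import numpy as np

        def hotelling_snr(mu1, mu2, K):
            # SNR^2 = (mu1 - mu2)^T K^{-1} (mu1 - mu2); K = mean class covariance
            d = mu1 - mu2
            return np.sqrt(d @ np.linalg.solve(K, d))

        # Hypothetical 2-bin spectral measurements, swept over a kVp-like setting
        for kvp in (80, 100, 120, 140):
            sep = 1.0 + 0.01 * (140 - kvp)           # made-up separation vs. kVp
            noise = 0.5 + 0.002 * kvp                # made-up noise vs. kVp
            mu1 = np.array([10.0, 8.0])
            mu2 = mu1 + sep * np.array([0.6, -0.4])  # second material class
            K = noise * np.array([[1.0, 0.3], [0.3, 1.0]])
            print(kvp, round(float(hotelling_snr(mu1, mu2, K)), 3))

    Plotting such SNR values against the swept parameter is what yields a parameter optimization curve (POC).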

  11. Co-Constructional Task Analysis: Moving beyond Adult-Based Models to Assess Young Children's Task Performance

    ERIC Educational Resources Information Center

    Lee, Scott Weng Fai

    2013-01-01

    The assessment of young children's thinking competence in task performances has typically followed the novice-to-expert regimen involving models of strategies that adults use when engaged in cognitive tasks such as problem-solving and decision-making. Socio-constructivists argue for a balanced pedagogical approach between the adult and child that…

  12. Progressive Visual Analytics: User-Driven Visual Exploration of In-Progress Analytics.

    PubMed

    Stolper, Charles D; Perer, Adam; Gotz, David

    2014-12-01

    As datasets grow and analytic algorithms become more complex, the typical workflow of analysts launching an analytic, waiting for it to complete, inspecting the results, and then re-launching the computation with adjusted parameters is not realistic for many real-world tasks. This paper presents an alternative workflow, progressive visual analytics, which enables an analyst to inspect partial results of an algorithm as they become available and interact with the algorithm to prioritize subspaces of interest. Progressive visual analytics depends on adapting analytical algorithms to produce meaningful partial results and enable analyst intervention without sacrificing computational speed. The paradigm also depends on adapting information visualization techniques to incorporate the constantly refining results without overwhelming analysts and provide interactions to support an analyst directing the analytic. The contributions of this paper include: a description of the progressive visual analytics paradigm; design goals for both the algorithms and visualizations in progressive visual analytics systems; an example progressive visual analytics system (Progressive Insights) for analyzing common patterns in a collection of event sequences; and an evaluation of Progressive Insights and the progressive visual analytics paradigm by clinical researchers analyzing electronic medical records.

  13. Understanding Education Involving Geovisual Analytics

    ERIC Educational Resources Information Center

    Stenliden, Linnea

    2013-01-01

    Handling the vast amounts of data and information available in contemporary society is a challenge. Geovisual Analytics provides technology designed to increase the effectiveness of information interpretation and analytical task solving. To date, little attention has been paid to the role such tools can play in education and to the extent to which…

  14. The Ophidia Stack: Toward Large Scale, Big Data Analytics Experiments for Climate Change

    NASA Astrophysics Data System (ADS)

    Fiore, S.; Williams, D. N.; D'Anca, A.; Nassisi, P.; Aloisio, G.

    2015-12-01

    The Ophidia project is a research effort on big data analytics facing scientific data analysis challenges in multiple domains (e.g. climate change). It provides a "datacube-oriented" framework responsible for atomically processing and manipulating scientific datasets, by providing a common way to run distributed tasks on large sets of data fragments (chunks). Ophidia provides declarative, server-side, and parallel data analysis, jointly with an internal storage model able to efficiently deal with multidimensional data and a hierarchical data organization to manage large data volumes. The project relies on a strong background on high performance database management and On-Line Analytical Processing (OLAP) systems to manage large scientific datasets. The Ophidia analytics platform provides several data operators to manipulate datacubes (about 50), and array-based primitives (more than 100) to perform data analysis on large scientific data arrays. To address interoperability, Ophidia provides multiple server interfaces (e.g. OGC-WPS). From a client standpoint, a Python interface enables the exploitation of the framework into Python-based eco-systems/applications (e.g. IPython) and the straightforward adoption of a strong set of related libraries (e.g. SciPy, NumPy). The talk will highlight a key feature of the Ophidia framework stack: the "Analytics Workflow Management System" (AWfMS). The Ophidia AWfMS coordinates, orchestrates, optimises and monitors the execution of multiple scientific data analytics and visualization tasks, thus supporting "complex analytics experiments". Some real use cases related to the CMIP5 experiment will be discussed. In particular, with regard to the "Climate models intercomparison data analysis" case study proposed in the EU H2020 INDIGO-DataCloud project, workflows related to (i) anomalies, (ii) trend, and (iii) climate change signal analysis will be presented. Such workflows will be distributed across multiple sites - according to the

  15. An analytical model of memristors in plants

    PubMed Central

    Markin, Vladislav S; Volkov, Alexander G; Chua, Leon

    2014-01-01

    The memristor, a resistor with memory, was postulated by Chua in 1971 and the first solid-state memristor was built in 2008. Recently, we found memristors in vivo in plants. Here we propose a simple analytical model of 2 types of memristors that can be found within plants. The electrostimulation of plants by bipolar periodic waves induces electrical responses in Aloe vera and Mimosa pudica with fingerprints of memristors. Memristive properties of Aloe vera and Mimosa pudica are linked to the properties of voltage-gated K+ ion channels. The potassium channel blocker TEACl transforms plant memristors into conventional resistors. The analytical model of a memristor with a capacitor connected in parallel exhibits different characteristic behavior at low and high frequencies of applied voltage, consistent with experimental data obtained by cyclic voltammetry in vivo. PMID:25482769
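
    A sketch of a generic voltage-controlled memristor with a capacitor in parallel, in the spirit of the model described above; the state equation, window function, and constants are illustrative assumptions, not the plant-specific model from the paper.

        import numpy as np

        def memristor_iv(freq, V0=1.0, C=1e-6, Gon=5e-3, Goff=5e-5, k=5.0,
                         cycles=3, n=20000):
            # Euler-integrate the state of a voltage-controlled memristor with
            # a parallel capacitor; return (v, i) over the last drive cycle.
            t = np.linspace(0.0, cycles / freq, n)
            dt = t[1] - t[0]
            v = V0 * np.sin(2 * np.pi * freq * t)
            dvdt = np.gradient(v, dt)
            x = 0.5                                  # internal state in (0, 1)
            i = np.empty_like(v)
            for j in range(n):
                G = Goff + (Gon - Goff) * x
                i[j] = G * v[j] + C * dvdt[j]        # memristive + capacitive current
                x += dt * k * v[j] * x * (1.0 - x)   # windowed state equation
                x = min(max(x, 1e-6), 1.0 - 1e-6)
            last = t > (cycles - 1) / freq
            return v[last], i[last]

        # At low frequency the i-v loop is pinched near the origin; at high
        # frequency the capacitive term dominates and the loop opens up,
        # mirroring the frequency-dependent behavior described above.
        for f in (1.0, 1000.0):
            v, i = memristor_iv(f)
            print(f"{f:7.1f} Hz: |i| at v~0 is {abs(i[np.argmin(np.abs(v))]):.2e} A")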

  16. Useful measures and models for analytical quality management in medical laboratories.

    PubMed

    Westgard, James O

    2016-02-01

    The 2014 Milan Conference "Defining analytical performance goals 15 years after the Stockholm Conference" initiated a new discussion of issues concerning goals for precision, trueness or bias, total analytical error (TAE), and measurement uncertainty (MU). Goal-setting models are critical for analytical quality management, along with error models, quality-assessment models, quality-planning models, as well as comprehensive models for quality management systems. There are also critical underlying issues, such as an emphasis on MU to the possible exclusion of TAE and a corresponding preference for separate precision and bias goals instead of a combined total error goal. This opinion recommends careful consideration of the differences in the concepts of accuracy and traceability and the appropriateness of different measures, particularly TAE as a measure of accuracy and MU as a measure of traceability. TAE is essential to manage quality within a medical laboratory and MU and trueness are essential to achieve comparability of results across laboratories. With this perspective, laboratory scientists can better understand the many measures and models needed for analytical quality management and assess their usefulness for practical applications in medical laboratories.

  17. Variations on Debris Disks. IV. An Improved Analytical Model for Collisional Cascades

    NASA Astrophysics Data System (ADS)

    Kenyon, Scott J.; Bromley, Benjamin C.

    2017-04-01

    We derive a new analytical model for the evolution of a collisional cascade in a thin annulus around a single central star. In this model, the size of the largest object, r_max, declines with time as r_max ∝ t^(-γ), with γ ≈ 0.1-0.2. Compared to standard models where r_max is constant in time, this evolution results in a more rapid decline of M_d, the total mass of solids in the annulus, and L_d, the luminosity of small particles in the annulus: M_d ∝ t^(-(γ+1)) and L_d ∝ t^(-(γ/2+1)). We demonstrate that the analytical model provides an excellent match to a comprehensive suite of numerical coagulation simulations for annuli at 1 au and at 25 au. If the evolution of real debris disks follows the predictions of the analytical or numerical models, the observed luminosities for evolved stars require up to a factor of two more mass than predicted by previous analytical models.
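
    The quoted scalings can be evaluated directly; a short sketch comparing the standard cascade (γ = 0) with the new model, with normalization constants chosen arbitrarily:

        import numpy as np

        def debris_disk_decay(t, t0=1.0, gamma=0.15, Md0=1.0, Ld0=1.0):
            # Scalings from the analytical cascade model quoted above:
            # r_max ~ t^-gamma, M_d ~ t^-(gamma+1), L_d ~ t^-(gamma/2+1)
            s = t / t0
            return Md0 * s ** -(gamma + 1.0), Ld0 * s ** -(gamma / 2.0 + 1.0)

        # After 100 t0, letting the largest bodies grind down (gamma ~ 0.15)
        # leaves about half the mass of the gamma = 0 case.
        for g in (0.0, 0.15):
            Md, Ld = debris_disk_decay(100.0, gamma=g)
            print(f"gamma={g}: M_d/M_d0={Md:.4f}, L_d/L_d0={Ld:.4f}")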

  18. Comparison of thermal analytic model with experimental test results for 30-centimeter-diameter engineering model mercury ion thruster

    NASA Technical Reports Server (NTRS)

    Oglebay, J. C.

    1977-01-01

    A thermal analytic model for a 30-cm engineering model mercury-ion thruster was developed and calibrated using experimental results from tests of a pre-engineering model 30-cm thruster. A series of tests, performed later, simulated a wide range of thermal environments on an operating 30-cm engineering model thruster, which was instrumented to measure the temperature distribution within it. The modified analytic model is described and analytic and experimental results are compared for various operating conditions. Based on the comparisons, it is concluded that the analytic model can be used as a preliminary design tool to predict thruster steady-state temperature distributions for stage and mission studies and to define the thermal interface between the thruster and other elements of a spacecraft.

  19. System identification of analytical models of damped structures

    NASA Technical Reports Server (NTRS)

    Fuh, J.-S.; Chen, S.-Y.; Berman, A.

    1984-01-01

    A procedure is presented for identifying a linear, nonproportionally damped system. The system damping is assumed to be representable by a real symmetric matrix. Analytical mass, stiffness and damping matrices which constitute an approximate representation of the system are assumed to be available. Also given are an incomplete set of measured natural frequencies, damping ratios and complex mode shapes of the structure, normally obtained from test data. A method is developed to find the smallest changes in the analytical model so that the improved model can exactly predict the measured modal parameters. The present method uses the orthogonality relationship to improve the mass and damping matrices and the dynamic equation to find the improved stiffness matrix.

  20. Exploring SMBH assembly with semi-analytic modelling

    NASA Astrophysics Data System (ADS)

    Ricarte, Angelo; Natarajan, Priyamvada

    2018-02-01

    We develop a semi-analytic model to explore different prescriptions of supermassive black hole (SMBH) fuelling. This model utilizes a merger-triggered burst mode in concert with two possible implementations of a long-lived steady mode for assembling the mass of the black hole in a galactic nucleus. We improve modelling of the galaxy-halo connection in order to more realistically determine the evolution of a halo's velocity dispersion. We use four model variants to explore a suite of observables: the M•-σ relation, mass functions of both the overall and broad-line quasar population, and luminosity functions as a function of redshift. We find that `downsizing' is a natural consequence of our improved velocity dispersion mappings, and that high-mass SMBHs assemble earlier than low-mass SMBHs. The burst mode of fuelling is sufficient to explain the assembly of SMBHs to z = 2, but an additional steady mode is required to both assemble low-mass SMBHs and reproduce the low-redshift luminosity function. We discuss in detail the trade-offs in matching various observables and the interconnected modelling components that govern them. As a result, we demonstrate the utility as well as the limitations of these semi-analytic techniques.

  1. Review of analytical models to stream depletion induced by pumping: Guide to model selection

    NASA Astrophysics Data System (ADS)

    Huang, Ching-Sheng; Yang, Tao; Yeh, Hund-Der

    2018-06-01

    Stream depletion due to groundwater extraction by wells may impact aquatic ecosystems in streams, cause conflict over water rights, and lead to contamination of water from irrigation wells near polluted streams. A variety of studies have been devoted to addressing the issue of stream depletion, but a fundamental framework for analytical modeling developed from the aquifer viewpoint has not yet been established. This review shows key differences in existing models regarding the stream depletion problem and provides some guidelines for choosing a proper analytical model for the problem of concern. We introduce commonly used models composed of flow equations, boundary conditions, well representations and stream treatments for confined, unconfined, and leaky aquifers. They are briefly evaluated and classified according to six categories: aquifer type, flow dimension, aquifer domain, stream representation, stream channel geometry, and well type. Finally, we recommend promising analytical approaches that can solve real-world stream depletion problems involving aquifer heterogeneity and irregular stream channel geometry. Several unsolved stream depletion problems are also identified.
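
    Among the analytical models such reviews cover, the Glover-Balmer solution for a fully penetrating stream and a homogeneous confined aquifer is the simplest; a sketch with hypothetical aquifer parameters (requires scipy):

        from math import sqrt
        from scipy.special import erfc

        def glover_balmer_depletion(t, d, T, S):
            # Stream depletion fraction Qs/Qw for a well at distance d from a
            # fully penetrating stream (Glover & Balmer, 1954):
            #   Qs/Qw = erfc( sqrt(S * d**2 / (4 * T * t)) )
            # t: time since pumping began [d], d: well-stream distance [m],
            # T: transmissivity [m^2/d], S: storativity [-]
            return erfc(sqrt(S * d * d / (4.0 * T * t)))

        # Hypothetical aquifer: T = 500 m^2/d, S = 0.1, well 200 m from stream
        for t in (1, 10, 100, 365):
            print(f"t = {t:3d} d: Qs/Qw = {glover_balmer_depletion(t, 200, 500, 0.1):.3f}")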

  2. Analytical Model for Thermal Elastoplastic Stresses of Functionally Graded Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhai, P. C.; Chen, G.; Liu, L. S.

    2008-02-15

    A modified analytical model is presented for the thermal elastoplastic stresses of functionally graded materials subjected to thermal loading. The presented model follows the analytical scheme presented by Y. L. Shen and S. Suresh [6]. In the present model, the functionally graded materials are considered as multilayered materials. Each layer consists of metal and ceramic with a different volume fraction. The ceramic layer and the FGM interlayers are considered elastic brittle materials. The metal layer is considered an elastic-perfectly plastic ductile material. Closed-form solutions for different characteristic temperatures under thermal loading are presented as a function of the structure geometries and the thermomechanical properties of the materials. A main advance of the present model is that it takes into account the possibility of the initiation and spread of plasticity from the two sides of the ductile layers. Comparing the analytical results with the results from finite element analysis, the thermal stresses and deformations from the present model are in good agreement with the numerical ones.

  3. Analytical modeling and feasibility study of a multi-GPU cloud-based server (MGCS) framework for non-voxel-based dose calculations.

    PubMed

    Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A

    2017-04-01

    In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine was a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
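
    A toy analytical estimate in the same spirit (a generic compute/communication split, not the authors' published model): work divides across GPUs while results return over a shared link, so acceleration approaches the GPU count only when compute dominates transfers, which matches the behavior reported above.

        def estimated_speedup(n_gpus, t_compute, data_gb, link_gbps=1.0):
            # Toy model: perfectly divisible compute, serialized transfers
            t_comm = data_gb * 8.0 / link_gbps      # transfer time, seconds
            return (t_compute + t_comm) / (t_compute / n_gpus + t_comm)

        # Compute-bound task: speedup tracks the GPU count
        print([round(estimated_speedup(n, 140.0, 0.5), 1) for n in (1, 2, 7, 14)])
        # Transfer-bound task: communication caps the benefit
        print([round(estimated_speedup(n, 5.0, 4.0), 1) for n in (1, 2, 7, 14)])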

  4. Modeling stimulus variation in three common implicit attitude tasks.

    PubMed

    Wolsiefer, Katie; Westfall, Jacob; Judd, Charles M

    2017-08-01

    We explored the consequences of ignoring the sampling variation due to stimuli in the domain of implicit attitudes. A large literature in psycholinguistics has examined the statistical treatment of random stimulus materials, but the recommendations from this literature have not been applied to the social psychological literature on implicit attitudes. This is partly because of inherent complications in applying crossed random-effect models to some of the most common implicit attitude tasks, and partly because no work to date has demonstrated that random stimulus variation is in fact consequential in implicit attitude measurement. We addressed this problem by laying out statistically appropriate and practically feasible crossed random-effect models for three of the most commonly used implicit attitude measures-the Implicit Association Test, affect misattribution procedure, and evaluative priming task-and then applying these models to large datasets (average N = 3,206) that assess participants' implicit attitudes toward race, politics, and self-esteem. We showed that the test statistics from the traditional analyses are substantially (about 60 %) inflated relative to the more-appropriate analyses that incorporate stimulus variation. Because all three tasks used the same stimulus words and faces, we could also meaningfully compare the relative contributions of stimulus variation across the tasks. In an appendix, we give syntax in R, SAS, and SPSS for fitting the recommended crossed random-effects models to data from all three tasks, as well as instructions on how to structure the data file.
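
    A compact simulation of the core problem, under assumed variance components rather than the paper's datasets: when each stimulus carries its own idiosyncratic effect and the analysis averages over stimuli as if they were fixed, the false-positive rate of a standard by-participant test climbs well above the nominal 5%.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        n_subj, n_stim = 40, 20          # null case: no true condition effect

        def one_experiment(stim_sd):
            stim_eff = rng.normal(0.0, stim_sd, n_stim)   # shared by all subjects
            subj_eff = rng.normal(0.0, 0.5, n_subj)
            y = (subj_eff[:, None] + stim_eff[None, :]
                 + rng.normal(0.0, 1.0, (n_subj, n_stim)))
            # "Traditional" analysis: average over stimuli, test across subjects
            return stats.ttest_1samp(y.mean(axis=1), 0.0).pvalue

        for sd in (0.0, 0.5):
            p = np.array([one_experiment(sd) for _ in range(2000)])
            print(f"stimulus sd={sd}: false-positive rate = {(p < .05).mean():.3f}")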

  5. An analytical model of prominence dynamics

    NASA Astrophysics Data System (ADS)

    Routh, Swati; Saha, Snehanshu; Bhat, Atul; Sundar, M. N.

    2018-01-01

    Solar prominences are magnetic structures incarcerating cool and dense gas in an otherwise hot solar corona. Prominences can be categorized as quiescent and active. Their origin and the presence of cool gas (∼10^4 K) within the hot (∼10^6 K) solar corona remain poorly understood. The structure and dynamics of solar prominences have been investigated in a large number of observational and theoretical (both analytical and numerical) studies. In this paper, an analytic model of a quiescent solar prominence is developed and used to demonstrate that the prominence velocity increases exponentially, which means that some gas falls downward towards the solar surface, and that Alfvén waves are naturally present in solar prominences. These theoretical predictions are consistent with current observational data on solar quiescent prominences.

  6. Visual-search models for location-known detection tasks

    NASA Astrophysics Data System (ADS)

    Gifford, H. C.; Karbaschi, Z.; Banerjee, K.; Das, M.

    2017-03-01

    Lesion-detection studies that analyze a fixed target position are generally considered predictive of studies involving lesion search, but the extent of the correlation often goes untested. The purpose of this work was to develop a visual-search (VS) model observer for location-known tasks that, coupled with previous work on localization tasks, would allow efficient same-observer assessments of how search and other task variations can alter study outcomes. The model observer featured adjustable parameters to control the search radius around the fixed lesion location and the minimum separation between suspicious locations. Comparisons were made against human observers, a channelized Hotelling observer and a nonprewhitening observer with eye filter in a two-alternative forced-choice study with simulated lumpy background images containing stationary anatomical and quantum noise. These images modeled single-pinhole nuclear medicine scans with different pinhole sizes. When the VS observer's search radius was optimized with training images, close agreement was obtained with human-observer results. Some performance differences between the humans could be explained by varying the model observer's separation parameter. The range of optimal pinhole sizes identified by the VS observer was in agreement with the range determined with the channelized Hotelling observer.

  7. Value Reappraisal as a Conceptual Model for Task-Value Interventions

    ERIC Educational Resources Information Center

    Acee, Taylor W.; Weinstein, Claire Ellen; Hoang, Theresa V.; Flaggs, Darolyn A.

    2018-01-01

    We discuss task-value interventions as one type of relevance intervention and propose a process model of value reappraisal whereby task-value interventions elicit cognitive-affective responses that lead to attitude change and in turn affect academic outcomes. The model incorporates a metacognitive component showing that students can intentionally…

  8. Interaction Junk: User Interaction-Based Evaluation of Visual Analytic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; North, Chris

    2012-10-14

    With the growing need for visualization to aid users in understanding large, complex datasets, the ability for users to interact with and explore these datasets is critical. As visual analytic systems have advanced to leverage powerful computational models and data analytics capabilities, the modes by which users engage and interact with the information are limited. Often, users are taxed with directly manipulating parameters of these models through traditional GUIs (e.g., using sliders to directly manipulate the value of a parameter). However, the purpose of user interaction in visual analytic systems is to enable visual data exploration, where users can focus on their task, as opposed to the tool or system. As a result, users can engage freely in data exploration and decision-making, for the purpose of gaining insight. In this position paper, we discuss how evaluating visual analytic systems can be approached through user interaction analysis, where the goal is to minimize the cognitive translation between the visual metaphor and the mode of interaction (i.e., reducing the "interaction junk"). We motivate this concept through a discussion of traditional GUIs used in visual analytics for direct manipulation of model parameters, and the importance of designing interactions that support visual data exploration.

  9. Heavy vehicle driver workload assessment. Task 3, task analysis data collection

    DOT National Transportation Integrated Search

    This technical report consists of a collection of task analytic data to support heavy vehicle driver workload assessment and protocol development. Data were collected from professional drivers to provide insights into the following issues: the meanin...

  10. LitPathExplorer: a confidence-based visual text analytics tool for exploring literature-enriched pathway models.

    PubMed

    Soto, Axel J; Zerva, Chrysoula; Batista-Navarro, Riza; Ananiadou, Sophia

    2018-04-15

    Pathway models are valuable resources that help us understand the various mechanisms underpinning complex biological processes. Their curation is typically carried out through manual inspection of published scientific literature to find information relevant to a model, which is a laborious and knowledge-intensive task. Furthermore, models curated manually cannot be easily updated and maintained with new evidence extracted from the literature without automated support. We have developed LitPathExplorer, a visual text analytics tool that integrates advanced text mining, semi-supervised learning and interactive visualization, to facilitate the exploration and analysis of pathway models using statements (i.e. events) extracted automatically from the literature and organized according to levels of confidence. LitPathExplorer supports pathway modellers and curators alike by: (i) extracting events from the literature that corroborate existing models with evidence; (ii) discovering new events which can update models; and (iii) providing a confidence value for each event that is automatically computed based on linguistic features and article metadata. Our evaluation of event extraction showed a precision of 89% and a recall of 71%. Evaluation of our confidence measure, when used for ranking sampled events, showed an average precision ranging between 61 and 73%, which can be improved to 95% when the user is involved in the semi-supervised learning process. Qualitative evaluation using pair analytics based on the feedback of three domain experts confirmed the utility of our tool within the context of pathway model exploration. LitPathExplorer is available at http://nactem.ac.uk/LitPathExplorer_BI/. sophia.ananiadou@manchester.ac.uk. Supplementary data are available at Bioinformatics online.

  11. Laparoscopic Common Bile Duct Exploration Four-Task Training Model: Construct Validity

    PubMed Central

    Otaño, Natalia; Rodríguez, Omaira; Sánchez, Renata; Benítez, Gustavo; Schweitzer, Michael

    2012-01-01

    Background: Training models in laparoscopic surgery allow the surgical team to practice procedures in a safe environment. We have proposed the use of a 4-task, low-cost inert model to practice critical steps of laparoscopic common bile duct exploration. Methods: The performance of 3 groups with different levels of expertise in laparoscopic surgery, novices (A), intermediates (B), and experts (C), was evaluated using a low-cost inert model in the following tasks: (1) intraoperative cholangiography catheter insertion, (2) transcystic exploration, (3) T-tube placement, and (4) choledochoscope management. Kruskal-Wallis and Mann-Whitney tests were used to identify differences among the groups. Results: A total of 14 individuals were evaluated: 5 novices (A), 5 intermediates (B), and 4 experts (C). The results involving intraoperative cholangiography catheter insertion were similar among the 3 groups. As for the other tasks, the experts had better results than the other 2 groups, between which no significant differences occurred. The proposed model is able to discriminate among individuals with different levels of expertise, indicating that the abilities the model evaluates are relevant to the surgeon's performance in common bile duct (CBD) exploration. Conclusions: Construct validity for tasks 2 and 3 was demonstrated. However, task 1 was not capable of distinguishing between groups, and task 4 was not statistically validated. PMID:22906323

  12. Analytical modeling and experimental validation of a magnetorheological mount

    NASA Astrophysics Data System (ADS)

    Nguyen, The; Ciocanel, Constantin; Elahinia, Mohammad

    2009-03-01

    Magnetorheological (MR) fluid has been increasingly researched and applied in vibration isolation devices. To date, the suspension system of several high performance vehicles has been equipped with MR fluid based dampers and research is ongoing to develop MR fluid based mounts for engine and powertrain isolation. MR fluid based devices have received attention due to the MR fluid's capability to change its properties in the presence of a magnetic field. This characteristic places MR mounts in the class of semiactive isolators, making them a desirable substitute for passive hydraulic mounts. In this research, an analytical model of a mixed-mode MR mount was constructed. The magnetorheological mount employs flow (valve) mode and squeeze mode. Each mode is powered by an independent electromagnet, so one mode does not affect the operation of the other. The analytical model was used to predict the performance of the MR mount with different sets of parameters. Furthermore, in order to produce the actual prototype, the analytical model was used to identify the optimal geometry of the mount. The experimental phase of this research was carried out by fabricating and testing the actual MR mount. The manufactured mount was tested to evaluate the effectiveness of each mode individually and in combination. The experimental results were also used to validate the ability of the analytical model to predict the response of the MR mount. Based on the observed response of the mount, a suitable controller can be designed for it. However, the control scheme is not addressed in this study.

  13. Analytical dose modeling for preclinical proton irradiation of millimetric targets.

    PubMed

    Vanstalle, Marie; Constanzo, Julie; Karakaya, Yusuf; Finck, Christian; Rousseau, Marc; Brasse, David

    2018-01-01

    Due to the considerable development of proton radiotherapy, several proton platforms have emerged to irradiate small animals in order to study the biological effectiveness of proton radiation. A dedicated analytical treatment planning tool was developed in this study to accurately calculate the delivered dose given the specific constraints imposed by the small dimensions of the irradiated areas. The treatment planning system (TPS) developed in this study is based on an analytical formulation of the Bragg peak and uses experimental range values of protons. The method was validated after comparison with experimental data from the literature and then compared to Monte Carlo simulations conducted using Geant4. Three examples of treatment planning, performed with phantoms made of water targets and bone-slab insert, were generated with the analytical formulation and Geant4. Each treatment planning was evaluated using dose-volume histograms and gamma index maps. We demonstrate the value of the analytical function for mouse irradiation, which requires a targeting accuracy of 0.1 mm. Using the appropriate database, the analytical modeling limits the errors caused by misestimating the stopping power. For example, 99% of a 1-mm tumor irradiated with a 24-MeV beam receives the prescribed dose. The analytical dose deviations from the prescribed dose remain within the dose tolerances stated by report 62 of the International Commission on Radiation Units and Measurements for all tested configurations. In addition, the gamma index maps show that the highly constrained targeting accuracy of 0.1 mm for mouse irradiation leads to a significant disagreement between Geant4 and the reference. This simulated treatment planning is nevertheless compatible with a targeting accuracy exceeding 0.2 mm, corresponding to rat and rabbit irradiations. Good dose accuracy for millimetric tumors is achieved with the analytical calculation used in this work. These volume sizes are typical in mouse
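
    For intuition about why millimetric targets are demanding, the textbook Bragg-Kleeman range-energy rule (not the paper's experimentally calibrated range database) already shows how short low-energy proton ranges are:

        def proton_range_water_cm(E_MeV, alpha=2.2e-3, p=1.77):
            # Bragg-Kleeman rule R = alpha * E**p for water; alpha and p are
            # common textbook approximations, not the paper's fitted values.
            return alpha * E_MeV ** p

        # A 24 MeV beam, as in the mouse example above, stops within ~6 mm,
        # which is why 0.1 mm targeting accuracy matters.
        for E in (24, 50, 100, 200):
            print(f"{E:3d} MeV -> range {10 * proton_range_water_cm(E):6.1f} mm")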

  14. Analytical results for a stochastic model of gene expression with arbitrary partitioning of proteins

    NASA Astrophysics Data System (ADS)

    Tschirhart, Hugo; Platini, Thierry

    2018-05-01

    In biophysics, the search for analytical solutions of stochastic models of cellular processes is often a challenging task. In recent work on models of gene expression, it was shown that a mapping based on partitioning of Poisson arrivals (PPA-mapping) can lead to exact solutions for previously unsolved problems. While the approach can be used in general when the model involves Poisson processes corresponding to creation or degradation, current applications of the method and new results derived using it have been limited to date. In this paper, we present the exact solution of a variation of the two-stage model of gene expression (with time dependent transition rates) describing the arbitrary partitioning of proteins. The methodology proposed makes full use of the PPA-mapping by transforming the original problem into a new process describing the evolution of three biological switches. Based on a succession of transformations, the method leads to a hierarchy of reduced models. We give an integral expression of the time dependent generating function as well as explicit results for the mean, variance, and correlation function. Finally, we discuss how results for time dependent parameters can be extended to the three-stage model and used to make inferences about models with parameter fluctuations induced by hidden stochastic variables.

  15. Modeling User Behavior in Computer Learning Tasks.

    ERIC Educational Resources Information Center

    Mantei, Marilyn M.

    Model building techniques from Artifical Intelligence and Information-Processing Psychology are applied to human-computer interface tasks to evaluate existing interfaces and suggest new and better ones. The model is in the form of an augmented transition network (ATN) grammar which is built by applying grammar induction heuristics on a sequential…

  16. Health Informatics for Neonatal Intensive Care Units: An Analytical Modeling Perspective

    PubMed Central

    Mench-Bressan, Nadja; McGregor, Carolyn; Pugh, James Edward

    2015-01-01

    The effective use of data within intensive care units (ICUs) has great potential to create new cloud-based health analytics solutions for disease prevention or earlier detection of condition onset. The Artemis project aims to achieve the above goals in the area of neonatal ICUs (NICU). In this paper, we propose an analytical model for the Artemis cloud project, which will be deployed at McMaster Children’s Hospital in Hamilton. We collect not only physiological data but also data from the infusion pumps attached to NICU beds. Using the proposed analytical model, we predict the amount of storage, memory, and computation power required for the system. Capacity planning and tradeoff analysis become more accurate and systematic when the proposed analytical model is applied. Numerical results are obtained using real inputs acquired from McMaster Children’s Hospital and a pilot deployment of the system at The Hospital for Sick Children (SickKids) in Toronto. PMID:27170907

  17. Flexible Modeling of Latent Task Structures in Multitask Learning

    DTIC Science & Technology

    2012-06-26

    Flexible Modeling of Latent Task Structures in Multitask Learning. Alexandre Passos (apassos@cs.umass.edu), Computer Science Department, University of ...; ... of Maryland, College Park, MD, USA. Abstract: Multitask learning algorithms are typically designed assuming some fixed, a priori known latent structure shared by all the tasks. However, it is usually unclear what type of latent task structure is the most appropriate for a given multitask learning prob

  18. A two-dimensional analytical model of vapor intrusion involving vertical heterogeneity.

    PubMed

    Yao, Yijun; Verginelli, Iason; Suuberg, Eric M

    2017-05-01

    In this work, we present an analytical chlorinated vapor intrusion (CVI) model that can estimate source-to-indoor air concentration attenuation by simulating the two-dimensional (2-D) vapor concentration profile in vertically heterogeneous soils overlying a homogenous vapor source. The analytical solution describing the 2-D soil gas transport was obtained by applying a modified Schwarz-Christoffel mapping method. A partial field validation showed that the developed model provides results (especially in terms of indoor emission rates) in line with the measured data from a case involving a building overlying a layered soil. In further testing, it was found that the new analytical model can very closely replicate the results of three-dimensional (3-D) numerical models at steady state in scenarios involving layered soils overlying homogenous groundwater sources. By contrast, adopting a two-layer approach (capillary fringe and vadose zone), as employed in the EPA implementation of the Johnson and Ettinger model, can yield spatially and temporally averaged indoor concentrations for groundwater sources that are higher than the ones estimated by the numerical model by up to two orders of magnitude. In short, the model proposed in this work can represent an easy-to-use tool that can simulate the subsurface soil gas concentration in layered soils overlying a homogenous vapor source while keeping the simplicity of an analytical approach that requires much less computational effort.

  19. Heuristic and analytic processing: age trends and associations with cognitive ability and cognitive styles.

    PubMed

    Kokis, Judite V; Macpherson, Robyn; Toplak, Maggie E; West, Richard F; Stanovich, Keith E

    2002-09-01

    Developmental and individual differences in the tendency to favor analytic responses over heuristic responses were examined in children of two different ages (10- and 11-year-olds versus 13-year-olds), and of widely varying cognitive ability. Three tasks were examined that all required analytic processing to override heuristic processing: inductive reasoning, deductive reasoning under conditions of belief bias, and probabilistic reasoning. Significant increases in analytic responding with development were observed on the first two tasks. Cognitive ability was associated with analytic responding on all three tasks. Cognitive style measures such as actively open-minded thinking and need for cognition explained variance in analytic responding on the tasks after variance shared with cognitive ability had been controlled. The implications for dual-process theories of cognition and cognitive development are discussed.

  20. Acute, intermediate intensity exercise, and speed and accuracy in working memory tasks: a meta-analytical comparison of effects.

    PubMed

    McMorris, Terry; Sproule, John; Turner, Anthony; Hale, Beverley J

    2011-03-01

    The purpose of this study was to compare, using meta-analytic techniques, the effect of acute, intermediate intensity exercise on the speed and accuracy of performance of working memory tasks. It was hypothesized that acute, intermediate intensity exercise would have a significant beneficial effect on response time and that effect sizes for response time and accuracy data would differ significantly. Random-effects meta-analysis showed a significant, beneficial effect size for response time, g=-1.41 (p<0.001) but a significant detrimental effect size, g=0.40 (p<0.01), for accuracy. There was a significant difference between effect sizes (Z(diff)=3.85, p<0.001). It was concluded that acute, intermediate intensity exercise has a strong beneficial effect on speed of response in working memory tasks but a low to moderate, detrimental one on accuracy. There was no support for a speed-accuracy trade-off. It was argued that exercise-induced increases in brain concentrations of catecholamines result in faster processing but increases in neural noise may negatively affect accuracy. 2010 Elsevier Inc. All rights reserved.
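
    A sketch of the random-effects pooling step underlying such a meta-analysis, using the standard DerSimonian-Laird estimator; the study-level effect sizes and variances below are invented, not the paper's data.

        import numpy as np

        def dersimonian_laird(g, v):
            # Random-effects pooled effect (DerSimonian-Laird).
            # g: per-study effect sizes; v: their sampling variances.
            g, v = np.asarray(g, float), np.asarray(v, float)
            w = 1.0 / v
            gbar = np.sum(w * g) / np.sum(w)            # fixed-effect mean
            Q = np.sum(w * (g - gbar) ** 2)             # heterogeneity statistic
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (Q - (len(g) - 1)) / c)     # between-study variance
            w_re = 1.0 / (v + tau2)
            mu = np.sum(w_re * g) / np.sum(w_re)
            return mu, np.sqrt(1.0 / np.sum(w_re)), tau2

        g = [-1.8, -1.2, -0.9, -1.6, -1.4]              # hypothetical studies
        v = [0.20, 0.15, 0.25, 0.30, 0.18]
        mu, se, tau2 = dersimonian_laird(g, v)
        print(f"pooled g = {mu:.2f} +/- {1.96 * se:.2f} (tau^2 = {tau2:.3f})")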

  1. Task allocation model for minimization of completion time in distributed computer systems

    NASA Astrophysics Data System (ADS)

    Wang, Jai-Ping; Steidley, Carl W.

    1993-08-01

    A task in a distributed computing system consists of a set of related modules. Each of the modules will execute on one of the processors of the system and communicate with some other modules. In addition, precedence relationships may exist among the modules. Task allocation is an essential activity in distributed-software design. This activity is of importance to all phases of the development of a distributed system. This paper establishes task completion-time models and task allocation models for minimizing task completion time. Current work in this area either remains at the experimental level or does not consider precedence relationships among modules. The development of mathematical models for the computation of task completion time and task allocation will benefit many real-time computer applications such as radar systems, navigation systems, industrial process control systems, image processing systems, and artificial intelligence oriented systems.

  2. Analytical Chemistry Laboratory

    NASA Technical Reports Server (NTRS)

    Anderson, Mark

    2013-01-01

    The Analytical Chemistry and Material Development Group maintains a capability in chemical analysis, materials R&D failure analysis and contamination control. The uniquely qualified staff and facility support the needs of flight projects, science instrument development and various technical tasks, as well as Cal Tech.

  3. The fluid events model: Predicting continuous task action change.

    PubMed

    Radvansky, Gabriel A; D'Mello, Sidney; Abbott, Robert G; Morgan, Brent; Fike, Karl; Tamplin, Andrea K

    2015-01-01

    The fluid events model is a behavioural model aimed at predicting the likelihood that people will change their actions in ongoing, interactive events. From this view, not only are people responding to aspects of the environment, but they are also basing responses on prior experiences. The fluid events model is an attempt to predict the likelihood that people will shift the type of actions taken within an event on a trial-by-trial basis, taking into account both event structure and experience-based factors. The event-structure factors are: (a) changes in event structure, (b) suitability of the current action to the event, and (c) time on task. The experience-based factors are: (a) whether a person has recently shifted actions, (b) how often a person has shifted actions, (c) whether there has been a dip in performance, and (d) a person's propensity to switch actions within the current task. The model was assessed using data from a series of tasks in which a person was producing responses to events. These were two stimulus-driven figure-drawing studies, a conceptually driven decision-making study, and a probability matching study using a standard laboratory task. This analysis predicted trial-by-trial action switching in a person-independent manner with an average accuracy of 70%, which reflects a 34% improvement above chance. In addition, correlations between overall switch rates and actual switch rates were remarkably high (mean r = .98). The experience-based factors played a more major role than the event-structure factors, but this might be attributable to the nature of the tasks.

  4. Predicting the size of individual and group differences on speeded cognitive tasks.

    PubMed

    Chen, Jing; Hale, Sandra; Myerson, Joel

    2007-06-01

    An a priori test of the difference engine model (Myerson, Hale, Zheng, Jenkins, & Widaman, 2003) was conducted using a large, diverse sample of individuals who performed three speeded verbal tasks and three speeded visuospatial tasks. Results demonstrated that, as predicted by the model, the group standard deviation (SD) on any task was proportional to the amount of processing required by that task. Both individual performances as well as those of fast and slow subgroups could be accurately predicted by the model using no free parameters, just an individual or subgroup's mean z-score and the values of theoretical constructs estimated from fits to the group SDs. Taken together, these results are consistent with post hoc analyses reported by Myerson et al. and provide even stronger supporting evidence. In particular, the ability to make quantitative predictions without using any free parameters provides the clearest demonstration to date of the power of an analytic approach on the basis of the difference engine.
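
    A sketch of the zero-free-parameter prediction style described above, assuming (per the abstract) that the group SD grows in proportion to the processing a task requires; the task means, slope, and intercept below are invented for illustration, not the study's fitted constants.

        import numpy as np

        task_means = np.array([450.0, 600.0, 800.0, 1100.0])  # group mean RTs, ms
        slope, resid = 0.4, 300.0         # assumed SD_j = slope * (M_j - resid)
        task_sds = slope * (task_means - resid)

        def predict_rts(z):
            # An individual's (or subgroup's) RTs follow from one z-score alone
            return task_means + z * task_sds

        print("fast subgroup (z = -1):", predict_rts(-1.0))
        print("slow subgroup (z = +1):", predict_rts(+1.0))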

  5. Analytical Modeling for the Bending Resonant Frequency of Multilayered Microresonators with Variable Cross-Section

    PubMed Central

    Herrera-May, Agustín L.; Aguilera-Cortés, Luz A.; Plascencia-Mora, Hector; Rodríguez-Morales, Ángel L.; Lu, Jian

    2011-01-01

    Multilayered microresonators commonly use sensitive coating or piezoelectric layers for detection of mass and gas. Most of these microresonators have a variable cross-section that complicates the prediction of their fundamental resonant frequency (generally of the bending mode) through conventional analytical models. In this paper, we present an analytical model to estimate the first resonant frequency and deflection curve of single-clamped multilayered microresonators with variable cross-section. The analytical model is obtained using the Rayleigh and Macaulay methods, as well as the Euler-Bernoulli beam theory. Our model is applied to two multilayered microresonators with piezoelectric excitation reported in the literature. Both microresonators are composed by layers of seven different materials. The results of our analytical model agree very well with those obtained from finite element models (FEMs) and experimental data. Our analytical model can be used to determine the suitable dimensions of the microresonator’s layers in order to obtain a microresonator that operates at a resonant frequency necessary for a particular application. PMID:22164071

  6. Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Y.; Keller, J.; Wallen, R.

    2015-02-01

    Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.

  7. Operator function modeling: Cognitive task analysis, modeling and intelligent aiding in supervisory control systems

    NASA Technical Reports Server (NTRS)

    Mitchell, Christine M.

    1990-01-01

    The design, implementation, and empirical evaluation of task-analytic models and intelligent aids for operators in the control of complex dynamic systems, specifically aerospace systems, are studied. Three related activities are included: (1) models of operator decision making in complex and predominantly automated space systems were developed and applied; (2) the Operator Function Model (OFM) was used to represent operator activities; and (3) the Operator Function Model Expert System (OFMspert), a stand-alone knowledge-based system, was developed that interacts with a human operator in a manner similar to a human assistant in the control of aerospace systems. OFMspert is an architecture for an operator's assistant that uses the OFM as its system and operator knowledge base and a blackboard paradigm of problem solving to dynamically generate expectations about upcoming operator activities and to interpret actual operator actions. An experiment validated OFMspert's intent inferencing capability and showed that it inferred the intentions of operators in ways comparable to both a human expert and the operators themselves. OFMspert was also augmented with control capabilities. An interface allowed the operator to interact with OFMspert, delegating as much or as little control responsibility as the operator chose. With its design based on the OFM, OFMspert's control capabilities were available at multiple levels of abstraction and allowed the operator a great deal of discretion over the amount and level of delegated control. An experiment showed that overall system performance was comparable for teams consisting of two human operators versus a human operator paired with OFMspert.

  8. Distinguishing bias from sensitivity effects in multialternative detection tasks.

    PubMed

    Sridharan, Devarajan; Steinmetz, Nicholas A; Moore, Tirin; Knudsen, Eric I

    2014-08-21

    Studies investigating the neural bases of cognitive phenomena increasingly employ multialternative detection tasks that seek to measure the ability to detect a target stimulus or changes in some target feature (e.g., orientation or direction of motion) that could occur at one of many locations. In such tasks, it is essential to distinguish the behavioral and neural correlates of enhanced perceptual sensitivity from those of increased bias for a particular location or choice (choice bias). However, making such a distinction is not possible with established approaches. We present a new signal detection model that decouples the behavioral effects of choice bias from those of perceptual sensitivity in multialternative (change) detection tasks. By formulating the perceptual decision in a multidimensional decision space, our model quantifies the respective contributions of bias and sensitivity to multialternative behavioral choices. With a combination of analytical and numerical approaches, we demonstrate an optimal, one-to-one mapping between model parameters and choice probabilities even for tasks involving arbitrarily large numbers of alternatives. We validated the model with published data from two ternary choice experiments: a target-detection experiment and a length-discrimination experiment. The results of this validation provided novel insights into perceptual processes (sensory noise and competitive interactions) that can accurately and parsimoniously account for observers' behavior in each task. The model will find important application in identifying and interpreting the effects of behavioral manipulations (e.g., cueing attention) or neural perturbations (e.g., stimulation or inactivation) in a variety of multialternative tasks of perception, attention, and decision-making. © 2014 ARVO.

  9. Distinguishing bias from sensitivity effects in multialternative detection tasks

    PubMed Central

    Sridharan, Devarajan; Steinmetz, Nicholas A.; Moore, Tirin; Knudsen, Eric I.

    2014-01-01

    Studies investigating the neural bases of cognitive phenomena increasingly employ multialternative detection tasks that seek to measure the ability to detect a target stimulus or changes in some target feature (e.g., orientation or direction of motion) that could occur at one of many locations. In such tasks, it is essential to distinguish the behavioral and neural correlates of enhanced perceptual sensitivity from those of increased bias for a particular location or choice (choice bias). However, making such a distinction is not possible with established approaches. We present a new signal detection model that decouples the behavioral effects of choice bias from those of perceptual sensitivity in multialternative (change) detection tasks. By formulating the perceptual decision in a multidimensional decision space, our model quantifies the respective contributions of bias and sensitivity to multialternative behavioral choices. With a combination of analytical and numerical approaches, we demonstrate an optimal, one-to-one mapping between model parameters and choice probabilities even for tasks involving arbitrarily large numbers of alternatives. We validated the model with published data from two ternary choice experiments: a target-detection experiment and a length-discrimination experiment. The results of this validation provided novel insights into perceptual processes (sensory noise and competitive interactions) that can accurately and parsimoniously account for observers' behavior in each task. The model will find important application in identifying and interpreting the effects of behavioral manipulations (e.g., cueing attention) or neural perturbations (e.g., stimulation or inactivation) in a variety of multialternative tasks of perception, attention, and decision-making. PMID:25146574
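
    A Monte Carlo sketch of the kind of multidimensional decision model the two records above describe: each location has its own sensitivity d and criterion c, and the response goes to the location with the strongest supra-criterion evidence (or "no change" if none exceeds its criterion). The paper derives these choice probabilities analytically; simulation is used here purely for illustration.

        import numpy as np

        rng = np.random.default_rng(3)

        def choice_probs(d, c, n_trials=200_000):
            # Rows: true event location (last row = no change);
            # columns: response (last column = "no change").
            m = len(d)
            probs = np.zeros((m + 1, m + 1))
            for event in range(m + 1):
                psi = rng.standard_normal((n_trials, m))
                if event < m:
                    psi[:, event] += d[event]     # signal at one location
                ev = psi - np.asarray(c)          # bias enters via criteria
                best = ev.argmax(axis=1)
                resp = np.where(ev[np.arange(n_trials), best] > 0, best, m)
                probs[event] = np.bincount(resp, minlength=m + 1) / n_trials
            return probs

        # Equal sensitivity at both locations, but a lower criterion (stronger
        # bias) at location 0 shifts choices toward it with no sensitivity change.
        print(choice_probs(d=[2.0, 2.0], c=[0.5, 1.5]).round(3))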

  10. A model for combined targeting and tracking tasks in computer applications.

    PubMed

    Senanayake, Ransalu; Hoffmann, Errol R; Goonetilleke, Ravindra S

    2013-11-01

    Current models for targeted-tracking are discussed and shown to be inadequate as a means of understanding the combined task of tracking, as in Drury's paradigm, and having a final target to be aimed at, as in Fitts' paradigm. It is shown that the task has to be split into components that are, in general, performed sequentially and have a movement time component dependent on the difficulty of the individual component of the task. In some cases, the task time may be controlled by the Fitts task difficulty, and in others, it may be dominated by the Drury task difficulty. Based on an experiment that captured movement time in combinations of visually controlled and ballistic movements, a model for movement time in targeted-tracking was developed.

  11. Random-Effects Models for Meta-Analytic Structural Equation Modeling: Review, Issues, and Illustrations

    ERIC Educational Resources Information Center

    Cheung, Mike W.-L.; Cheung, Shu Fai

    2016-01-01

    Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…

  12. CQPSO scheduling algorithm for heterogeneous multi-core DAG task model

    NASA Astrophysics Data System (ADS)

    Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng

    2017-07-01

    Efficient task scheduling is critical for achieving high performance in a heterogeneous multi-core computing environment. This paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task priority scheduling list was built, and the processor with the minimum cumulative earliest finish time (EFT) was selected for the first task assignment. The task precedence relationships were satisfied and the total execution time of all tasks was minimized. The experimental results show that the proposed algorithm offers strong optimization ability, simplicity and feasibility, and fast convergence, and can be applied to task scheduling optimization in other heterogeneous and distributed environments.
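
    The EFT-based assignment step described here is essentially list scheduling; the sketch below shows that step on a toy four-task DAG with two processors and a uniform communication delay. The priority order is assumed precomputed, and the CQPSO search the paper wraps around this heuristic is not reproduced.

```python
# EFT-based list scheduling on a toy DAG (two processors, uniform
# communication delay). The DAG, costs, and priority list are invented.
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
pred = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
cost = {"A": [2, 3], "B": [4, 2], "C": [3, 3], "D": [2, 1]}  # per-processor times
comm = 1                       # delay when a predecessor ran on another processor

priority = ["A", "B", "C", "D"]          # assume a precomputed rank order
proc_free = [0.0, 0.0]
finish, where = {}, {}

for t in priority:
    best = None
    for p in range(len(proc_free)):
        # task is ready once all predecessors' results have arrived at p
        ready = max([finish[u] + (comm if where[u] != p else 0) for u in pred[t]],
                    default=0.0)
        eft = max(proc_free[p], ready) + cost[t][p]
        if best is None or eft < best[0]:
            best = (eft, p)
    finish[t], where[t] = best
    proc_free[best[1]] = best[0]

print(finish, where)   # makespan = finish time of the exit task D
```

    With these toy numbers the heuristic finishes the exit task D at time 7 on processor 1.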

  13. New analytic results for speciation times in neutral models.

    PubMed

    Gernhard, Tanja

    2008-05-01

    In this paper, we investigate the standard Yule model, and a recently studied model of speciation and extinction, the "critical branching process." We develop an analytic approach, as opposed to the common simulation approach, for calculating the speciation times in a reconstructed phylogenetic tree. Simple expressions for the density and the moments of the speciation times are obtained. Methods for dating a speciation event become valuable when no time scale is available for the reconstructed phylogenetic tree. A missing time scale could be due to supertree methods, morphological data, or molecular data which violates the molecular clock. Our analytic approach is particularly useful for the model with extinction, since simulations of birth-death processes conditioned on obtaining n extant species today are quite delicate. Further, simulations are very time consuming for large n under both models.
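
    For contrast with the paper's analytic results, here is the "common simulation approach" it mentions, in minimal form: a pure-birth (Yule) process with per-lineage rate lam, recording branching times until n species exist. The expected time to reach n species is the harmonic sum used in the check; lam and n are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulation sketch of the Yule (pure-birth) model: with k lineages the
# waiting time to the next speciation is exponential with rate lam * k.
def yule_speciation_times(n=10, lam=1.0):
    t, k, times = 0.0, 1, []
    while k < n:
        t += rng.exponential(1.0 / (lam * k))   # waiting time to next split
        times.append(t)
        k += 1
    return np.array(times)

samples = np.array([yule_speciation_times()[-1] for _ in range(20_000)])
analytic = sum(1.0 / k for k in range(1, 10))   # E[T] = sum_{k=1}^{n-1} 1/(lam k)
print(f"mean time to reach 10 species: {samples.mean():.3f} (analytic: {analytic:.3f})")
```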

  14. Analytical fitting model for rough-surface BRDF.

    PubMed

    Renhorn, Ingmar G E; Boreman, Glenn D

    2008-08-18

    A physics-based model is developed for rough surface BRDF, taking into account angles of incidence and scattering, effective index, surface autocovariance, and correlation length. Shadowing is introduced on surface correlation length and reflectance. Separate terms are included for surface scatter, bulk scatter and retroreflection. Using the FindFit function in Mathematica, the functional form is fitted to BRDF measurements over a wide range of incident angles. The model has fourteen fitting parameters; once these are fixed, the model accurately describes scattering data over two orders of magnitude in BRDF without further adjustment. The resulting analytical model is convenient for numerical computations.
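
    The fitting workflow (a parametric BRDF form fitted to measurements, done in the paper with Mathematica's FindFit and fourteen parameters) can be sketched with a deliberately tiny three-parameter stand-in model in SciPy; the functional form and synthetic data below are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy stand-in for the paper's fitting step: a 3-parameter BRDF
# (diffuse floor + Gaussian specular lobe) fitted to noisy synthetic data.
def toy_brdf(theta_s, kd, ks, width):
    return kd + ks * np.exp(-(theta_s / width) ** 2)

theta = np.linspace(-60, 60, 121)                 # scatter angle (degrees)
truth = toy_brdf(theta, 0.02, 1.0, 8.0)
data = truth * (1 + 0.05 * np.random.default_rng(1).standard_normal(theta.size))

params, _ = curve_fit(toy_brdf, theta, data, p0=[0.1, 0.5, 10.0])
print("fitted kd, ks, width:", np.round(params, 3))
```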

  15. Semi-analytical Model for Estimating Absorption Coefficients of Optically Active Constituents in Coastal Waters

    NASA Astrophysics Data System (ADS)

    Wang, D.; Cui, Y.

    2015-12-01

    The objectives of this paper are to validate the applicability of a multi-band quasi-analytical algorithm (QAA) for retrieving absorption coefficients of optically active constituents in turbid coastal waters, and to further improve the approach using a proposed semi-analytical model (SAA). Unlike the QAA model, in which ap(531) and ag(531) are derived from the empirical retrieval results of a(531) and a(551), the SAA model derives ap(531) and ag(531) semi-analytically. The two models are calibrated and evaluated against datasets taken from 19 independent cruises in the West Florida Shelf in 1999-2003, provided by SeaBASS. The results indicate that the SAA model produces superior performance to the QAA model in absorption retrieval. Using the SAA model to retrieve absorption coefficients of optically active constituents from the West Florida Shelf decreases the random uncertainty of estimation by more than 23.05% relative to the QAA model. This study demonstrates the potential of the SAA model for estimating absorption coefficients of optically active constituents even in turbid coastal waters. Keywords: Remote sensing; Coastal Water; Absorption Coefficient; Semi-analytical Model

  16. Experimentally validated mathematical model of analyte uptake by permeation passive samplers.

    PubMed

    Salim, F; Ioannidis, M; Górecki, T

    2017-11-15

    A mathematical model describing the sampling process in a permeation-based passive sampler was developed and evaluated numerically. The model was applied to the Waterloo Membrane Sampler (WMS), which employs a polydimethylsiloxane (PDMS) membrane as a permeation barrier, and an adsorbent as a receiving phase. Samplers of this kind are used for sampling volatile organic compounds (VOC) from air and soil gas. The model predicts the spatio-temporal variation of sorbed and free analyte concentrations within the sampler components (membrane, sorbent bed and dead volume), from which the uptake rate throughout the sampling process can be determined. A gradual decline in the uptake rate during the sampling process is predicted, which is more pronounced when sampling higher concentrations. Decline of the uptake rate can be attributed to diminishing analyte concentration gradient within the membrane, which results from resistance to mass transfer and the development of analyte concentration gradients within the sorbent bed. The effects of changing the sampler component dimensions on the rate of this decline in the uptake rate can be predicted from the model. Performance of the model was evaluated experimentally for sampling of toluene vapors under controlled conditions. The model predictions proved close to the experimental values. The model provides a valuable tool to predict changes in the uptake rate during sampling, to assign suitable exposure times at different analyte concentration levels, and to optimize the dimensions of the sampler in a manner that minimizes these changes during the sampling period.
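
    A minimal finite-difference sketch of the mechanism described above, under strong simplifying assumptions (linear loading of the sorbent, fixed outer concentration, invented constants): as the sorbent fills, the inner-face concentration rises, the membrane gradient flattens, and the uptake rate declines. This is not the authors' full model.

```python
import numpy as np

# Explicit 1D diffusion through a membrane into a finite-capacity sorbent.
# D, L, capacity and C0 are illustrative numbers, not WMS parameters.
D, L, n = 1e-10, 1e-4, 50            # diffusivity (m2/s), thickness (m), nodes
dx = L / (n - 1)
dt = 0.4 * dx**2 / D                 # stable explicit time step
C0, capacity = 1.0, 5e-3             # outer conc. (mol/m3), sorbent cap. (mol/m2)

C = np.zeros(n)                      # concentration profile in the membrane
loaded = 0.0                         # mass accumulated on the sorbent (mol/m2)
for step in range(400_000):
    C[0] = C0
    C[-1] = C0 * min(loaded / capacity, 1.0)   # inner face rises as bed fills
    C[1:-1] += D * dt / dx**2 * (C[2:] - 2 * C[1:-1] + C[:-2])
    flux = D * (C[-2] - C[-1]) / dx            # uptake rate (mol/m2/s)
    loaded += flux * dt
    if step % 100_000 == 0:
        print(f"t={step*dt:8.0f} s  uptake rate={flux:.3e}")
```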

  17. The Purpose of Analytical Models from the Perspective of a Data Provider.

    ERIC Educational Resources Information Center

    Sheehan, Bernard S.

    The purpose of analytical models is to reduce complex institutional management problems and situations to simpler proportions and compressed time frames so that human skills of decision makers can be brought to bear most effectively. Also, modeling cultivates the art of management by forcing explicit and analytical consideration of important…

  18. Learning and inference using complex generative models in a spatial localization task.

    PubMed

    Bejjanki, Vikranth R; Knill, David C; Aslin, Richard N

    2016-01-01

    A large body of research has established that, under relatively simple task conditions, human observers integrate uncertain sensory information with learned prior knowledge in an approximately Bayes-optimal manner. However, in many natural tasks, observers must perform this sensory-plus-prior integration when the underlying generative model of the environment consists of multiple causes. Here we ask if the Bayes-optimal integration seen with simple tasks also applies to such natural tasks when the generative model is more complex, or whether observers rely instead on a less efficient set of heuristics that approximate ideal performance. Participants localized a "hidden" target whose position on a touch screen was sampled from a location-contingent bimodal generative model with different variances around each mode. Over repeated exposure to this task, participants learned the a priori locations of the target (i.e., the bimodal generative model), and integrated this learned knowledge with uncertain sensory information on a trial-by-trial basis in a manner consistent with the predictions of Bayes-optimal behavior. In particular, participants rapidly learned the locations of the two modes of the generative model, but the relative variances of the modes were learned much more slowly. Taken together, our results suggest that human performance in a more complex localization task, which requires the integration of sensory information with learned knowledge of a bimodal generative model, is consistent with the predictions of Bayes-optimal behavior, but involves a much longer time-course than in simpler tasks.
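
    The Bayes-optimal computation implied by this task is compact enough to write down: a Gaussian-mixture prior over locations combined with a Gaussian sensory likelihood gives a posterior mean that is a responsibility-weighted blend of per-mode conjugate updates. The mixture parameters and noise level below are illustrative, not the experiment's values.

```python
import numpy as np
from scipy.stats import norm

# Bimodal location prior (Gaussian mixture) + Gaussian sensory likelihood.
mu = np.array([-5.0, 5.0]); sd = np.array([1.0, 3.0]); w = np.array([0.5, 0.5])
sigma_s = 2.0                              # sensory noise (illustrative)

def posterior_mean(x_obs):
    # per-mode conjugate update
    post_var = 1.0 / (1.0 / sd**2 + 1.0 / sigma_s**2)
    post_mu = post_var * (mu / sd**2 + x_obs / sigma_s**2)
    # responsibilities: which mode the observation likely came from
    resp = w * norm.pdf(x_obs, mu, np.sqrt(sd**2 + sigma_s**2))
    resp /= resp.sum()
    return float(resp @ post_mu)

for x in (-6.0, 0.0, 4.0):
    print(f"observed {x:+.1f} -> optimal estimate {posterior_mean(x):+.2f}")
```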

  19. An analytical poroelastic model for ultrasound elastography imaging of tumors

    NASA Astrophysics Data System (ADS)

    Tauhidul Islam, Md; Chaudhry, Anuj; Unnikrishnan, Ginu; Reddy, J. N.; Righetti, Raffaella

    2018-01-01

    The mechanical behavior of biological tissues has been studied using a number of mechanical models. Due to their relatively high fluid content and mobility, many biological tissues have been modeled as poroelastic materials. Diseases such as cancers are known to alter the poroelastic response of a tissue. Tissue poroelastic properties such as compressibility, interstitial permeability and fluid pressure also play a key role in the assessment of cancer treatments and for improved therapies. At the present time, however, a limited number of poroelastic models for soft tissues are retrievable in the literature, and the ones available are not directly applicable to tumors as they typically refer to uniform tissues. In this paper, we report an analytical poroelastic model for a non-uniform tissue under stress relaxation. Displacement, strain and fluid pressure fields in a cylindrical poroelastic sample containing a cylindrical inclusion during stress relaxation are computed. Finite element simulations are then used to validate the proposed theoretical model. Statistical analysis demonstrates that the proposed analytical model matches the finite element results with less than 0.5% error. The availability of the analytical model and solutions presented in this paper may be useful to estimate diagnostically relevant poroelastic parameters such as interstitial permeability and fluid pressure, and, in general, for a better interpretation of clinically-relevant ultrasound elastography results.

  20. Anisotropic Multishell Analytical Modeling of an Intervertebral Disk Subjected to Axial Compression.

    PubMed

    Demers, Sébastien; Nadeau, Sylvie; Bouzid, Abdel-Hakim

    2016-04-01

    Studies on intervertebral disk (IVD) response to various loads and postures are essential to understand the disk's mechanical functions and to suggest preventive and corrective actions in the workplace. The experimental and finite-element (FE) approaches are well-suited for these studies, but validating their findings is difficult, partly due to the lack of alternative methods. Analytical modeling could allow methodological triangulation and help validation of FE models. This paper presents an analytical method based on thin-shell, beam-on-elastic-foundation and composite materials theories to evaluate the stresses in the anulus fibrosus (AF) of an axisymmetric disk composed of multiple thin lamellae. Large deformations of the soft tissues are accounted for using an iterative method, and the anisotropic material properties are derived from a published biaxial experiment. The results are compared to those obtained by FE modeling. The results demonstrate the capability of the analytical model to evaluate the stresses at any location of the simplified AF. It also demonstrates that anisotropy reduces stresses in the lamellae. This novel model is a preliminary step in developing valuable analytical models of IVDs, and represents a distinctive groundwork that is able to sustain future refinements. This paper suggests important features that may be included to improve model realism.

  1. Pathways to Identity: Aiding Law Enforcement in Identification Tasks With Visual Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruce, Joseph R.; Scholtz, Jean; Hodges, Duncan

    The nature of identity has changed dramatically in recent years and has grown in complexity. Identities are defined in multiple domains: biological and psychological elements strongly contribute, but biographical and cyber elements are also necessary to complete the picture. Law enforcement is beginning to adjust to these changes, recognizing identity's importance in criminal justice. The SuperIdentity project seeks to aid law enforcement officials in their identification tasks through research of techniques for discovering identity traits, generation of statistical models of identity, and analysis of identity traits through visualization. We present use cases compiled through user interviews in multiple fields, including law enforcement, as well as the modeling and visualization tools designed to aid in those use cases.

  2. Problem-based learning on quantitative analytical chemistry course

    NASA Astrophysics Data System (ADS)

    Fitri, Noor

    2017-12-01

    This research applies the problem-based learning method in a quantitative analytical chemistry course, the so-called "Analytical Chemistry II" course, especially as related to essential oil analysis. The learning outcomes of this course include aspects of understanding of lectures, the skills of applying course materials, and the ability to identify, formulate and solve chemical analysis problems. The role of study groups is quite important in improving students' learning ability and in completing independent tasks and group tasks. Thus, students are not only aware of the basic concepts of Analytical Chemistry II, but are also able to understand and apply analytical concepts that have been studied to solve given analytical chemistry problems, and have the attitude and ability to work together to solve the problems. Based on the learning outcomes, it can be concluded that the problem-based learning method in the Analytical Chemistry II course has been proven to improve students' knowledge, skills, abilities and attitudes. Students are not only skilled at solving problems in analytical chemistry, especially in essential oil analysis in accordance with the local genius of the Chemistry Department, Universitas Islam Indonesia, but are also skilled in working with computer programs and able to understand materials and problems in English.

  3. Analytical model of the optical vortex microscope.

    PubMed

    Płocinniczak, Łukasz; Popiołek-Masajada, Agnieszka; Masajada, Jan; Szatkowski, Mateusz

    2016-04-20

    This paper presents an analytical model of the optical vortex scanning microscope. In this microscope the Gaussian beam with an embedded optical vortex is focused into the sample plane. Additionally, the optical vortex can be moved inside the beam, which allows fine scanning of the sample. We provide an analytical solution for the whole path of the beam in the system (within the paraxial approximation), from the vortex lens to the observation plane situated on the CCD camera. The calculations are performed step by step from one optical element to the next. We show that at each step the expression for the light's complex amplitude has the same form, with only four coefficients modified. We also derive a simple expression for the vortex trajectory for small vortex displacements.

  4. The generation of criteria for selecting analytical tools for landscape management

    Treesearch

    Marilyn Duffey-Armstrong

    1979-01-01

    This paper presents an approach to generating criteria for selecting the analytical tools used to assess visual resources for various landscape management tasks. The approach begins by establishing the overall parameters for the visual assessment task, and follows by defining the primary requirements of the various sets of analytical tools to be used. Finally,...

  5. Cognitive task analysis: harmonizing tasks to human capacities.

    PubMed

    Neerincx, M A; Griffioen, E

    1996-04-01

    This paper presents the development of a cognitive task analysis that assesses the task load of jobs and provides indicators for the redesign of jobs. General principles of human task performance were selected and subsequently integrated into current task modelling techniques. The resulting cognitive task analysis centres around four aspects of task load: the number of actions in a period, the ratio between knowledge- and rule-based actions, lengthy uninterrupted actions, and momentary overloading. The method consists of three stages: (1) construction of a hierarchical task model, (2) a time-line analysis and task load assessment, and (3), if necessary, adjustment of the task model. An application of the cognitive task analysis in railway traffic control showed its benefits over the 'old' task load analysis of the Netherlands Railways. It provided a provisional standard for traffic control jobs, conveyed two load risks -- momentary overloading and underloading -- and resulted in proposals to satisfy the standard and to diminish the two load risks.

  6. Heavy vehicle driver workload assessment. Task 1, task analysis data and protocols review

    DOT National Transportation Integrated Search

    This report contains a review of available task analytic data and protocols pertinent to heavy vehicle operation and determination of the availability and relevance of such data to heavy vehicle driver workload assessment. Additionally, a preliminary...

  7. Investigation of the short argon arc with hot anode. II. Analytical model

    NASA Astrophysics Data System (ADS)

    Khrabry, A.; Kaganovich, I. D.; Nemchinsky, V.; Khodak, A.

    2018-01-01

    A short atmospheric pressure argon arc is studied numerically and analytically. In a short arc with an inter-electrode gap of several millimeters, non-equilibrium effects in plasma play an important role in the operation of the arc. High anode temperature leads to electron emission and intensive radiation from its surface. A complete, self-consistent analytical model of the whole arc, comprising models for the near-electrode regions, the arc column, and heat transfer in the cylindrical electrodes, was developed. The model predicts the width of the non-equilibrium layers and the arc column, voltages and plasma profiles in these regions, and heat and ion fluxes to the electrodes. Parametric studies of the arc have been performed for a range of arc current densities, inter-electrode gap widths, and gas pressures. The model was validated against experimental data and verified by comparison with numerical solution. Good agreement between the analytical model and simulations and reasonable agreement with experimental data were obtained.

  8. Investigation of the short argon arc with hot anode. II. Analytical model

    DOE PAGES

    Khrabry, A.; Kaganovich, I. D.; Nemchinsky, V.; ...

    2018-01-22

    A short atmospheric pressure argon arc is studied numerically and analytically. In a short arc with an inter-electrode gap of several millimeters, non-equilibrium effects in plasma play an important role in the operation of the arc. High anode temperature leads to electron emission and intensive radiation from its surface. A complete, self-consistent analytical model of the whole arc, comprising models for the near-electrode regions, the arc column, and heat transfer in the cylindrical electrodes, was developed. The model predicts the width of the non-equilibrium layers and the arc column, voltages and plasma profiles in these regions, and heat and ion fluxes to the electrodes. Parametric studies of the arc have been performed for a range of arc current densities, inter-electrode gap widths, and gas pressures. The model was validated against experimental data and verified by comparison with numerical solution. In conclusion, good agreement between the analytical model and simulations and reasonable agreement with experimental data were obtained.

  9. Investigation of the short argon arc with hot anode. II. Analytical model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khrabry, A.; Kaganovich, I. D.; Nemchinsky, V.

    A short atmospheric pressure argon arc is studied numerically and analytically. In a short arc with an inter-electrode gap of several millimeters, non-equilibrium effects in plasma play an important role in the operation of the arc. High anode temperature leads to electron emission and intensive radiation from its surface. A complete, self-consistent analytical model of the whole arc, comprising models for the near-electrode regions, the arc column, and heat transfer in the cylindrical electrodes, was developed. The model predicts the width of the non-equilibrium layers and the arc column, voltages and plasma profiles in these regions, and heat and ion fluxes to the electrodes. Parametric studies of the arc have been performed for a range of arc current densities, inter-electrode gap widths, and gas pressures. The model was validated against experimental data and verified by comparison with numerical solution. In conclusion, good agreement between the analytical model and simulations and reasonable agreement with experimental data were obtained.

  10. Does a peer model's task proficiency influence children's solution choice and innovation?

    PubMed

    Wood, Lara A; Kendal, Rachel L; Flynn, Emma G

    2015-11-01

    The current study investigated whether 4- to 6-year-old children's task solution choice was influenced by the past proficiency of familiar peer models and the children's personal prior task experience. Peer past proficiency was established through behavioral assessments of interactions with novel tasks alongside peer and teacher predictions of each child's proficiency. Based on these assessments, one peer model with high past proficiency and one age-, sex-, dominance-, and popularity-matched peer model with lower past proficiency were trained to remove a capsule using alternative solutions from a three-solution artificial fruit task. Video demonstrations of the models were shown to children after they had either a personal successful interaction or no interaction with the task. In general, there was not a strong bias toward the high past-proficiency model, perhaps due to a motivation to acquire multiple methods and the salience of other transmission biases. However, there was some evidence of a model-based past-proficiency bias; when the high past-proficiency peer matched the participants' original solution, there was increased use of that solution, whereas if the high past-proficiency peer demonstrated an alternative solution, there was increased use of the alternative social solution and novel solutions. Thus, model proficiency influenced innovation. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Individual differences in emotion processing: how similar are diffusion model parameters across tasks?

    PubMed

    Mueller, Christina J; White, Corey N; Kuchinke, Lars

    2017-11-27

    The goal of this study was to replicate findings of diffusion model parameters capturing emotion effects in a lexical decision task and to investigate whether these findings extend to other tasks of implicit emotion processing. Additionally, we were interested in the stability of diffusion model parameters across emotional stimuli and tasks for individual subjects. Responses to words in a lexical decision task were compared with responses to faces in a gender categorization task for stimuli of the emotion categories happy, neutral and fear. Main effects of emotion as well as stability of emerging response style patterns as evident in diffusion model parameters across these tasks were analyzed. Based on earlier findings, drift rates were assumed to be more similar in response to stimuli of the same emotion category than to stimuli of a different emotion category. Results showed that emotion effects differed between the tasks, with a processing advantage for happy followed by neutral and fear-related words in the lexical decision task and a processing advantage for neutral followed by happy and fearful faces in the gender categorization task. Both emotion effects were captured in the estimated drift rate parameters and, in the case of the lexical decision task, also in the non-decision time parameters. A principal component analysis showed that, contrary to our hypothesis, drift rates were more similar within a specific task context than within a specific emotion category. Individual response patterns of subjects across tasks were evident in significant correlations of diffusion model parameters, including response styles, non-decision times and information accumulation.
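
    For readers unfamiliar with the model class, a bare-bones diffusion model simulation is sketched below: evidence drifts at rate v toward one of two boundaries, and RT is the first-passage time plus a non-decision time t0. Assigning a higher drift rate to happy than to fearful stimuli mirrors the reported processing advantage but is an illustrative assumption, not the paper's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ddm(v, a=1.0, t0=0.3, dt=0.001, trials=2000):
    """Mean RT and accuracy for drift rate v between boundaries 0 and a."""
    sdt = np.sqrt(dt)
    rts, hits = [], []
    for _ in range(trials):
        x, t = a / 2.0, 0.0                # unbiased starting point
        while 0.0 < x < a:
            x += v * dt + rng.normal(0.0, sdt)   # drift + diffusion noise
            t += dt
        rts.append(t + t0)
        hits.append(x >= a)
    return float(np.mean(rts)), float(np.mean(hits))

for label, v in [("happy", 1.2), ("neutral", 0.9), ("fear", 0.6)]:
    rt, acc = simulate_ddm(v)
    print(f"{label:>7}: mean RT {rt:.3f} s, accuracy {acc:.2f}")
```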

  12. Analytic uncertainty and sensitivity analysis of models with input correlations

    NASA Astrophysics Data System (ADS)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
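
    The core point, that input correlations feed directly into output uncertainty, can be shown with a first-order (delta-method) calculation, Var(y) ≈ g'Σg'ᵀ; the toy response function and covariance numbers below are illustrative and unrelated to the paper's HIV model.

```python
import numpy as np

# First-order uncertainty propagation for y = g(x1, x2) with correlated
# inputs: Var(y) ≈ grad(g)^T · Cov · grad(g). Toy model, invented numbers.
def g(x):
    return x[0] ** 2 + 3.0 * x[0] * x[1]

x0 = np.array([1.0, 2.0])                              # nominal input values
grad = np.array([2 * x0[0] + 3 * x0[1], 3 * x0[0]])    # analytic gradient at x0

sigma = np.array([0.1, 0.2])
for rho in (0.0, 0.8):
    cov = np.array([[sigma[0]**2, rho * sigma[0] * sigma[1]],
                    [rho * sigma[0] * sigma[1], sigma[1]**2]])
    var_y = grad @ cov @ grad
    print(f"rho={rho:.1f}: Var(y) = {var_y:.4f}")
```

    With these numbers the variance grows from 1.00 (rho = 0) to about 1.77 (rho = 0.8), entirely through the off-diagonal covariance terms.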

  13. A cognitive task analysis of a visual analytic workflow: Exploring molecular interaction networks in systems biology.

    PubMed

    Mirel, Barbara; Eichinger, Felix; Keller, Benjamin J; Kretzler, Matthias

    2011-03-21

    Bioinformatics visualization tools are often not robust enough to support biomedical specialists’ complex exploratory analyses. Tools need to accommodate the workflows that scientists actually perform for specific translational research questions. To understand and model one of these workflows, we conducted a case-based, cognitive task analysis of a biomedical specialist’s exploratory workflow for the question: What functional interactions among gene products of high throughput expression data suggest previously unknown mechanisms of a disease? From our cognitive task analysis four complementary representations of the targeted workflow were developed. They include: usage scenarios, flow diagrams, a cognitive task taxonomy, and a mapping between cognitive tasks and user-centered visualization requirements. The representations capture the flows of cognitive tasks that led a biomedical specialist to inferences critical to hypothesizing. We created representations at levels of detail that could strategically guide visualization development, and we confirmed this by making a trial prototype based on user requirements for a small portion of the workflow. Our results imply that visualizations should make available to scientific users “bundles of features” consonant with the compositional cognitive tasks purposefully enacted at specific points in the workflow. We also highlight certain aspects of visualizations that: (a) need more built-in flexibility; (b) are critical for negotiating meaning; and (c) are necessary for essential metacognitive support.

  14. Computational models of the Posner simple and choice reaction time tasks

    PubMed Central

    Feher da Silva, Carolina; Baldo, Marcus V. C.

    2015-01-01

    The landmark experiments by Posner in the late 1970s have shown that reaction time (RT) is faster when the stimulus appears in an expected location, as indicated by a cue; since then, the so-called Posner task has been considered a “gold standard” test of spatial attention. It is thus fundamental to understand the neural mechanisms involved in performing it. To this end, we have developed a Bayesian detection system and small integrate-and-fire neural networks, which modeled sensory and motor circuits, respectively, and optimized them to perform the Posner task under different cue type proportions and noise levels. In doing so, main findings of experimental research on RT were replicated: the relative frequency effect, suboptimal RTs and significant error rates due to noise and invalid cues, slower RT for choice RT tasks than for simple RT tasks, fastest RTs for valid cues and slowest RTs for invalid cues. Analysis of the optimized systems revealed that the employed mechanisms were consistent with related findings in neurophysiology. Our models predict that (1) the results of a Posner task may be affected by the relative frequency of valid and neutral trials, (2) in simple RT tasks, inputs from multiple locations are added together to compose a stronger signal, and (3) the cue affects motor circuits more strongly in choice RT tasks than in simple RT tasks. In discussing the computational demands of the Posner task, attention has often been described as a filter that protects the nervous system, whose capacity is limited, from information overload. Our models, however, reveal that the main problems that must be overcome to perform the Posner task effectively are distinguishing signal from external noise and selecting the appropriate response in the presence of internal noise. PMID:26190997

  15. Fast analytical model of MZI micro-opto-mechanical pressure sensor

    NASA Astrophysics Data System (ADS)

    Rochus, V.; Jansen, R.; Goyvaerts, J.; Neutens, P.; O’Callaghan, J.; Rottenberg, X.

    2018-06-01

    This paper presents a fast analytical procedure in order to design a micro-opto-mechanical pressure sensor (MOMPS) taking into account the mechanical nonlinearity and the optical losses. A realistic model of the photonic MZI is proposed, strongly coupled to a nonlinear mechanical model of the membrane. Based on the membrane dimensions, the residual stress, the position of the waveguide, the optical wavelength and the phase variation due to the opto-mechanical coupling, we derive an analytical model which allows us to predict the response of the total system. The effect of the nonlinearity and the losses on the total performance are carefully studied and measurements on fabricated devices are used to validate the model. Finally, a design procedure is proposed in order to realize fast design of this new type of pressure sensor.

  16. Decision-analytic modeling studies: An overview for clinicians using multiple myeloma as an example.

    PubMed

    Rochau, U; Jahn, B; Qerimi, V; Burger, E A; Kurzthaler, C; Kluibenschaedl, M; Willenbacher, E; Gastl, G; Willenbacher, W; Siebert, U

    2015-05-01

    The purpose of this study was to provide a clinician-friendly overview of decision-analytic models evaluating different treatment strategies for multiple myeloma (MM). We performed a systematic literature search to identify studies evaluating MM treatment strategies using mathematical decision-analytic models. We included studies that were published as full-text articles in English, assessed relevant clinical endpoints, and summarized methodological characteristics (e.g., modeling approaches, simulation techniques, health outcomes, perspectives). Eleven decision-analytic modeling studies met our inclusion criteria. Five different modeling approaches were adopted: decision-tree modeling, Markov state-transition modeling, discrete event simulation, partitioned-survival analysis and area-under-the-curve modeling. Health outcomes included survival, number-needed-to-treat, life expectancy, and quality-adjusted life years. Evaluated treatment strategies included novel agent-based combination therapies, stem cell transplantation and supportive measures. Overall, our review provides a comprehensive summary of modeling studies assessing treatment of MM and highlights decision-analytic modeling as an important tool for health policy decision making. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
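
    As a concrete example of one approach named in the review, here is a minimal Markov state-transition cohort model with three states and annual cycles; the transition probabilities and utilities are hypothetical and do not describe any actual MM therapy.

```python
import numpy as np

# Three-state Markov cohort model (progression-free, progressed, dead) with
# annual cycles. All numbers are invented for illustration.
P = np.array([[0.80, 0.15, 0.05],    # from progression-free
              [0.00, 0.75, 0.25],    # from progressed
              [0.00, 0.00, 1.00]])   # dead is absorbing
utility = np.array([0.80, 0.55, 0.0])

cohort = np.array([1.0, 0.0, 0.0])
life_years = qalys = 0.0
for year in range(30):               # 30 annual cycles
    life_years += cohort[:2].sum()   # proportion alive this cycle
    qalys += cohort @ utility
    cohort = cohort @ P

print(f"life expectancy = {life_years:.2f} y, QALYs = {qalys:.2f}")
```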

  17. Analytical model for the radio-frequency sheath

    NASA Astrophysics Data System (ADS)

    Czarnetzki, Uwe

    2013-12-01

    A simple analytical model for the planar radio-frequency (rf) sheath in capacitive discharges is developed that is based on the assumptions of a step profile for the electron front, charge exchange collisions with constant cross sections, negligible ionization within the sheath, and negligible ion dynamics. The continuity, momentum conservation, and Poisson equations are combined in a single integro-differential equation for the square of the ion drift velocity, the so called sheath equation. Starting from the kinetic Boltzmann equation, special attention is paid to the derivation and the validity of the approximate fluid equation for momentum balance. The integrals in the sheath equation appear in the screening function which considers the relative contribution of the temporal mean of the electron density to the space charge in the sheath. It is shown that the screening function is quite insensitive to variations of the effective sheath parameters. The two parameters defining the solution are the ratios of the maximum sheath extension to the ion mean free path and the Debye length, respectively. A simple general analytic expression for the screening function is introduced. By means of this expression approximate analytical solutions are obtained for the collisionless as well as the highly collisional case that compare well with the exact numerical solution. A simple transition formula allows application to all degrees of collisionality. In addition, the solutions are used to calculate all static and dynamic quantities of the sheath, e.g., the ion density, fields, and currents. Further, the rf Child-Langmuir laws for the collisionless as well as the collisional case are derived. An essential part of the model is the a priori knowledge of the wave form of the sheath voltage. This wave form is derived on the basis of a cubic charge-voltage relation for individual sheaths, considering both sheaths and the self-consistent self-bias in a discharge with arbitrary

  18. Analytical model for the radio-frequency sheath.

    PubMed

    Czarnetzki, Uwe

    2013-12-01

    A simple analytical model for the planar radio-frequency (rf) sheath in capacitive discharges is developed that is based on the assumptions of a step profile for the electron front, charge exchange collisions with constant cross sections, negligible ionization within the sheath, and negligible ion dynamics. The continuity, momentum conservation, and Poisson equations are combined in a single integro-differential equation for the square of the ion drift velocity, the so called sheath equation. Starting from the kinetic Boltzmann equation, special attention is paid to the derivation and the validity of the approximate fluid equation for momentum balance. The integrals in the sheath equation appear in the screening function which considers the relative contribution of the temporal mean of the electron density to the space charge in the sheath. It is shown that the screening function is quite insensitive to variations of the effective sheath parameters. The two parameters defining the solution are the ratios of the maximum sheath extension to the ion mean free path and the Debye length, respectively. A simple general analytic expression for the screening function is introduced. By means of this expression approximate analytical solutions are obtained for the collisionless as well as the highly collisional case that compare well with the exact numerical solution. A simple transition formula allows application to all degrees of collisionality. In addition, the solutions are used to calculate all static and dynamic quantities of the sheath, e.g., the ion density, fields, and currents. Further, the rf Child-Langmuir laws for the collisionless as well as the collisional case are derived. An essential part of the model is the a priori knowledge of the wave form of the sheath voltage. This wave form is derived on the basis of a cubic charge-voltage relation for individual sheaths, considering both sheaths and the self-consistent self-bias in a discharge with arbitrary

  19. Pitting intuitive and analytical thinking against each other: the case of transitivity.

    PubMed

    Rusou, Zohar; Zakay, Dan; Usher, Marius

    2013-06-01

    Identifying which thinking mode, intuitive or analytical, yields better decisions has been a major subject of inquiry by decision-making researchers. Yet studies show contradictory results. One possibility is that the ambiguity is due to the variability in experimental conditions across studies. Our hypothesis is that decision quality depends critically on the level of compatibility between the thinking mode employed in the decision and the nature of the decision-making task. In two experiments, we pitted intuition and analytical thinking against each other on tasks that were either mainly intuitive or mainly analytical. Thinking modes, as well as task characteristics, were manipulated in a factorial design, with choice transitivity as the dependent measure. Results showed higher choice consistency (transitivity) when thinking mode and the characteristics of the decision task were compatible.

  20. Force 2025 and Beyond Strategic Force Design Analytic Model

    DTIC Science & Technology

    2017-01-12

    Figure 1 depicts the core ideas of our force design model; Figure 2 shows an overview of our methodology. ...the F2025B Force Design Analytic Model research conducted by TRAC-MTRY and the Naval Postgraduate School. Our research develops a methodology for ...designs. We describe a data development methodology that characterizes the data required to construct a force design model using our approach.

  1. ICA model order selection of task co-activation networks.

    PubMed

    Ray, Kimberly L; McKay, D Reese; Fox, Peter M; Riedel, Michael C; Uecker, Angela M; Beckmann, Christian F; Smith, Stephen M; Fox, Peter T; Laird, Angela R

    2013-01-01

    Independent component analysis (ICA) has become a widely used method for extracting functional networks in the brain during rest and task. Historically, preferred ICA dimensionality has widely varied within the neuroimaging community, but typically varies between 20 and 100 components. This can be problematic when comparing results across multiple studies because of the impact ICA dimensionality has on the topology of its resultant components. Recent studies have demonstrated that ICA can be applied to peak activation coordinates archived in a large neuroimaging database (i.e., BrainMap Database) to yield whole-brain task-based co-activation networks. A strength of applying ICA to BrainMap data is that the vast amount of metadata in BrainMap can be used to quantitatively assess tasks and cognitive processes contributing to each component. In this study, we investigated the effect of model order on the distribution of functional properties across networks as a method for identifying the most informative decompositions of BrainMap-based ICA components. Our findings suggest dimensionality of 20 for low model order ICA to examine large-scale brain networks, and dimensionality of 70 to provide insight into how large-scale networks fractionate into sub-networks. We also provide a functional and organizational assessment of visual, motor, emotion, and interoceptive task co-activation networks as they fractionate from low to high model-orders.
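
    The model-order question can be reproduced in miniature with scikit-learn's FastICA on synthetic mixtures: the same data decomposed at different dimensionalities yields differently fractionated components. The toy signals below stand in for the BrainMap coordinate data actually used in the study.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(7 * t),                 # three ground-truth signals
                np.sign(np.sin(3 * t)),
                rng.laplace(size=t.size)]
X = sources @ rng.normal(size=(3, 25))         # mix into 25 observed channels
X += 0.01 * rng.normal(size=X.shape)           # sensor noise

for order in (2, 3, 10):
    comps = FastICA(n_components=order, random_state=0,
                    max_iter=1000).fit_transform(X)
    print(f"model order {order}: recovered component matrix {comps.shape}")
```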

  2. ICA model order selection of task co-activation networks

    PubMed Central

    Ray, Kimberly L.; McKay, D. Reese; Fox, Peter M.; Riedel, Michael C.; Uecker, Angela M.; Beckmann, Christian F.; Smith, Stephen M.; Fox, Peter T.; Laird, Angela R.

    2013-01-01

    Independent component analysis (ICA) has become a widely used method for extracting functional networks in the brain during rest and task. Historically, preferred ICA dimensionality has widely varied within the neuroimaging community, but typically varies between 20 and 100 components. This can be problematic when comparing results across multiple studies because of the impact ICA dimensionality has on the topology of its resultant components. Recent studies have demonstrated that ICA can be applied to peak activation coordinates archived in a large neuroimaging database (i.e., BrainMap Database) to yield whole-brain task-based co-activation networks. A strength of applying ICA to BrainMap data is that the vast amount of metadata in BrainMap can be used to quantitatively assess tasks and cognitive processes contributing to each component. In this study, we investigated the effect of model order on the distribution of functional properties across networks as a method for identifying the most informative decompositions of BrainMap-based ICA components. Our findings suggest dimensionality of 20 for low model order ICA to examine large-scale brain networks, and dimensionality of 70 to provide insight into how large-scale networks fractionate into sub-networks. We also provide a functional and organizational assessment of visual, motor, emotion, and interoceptive task co-activation networks as they fractionate from low to high model-orders. PMID:24339802

  3. Technical, analytical and computer support

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The development of a rigorous mathematical model for the design and performance analysis of cylindrical silicon-germanium thermoelectric generators is reported; the model consists of two parts, a steady-state (static) part and a transient (dynamic) part. The material study task involves the definition and implementation of a material study that aims to experimentally characterize the long-term behavior of the thermoelectric properties of silicon-germanium alloys as a function of temperature. Analytical and experimental efforts are aimed at determining the sublimation characteristics of silicon-germanium alloys and studying sublimation effects on RTG performance. Studies are also performed on a variety of specific topics in thermoelectric energy conversion.

  4. A flexible influence of affective feelings on creative and analytic performance.

    PubMed

    Huntsinger, Jeffrey R; Ray, Cara

    2016-09-01

    Considerable research shows that positive affect improves performance on creative tasks and negative affect improves performance on analytic tasks. The present research entertained the idea that affective feelings have flexible, rather than fixed, effects on cognitive performance. Consistent with the idea that positive and negative affect signal the value of accessible processing inclinations, the influence of affective feelings on performance on analytic or creative tasks was found to be flexibly responsive to the relative accessibility of different styles of processing (i.e., heuristic vs. systematic, global vs. local). When a global processing orientation was accessible happy participants generated more creative uses for a brick (Experiment 1), successfully solved more remote associates and insight problems (Experiment 2) and displayed broader categorization (Experiment 3) than those in sad moods. When a local processing orientation was accessible this pattern reversed. When a heuristic processing style was accessible happy participants were more likely to commit the conjunction fallacy (Experiment 3) and showed less pronounced anchoring effects (Experiment 4) than sad participants. When a systematic processing style was accessible this pattern reversed. Implications of these results for relevant affect-cognition models are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  5. AN ANALYTIC MODEL OF DUSTY, STRATIFIED, SPHERICAL H ii REGIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodríguez-Ramírez, J. C.; Raga, A. C.; Lora, V.

    2016-12-20

    We study analytically the effect of radiation pressure (associated with photoionization processes and with dust absorption) on spherical, hydrostatic H ii regions. We consider two basic equations, one for the hydrostatic balance between the radiation-pressure components and the gas pressure, and another for the balance among the recombination rate, the dust absorption, and the ionizing photon rate. Based on appropriate mathematical approximations, we find a simple analytic solution for the density stratification of the nebula, which is defined by specifying the radius of the external boundary, the cross section of dust absorption, and the luminosity of the central star. We compare the analytic solution with numerical integrations of the model equations of Draine, and find a wide range of the physical parameters for which the analytic solution is accurate.

  6. Analytical Finite Element Simulation Model for Structural Crashworthiness Prediction

    DOT National Transportation Integrated Search

    1974-02-01

    The analytical development and appropriate derivations are presented for a simulation model of vehicle crashworthiness prediction. Incremental equations governing the nonlinear elasto-plastic dynamic response of three-dimensional frame structures are...

  7. "Analytic continuation" of = 2 minimal model

    NASA Astrophysics Data System (ADS)

    Sugawara, Yuji

    2014-04-01

    In this paper we discuss what theory should be identified as the "analytic continuation" with $N \Rightarrow -N$ of the $\mathcal{N} = 2$ minimal model with central charge $\hat{c} = 1 - \frac{2}{N}$. We clarify how the elliptic genus of the expected model is written in terms of holomorphic linear combinations of the "modular completions" introduced in [T. Eguchi and Y. Sugawara, JHEP 1103, 107 (2011)] in the $SL(2)_{N+2}/U(1)$ supercoset theory. We further discuss how this model could be interpreted as a kind of model of the $SL(2)_{N+2}/U(1)$ supercoset in the $(\widetilde{R}, \widetilde{R})$ sector, in which only the discrete spectrum appears in the torus partition function and the potential IR divergence due to the non-compactness of the target space is removed. We also briefly discuss possible definitions of the sectors with other spin structures.

  8. Accurate analytical modeling of junctionless DG-MOSFET by Green's function approach

    NASA Astrophysics Data System (ADS)

    Nandi, Ashutosh; Pandey, Nilesh

    2017-11-01

    An accurate analytical model of the junctionless double gate MOSFET (JL-DG-MOSFET) in the subthreshold regime of operation is developed in this work using the Green's function approach. The approach considers 2-D mixed boundary conditions and multi-zone techniques to provide an exact analytical solution to the 2-D Poisson's equation. The Fourier coefficients are calculated correctly to derive the potential equations that are further used to model the channel current and subthreshold slope of the device. The threshold voltage roll-off is computed from parallel shifts of Ids-Vgs curves between the long-channel and short-channel devices. It is observed that the Green's function approach of solving the 2-D Poisson's equation in both the oxide and silicon regions can accurately predict channel potential, subthreshold current (Isub), threshold voltage (Vt) roll-off and subthreshold slope (SS) of both long- and short-channel devices designed with different doping concentrations and higher as well as lower tsi/tox ratios. All the analytical model results are verified through comparisons with TCAD Sentaurus simulation results. It is observed that the model matches quite well with the TCAD device simulations.

  9. Analytic drain current model for III-V cylindrical nanowire transistors

    NASA Astrophysics Data System (ADS)

    Marin, E. G.; Ruiz, F. G.; Schmidt, V.; Godoy, A.; Riel, H.; Gámiz, F.

    2015-07-01

    An analytical model is proposed to determine the drain current of III-V cylindrical nanowires (NWs). The model uses the gradual channel approximation and takes into account the complete analytical solution of the Poisson and Schrödinger equations for the Γ-valley and for an arbitrary number of subbands. Fermi-Dirac statistics are considered to describe the 1D electron gas in the NWs; the resulting recursive Fermi-Dirac integral of order -1/2 is successfully integrated under reasonable assumptions. The model has been validated against numerical simulations, showing excellent agreement for different semiconductor materials, diameters up to 40 nm, gate overdrive biases up to 0.7 V, and densities of interface states up to 10^13 eV^-1 cm^-2.
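
    The key special function here, the Fermi-Dirac integral of order -1/2, is easy to evaluate by quadrature for comparison against any closed-form approximation. The sketch below assumes the normalization F_j(eta) = (1/Γ(j+1)) ∫ t^j / (1 + exp(t - eta)) dt and substitutes t = u^2 to tame the integrable singularity at t = 0.

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

def fd_m_half(eta):
    # substitute t = u^2: t^{-1/2} dt = 2 du; clamp the exponent to avoid overflow
    integrand = lambda u: 2.0 / (1.0 + np.exp(min(u * u - eta, 700.0)))
    val, _ = quad(integrand, 0.0, np.inf)
    return val / gamma(0.5)

for eta in (-2.0, 0.0, 2.0):
    print(f"eta = {eta:+.1f}: F_(-1/2)(eta) = {fd_m_half(eta):.4f}")
```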

  10. AN ANALYTIC RADIATIVE-CONVECTIVE MODEL FOR PLANETARY ATMOSPHERES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, Tyler D.; Catling, David C., E-mail: robinson@astro.washington.edu

    2012-09-20

    We present an analytic one-dimensional radiative-convective model of the thermal structure of planetary atmospheres. Our model assumes that thermal radiative transfer is gray and can be represented by the two-stream approximation. Model atmospheres are assumed to be in hydrostatic equilibrium, with a power-law scaling between the atmospheric pressure and the gray thermal optical depth. The convective portions of our models are taken to follow adiabats that account for condensation of volatiles through a scaling parameter to the dry adiabat. By combining these assumptions, we produce simple, analytic expressions that allow calculations of the atmospheric pressure-temperature profile, as well as expressions for the profiles of thermal radiative flux and convective flux. We explore the general behaviors of our model. These investigations encompass (1) worlds where atmospheric attenuation of sunlight is weak, which we show tend to have relatively high radiative-convective boundaries; (2) worlds with some attenuation of sunlight throughout the atmosphere, which we show can produce either shallow or deep radiative-convective boundaries, depending on the strength of sunlight attenuation; and (3) strongly irradiated giant planets (including hot Jupiters), where we explore the conditions under which these worlds acquire detached convective regions in their mid-tropospheres. Finally, we validate our model and demonstrate its utility through comparisons to the average observed thermal structure of Venus, Jupiter, and Titan, and by comparing computed flux profiles to more complex models.
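
    Under the stated assumptions the whole construction fits in a few lines: a two-stream gray radiative profile T^4(tau) = (T_eff^4/2)(1 + D*tau) with tau proportional to p^n, crudely patched onto a dry adiabat anchored at the surface (a max() patch rather than the paper's flux-matched joining). All numbers are illustrative.

```python
import numpy as np

T_eff, D, tau0, n = 255.0, 1.66, 2.0, 2.0   # assumed gray-atmosphere parameters
p0, kappa = 1.0e5, 2.0 / 7.0                # surface pressure (Pa), R/cp (dry air)

p = np.logspace(3, 5, 200)                  # 10 hPa (top) .. 1000 hPa (surface)
tau = tau0 * (p / p0) ** n                  # power-law pressure-optical depth scaling
T_rad = (0.5 * T_eff**4 * (1.0 + D * tau)) ** 0.25   # two-stream gray profile

T_conv = T_rad[-1] * (p / p0) ** kappa      # dry adiabat anchored at the surface
T = np.maximum(T_rad, T_conv)               # crude patch, not flux-matched

i_rc = int(np.argmax(T_conv > T_rad))       # topmost level where convection wins
print(f"radiative-convective boundary near p = {p[i_rc]/100:.0f} hPa, "
      f"T = {T[i_rc]:.0f} K")
```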

  11. Prediction task guided representation learning of medical codes in EHR.

    PubMed

    Cui, Liwen; Xie, Xiaolei; Shen, Zuojun

    2018-06-18

    There have been rapidly growing applications using machine learning models for predictive analytics in Electronic Health Records (EHR) to improve the quality of hospital services and the efficiency of healthcare resource utilization. A fundamental and crucial step in developing such models is to convert medical codes in EHR to feature vectors. These medical codes are used to represent diagnoses or procedures. Their vector representations have a tremendous impact on the performance of machine learning models. Recently, some researchers have utilized representation learning methods from Natural Language Processing (NLP) to learn vector representations of medical codes. However, most previous approaches are unsupervised, i.e. the generation of medical code vectors is independent from prediction tasks. Thus, the obtained feature vectors may be inappropriate for a specific prediction task. Moreover, unsupervised methods often require a lot of samples to obtain reliable results, but most practical problems have very limited patient samples. In this paper, we develop a new method called Prediction Task Guided Health Record Aggregation (PTGHRA), which aggregates health records guided by prediction tasks, to construct training corpus for various representation learning models. Compared with unsupervised approaches, representation learning models integrated with PTGHRA yield a significant improvement in predictive capability of generated medical code vectors, especially for limited training samples. Copyright © 2018. Published by Elsevier Inc.

  12. Maximum Likelihood Estimation in Meta-Analytic Structural Equation Modeling

    ERIC Educational Resources Information Center

    Oort, Frans J.; Jak, Suzanne

    2016-01-01

    Meta-analytic structural equation modeling (MASEM) involves fitting models to a common population correlation matrix that is estimated on the basis of correlation coefficients that are reported by a number of independent studies. MASEM typically consists of two stages. The method that has been found to perform best in terms of statistical…

  13. Catastrophe models for cognitive workload and fatigue in N-back tasks.

    PubMed

    Guastello, Stephen J; Reiter, Katherine; Malon, Matthew; Timm, Paul; Shircel, Anton; Shaline, James

    2015-04-01

    N-back tasks place a heavy load on working memory, and thus make good candidates for studying cognitive workload and fatigue (CWLF). This study extended previous work on CWLF which separated the two phenomena with two cusp catastrophe models. Participants were 113 undergraduates who completed 2-back and 3-back tasks with both auditory and visual stimuli simultaneously. Task data were complemented by several measures hypothesized to be related to cognitive elasticity and compensatory abilities and the NASA TLX ratings of subjective workload. The adjusted R2 was .980 for the workload model, which indicated a highly accurate prediction with six bifurcation (elasticity versus rigidity) effects: algebra flexibility, TLX performance, effort, and frustration; and psychosocial measures of inflexibility and monitoring. There were also two cognitive load effects (asymmetry): 2 vs. 3-back and TLX temporal demands. The adjusted R2 was .454 for the fatigue model, which contained two bifurcation variables indicating the amount of work done, and algebra flexibility as the compensatory ability variable. Both cusp models were stronger than the next best linear alternative model. The study makes an important step forward by uncovering an apparently complete model for workload, finding the role of subjective workload in the context of performance dynamics, and finding CWLF dynamics in yet another type of memory-intensive task. The results were also consistent with the developing notion that performance deficits induced by workload and deficits induced by fatigue result from the impact of the task on the workspace and executive functions of working memory respectively.
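
    The signature behavior these cusp models test for can be demonstrated directly from the deterministic cusp dynamics dy/dt = -(y^3 - b*y - a): for positive bifurcation values the equilibrium jumps and shows hysteresis as the asymmetry variable is swept. The variable roles (a as load, b as elasticity) follow the abstract; the dynamics code is a generic illustration, not the cusp regression actually fitted.

```python
import numpy as np

# Relax performance y to an equilibrium of the cusp dynamics, then sweep the
# asymmetry (load) variable a up and down at fixed bifurcation value b.
def settle(y, a, b, steps=2000, dt=0.01):
    for _ in range(steps):
        y -= dt * (y**3 - b * y - a)
    return y

b = 1.5
loads = np.linspace(-1.5, 1.5, 61)
up, y = [], -1.5
for a in loads:
    y = settle(y, a, b); up.append(y)
down, y = [], 1.5
for a in loads[::-1]:
    y = settle(y, a, b); down.append(y)

i = int(np.argmin(np.abs(loads)))      # compare the two sweeps at a = 0
print(f"at a=0: upward sweep y={up[i]:+.2f}, downward sweep y={down[::-1][i]:+.2f}")
```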

  14. A fast analytical undulator model for realistic high-energy FEL simulations

    NASA Astrophysics Data System (ADS)

    Tatchyn, R.; Cremer, T.

    1997-02-01

    A number of leading FEL simulation codes used for modeling gain in the ultralong undulators required for SASE saturation in the <100 Å range employ simplified analytical models both for field and error representations. Although it is recognized that both the practical and theoretical validity of such codes could be enhanced by incorporating realistic undulator field calculations, the computational cost of doing this can be prohibitive, especially for point-to-point integration of the equations of motion through each undulator period. In this paper we describe a simple analytical model suitable for modeling realistic permanent magnet (PM), hybrid/PM, and non-PM undulator structures, and discuss selected techniques for minimizing computation time.

  15. Simple Analytic Model for Nanowire Array Polarizers

    NASA Astrophysics Data System (ADS)

    Pelletier, Vincent; Asakawa, Koji; Wu, Mingshaw; Register, Richard; Chaikin, Paul

    2006-03-01

    Cylinder-forming diblock copolymers can be used to pattern nanowire arrays on a transparent substrate to be used as a polarizer, as described by Koji Asakawa in a complementary talk at this meeting. With a 33nm period, these wire arrays can polarize UV radiation, which is of great interest in lithography, astronomy and other areas. One can gain an analytical understanding of such an array made of non-perfectly conducting material of finite thickness using a model with an appropriately averaged complex dielectric function in an infinite wavelength approximation. This analysis predicts that the grid can go from an E-polarizer to an H-polarizer as the wavelength decreases below a cross-over wavelength, and experimental data confirm this cross-over. The overall response of the polarizing grid depends primarily on the plasma frequency of the metal used and the volume fraction of the nanowires, as well as the grid thickness. A numerical approach is also used to confirm the analytical model and assess departure from infinite wavelength effects. For our array dimensions, the polarization is only slightly different from this approximation for wavelengths down to 150nm.
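
    The averaged-dielectric argument can be made concrete: with a lossless Drude metal, the parallel effective permittivity eps_par = f*eps_m + (1-f)*eps_d turns negative only above lam_c = lam_p/sqrt(f), which is the E/H crossover the measurements show. The fill fraction and plasma wavelength below are illustrative, not the fabricated array's values.

```python
import numpy as np

# Infinite-wavelength effective-medium picture of a wire-grid polarizer:
# eps_par  = f*eps_m + (1-f)*eps_d          (E along the wires)
# 1/eps_perp = f/eps_m + (1-f)/eps_d        (E across the wires)
# with a lossless Drude metal eps_m = 1 - (lam/lam_p)^2. Illustrative values.
f, lam_p, eps_d = 0.4, 140.0, 1.0       # fill fraction, plasma wavelength (nm)

for lam in (120.0, 200.0, 360.0):
    eps_m = 1.0 - (lam / lam_p) ** 2
    eps_par = f * eps_m + (1 - f) * eps_d
    eps_perp = 1.0 / (f / eps_m + (1 - f) / eps_d)
    print(f"lam={lam:5.0f} nm  eps_par={eps_par:+7.2f}  eps_perp={eps_perp:+6.2f}")

print(f"predicted E/H crossover near {lam_p / np.sqrt(f):.0f} nm")
```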

  16. Task-driven optimization of CT tube current modulation and regularization in model-based iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.

    2017-06-01

    Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM; it includes both the regularization strength that controls overall smoothness and directional weights that permit control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF), each carrying dependencies on TCM and regularization. For the single-location optimization, the local detectability index (d′) of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP under a minimum-variance criterion yielded worse task-based performance than an unmodulated strategy when applied to MBIR. Moreover, task-driven TCM designs for MBIR were found to have the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views. Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS
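    The task-based metric referred to here is typically the non-prewhitening detectability index computed from the local MTF and NPS (a standard form in this literature, with \(W_{task}\) denoting the task function):

    \[ d'^2 = \frac{\left[\iint |W_{task}(f)|^2\,\mathrm{MTF}^2(f)\,df\right]^2}{\iint |W_{task}(f)|^2\,\mathrm{MTF}^2(f)\,\mathrm{NPS}(f)\,df}, \]

    which CMA-ES then maximizes over the TCM and regularization parameters.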

  17. An analytically linearized helicopter model with improved modeling accuracy

    NASA Technical Reports Server (NTRS)

    Jensen, Patrick T.; Curtiss, H. C., Jr.; Mckillip, Robert M., Jr.

    1991-01-01

    A recently developed, analytically linearized model for helicopter flight response, including rotor blade dynamics and dynamic inflow, was studied with the objective of increasing the understanding, ease of use, and accuracy of the model. The mathematical model is described along with a description of the UH-60A Black Hawk helicopter and the flight test used to validate the model. To aid in utilization of the model for sensitivity analysis, a new, faster, and more efficient implementation of the model was developed. It is shown that several errors in the mathematical modeling of the system caused a reduction in accuracy. These errors in rotor force resolution, trim force and moment calculation, and rotor inertia terms were corrected, along with improvements to the programming style and documentation. Use of a trim input file to drive the model is examined. Trim file errors in blade twist, control input phase angle, coning and lag angles, main and tail rotor pitch, and uniform induced velocity were corrected. Finally, through direct comparison of the original and corrected model responses to flight test data, the effect of the corrections on overall model output is shown.

  18. Analytical thermal model for end-pumped solid-state lasers

    NASA Astrophysics Data System (ADS)

    Cini, L.; Mackenzie, J. I.

    2017-12-01

    Fundamentally power-limited by thermal effects, the design challenge for end-pumped "bulk" solid-state lasers depends upon knowledge of the temperature gradients within the gain medium. We have developed analytical expressions that can be used to model the temperature distribution and thermal-lens power in end-pumped solid-state lasers. Enabled by the inclusion of a temperature-dependent thermal conductivity, applicable from cryogenic to elevated temperatures, typical pumping distributions are explored and the results compared with accepted models. Key insights are gained through these analytical expressions, such as the dependence of the peak temperature rise on the boundary thermal conductance to the heat sink. Our generalized expressions provide simple and time-efficient tools for parametric optimization of the heat distribution in the gain medium based upon the material and pumping constraints.
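    One common route to such closed-form results (an assumption about the approach, since the abstract does not reproduce the derivation) is the Kirchhoff transform, which linearizes the heat equation when the conductivity is temperature dependent:

    \[ \theta(T) = \frac{1}{k_0}\int_{T_0}^{T} k(T')\,dT', \qquad \nabla\cdot\big(k(T)\nabla T\big) + Q = 0 \;\Longrightarrow\; k_0\,\nabla^2\theta + Q = 0; \]

    for example, with \(k(T) = k_0\,T_0/T\), typical of dielectric laser crystals above cryogenic temperatures, \(\theta = T_0 \ln(T/T_0)\).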

  19. Analytical Models of Legislative Texts for Muslim Scholars

    ERIC Educational Resources Information Center

    Alwan, Ammar Abdullah Naseh; Yusoff, Mohd Yakubzulkifli Bin Mohd; Al-Hami, Mohammad Said M.

    2011-01-01

    The significance of analytical models in traditional Islamic studies is that they contribute to sharpening the intellectual capacity of students of Islamic studies. Research literature in Islamic studies is predominantly descriptive; information is gathered and compiled but rarely analyzed properly. This weakness is because of…

  20. Analytic Modeling of Pressurization and Cryogenic Propellant Conditions

    NASA Technical Reports Server (NTRS)

    Corpening, Jeremy H.

    2010-01-01

    An analytic model for pressurization and cryogenic propellant conditions during all mission phases of any liquid-rocket-based vehicle has been developed and validated. The model assumes the propellant tanks to be divided into five nodes and also implements an empirical correlation for liquid stratification if desired. The five nodes include a tank wall node exposed to ullage gas, an ullage gas node, a saturated propellant vapor node at the liquid-vapor interface, a liquid node, and a tank wall node exposed to liquid. The conservation equations of mass and energy are then applied across all the node boundaries and, with the use of perfect-gas assumptions, explicit solutions for ullage and liquid conditions are derived. All fluid properties are updated in real time using NIST REFPROP. Further, mass transfer at the liquid-vapor interface is included in the form of evaporation, bulk boiling of liquid propellant, and condensation, given the appropriate conditions for each. Model validation has proven highly successful against previous analytic models and various Saturn-era test data, and reasonably successful against more recent LH2 tank self-pressurization ground test data. Finally, this model has been applied to numerous design iterations for the Altair Lunar Lander, Ares V Core Stage, and Ares V Earth Departure Stage in order to characterize helium and autogenous pressurant requirements, propellant lost to evaporation and thermodynamic venting to maintain propellant conditions, and non-uniform tank draining in configurations utilizing multiple LH2 or LO2 propellant tanks. In conclusion, this model provides an accurate and efficient means of analyzing multiple design configurations for any cryogenic propellant tank in launch, low-acceleration coast, or in-space maneuvering, and supplies the user with pressurization requirements, unusable propellants from evaporation and liquid stratification, and general ullage gas, liquid, and tank wall conditions as functions of time.
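    As an illustration of the node bookkeeping described (a generic sketch of the ullage-gas node only; the model's actual terms and notation may differ), the mass and energy balances take the form

    \[ \frac{dm_u}{dt} = \dot m_{press} + \dot m_{evap} - \dot m_{cond}, \qquad \frac{d(m_u u_u)}{dt} = \sum_j \dot m_j h_j - \dot Q_{wall} - p\,\frac{dV_u}{dt}, \]

    which, combined with the perfect-gas relations \(pV_u = m_u R T_u\) and \(u_u = c_v T_u\), yields the kind of explicit solutions for ullage pressure and temperature mentioned above.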

  1. Spectral multivariate calibration without laboratory prepared or determined reference analyte values.

    PubMed

    Ottaway, Josh; Farrell, Jeremy A; Kalivas, John H

    2013-02-05

    An essential part of calibration is establishing the analyte calibration reference samples. These samples must characterize the sample matrix and measurement conditions (chemical, physical, instrumental, and environmental) of any sample to be predicted. Calibration usually requires measuring spectra for numerous reference samples in addition to determining the corresponding analyte reference values. Both tasks are typically time-consuming and costly. This paper reports on a method named pure component Tikhonov regularization (PCTR) that does not require laboratory prepared or determined reference values. Instead, an analyte pure component spectrum is used in conjunction with nonanalyte spectra for calibration. Nonanalyte spectra can be from different sources including pure component interference samples, blanks, and constant analyte samples. The approach is also applicable to calibration maintenance when the analyte pure component spectrum is measured in one set of conditions and nonanalyte spectra are measured in new conditions. The PCTR method balances the trade-offs between calibration model shrinkage and the degree of orthogonality to the nonanalyte content (model direction) in order to obtain accurate predictions. Using visible and near-infrared (NIR) spectral data sets, the PCTR results are comparable to those obtained using ridge regression (RR) with reference calibration sets. The flexibility of PCTR also allows including reference samples if such samples are available.
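    The balancing act the abstract describes can be summarized by a Tikhonov-type objective of the general form (a schematic rendering; the paper's exact formulation and weighting may differ):

    \[ \hat{\mathbf b} = \arg\min_{\mathbf b}\;\big(1 - \mathbf k^{\mathsf T}\mathbf b\big)^2 + \tau^2\,\lVert \mathbf b\rVert_2^2 + \eta^2\,\lVert \mathbf N\mathbf b\rVert_2^2, \]

    where \(\mathbf k\) is the analyte pure-component spectrum and \(\mathbf N\) stacks the nonanalyte spectra; the two penalties trade off model shrinkage against orthogonality to the nonanalyte space, and a prediction for a sample spectrum \(\mathbf x\) is \(\hat y = \mathbf x^{\mathsf T}\hat{\mathbf b}\).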

  2. Strategy Generalization across Orientation Tasks: Testing a Computational Cognitive Model

    ERIC Educational Resources Information Center

    Gunzelmann, Glenn

    2008-01-01

    Humans use their spatial information processing abilities flexibly to facilitate problem solving and decision making in a variety of tasks. This article explores the question of whether a general strategy can be adapted for performing two different spatial orientation tasks by testing the predictions of a computational cognitive model. Human…

  3. Analytical Solution for the Anisotropic Rabi Model: Effects of Counter-Rotating Terms

    NASA Astrophysics Data System (ADS)

    Zhang, Guofeng; Zhu, Hanjie

    2015-03-01

    The anisotropic Rabi model, which was proposed recently, differs from the original Rabi model: the rotating and counter-rotating terms are governed by two different coupling constants. This feature allows us to vary the counter-rotating interaction independently and explore its effects on some quantum properties. In this paper, we eliminate the counter-rotating terms approximately and obtain the analytical energy spectra and wavefunctions. These analytical results agree well with the numerical calculations over a wide range of parameters, including the ultrastrong coupling regime. In the weak counter-rotating coupling limit we find that the counter-rotating terms can be considered as shifts to the parameters of the Jaynes-Cummings model. This modification demonstrates the validity of the rotating-wave approximation under the assumption of near-resonance and relatively weak coupling. Moreover, the analytical expressions of several physical quantities are also derived, and the results show the breakdown of the U(1) symmetry and the deviation from the Jaynes-Cummings model.
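    The Hamiltonian in question (with \(\hbar = 1\)) assigns separate constants to the rotating and counter-rotating couplings:

    \[ H = \omega\,a^{\dagger}a + \frac{\omega_0}{2}\,\sigma_z + g_1\big(a^{\dagger}\sigma_- + a\,\sigma_+\big) + g_2\big(a^{\dagger}\sigma_+ + a\,\sigma_-\big), \]

    reducing to the Jaynes-Cummings model for \(g_2 = 0\) and to the original Rabi model for \(g_1 = g_2\).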

  4. Analytical solution for the anisotropic Rabi model: effects of counter-rotating terms.

    PubMed

    Zhang, Guofeng; Zhu, Hanjie

    2015-03-04

    The anisotropic Rabi model, which was proposed recently, differs from the original Rabi model: the rotating and counter-rotating terms are governed by two different coupling constants. This feature allows us to vary the counter-rotating interaction independently and explore its effects on some quantum properties. In this paper, we eliminate the counter-rotating terms approximately and obtain the analytical energy spectra and wavefunctions. These analytical results agree well with the numerical calculations over a wide range of parameters, including the ultrastrong coupling regime. In the weak counter-rotating coupling limit we find that the counter-rotating terms can be considered as shifts to the parameters of the Jaynes-Cummings model. This modification demonstrates the validity of the rotating-wave approximation under the assumption of near-resonance and relatively weak coupling. Moreover, the analytical expressions of several physical quantities are also derived, and the results show the breakdown of the U(1) symmetry and the deviation from the Jaynes-Cummings model.

  5. An analytically solvable three-body break-up model problem in hyperspherical coordinates

    NASA Astrophysics Data System (ADS)

    Ancarani, L. U.; Gasaneo, G.; Mitnik, D. M.

    2012-10-01

    An analytically solvable S-wave model for three-particle break-up processes is presented. The scattering process is represented by a non-homogeneous Coulombic Schrödinger equation where the driven term is given by a Coulomb-like interaction multiplied by the product of a continuum wave function and a bound state in the particles' coordinates. The closed-form solution is derived in hyperspherical coordinates, leading to an analytic expression for the associated scattering transition amplitude. The proposed scattering model contains most of the difficulties encountered in real three-body scattering problems, e.g., non-separability in the electrons' spherical coordinates and Coulombic asymptotic behavior. Since the coordinate coupling is completely different, the model provides an alternative test to that given by the Temkin-Poet model. The knowledge of the analytic solution provides an interesting benchmark to test numerical methods dealing with the double continuum, in particular in the asymptotic regions. A hyperspherical Sturmian approach recently developed for three-body collisional problems is used to reproduce the analytical results to high accuracy. In addition, we generalized the model, generating an approximate wave function possessing the correct radial asymptotic behavior corresponding to an S-wave three-body Coulomb problem. The model allows us to explore the typical structure of the solution of a three-body driven equation, to identify three regions (the driven, the Coulombic, and the asymptotic), and to analyze how far one has to go to extract the transition amplitude.

  6. An Empirical Human Controller Model for Preview Tracking Tasks.

    PubMed

    van der El, Kasper; Pool, Daan M; Damveld, Herman J; van Paassen, Marinus Rene M; Mulder, Max

    2016-11-01

    Real-life tracking tasks often show preview information to the human controller about the future track to follow. The effect of preview on manual control behavior is still relatively unknown. This paper proposes a generic operator model for preview tracking, empirically derived from experimental measurements. Conditions included pursuit tracking, i.e., without preview information, and tracking with 1 s of preview. Controlled element dynamics varied between gain, single integrator, and double integrator. The model is derived in the frequency domain, after application of a black-box system identification method based on Fourier coefficients. Parameter estimates are obtained to assess the validity of the model in both the time domain and frequency domain. Measured behavior in all evaluated conditions can be captured with the commonly used quasi-linear operator model for compensatory tracking, extended with two viewpoints of the previewed target. The derived model provides new insights into how human operators use preview information in tracking tasks.

  7. A model for the pilot's use of motion cues in roll-axis tracking tasks

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Junker, A. M.

    1977-01-01

    Simulated target-following and disturbance-regulation tasks were explored with subjects using visual-only and combined visual and motion cues. The effects of motion cues on task performance and pilot response behavior were appreciably different for the two task configurations and were consistent with data reported in earlier studies for similar task configurations. The optimal-control model for pilot/vehicle systems provided a task-independent framework for accounting for the pilot's use of motion cues. Specifically, the availability of motion cues was modeled by augmenting the set of perceptual variables to include position, rate, acceleration, and acceleration-rate of the motion simulator, and results were consistent with the hypothesis of attention-sharing between visual and motion variables. This straightforward informational model allowed accurate model predictions of the effects of motion cues on a variety of response measures for both the target-following and disturbance-regulation tasks.

  8. New Uses for Sensitivity Analysis: How Different Movement Tasks Effect Limb Model Parameter Sensitivity

    NASA Technical Reports Server (NTRS)

    Winters, J. M.; Stark, L.

    1984-01-01

    Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques are used, and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.) the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.

  9. User-Centered Evaluation of Visual Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholtz, Jean C.

    Visual analytics systems are becoming very popular. More domains now use interactive visualizations to analyze the ever-increasing amount and heterogeneity of data. More novel visualizations are being developed for more tasks and users. We need to ensure that these systems can be evaluated to determine that they are both useful and usable. A user-centered evaluation for visual analytics needs to be developed for these systems. While many of the typical human-computer interaction (HCI) evaluation methodologies can be applied as is, others will need modification. Additionally, new functionality in visual analytics systems needs new evaluation methodologies. There is a difference between usability evaluations and user-centered evaluations. Usability looks at the efficiency, effectiveness, and user satisfaction of users carrying out tasks with software applications. User-centered evaluation looks more specifically at the utility provided to the users by the software. This is reflected in the evaluations done and in the metrics used. In the visual analytics domain this is very challenging as users are most likely experts in a particular domain, the tasks they do are often not well defined, the software they use needs to support large amounts of different kinds of data, and often the tasks last for months. These difficulties are discussed more in the section on User-centered Evaluation. Our goal is to provide a discussion of user-centered evaluation practices for visual analytics, including existing practices that can be carried out and new methodologies and metrics that need to be developed and agreed upon by the visual analytics community. The material provided here should be of use for both researchers and practitioners in the field of visual analytics. Researchers and practitioners in HCI and interested in visual analytics will find this information useful as well as a discussion on changes that need to be made to current HCI practices to make them more

  10. Using Modeling Tasks to Facilitate the Development of Percentages

    ERIC Educational Resources Information Center

    Shahbari, Juhaina Awawdeh; Peled, Irit

    2016-01-01

    This study analyzes the development of percentages knowledge by seventh graders given a sequence of activities starting with a realistic modeling task, in which students were expected to create a model that would facilitate the reinvention of percentages. In the first two activities, students constructed their own pricing model using fractions and…

  11. An Improved Analytic Model for Microdosimeter Response

    NASA Technical Reports Server (NTRS)

    Shinn, Judy L.; Wilson, John W.; Xapsos, Michael A.

    2001-01-01

    An analytic model used to predict energy deposition fluctuations in a microvolume by ions through direct events is improved to include indirect delta ray events. The new model can now account for the increase in flux at low lineal energy when the ions are of very high energy. Good agreement is obtained between the calculated results and available data for laboratory ion beams. Comparison of GCR (galactic cosmic ray) flux between Shuttle TEPC (tissue equivalent proportional counter) flight data and current calculations draws a different assessment of developmental work required for the GCR transport code (HZETRN) than previously concluded.

  12. Transient vibration analytical modeling and suppressing for vibration absorber system under impulse excitation

    NASA Astrophysics Data System (ADS)

    Wang, Xi; Yang, Bintang; Yu, Hu; Gao, Yulong

    2017-04-01

    The impulse excitation of a mechanism causes transient vibration. In order to achieve adaptive transient vibration control, a method that can exactly model the response needs to be proposed. This paper presents an analytical model to obtain the response of a primary system attached with a dynamic vibration absorber (DVA) under impulse excitation. The impulse excitation, which can be divided into single-impulse and multi-impulse excitation, is simplified as a sinusoidal wave to establish the analytical model. To decouple the differential governing equations, a transform matrix is applied to convert the response from physical coordinates to modal coordinates. The analytical response in physical coordinates is then obtained by inverse transformation. The numerical Runge-Kutta method and experimental tests have demonstrated the effectiveness of the proposed analytical model. The wavelet transform of the response indicates that the transient vibration consists of components with multiple frequencies, and the modeling results coincide with the experiments. Optimization simulations based on a genetic algorithm, together with experimental tests, demonstrate that the transient vibration of the primary system can be decreased by changing the stiffness of the DVA. The results presented in this paper lay the foundation for developing an adaptive transient vibration absorber in the future.
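    As a runnable illustration of the system being modeled (a minimal two-degree-of-freedom sketch with hypothetical parameters, integrated numerically rather than via the paper's analytical solution), consider a primary mass with an attached DVA under a half-sine impulse:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical parameters: primary system and attached absorber (DVA).
    m1, k1, c1 = 1.0, 1.0e4, 2.0    # mass [kg], stiffness [N/m], damping [N s/m]
    m2, k2, c2 = 0.05, 5.0e2, 1.0   # absorber

    def half_sine_impulse(t, F0=10.0, tp=0.01):
        """Half-sine pulse approximating an impulse excitation."""
        return F0 * np.sin(np.pi * t / tp) if t < tp else 0.0

    def rhs(t, y):
        x1, v1, x2, v2 = y
        F = half_sine_impulse(t)
        a1 = (F - c1*v1 - k1*x1 - c2*(v1 - v2) - k2*(x1 - x2)) / m1
        a2 = (c2*(v1 - v2) + k2*(x1 - x2)) / m2
        return [v1, a1, v2, a2]

    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-4)
    print("peak primary displacement:", np.max(np.abs(sol.y[0])))
    ```

    Sweeping k2 in such a simulation mirrors the paper's finding that retuning the DVA stiffness suppresses the transient response.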

  13. Analytical model for the density distribution in the Io plasma torus

    NASA Technical Reports Server (NTRS)

    Mei, YI; Thorne, Richard M.; Bagenal, Fran

    1995-01-01

    An analytical model is developed for the diffusive equilibrium plasma density distribution in the Io plasma torus. The model has been employed successfully to follow the ray path of plasma waves in the multi-ion Jovian magnetosphere; it would also be valuable for other studies of the Io torus that require a smooth and continuous description of the plasma density and its gradients. Validity of the analytical treatment requires that the temperature of thermal electrons be much lower than the ion temperature and that superthermal electrons be much less abundant than the thermal electrons; these two conditions are satisfied in the warm outer region of the Io torus from L = 6 to L = 10. The analytical solutions agree well with exact numerical calculations for the most dense portion of the Io torus within 30 deg of the equator.

  14. Effect(s) of Language Tasks on Severity of Disfluencies in Preschool Children with Stuttering

    ERIC Educational Resources Information Center

    Zamani, Peyman; Ravanbakhsh, Majid; Weisi, Farzad; Rashedi, Vahid; Naderi, Sara; Hosseinzadeh, Ayub; Rezaei, Mohammad

    2017-01-01

    Speech disfluency in children can be increased or decreased depending on the type of linguistic task presented to them. In this study, the effect of sentence imitation and sentence modeling on severity of speech disfluencies in preschool children with stuttering is investigated. In this cross-sectional descriptive analytical study, 58 children…

  15. Analytic Scattering and Refraction Models for Exoplanet Transit Spectra

    NASA Astrophysics Data System (ADS)

    Robinson, Tyler D.; Fortney, Jonathan J.; Hubbard, William B.

    2017-12-01

    Observations of exoplanet transit spectra are essential to understanding the physics and chemistry of distant worlds. The effects of opacity sources and many physical processes combine to set the shape of a transit spectrum. Two such key processes—refraction and cloud and/or haze forward-scattering—have seen substantial recent study. However, models of these processes are typically complex, which prevents their incorporation into observational analyses and standard transit spectrum tools. In this work, we develop analytic expressions that allow for the efficient parameterization of forward-scattering and refraction effects in transit spectra. We derive an effective slant optical depth that includes a correction for forward-scattered light, and present an analytic form of this correction. We validate our correction against a full-physics transit spectrum model that includes scattering, and we explore the extent to which the omission of forward-scattering effects may bias models. Also, we verify a common analytic expression for the location of a refractive boundary, which we express in terms of the maximum pressure probed in a transit spectrum. This expression is designed to be easily incorporated into existing tools, and we discuss how the detection of a refractive boundary could help indicate the background atmospheric composition by constraining the bulk refractivity of the atmosphere. Finally, we show that opacity from Rayleigh scattering and collision-induced absorption will outweigh the effects of refraction for Jupiter-like atmospheres whose equilibrium temperatures are above 400-500 K.

  16. Rapid B-rep model preprocessing for immersogeometric analysis using analytic surfaces

    PubMed Central

    Wang, Chenglong; Xu, Fei; Hsu, Ming-Chen; Krishnamurthy, Adarsh

    2017-01-01

    Computational fluid dynamics (CFD) simulations of flow over complex objects have traditionally been performed using fluid-domain meshes that conform to the shape of the object. However, creating shape-conforming meshes for complicated geometries like automobiles requires extensive geometry preprocessing. This process is usually tedious and requires modifying the geometry, including specialized operations such as defeaturing and filling of small gaps. Hsu et al. (2016) developed a novel immersogeometric fluid-flow method that does not require the generation of a boundary-fitted mesh for the fluid domain. However, their method used the NURBS parameterization of the surfaces for generating the surface quadrature points to enforce the boundary conditions, which required the B-rep model to be converted completely to NURBS before analysis could be performed. This conversion usually leads to poorly parameterized NURBS surfaces and can lead to poorly trimmed or missing surface features. In addition, converting simple geometries such as cylinders to NURBS imposes a performance penalty, since these geometries have to be dealt with as rational splines. As a result, the geometry has to be inspected again after conversion to ensure analysis compatibility, which can increase the computational cost. In this work, we have extended the immersogeometric method to generate surface quadrature points directly using analytic surfaces. We have developed quadrature rules for all four kinds of analytic surfaces: planes, cones, spheres, and toroids. We have also developed methods for performing adaptive quadrature on trimmed analytic surfaces. Since analytic surfaces are frequently used for constructing solid models, this method also generates quadrature points on real-world geometries faster than using only NURBS surfaces. To assess the accuracy of the proposed method, we perform simulations of a benchmark problem of flow over a torpedo shape made of analytic surfaces and compare those

  17. Heuristic and Analytic Processing: Age Trends and Associations with Cognitive Ability and Cognitive Styles.

    ERIC Educational Resources Information Center

    Kokis, Judite V.; Macpherson, Robyn; Toplak, Maggie E.; West, Richard F.; Stanovich, Keith E.

    2002-01-01

    Examined developmental and individual differences in tendencies to favor analytic over heuristic responses in three tasks (inductive reasoning, deduction under belief bias conditions, probabilistic reasoning) in children varying in age and cognitive ability. Found significant increases in analytic responding with development on first two tasks.…

  18. Recoding low-level simulator data into a record of meaningful task performance: the integrated task modeling environment (ITME).

    PubMed

    King, Robert; Parker, Simon; Mouzakis, Kon; Fletcher, Winston; Fitzgerald, Patrick

    2007-11-01

    The Integrated Task Modeling Environment (ITME) is a user-friendly software tool that has been developed to automatically recode low-level data into an empirical record of meaningful task performance. The present research investigated and validated the performance of the ITME software package by conducting complex simulation missions and comparing the task analyses produced by ITME with task analyses produced by experienced video analysts. A very high interrater reliability (≥ .94) existed between experienced video analysts and the ITME for the task analyses produced for each mission. The mean session time:analysis time ratio was 1:24 using video analysis techniques and 1:5 using the ITME. It was concluded that the ITME produced task analyses that were as reliable as those produced by experienced video analysts, and significantly reduced the time cost associated with these analyses.

  19. Closed-loop, pilot/vehicle analysis of the approach and landing task

    NASA Technical Reports Server (NTRS)

    Anderson, M. R.; Schmidt, D. K.

    1986-01-01

    In the case of approach and landing, it is universally accepted that the pilot uses more than one vehicle response, or output, to close his control loops. Therefore, to model this task, a multi-loop analysis technique is required. The analysis problem has been in obtaining reasonable analytic estimates of the describing functions representing the pilot's loop compensation. Once these pilot describing functions are obtained, appropriate performance and workload metrics must then be developed for the landing task. The optimal control approach provides a powerful technique for obtaining the necessary describing functions, once the appropriate task objective is defined in terms of a quadratic objective function. An approach is presented through the use of a simple, reasonable objective function and model-based metrics to evaluate loop performance and pilot workload. The results of an analysis of the LAHOS (Landing and Approach of Higher Order Systems) study performed by R.E. Smith are also presented.

  20. Molecular modeling of polymer composite-analyte interactions in electronic nose sensors

    NASA Technical Reports Server (NTRS)

    Shevade, A. V.; Ryan, M. A.; Homer, M. L.; Manfreda, A. M.; Zhou, H.; Manatt, K. S.

    2003-01-01

    We report a molecular modeling study to investigate the polymer-carbon black (CB) composite-analyte interactions in resistive sensors. These sensors comprise the JPL electronic nose (ENose) sensing array developed for monitoring breathing air in human habitats. The polymer in the composite is modeled based on its stereoisomerism and sequence isomerism, while the CB is modeled as uncharged naphthalene rings with no hydrogens. The Dreiding 2.21 force field is used for the polymer and solvent molecules, and graphite parameters are assigned to the carbon black atoms. A combination of molecular mechanics (MM) and molecular dynamics (NPT-MD and NVT-MD) techniques is used to obtain the equilibrium composite structure by inserting naphthalene rings in the polymer matrix. Polymers considered for this work include poly(4-vinylphenol), polyethylene oxide, and ethyl cellulose. Analytes studied are representative of both inorganic and organic compounds. The results are analyzed for the composite microstructure by calculating the radial distribution profiles, as well as for the sensor response by predicting the interaction energies of the analytes with the composites.

  1. Investigating task inhibition in children versus adults: A diffusion model analysis.

    PubMed

    Schuch, Stefanie; Konrad, Kerstin

    2017-04-01

    One can take n-2 task repetition costs as a measure of inhibition at the level of task sets. When switching back to a Task A after only one intermediate trial (ABA task sequence), Task A is thought to still be inhibited, leading to performance costs relative to task sequences where switching back to Task A is preceded by at least two intermediary trials (CBA). The current study investigated differences in inhibitory ability between children and adults by comparing n-2 task repetition costs in children (9-11 years of age, N=32) and young adults (21-30 years of age, N=32). The mean reaction times and error rate differences between ABA and CBA sequences did not differ between the two age groups. However, diffusion model analysis revealed that different cognitive processes contribute to the inhibition effect in the two age groups: the adults, but not the children, showed a smaller drift rate in ABA than in CBA, suggesting that persisting task inhibition is associated with slower response selection in adults. In children, non-decision time was longer in ABA than in CBA, possibly reflecting longer task preparation in ABA than in CBA. In addition, ex-Gaussian functions were fitted to the distributions of correct reaction times. In adults, the ABA-CBA difference was reflected in the exponential parameter of the distribution; in children, the ABA-CBA difference was found in the Gaussian mu parameter. Hence, ex-Gaussian analysis, although noisier, was generally in line with diffusion model analysis. Taken together, the data suggest that the task inhibition effect found in mean performance is mediated by different cognitive processes in children versus adults.
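    Full diffusion-model fitting requires specialized software, but the flavor of the analysis can be conveyed with the closed-form EZ-diffusion equations (Wagenmakers et al., 2007), shown here as a simplified sketch rather than the method actually used in the study; the input values are hypothetical:

    ```python
    import numpy as np

    def ez_diffusion(pc, vrt, mrt, s=0.1):
        """EZ-diffusion: recover drift rate v, boundary separation a, and
        non-decision time Ter from accuracy (pc), variance of correct RTs
        (vrt, in s^2), and mean correct RT (mrt, in s)."""
        L = np.log(pc / (1.0 - pc))                    # logit of accuracy
        x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
        v = np.sign(pc - 0.5) * s * x**0.25            # drift rate
        a = s**2 * L / v                               # boundary separation
        y = -v * a / s**2
        mdt = (a / (2.0 * v)) * (1.0 - np.exp(y)) / (1.0 + np.exp(y))
        return v, a, mrt - mdt                         # mrt - mdt = Ter

    # Hypothetical ABA vs. CBA cells: a lower drift rate in ABA would mirror
    # the adults' pattern; a larger Ter in ABA would mirror the children's.
    print("ABA:", ez_diffusion(pc=0.93, vrt=0.10, mrt=0.90))
    print("CBA:", ez_diffusion(pc=0.95, vrt=0.08, mrt=0.85))
    ```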

  2. Improvement of analytical dynamic models using modal test data

    NASA Technical Reports Server (NTRS)

    Berman, A.; Wei, F. S.; Rao, K. V.

    1980-01-01

    A method developed to determine maximum changes in analytical mass and stiffness matrices to make them consistent with a set of measured normal modes and natural frequencies is presented. The corrected model will be an improved base for studies of physical changes, boundary condition changes, and for prediction of forced responses. The method features efficient procedures not requiring solutions of the eigenvalue problem, and the ability to have more degrees of freedom than the test data. In addition, modal displacements are obtained for all analytical degrees of freedom, and the frequency dependence of the coordinate transformations is properly treated.

  3. Collaborative Research and Development (CR&D). Task Order 0049: Tribological Modeling

    DTIC Science & Technology

    2008-05-01

    The report covers tribological modeling work, including scratch tests of TiN coatings on stainless steel with better substrate mechanical properties; the present study focused on the resulting stress distribution. (Report AFRL-RX-WP-TR-2010-4189; Contract F33615-03-D-5801-0049.)

  4. Application of a Curriculum Hierarchy Evaluation (CHE) Model to Sequentially Arranged Tasks.

    ERIC Educational Resources Information Center

    O'Malley, J. Michael

    A curriculum hierarchy evaluation (CHE) model was developed by combining a transfer paradigm with an aptitude-treatment-task interaction (ATTI) paradigm. Positive transfer was predicted between sequentially arranged tasks, and a programed or nonprogramed treatment was predicted to interact with aptitude and with tasks. Eighteen four and five…

  5. Analytical Modeling of Triple-Metal Hetero-Dielectric DG SON TFET

    NASA Astrophysics Data System (ADS)

    Mahajan, Aman; Dash, Dinesh Kumar; Banerjee, Pritha; Sarkar, Subir Kumar

    2018-02-01

    In this paper, a 2-D analytical model of a triple-metal hetero-dielectric DG TFET is presented by combining the concepts of triple-material gate engineering and hetero-dielectric engineering. Three metals with different work functions are used as both front- and back-gate electrodes to modulate the barrier at the source/channel and channel/drain interfaces. In addition, the front-gate dielectric consists of high-K HfO2 at the source end and low-K SiO2 at the drain side, whereas the back-gate dielectric is replaced by air to further improve the ON current of the device. The surface potential and electric field of the proposed device are formulated by solving the 2-D Poisson equation with Young's approximation. Based on this electric field expression, the tunneling current is obtained by using Kane's model. Several device parameters are varied to examine the behavior of the proposed device. The analytical model is validated against TCAD simulation results to establish its accuracy.
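    Kane's band-to-band tunneling generation rate, invoked here, has the standard form (with material-dependent prefactors A and B whose fitted values are not given in the abstract):

    \[ G_{BTBT} = A\,\frac{|E|^{2}}{\sqrt{E_g}}\,\exp\!\left(-\frac{B\,E_g^{3/2}}{|E|}\right), \]

    and the drain current follows by integrating \(q\,G_{BTBT}\) over the tunneling region.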

  6. A hybrid analytical model for open-circuit field calculation of multilayer interior permanent magnet machines

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Xia, Changliang; Yan, Yan; Geng, Qiang; Shi, Tingna

    2017-08-01

    Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's laws, while the field in the stator slot, slot opening, and air gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of multilayer IPM machines, coupled boundary conditions on the rotor surface are deduced to join the rotor MEC with the analytical field distribution of the stator slot, slot opening, and air gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF), and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it offers faster modeling, lower computational resource usage, and shorter run times, while achieving comparable accuracy. The analytical model is applicable to the open-circuit field calculation of multilayer IPM machines of any size and pole/slot number combination.

  7. Assessment Engineering Task Model Maps, Task Models and Templates as a New Way to Develop and Implement Test Specifications

    ERIC Educational Resources Information Center

    Luecht, Richard M.

    2013-01-01

    Assessment engineering is a new way to design and implement scalable, sustainable and ideally lower-cost solutions to the complexities of designing and developing tests. It represents a merger of sorts between cognitive task modeling and engineering design principles--a merger that requires some new thinking about the nature of score scales, item…

  8. Use of a clay modeling task to reduce chocolate craving.

    PubMed

    Andrade, Jackie; Pears, Sally; May, Jon; Kavanagh, David J

    2012-06-01

    Elaborated Intrusion theory (EI theory; Kavanagh, Andrade, & May, 2005) posits two main cognitive components in craving: associative processes that lead to intrusive thoughts about the craved substance or activity, and elaborative processes supporting mental imagery of the substance or activity. We used a novel visuospatial task to test the hypothesis that visual imagery plays a key role in craving. Experiment 1 showed that spending 10 min constructing shapes from modeling clay (plasticine) reduced participants' craving for chocolate compared with spending 10 min 'letting your mind wander'. Increasing the load on verbal working memory using a mental arithmetic task (counting backwards by threes) did not reduce craving further. Experiment 2 compared effects on craving of a simpler verbal task (counting by ones) and clay modeling. Clay modeling reduced overall craving strength and strength of craving imagery, and reduced the frequency of thoughts about chocolate. The results are consistent with EI theory, showing that craving is reduced by loading the visuospatial sketchpad of working memory but not by loading the phonological loop. Clay modeling might be a useful self-help tool to help manage craving for chocolate, snacks and other foods.

  9. Palm: Easing the Burden of Analytical Performance Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tallent, Nathan R.; Hoisie, Adolfy

    2014-06-01

    Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are 'first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity; (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.

  10. Training Self-Regulated Learning Skills with Video Modeling Examples: Do Task-Selection Skills Transfer?

    ERIC Educational Resources Information Center

    Raaijmakers, Steven F.; Baars, Martine; Schaap, Lydia; Paas, Fred; van Merriënboer, Jeroen; van Gog, Tamara

    2018-01-01

    Self-assessment and task-selection skills are crucial in self-regulated learning situations in which students can choose their own tasks. Prior research suggested that training with video modeling examples, in which another person (the model) demonstrates and explains the cyclical process of problem-solving task performance, self-assessment, and…

  11. Promoting Active Learning by Practicing the "Self-Assembly" of Model Analytical Instruments

    ERIC Educational Resources Information Center

    Algar, W. Russ; Krull, Ulrich J.

    2010-01-01

    In our upper-year instrumental analytical chemistry course, we have developed "cut-and-paste" exercises where students "build" models of analytical instruments from individual schematic images of components. These exercises encourage active learning by students. Instead of trying to memorize diagrams, students are required to think deeply about…

  12. Analytical and experimental study of control effort associated with model reference adaptive control

    NASA Technical Reports Server (NTRS)

    Messer, R. S.; Haftka, R. T.; Cudney, H. H.

    1992-01-01

    Numerical simulation results for the performance of model reference adaptive control (MRAC) are experimentally verified, with a view to accounting for differences between the plant and the reference model once control is applied. MRAC is applied both experimentally and analytically to a single-degree-of-freedom system, as well as analytically to a MIMO system having controlled differences between the reference model and the plant. The control effort is found to be sensitive to differences between the plant and the reference model.

  13. Laser backscattering analytical model of Doppler power spectra about rotating convex quadric bodies of revolution

    NASA Astrophysics Data System (ADS)

    Gong, YanJun; Wu, ZhenSen; Wang, MingJun; Cao, YunHua

    2010-01-01

    We propose an analytical model of Doppler power spectra in backscatter from arbitrary rough convex quadric bodies of revolution (whose lateral surface is a quadric) rotating around their axes. In the global Cartesian coordinate system, the analytical model deduced is suitable for a general convex quadric body of revolution. Based on this analytical model, the Doppler power spectra of cones, cylinders, paraboloids of revolution, and sphere-cone combinations are derived. We analyze numerically the influence of the objects' geometric parameters, aspect angle, wavelength, and rough-surface reflectance on the spectra broadened by the Doppler effect. This analytical solution may contribute to laser Doppler velocimetry and to remote sensing of ballistic missiles that spin.

  14. The super-NFW model: an analytic dynamical model for cold dark matter haloes and elliptical galaxies

    NASA Astrophysics Data System (ADS)

    Lilley, Edward J.; Evans, N. Wyn; Sanders, Jason L.

    2018-05-01

    An analytic galaxy model with ρ ∼ r^-1 at small radii and ρ ∼ r^-3.5 at large radii is presented. The asymptotic density fall-off is slower than the Hernquist model, but faster than the Navarro-Frenk-White (NFW) profile for dark matter haloes, and so in accord with recent evidence from cosmological simulations. The model provides the zeroth-order term in a biorthonormal basis function expansion, meaning that axisymmetric, triaxial, and lopsided distortions can easily be added (much like the Hernquist model itself, which is the zeroth-order term of the Hernquist-Ostriker expansion). The properties of the spherical model, including analytic distribution functions which are either isotropic, radially anisotropic, or tangentially anisotropic, are discussed in some detail. The analogue of the mass-concentration relation for cosmological haloes is provided.
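    The density profile with these asymptotics (the normalization below is the standard one for total mass M and scale radius a, quoted here from the literature and worth checking against the paper) is

    \[ \rho(r) = \frac{3M}{16\pi a^{3}}\,\frac{1}{(r/a)\,(1 + r/a)^{5/2}}, \]

    so that \(\rho \sim r^{-1}\) for \(r \ll a\) and \(\rho \sim r^{-7/2}\) for \(r \gg a\).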

  15. Analytical Model For Fluid Dynamics In A Microgravity Environment

    NASA Technical Reports Server (NTRS)

    Naumann, Robert J.

    1995-01-01

    This report presents an analytical approximation methodology for solving the coupled fluid-flow, heat-transfer, and mass-transfer equations in a microgravity environment. Engineering estimates accurate to within a factor of 2 can be made quickly and easily, eliminating the need for time-consuming and costly numerical modeling. Any proposed experiment can be reviewed to see how it would perform in a microgravity environment. The model has been applied in a commercial setting for the preliminary design of low-Grashof/Rayleigh-number experiments.

  16. A big data geospatial analytics platform - Physical Analytics Integrated Repository and Services (PAIRS)

    NASA Astrophysics Data System (ADS)

    Hamann, H.; Jimenez Marianno, F.; Klein, L.; Albrecht, C.; Freitag, M.; Hinds, N.; Lu, S.

    2015-12-01

    A major challenge in leveraging big geospatial data sets is the ability to quickly integrate multiple data sources into physical and statistical models and to run these models in real time. A geospatial data platform called Physical Analytics Integrated Repository and Services (PAIRS) has been developed on top of an open-source hardware and software stack to manage terabytes of data. A new data interpolation and regridding scheme is implemented in which any geospatial data layer can be associated with a set of global grids whose resolution doubles for consecutive layers. Each pixel on the PAIRS grid has an index that is a combination of location and time stamp. The indexing allows quick access to data sets that are part of the global data layers, retrieving only the data of interest. PAIRS takes advantage of a parallel-processing framework (Hadoop) in a cloud environment to digest, curate, and analyze the data sets while remaining robust and stable. The data are stored in a distributed NoSQL database (HBase) across multiple servers; data upload and retrieval are parallelized, with the original analytics task broken up into smaller areas/volumes, analyzed independently, and then reassembled for the original geographical area. The differentiating aspect of PAIRS is the ability to accelerate model development across large geographical regions and spatial resolutions ranging from 0.1 m up to hundreds of kilometers. System performance is benchmarked on real-time automated data ingestion and retrieval of MODIS and Landsat data layers. The data layers are curated for sensor error, verified for correctness, and analyzed statistically to detect local anomalies. Multi-layer queries enable PAIRS to filter different data
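    The abstract's pixel index, combining location and time stamp on grids whose resolution doubles per level, can be illustrated with a quadkey-style sketch (a hypothetical illustration, not the actual PAIRS implementation):

    ```python
    from datetime import datetime, timezone

    def pairs_like_key(lat, lon, t, level=16):
        """Hypothetical quadkey-style index: one quadrant digit per level
        (each level doubles the grid resolution), suffixed with a UTC
        timestamp so a pixel is addressed in both space and time."""
        digits = []
        lat_lo, lat_hi, lon_lo, lon_hi = -90.0, 90.0, -180.0, 180.0
        for _ in range(level):
            mid_lat = (lat_lo + lat_hi) / 2.0
            mid_lon = (lon_lo + lon_hi) / 2.0
            digits.append(str((lat >= mid_lat) * 2 + (lon >= mid_lon)))
            lat_lo, lat_hi = (mid_lat, lat_hi) if lat >= mid_lat else (lat_lo, mid_lat)
            lon_lo, lon_hi = (mid_lon, lon_hi) if lon >= mid_lon else (lon_lo, mid_lon)
        stamp = int(t.replace(tzinfo=timezone.utc).timestamp())
        return "".join(digits) + f"-{stamp}"

    print(pairs_like_key(41.2, -73.7, datetime(2015, 7, 1)))
    ```

    Keys sharing a prefix fall in the same coarse grid cell, so prefix scans in a NoSQL store such as HBase retrieve only the data of interest.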

  17. Genotype-phenotype association study via new multi-task learning model

    PubMed Central

    Huo, Zhouyuan; Shen, Dinggang

    2018-01-01

    Research on the associations between genetic variations and imaging phenotypes is developing with the advance of high-throughput genotype and brain imaging techniques. Regression analysis of single nucleotide polymorphisms (SNPs) and imaging measures as quantitative traits (QTs) has been proposed to identify the quantitative trait loci (QTL) via multi-task learning models. Recent studies consider the interlinked structures within SNPs and imaging QTs through group lasso, e.g. the ℓ2,1-norm, leading to better predictive results and insights into SNPs. However, group sparsity is not enough for representing the correlation between multiple tasks, and ℓ2,1-norm regularization is not robust either. In this paper, we propose a new multi-task learning model to analyze the associations between SNPs and QTs. We suppose that a low-rank structure is also beneficial to uncover the correlation between genetic variations and imaging phenotypes. Finally, we conduct regression analysis of SNPs and QTs. Experimental results show that our model is more accurate in prediction than compared methods and presents new insights into SNPs. PMID:29218896
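    A model combining group sparsity with a low-rank constraint, as described, would take the general form (a schematic guess at the objective; the paper's exact penalties and weights may differ):

    \[ \min_{W}\; \lVert XW - Y\rVert_F^2 + \gamma_1\,\lVert W\rVert_{2,1} + \gamma_2\,\lVert W\rVert_{*}, \]

    where \(X\) holds the SNP genotypes, \(Y\) the imaging QTs, the \(\ell_{2,1}\)-norm selects SNPs jointly across tasks, and the trace norm \(\lVert W\rVert_{*}\) encourages correlated (low-rank) task structure.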

  18. Analytical Solution for the Anisotropic Rabi Model: Effects of Counter-Rotating Terms

    PubMed Central

    Zhang, Guofeng; Zhu, Hanjie

    2015-01-01

    The anisotropic Rabi model, which was proposed recently, differs from the original Rabi model: the rotating and counter-rotating terms are governed by two different coupling constants. This feature allows us to vary the counter-rotating interaction independently and explore its effects on some quantum properties. In this paper, we eliminate the counter-rotating terms approximately and obtain the analytical energy spectra and wavefunctions. These analytical results agree well with the numerical calculations over a wide range of parameters, including the ultrastrong coupling regime. In the weak counter-rotating coupling limit we find that the counter-rotating terms can be considered as shifts to the parameters of the Jaynes-Cummings model. This modification demonstrates the validity of the rotating-wave approximation under the assumption of near-resonance and relatively weak coupling. Moreover, the analytical expressions of several physical quantities are also derived, and the results show the breakdown of the U(1) symmetry and the deviation from the Jaynes-Cummings model. PMID:25736827

  19. Analytical Model for Diffusive Evaporation of Sessile Droplets Coupled with Interfacial Cooling Effect.

    PubMed

    Nguyen, Tuan A H; Biggs, Simon R; Nguyen, Anh V

    2018-05-30

    Current analytical models for sessile droplet evaporation do not consider the nonuniform temperature field within the droplet and can overpredict the evaporation by 20%. This deviation can be attributed to a significant temperature drop due to the release of the latent heat of evaporation along the air-liquid interface. We report, for the first time, an analytical solution of sessile droplet evaporation coupled with this interfacial cooling effect. The two-way coupling model of the quasi-steady thermal diffusion within the droplet and the quasi-steady diffusion-controlled droplet evaporation is conveniently solved in the toroidal coordinate system by applying the method of separation of variables. Our new analytical model for the coupled vapor concentration and temperature fields is in closed form and is applicable for a full range of spherical-cap-shaped droplets of different contact angles and types of fluids. Our analytical results are uniquely quantified by a dimensionless evaporative cooling number E_o whose magnitude is determined only by the thermophysical properties of the liquid and the atmosphere. Accordingly, the larger the magnitude of E_o, the more significant the effect of evaporative cooling, which results in stronger suppression of the evaporation rate. The classical isothermal model is recovered if the temperature gradient along the air-liquid interface is negligible (E_o = 0). For substrates with very high thermal conductivities (isothermal substrates), our analytical model predicts a reversal of the temperature gradient along the droplet free surface at a contact angle of 119°. Our findings pose interesting challenges but also provide guidance for experimental investigations.
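    The isothermal baseline that the new model corrects is commonly written with the Hu-Larson approximation (quoted as a reference point; the coupled model recovers it when E_o = 0):

    \[ \dot m \approx \pi R D\,c_s\,(1 - H)\left(0.27\,\theta^{2} + 1.30\right), \]

    where \(R\) is the contact-line radius, \(D\) the vapor diffusivity in air, \(c_s\) the saturation vapor concentration, \(H\) the relative humidity, and \(\theta\) the contact angle (valid for \(0 \le \theta \le \pi/2\)).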

  20. A physically based analytical spatial air temperature and humidity model

    Treesearch

    Yang Yang; Theodore A. Endreny; David J. Nowak

    2013-01-01

    Spatial variation of urban surface air temperature and humidity influences human thermal comfort, the settling rate of atmospheric pollutants, and plant physiology and growth. Given the lack of observations, we developed a Physically based Analytical Spatial Air Temperature and Humidity (PASATH) model. The PASATH model calculates spatial solar radiation and heat...

  1. Visualisation and Analytic Strategies for Anticipating the Folding of Nets

    ERIC Educational Resources Information Center

    Wright, Vince

    2016-01-01

    Visual and analytic strategies are features of students' schemes for spatial tasks. The strategies used by six students to anticipate the folding of nets were investigated. Evidence suggested that visual and analytic strategies were strongly connected in competent performance.

  2. Analytical model of tilted driver–pickup coils for eddy current nondestructive evaluation

    NASA Astrophysics Data System (ADS)

    Cao, Bing-Hua; Li, Chao; Fan, Meng-Bao; Ye, Bo; Tian, Gui-Yun

    2018-03-01

    A driver-pickup probe offers better sensitivity and flexibility because each coil can be optimized individually; such probes are frequently found in eddy current (EC) array probes. In this work, a tilted non-coaxial driver-pickup probe above a multilayered conducting plate is analytically modeled via spatial transformation for eddy current nondestructive evaluation. The core of the formulation is to obtain the projection of the magnetic vector potential (MVP) from the driver coil onto the vector along the tilted pickup coil, which is divided into two key steps. The first step is to project the MVP along the pickup coil onto a horizontal plane, and the second is to build the relationship between the projected MVP and the MVP along the driver coil. Afterwards, an analytical model for the case of a layered plate is established with the reflection and transmission theory of electromagnetic fields. The calculated values from the resulting model indicate good agreement with those from the finite element model (FEM) and experiments, which validates the developed analytical model. Project supported by the National Natural Science Foundation of China (Grant Nos. 61701500, 51677187, and 51465024).

  3. Analogue modelling of inclined, brittle-ductile transpression: Testing analytical models through natural shear zones (external Betics)

    NASA Astrophysics Data System (ADS)

    Barcos, L.; Díaz-Azpiroz, M.; Balanyá, J. C.; Expósito, I.; Jiménez-Bonilla, A.; Faccenna, C.

    2016-07-01

    The combination of analytical and analogue models gives new opportunities to better understand the kinematic parameters controlling the evolution of transpression zones. In this work, we carried out a set of analogue models using the kinematic parameters of transpressional deformation obtained by applying a general triclinic transpression analytical model to a tabular-shaped shear zone in the external Betic Chain (Torcal de Antequera massif). According to the results of the analytical model, we used two oblique convergence angles to reproduce the main structural and kinematic features of the structural domains observed within the Torcal de Antequera massif (α = 15° for the outer domains and α = 30° for the inner domain). Two parallel inclined backstops (one fixed and the other mobile) reproduce the geometry of the shear zone walls of the natural case. Additionally, we applied the digital particle image velocimetry (PIV) method to calculate the velocity field of the incremental deformation. Our results suggest that the spatial distribution of the main structures observed in the Torcal de Antequera massif reflects different modes of strain partitioning and strain localization between the two domain types, which are related to the variation in the oblique convergence angle and the presence of steep planar velocity and rheological discontinuities (the shear zone walls in the natural case). In the 15° model, strain partitioning is simple and strain localization is high: a single narrow shear zone develops close and parallel to the fixed backstop, bounded by strike-slip faults and internally deformed by R and P shears. In the 30° model, strain partitioning is strong, generating regularly spaced oblique-to-the-backstop thrusts and strike-slip faults. At the final stages of the 30° experiment, deformation affects the entire model box. Our results show that the application of analytical modelling to natural transpressive zones related to upper crustal deformation

  4. Executive functioning in preschool children: performance on A-not-B and other delayed response format tasks.

    PubMed

    Espy, K A; Kaufmann, P M; McDiarmid, M D; Glisky, M L

    1999-11-01

    The A-not-B (AB) task has been hypothesized to measure executive/frontal lobe function; however, the developmental and measurement characteristics of this task have not been investigated. Performance on AB and comparison tasks adapted from the developmental and neuroscience literatures was examined in 117 preschool children (ages 23-66 months). Age significantly predicted performance on AB, Delayed Alternation, Spatial Reversal, Color Reversal, and Self-Control tasks. A four-factor analytic model best fit the task performance data. AB task indices loaded on two factors with measures from the Self-Control and Delayed Alternation tasks, respectively. AB indices did not load with those from the reversal tasks despite similarities in task administration and presumed cognitive demand (working memory). These results indicate that AB is sensitive to individual differences in age-related performance in preschool children and suggest that AB performance is related to both working memory and inhibition processes in this age range.

  5. A Demands-Resources Model of Work Pressure in IT Student Task Groups

    ERIC Educational Resources Information Center

    Wilson, E. Vance; Sheetz, Steven D.

    2010-01-01

    This paper presents an initial test of the group task demands-resources (GTD-R) model of group task performance among IT students. We theorize that demands and resources in group work influence formation of perceived group work pressure (GWP) and that heightened levels of GWP inhibit group task performance. A prior study identified 11 factors…

  6. A Meta-Analytic Investigation of Fiedler's Contingency Model of Leadership Effectiveness.

    ERIC Educational Resources Information Center

    Strube, Michael J.; Garcia, Joseph E.

    According to Fiedler's Contingency Model of Leadership Effectiveness, group performance is a function of the leader-situation interaction. A review of past validations has found several problems associated with the model. Meta-analytic techniques were applied to the Contingency Model in order to assess the validation evidence quantitatively. The…

  7. Effect(s) of Language Tasks on Severity of Disfluencies in Preschool Children with Stuttering.

    PubMed

    Zamani, Peyman; Ravanbakhsh, Majid; Weisi, Farzad; Rashedi, Vahid; Naderi, Sara; Hosseinzadeh, Ayub; Rezaei, Mohammad

    2017-04-01

    Speech disfluency in children can increase or decrease depending on the type of linguistic task presented to them. In this study, the effect of sentence imitation and sentence modeling on the severity of speech disfluencies in preschool children with stuttering is investigated. In this cross-sectional descriptive analytical study, 58 children with stuttering (29 with mild stuttering and 29 with moderate stuttering) and 58 typical children aged between 4 and 6 years old participated. The severity of speech disfluencies was assessed by SSI-3 and TOCS before and after offering each task. In boys with mild stuttering, the mean stuttering severity scores in the two tasks of sentence imitation and sentence modeling were [Formula: see text] and [Formula: see text] respectively ([Formula: see text]). In boys with moderate stuttering, the stuttering severity in the two tasks was [Formula: see text] and [Formula: see text] respectively ([Formula: see text]). In girls with mild stuttering, the stuttering severity in the two tasks of sentence imitation and sentence modeling was [Formula: see text] and [Formula: see text] respectively ([Formula: see text]). In girls with moderate stuttering, the mean stuttering severity in the two tasks was [Formula: see text] and [Formula: see text] respectively ([Formula: see text]). In typical children of both genders, the speech disfluency scores did not differ significantly between the two tasks ([Formula: see text]). In preschool children with mild stuttering and their nonstuttering peers, performing the tasks of sentence imitation and sentence modeling did not increase the severity of stuttering; however, in preschool children with moderate stuttering, the sentence modeling task increased the stuttering severity score.

  8. Modeling good research practices--overview: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force-1.

    PubMed

    Caro, J Jaime; Briggs, Andrew H; Siebert, Uwe; Kuntz, Karen M

    2012-01-01

    Models-mathematical frameworks that facilitate estimation of the consequences of health care decisions-have become essential tools for health technology assessment. Evolution of the methods since the first ISPOR modeling task force reported in 2003 has led to a new task force, jointly convened with the Society for Medical Decision Making, and this series of seven papers presents the updated recommendations for best practices in conceptualizing models; implementing state-transition approaches, discrete event simulations, or dynamic transmission models; dealing with uncertainty; and validating and reporting models transparently. This overview introduces the work of the task force, provides all the recommendations, and discusses some quandaries that require further elucidation. The audience for these papers includes those who build models, stakeholders who utilize their results, and, indeed, anyone concerned with the use of models to support decision making.

  9. Human task animation from performance models and natural language input

    NASA Technical Reports Server (NTRS)

    Esakov, Jeffrey; Badler, Norman I.; Jung, Moon

    1989-01-01

    Graphical manipulation of human figures is essential for certain types of human factors analyses such as reach, clearance, fit, and view. In many situations, however, the animation of simulated people performing various tasks may be based on more complicated functions involving multiple simultaneous reaches, critical timing, resource availability, and human performance capabilities. One rather effective means for creating such a simulation is through a natural language description of the tasks to be carried out. Given an anthropometrically-sized figure and a geometric workplace environment, various simple actions such as reach, turn, and view can be effectively controlled from language commands or standard NASA checklist procedures. The commands may also be generated by external simulation tools. Task timing is determined from actual performance models, if available, such as strength models or Fitts' Law. The resulting action specifications are animated on a Silicon Graphics Iris workstation in real time.

  10. Optimization of Analytical Potentials for Coarse-Grained Biopolymer Models.

    PubMed

    Mereghetti, Paolo; Maccari, Giuseppe; Spampinato, Giulia Lia Beatrice; Tozzini, Valentina

    2016-08-25

    The increasing trend in the recent literature on coarse grained (CG) models testifies to their impact in the study of complex systems. However, the CG model landscape is variegated: even at a given resolution level, the force fields are very heterogeneous and are optimized with very different parametrization procedures. Along the road toward standardization of CG models for biopolymers, here we describe a strategy to aid the building and optimization of statistics-based analytical force fields and its implementation in the software package AsParaGS (Assisted Parameterization platform for coarse Grained modelS). Our method is based on analytical potentials optimized by targeting the statistical distributions of internal variables by means of a combination of different algorithms (i.e., relative-entropy-driven stochastic exploration of the parameter space and iterative Boltzmann inversion). This allows designing a custom model that endows the force field terms with a physically sound meaning. Furthermore, the level of transferability and accuracy can be tuned through the choice of the statistical data set composition. The method, illustrated by means of applications to helical polypeptides, also involves the analysis of two- and three-variable distributions, and allows handling issues related to correlations among force-field terms. AsParaGS is interfaced with general-purpose molecular dynamics codes and currently implements the "minimalist" subclass of CG models (i.e., one bead per amino acid, Cα based). Extensions to nucleic acids and different levels of coarse graining are under way.
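
    For readers unfamiliar with iterative Boltzmann inversion, one of the two algorithms named above, a minimal sketch of a single damped update step follows; the grid, damping factor, and distributions are illustrative, and the relative-entropy search that AsParaGS combines with it is not shown.

        import numpy as np

        def ibi_update(V, g_current, g_target, kT=2.494, alpha=0.5, eps=1e-12):
            """One damped iterative Boltzmann inversion step on a tabulated
            pair potential V(r): V <- V + alpha*kT*ln(g_current/g_target).
            kT is in kJ/mol (300 K); alpha damps the iteration."""
            return V + alpha * kT * np.log((g_current + eps) / (g_target + eps))

        # Toy 3-point grid: the potential is lowered where g_current < g_target,
        # which increases sampling there on the next iteration
        V = np.array([5.0, -1.0, 0.0])
        print(ibi_update(V, np.array([0.2, 1.4, 1.0]), np.array([0.3, 1.2, 1.0])))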

  11. An analytical model for scanning electron microscope Type I magnetic contrast with energy filtering

    NASA Astrophysics Data System (ADS)

    Chim, W. K.

    1994-02-01

    In this article, a theoretical model for type I magnetic contrast calculations in the scanning electron microscope with energy filtering is presented. This model uses an approximate form of the secondary electron (SE) energy distribution by Chung and Everhart [M. S. Chung and T. E. Everhart, J. Appl. Phys. 45, 707 (1974)]. Closed-form analytical expressions for the contrast and quality factors, which take into consideration the work function and field-distance integral of the material being studied, are obtained. This analytical model is compared with a more accurate numerical model. Results showed that the contrast and quality factors for the analytical model differed by not more than 20% from the numerical model, with the actual difference depending on the range of filtered SE energies considered. This model has also been extended to the situation of a two-detector (i.e., detector A and B) configuration, in which enhanced magnetic contrast and quality factor can be obtained by operating in the "A-B" mode.

  12. Analytical mesoscale modeling of aeolian sand transport

    NASA Astrophysics Data System (ADS)

    Lämmel, Marc; Kroy, Klaus

    2017-11-01

    The mesoscale structure of aeolian sand transport determines a variety of natural phenomena studied in planetary and Earth science. We analyze it theoretically beyond the mean-field level, based on the grain-scale transport kinetics and splash statistics. A coarse-grained analytical model is proposed and verified by numerical simulations resolving individual grain trajectories. The predicted height-resolved sand flux and other important characteristics of the aeolian transport layer agree remarkably well with a comprehensive compilation of field and wind-tunnel data, suggesting that the model robustly captures the essential mesoscale physics. By comparing the predicted saturation length with field data for the minimum sand-dune size, we elucidate the importance of intermittent turbulent wind fluctuations for field measurements and reconcile conflicting previous models for this most enigmatic emergent aeolian scale.

  13. Analytical modeling of circuit aerodynamics in the new NASA Lewis wind tunnel

    NASA Technical Reports Server (NTRS)

    Towne, C. E.; Povinelli, L. A.; Kunik, W. G.; Muramoto, K. K.; Hughes, C. E.; Levy, R.

    1985-01-01

    Rehabilitation and extension of the capability of the altitude wind tunnel (AWT) were analyzed. The analytical modeling program involves the use of advanced axisymmetric and three-dimensional viscous analyses to compute the flow through the various AWT components. Results for the analytical modeling of the high speed leg aerodynamics are presented; these include: an evaluation of the flow quality at the entrance to the test section, an investigation of the effects of test section bleed for different model blockages, and an examination of three-dimensional effects in the diffuser due to reentry flow and due to the change in cross-sectional shape of the exhaust scoop.

  14. Evaluation of one dimensional analytical models for vegetation canopies

    NASA Technical Reports Server (NTRS)

    Goel, Narendra S.; Kuusk, Andres

    1992-01-01

    The SAIL model for one-dimensional homogeneous vegetation canopies has been modified to include the specular reflectance and hot spot effects. This modified model and the Nilson-Kuusk model are evaluated by comparing the reflectances given by them against those given by a radiosity-based computer model, Diana, for a set of canopies, characterized by different leaf area index (LAI) and leaf angle distribution (LAD). It is shown that for homogeneous canopies, the analytical models are generally quite accurate in the visible region, but not in the infrared region. For architecturally realistic heterogeneous canopies of the type found in nature, these models fall short. These shortcomings are quantified.

  15. Stochastic model predicts evolving preferences in the Iowa gambling task

    PubMed Central

    Fuentes, Miguel A.; Lavín, Claudio; Contreras-Huerta, L. Sebastián; Miguel, Hernan; Rosales Jubal, Eduardo

    2014-01-01

    Learning under uncertainty is a common task that people face in their daily life. This process relies on the cognitive ability to adjust behavior to environmental demands. Although the biological underpinnings of those cognitive processes have been extensively studied, there has been little work in formal models seeking to capture the fundamental dynamic of learning under uncertainty. In the present work, we aimed to understand the basic cognitive mechanisms of outcome processing involved in decisions under uncertainty and to evaluate the relevance of previous experiences in enhancing learning processes within such uncertain context. We propose a formal model that emulates the behavior of people playing a well established paradigm (Iowa Gambling Task - IGT) and compare its outcome with a behavioral experiment. We further explored whether it was possible to emulate maladaptive behavior observed in clinical samples by modifying the model parameter which controls the update of expected outcomes distributions. Results showed that the performance of the model resembles the observed participant performance as well as IGT performance by healthy subjects described in the literature. Interestingly, the model converges faster than some subjects on the decks with higher net expected outcome. Furthermore, the modified version of the model replicated the trend observed in clinical samples performing the task. We argue that the basic cognitive component underlying learning under uncertainty can be represented as a differential equation that considers the outcomes of previous decisions for guiding the agent to an adaptive strategy. PMID:25566043

  16. Stochastic model predicts evolving preferences in the Iowa gambling task.

    PubMed

    Fuentes, Miguel A; Lavín, Claudio; Contreras-Huerta, L Sebastián; Miguel, Hernan; Rosales Jubal, Eduardo

    2014-01-01

    Learning under uncertainty is a common task that people face in their daily life. This process relies on the cognitive ability to adjust behavior to environmental demands. Although the biological underpinnings of those cognitive processes have been extensively studied, there has been little work in formal models seeking to capture the fundamental dynamic of learning under uncertainty. In the present work, we aimed to understand the basic cognitive mechanisms of outcome processing involved in decisions under uncertainty and to evaluate the relevance of previous experiences in enhancing learning processes within such uncertain context. We propose a formal model that emulates the behavior of people playing a well established paradigm (Iowa Gambling Task - IGT) and compare its outcome with a behavioral experiment. We further explored whether it was possible to emulate maladaptive behavior observed in clinical samples by modifying the model parameter which controls the update of expected outcomes distributions. Results showed that the performance of the model resembles the observed participant performance as well as IGT performance by healthy subjects described in the literature. Interestingly, the model converges faster than some subjects on the decks with higher net expected outcome. Furthermore, the modified version of the model replicated the trend observed in clinical samples performing the task. We argue that the basic cognitive component underlying learning under uncertainty can be represented as a differential equation that considers the outcomes of previous decisions for guiding the agent to an adaptive strategy.
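
    A minimal discrete-time sketch of the delta-rule expectancy idea underlying both records above is shown below; the softmax choice rule, learning rate, and toy decks are assumptions made for illustration, not the authors' exact differential-equation model.

        import math
        import random

        def simulate_igt(payoffs, trials=100, lr=0.1, temp=10.0):
            """Pick among decks by softmax over learned expectancies E."""
            E = [0.0] * len(payoffs)
            choices = []
            for _ in range(trials):
                w = [math.exp(e / temp) for e in E]
                s = sum(w)
                r, acc, deck = random.random(), 0.0, len(w) - 1
                for i, wi in enumerate(w):           # sample a deck
                    acc += wi / s
                    if r <= acc:
                        deck = i
                        break
                outcome = payoffs[deck]()            # draw a net payoff
                E[deck] += lr * (outcome - E[deck])  # delta-rule update
                choices.append(deck)
            return E, choices

        random.seed(0)
        decks = [lambda: 100 - (250 if random.random() < 0.5 else 0),  # mean -25
                 lambda: 50 - (50 if random.random() < 0.5 else 0)]    # mean +25
        E, choices = simulate_igt(decks)
        print(E, choices[-10:])   # late choices should favor the better deck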

  17. A semi-analytic model of magnetized liner inertial fusion

    DOE PAGES

    McBride, Ryan D.; Slutz, Stephen A.

    2015-05-21

    Presented is a semi-analytic model of magnetized liner inertial fusion (MagLIF). This model accounts for several key aspects of MagLIF, including: (1) preheat of the fuel (optionally via laser absorption); (2) pulsed-power-driven liner implosion; (3) liner compressibility with an analytic equation of state, artificial viscosity, internal magnetic pressure, and ohmic heating; (4) adiabatic compression and heating of the fuel; (5) radiative losses and fuel opacity; (6) magnetic flux compression with Nernst thermoelectric losses; (7) magnetized electron and ion thermal conduction losses; (8) end losses; (9) enhanced losses due to prescribed dopant concentrations and contaminant mix; (10) deuterium-deuterium and deuterium-tritium primary fusion reactions for arbitrary deuterium to tritium fuel ratios; and (11) magnetized α-particle fuel heating. We show that this simplified model, with its transparent and accessible physics, can be used to reproduce the general 1D behavior presented throughout the original MagLIF paper [S. A. Slutz et al., Phys. Plasmas 17, 056303 (2010)]. We also discuss some important physics insights gained as a result of developing this model, such as the dependence of radiative loss rates on the radial fraction of the fuel that is preheated.

  18. A semi-analytic model of magnetized liner inertial fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McBride, Ryan D.; Slutz, Stephen A.

    Presented is a semi-analytic model of magnetized liner inertial fusion (MagLIF). This model accounts for several key aspects of MagLIF, including: (1) preheat of the fuel (optionally via laser absorption); (2) pulsed-power-driven liner implosion; (3) liner compressibility with an analytic equation of state, artificial viscosity, internal magnetic pressure, and ohmic heating; (4) adiabatic compression and heating of the fuel; (5) radiative losses and fuel opacity; (6) magnetic flux compression with Nernst thermoelectric losses; (7) magnetized electron and ion thermal conduction losses; (8) end losses; (9) enhanced losses due to prescribed dopant concentrations and contaminant mix; (10) deuterium-deuterium and deuterium-tritium primary fusion reactions for arbitrary deuterium to tritium fuel ratios; and (11) magnetized α-particle fuel heating. We show that this simplified model, with its transparent and accessible physics, can be used to reproduce the general 1D behavior presented throughout the original MagLIF paper [S. A. Slutz et al., Phys. Plasmas 17, 056303 (2010)]. We also discuss some important physics insights gained as a result of developing this model, such as the dependence of radiative loss rates on the radial fraction of the fuel that is preheated.

  19. Thermal conductivity of microporous layers: Analytical modeling and experimental validation

    NASA Astrophysics Data System (ADS)

    Andisheh-Tadbir, Mehdi; Kjeang, Erik; Bahrami, Majid

    2015-11-01

    A new compact relationship is developed for the thermal conductivity of the microporous layer (MPL) used in polymer electrolyte fuel cells as a function of pore size distribution, porosity, and compression pressure. The proposed model is successfully validated against experimental data obtained from a transient plane source thermal constants analyzer. The thermal conductivities of carbon paper samples with and without MPL were measured as a function of load (1-6 bar), and the MPL thermal conductivity was found to be between 0.13 and 0.17 W m-1 K-1. The proposed analytical model predicts the experimental thermal conductivities within 5%. A correlation generated from the analytical model was used in a multi-objective genetic algorithm to predict the pore size distribution and porosity for an MPL with optimized thermal conductivity and mass diffusivity. The results suggest that an optimized MPL, in terms of heat and mass transfer coefficients, has an average pore size of 122 nm and 63% porosity.

  20. An analytical model for enantioseparation process in capillary electrophoresis

    NASA Astrophysics Data System (ADS)

    Ranzuglia, G. A.; Manzi, S. J.; Gomez, M. R.; Belardinelli, R. E.; Pereyra, V. D.

    2017-12-01

    An analytical model to explain the mobilities of an enantiomer binary mixture in a capillary electrophoresis experiment is proposed. The model consists of a set of kinetic equations describing the evolution of the populations of molecules involved in the enantioseparation process in capillary electrophoresis (CE). These equations take into account the asymmetric driven migration of the enantiomer molecules, the chiral selector, and the temporary diastereomeric complexes, which are the products of the reversible reaction between the enantiomers and the chiral selector. The solution of these equations gives the spatial and temporal distribution of each species in the capillary, reproducing a typical electropherogram signal. The mobility, μ, of each species is obtained from the position of the maximum (main peak) of its respective distribution. Thereby, the apparent electrophoretic mobility difference, Δμ, as a function of chiral selector concentration, [C], can be measured. The behaviour of Δμ versus [C] is compared with the phenomenological model introduced by Wren and Rowe in J. Chromatography 1992, 603, 235. To test the analytical model, a capillary electrophoresis experiment for the enantiomeric separation of the (±)-chlorpheniramine β-cyclodextrin (β-CD) system is used. These data, as well as others obtained from the literature, are in close agreement with those obtained by the model. All these results are also corroborated by kinetic Monte Carlo simulation.
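
    For reference, the Wren and Rowe relation that the kinetic model is compared against can be evaluated directly: each enantiomer's apparent mobility is a complexation-weighted average of its free and complexed mobilities, so Δμ rises and then falls with selector concentration. The binding constants and mobilities below are illustrative assumptions.

        import numpy as np

        def apparent_mobility(mu_free, mu_complex, K, C):
            """Wren & Rowe (1992): mu = (mu_f + mu_c*K*C) / (1 + K*C)."""
            return (mu_free + mu_complex * K * C) / (1.0 + K * C)

        C = np.linspace(0.0, 8e-3, 9)                       # selector concentration (M)
        mu1 = apparent_mobility(2.0e-8, 0.5e-8, 800.0, C)   # binding constant assumed
        mu2 = apparent_mobility(2.0e-8, 0.5e-8, 400.0, C)   # weaker binder, assumed
        print(np.abs(mu1 - mu2))                            # rises to a peak, then falls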

  1. An analytical model of leakage neutron equivalent dose for passively-scattered proton radiotherapy and validation with measurements.

    PubMed

    Schneider, Christopher; Newhauser, Wayne; Farah, Jad

    2015-05-18

    Exposure to stray neutrons increases the risk of second cancer development after proton therapy. Previously reported analytical models of this exposure were difficult to configure and had not been investigated below 100 MeV proton energy. The purposes of this study were to test an analytical model of neutron equivalent dose per therapeutic absorbed dose (H/D) at 75 MeV and to improve the model by reducing the number of configuration parameters and making it continuous in proton energy from 100 to 250 MeV. To develop the analytical model, we used previously published H/D values in water from Monte Carlo simulations of a general-purpose beamline for proton energies from 100 to 250 MeV. We also configured and tested the model on in-air neutron equivalent doses measured for a 75 MeV ocular beamline. Predicted H/D values from the analytical model and Monte Carlo agreed well from 100 to 250 MeV (10% average difference). Predicted H/D values from the analytical model also agreed well with measurements at 75 MeV (15% average difference). The results indicate that analytical models can give fast, reliable calculations of neutron exposure after proton therapy. This ability is absent in treatment planning systems but vital to second cancer risk estimation.

  2. Anatomy of an Error: A Bidirectional State Model of Task Engagement/Disengagement and Attention-Related Errors

    ERIC Educational Resources Information Center

    Cheyne, J. Allan; Solman, Grayden J. F.; Carriere, Jonathan S. A.; Smilek, Daniel

    2009-01-01

    We present arguments and evidence for a three-state attentional model of task engagement/disengagement. The model postulates three states of mind-wandering: occurrent task inattention, generic task inattention, and response disengagement. We hypothesize that all three states are both causes and consequences of task performance outcomes and apply…

  3. An Analytical Hierarchy Process Model for the Evaluation of College Experimental Teaching Quality

    ERIC Educational Resources Information Center

    Yin, Qingli

    2013-01-01

    Taking into account the characteristics of college experimental teaching, through investigation and analysis, evaluation indices and an Analytical Hierarchy Process (AHP) model of experimental teaching quality have been established following the analytical hierarchy process method, and the evaluation indices have been given reasonable weights. An…

  4. Analytical model for fast reconnection in large guide field plasma configurations

    NASA Astrophysics Data System (ADS)

    Simakov, A. N.; Chacón, L.; Grasso, D.; Borgogno, D.; Zocco, A.

    2009-11-01

    Significant progress in understanding magnetic reconnection without a guide field was made recently by deriving quantitatively accurate analytical models for reconnection in electron [1] and Hall [2] MHD. However, no such analytical model is available for reconnection with a guide field. Here, we derive such an analytical model for the large-guide-field, low-β, cold-ion fluid model [3] with electron inertia, ion viscosity μ, and resistivity η. We find that the reconnection is Sweet-Parker-like when the Sweet-Parker layer thickness δSP > (ρs^4 + de^4)^(1/4), with ρs and de the sound Larmor radius and the electron inertial length. However, reconnection changes character otherwise, resulting in reconnection rates Ez/Bx^2 ≈ √(2η/μ) (ρs^2 + de^2)/(ρs w), with Bx the upstream magnetic field and w the diffusion region length. Unlike in the zero-guide-field case, μ plays a crucial role in manifesting fast reconnection rates. If it represents the perpendicular viscosity [3], √(η/μ) ~ √((me/mi)(Ti/Te)), and Ez becomes dissipation independent and therefore potentially fast. [1] L. Chacón, A. N. Simakov, and A. Zocco, PRL 99, 235001 (2007). [2] A. N. Simakov and L. Chacón, PRL 101, 105003 (2008). [3] D. Biskamp, Magnetic Reconnection in Plasmas, Cambridge University Press, 2000.

  5. Numerical modeling and analytical modeling of cryogenic carbon capture in a de-sublimating heat exchanger

    NASA Astrophysics Data System (ADS)

    Yu, Zhitao; Miller, Franklin; Pfotenhauer, John M.

    2017-12-01

    Both a numerical and an analytical model of the heat and mass transfer processes in a CO2/N2 mixture-gas de-sublimating cross-flow finned duct heat exchanger system are developed to predict the heat transferred from the mixture gas to liquid nitrogen and the de-sublimating rate of CO2 in the mixture gas. The mixture gas outlet temperature, liquid nitrogen outlet temperature, CO2 mole fraction, temperature distribution, and de-sublimating rate of CO2 through the whole heat exchanger were computed using both the numerical and the analytical model. The numerical model is built using EES (Engineering Equation Solver) [1]. Based on the simulations, a cross-flow finned duct heat exchanger can be designed and fabricated to validate the models. The performance of the heat exchanger is evaluated as a function of dimensionless variables, such as the ratio of the mass flow rate of liquid nitrogen to the mass flow rate of the inlet flue gas.

  6. Meta-Analytic Structural Equation Modeling (MASEM): Comparison of the Multivariate Methods

    ERIC Educational Resources Information Center

    Zhang, Ying

    2011-01-01

    Meta-analytic Structural Equation Modeling (MASEM) has drawn interest from many researchers recently. In doing MASEM, researchers usually first synthesize correlation matrices across studies using meta-analysis techniques and then analyze the pooled correlation matrix using structural equation modeling techniques. Several multivariate methods of…

  7. An analytical model of flagellate hydrodynamics

    NASA Astrophysics Data System (ADS)

    Dölger, Julia; Bohr, Tomas; Andersen, Anders

    2017-04-01

    Flagellates are unicellular microswimmers that propel themselves using one or several beating flagella. We consider a hydrodynamic model of flagellates and explore the effect of flagellar arrangement and beat pattern on swimming kinematics and near-cell flow. The model is based on the analytical solution by Oseen for the low Reynolds number flow due to a point force outside a no-slip sphere. The no-slip sphere represents the cell and the point force a single flagellum. By superposition we are able to model a freely swimming flagellate with several flagella. For biflagellates with left-right symmetric flagellar arrangements we determine the swimming velocity, and we show that transversal forces due to the periodic movements of the flagella can promote swimming. For a model flagellate with both a longitudinal and a transversal flagellum we determine radius and pitch of the helical swimming trajectory. We find that the longitudinal flagellum is responsible for the average translational motion whereas the transversal flagellum governs the rotational motion. Finally, we show that the transversal flagellum can lead to strong feeding currents to localized capture sites on the cell surface.

  8. Analytical modelling of temperature effects on an AMPA-type synapse.

    PubMed

    Kufel, Dominik S; Wojcik, Grzegorz M

    2018-05-11

    It was previously reported that temperature may significantly influence neural dynamics at different levels of brain function. Thus, in computational neuroscience, it would be useful to make models scalable over a wide range of brain temperatures. However, a lack of experimental data and the absence of temperature-dependent analytical models of synaptic conductance do not allow temperature effects to be included at the multi-neuron modeling level. In this paper, we propose a first step toward dealing with this problem: a new analytical model of AMPA-type synaptic conductance, which is able to incorporate temperature effects in low-frequency stimulations. It was constructed based on a Markov model description of AMPA receptor kinetics using a set of coupled ODEs. The closed-form solution for the set of differential equations was found using an uncoupling assumption (introduced in the paper) with a few simplifications motivated both by experimental data and by Monte Carlo simulation of synaptic transmission. The model may be used for computationally efficient and biologically accurate implementation of temperature effects on AMPA receptor conductance in large-scale neural network simulations. As a result, it may open a wide range of new possibilities for researching the influence of temperature on certain aspects of brain functioning.
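
    A toy numerical sketch of how temperature can enter receptor kinetics via Q10 scaling of rate constants is shown below; it uses a two-state (closed/open) scheme with assumed rates, whereas the paper derives a closed-form solution for a fuller Markov scheme of the AMPA receptor.

        def q10_scale(rate, T, T_ref=36.0, q10=2.5):
            """Scale a rate constant from T_ref to T (deg C) with a Q10 law."""
            return rate * q10 ** ((T - T_ref) / 10.0)

        def open_fraction(t_end, glu, T, alpha=1.0e6, beta=200.0, dt=1e-5):
            """Euler integration of dO/dt = a*[Glu]*(1 - O) - b*O,
            with both rate constants Q10-scaled to temperature T."""
            a, b = q10_scale(alpha, T), q10_scale(beta, T)
            O = 0.0
            for _ in range(int(t_end / dt)):
                O += dt * (a * glu * (1.0 - O) - b * O)
            return O

        # Open probability approaches steady state faster at 36 C than at 25 C
        print(open_fraction(2e-3, 1e-3, 36.0), open_fraction(2e-3, 1e-3, 25.0))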

  9. Implementing a Matrix-free Analytical Jacobian to Handle Nonlinearities in Models of 3D Lithospheric Deformation

    NASA Astrophysics Data System (ADS)

    Kaus, B.; Popov, A.

    2015-12-01

    The analytical expression for the Jacobian is a key component for achieving fast and robust convergence of the nonlinear Newton-Raphson iterative solver. Accomplishing this task in practice often requires a significant algebraic effort. Therefore it is quite common to use a cheap alternative instead, for example by approximating the Jacobian with a finite difference estimation. Despite its simplicity, this is a relatively fragile and unreliable technique that is sensitive to the scaling of the residual and unknowns, as well as to the perturbation parameter selection. Unfortunately, no universal rule can be applied to provide both a robust scaling and a robust perturbation. The approach we use here is to derive the analytical Jacobian for the coupled set of momentum, mass, and energy conservation equations together with an elasto-visco-plastic rheology and a marker-in-cell/staggered finite difference method. The software project LaMEM (Lithosphere and Mantle Evolution Model) is primarily developed for thermo-mechanically coupled modeling of 3D lithospheric deformation. The code is based on a staggered-grid finite difference discretization in space, and uses customized scalable solvers from the PETSc library to run efficiently on massively parallel machines (such as IBM Blue Gene/Q). Currently LaMEM relies on the Jacobian-Free Newton-Krylov (JFNK) nonlinear solver, which approximates the Jacobian-vector product using a simple finite difference formula. This approach never requires an assembled Jacobian matrix and uses only the residual computation routine. We use an approximate Jacobian (Picard) matrix to precondition the Krylov solver with a Galerkin geometric multigrid. Because of the inherent problems of finite difference Jacobian estimation, this approach does not always result in stable convergence. In this work we present and discuss a matrix-free technique in which the Jacobian-vector product is replaced by analytically derived expressions and compare results
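
    The contrast drawn above between the finite-difference JFNK product and an analytical Jacobian-vector product can be made concrete on a toy residual (not LaMEM's staggered-grid equations); the perturbation heuristic below is one common choice, not LaMEM's.

        import numpy as np

        def residual(u):
            """Toy nonlinear system F(u) = 0."""
            return np.array([u[0] ** 2 + u[1] - 3.0,
                             u[0] + u[1] ** 3 - 5.0])

        def jv_finite_difference(u, v):
            """Matrix-free JFNK product: J v ~ (F(u + eps*v) - F(u)) / eps.
            Sensitive to the perturbation size and to residual/unknown scaling."""
            eps = (np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u))
                   / max(np.linalg.norm(v), 1e-30))
            return (residual(u + eps * v) - residual(u)) / eps

        def jv_analytical(u, v):
            """Exact product from the hand-derived Jacobian of the toy residual."""
            J = np.array([[2.0 * u[0], 1.0],
                          [1.0, 3.0 * u[1] ** 2]])
            return J @ v

        u, v = np.array([1.0, 2.0]), np.array([0.3, -0.7])
        print(jv_finite_difference(u, v), jv_analytical(u, v))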

  10. Approximate analytic solutions to 3D unconfined groundwater flow within regional 2D models

    NASA Astrophysics Data System (ADS)

    Luther, K.; Haitjema, H. M.

    2000-04-01

    We present methods for finding approximate analytic solutions to three-dimensional (3D) unconfined steady state groundwater flow near partially penetrating and horizontal wells, and for combining those solutions with regional two-dimensional (2D) models. The 3D solutions use distributed singularities (analytic elements) to enforce boundary conditions on the phreatic surface and seepage faces at vertical wells, and to maintain fixed-head boundary conditions, obtained from the 2D model, at the perimeter of the 3D model. The approximate 3D solutions are analytic (continuous and differentiable) everywhere, including on the phreatic surface itself. While continuity of flow is satisfied exactly in the infinite 3D flow domain, water balance errors can occur across the phreatic surface.

  11. Analytic model of a multi-electron atom

    NASA Astrophysics Data System (ADS)

    Skoromnik, O. D.; Feranchuk, I. D.; Leonau, A. U.; Keitel, C. H.

    2017-12-01

    A fully analytical approximation for the observable characteristics of many-electron atoms is developed via a complete and orthonormal hydrogen-like basis with a single-effective charge parameter for all electrons of a given atom. The basis completeness allows us to employ the secondary-quantized representation for the construction of regular perturbation theory, which includes in a natural way correlation effects, converges fast and enables an effective calculation of the subsequent corrections. The hydrogen-like basis set provides a possibility to perform all summations over intermediate states in closed form, including both the discrete and continuous spectra. This is achieved with the help of the decomposition of the multi-particle Green function in a convolution of single-electronic Coulomb Green functions. We demonstrate that our fully analytical zeroth-order approximation describes the whole spectrum of the system, provides accuracy, which is independent of the number of electrons and is important for applications where the Thomas-Fermi model is still utilized. In addition already in second-order perturbation theory our results become comparable with those via a multi-configuration Hartree-Fock approach.

  12. Modeling good research practices--overview: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force--1.

    PubMed

    Caro, J Jaime; Briggs, Andrew H; Siebert, Uwe; Kuntz, Karen M

    2012-01-01

    Models--mathematical frameworks that facilitate estimation of the consequences of health care decisions--have become essential tools for health technology assessment. Evolution of the methods since the first ISPOR Modeling Task Force reported in 2003 has led to a new Task Force, jointly convened with the Society for Medical Decision Making, and this series of seven articles presents the updated recommendations for best practices in conceptualizing models; implementing state-transition approaches, discrete event simulations, or dynamic transmission models; and dealing with uncertainty and validating and reporting models transparently. This overview article introduces the work of the Task Force, provides all the recommendations, and discusses some quandaries that require further elucidation. The audience for these articles includes those who build models, stakeholders who utilize their results, and, indeed, anyone concerned with the use of models to support decision making. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  13. Diffusion models of the flanker task: Discrete versus gradual attentional selection

    PubMed Central

    White, Corey N.; Ratcliff, Roger; Starns, Jeffrey S.

    2011-01-01

    The present study tested diffusion models of processing in the flanker task, in which participants identify a target that is flanked by items that indicate the same (congruent) or opposite response (incongruent). Single- and dual-process flanker models were implemented in a diffusion-model framework and tested against data from experiments that manipulated response bias, speed/accuracy tradeoffs, attentional focus, and stimulus configuration. There was strong mimicry among the models, and each captured the main trends in the data for the standard conditions. However, when more complex conditions were used, a single-process spotlight model captured qualitative and quantitative patterns that the dual-process models could not. Since the single-process model provided the best balance of fit quality and parsimony, the results indicate that processing in the simple versions of the flanker task is better described by gradual rather than discrete narrowing of attention. PMID:21964663
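
    A minimal simulation sketch of the single-process "shrinking spotlight" idea: drift is the attention-weighted sum of target and flanker evidence, and the spotlight standard deviation decays continuously, so flanker influence fades gradually. Parameter values are illustrative, not fitted ones.

        import numpy as np
        from scipy.stats import norm

        def spotlight_trial(congruent, a=0.1, sd0=1.8, shrink=2.0, dt=1e-3,
                            s=0.1, rng=np.random.default_rng(0)):
            """One flanker trial; evidence diffuses between boundaries 0 and a."""
            x, t = a / 2.0, 0.0
            sign = 1.0 if congruent else -1.0
            while 0.0 < x < a:
                sd = sd0 * np.exp(-shrink * t)   # spotlight narrows over time
                # attention mass on the target (within +/-0.5 of center)
                w_target = norm.cdf(0.5, 0.0, sd) - norm.cdf(-0.5, 0.0, sd)
                drift = w_target + sign * (1.0 - w_target)
                x += drift * dt + s * np.sqrt(dt) * rng.standard_normal()
                t += dt
            return ("correct" if x >= a else "error", t)

        # Incongruent trials start with flanker-driven drift, so they are
        # slower and more error prone than congruent trials
        print(spotlight_trial(congruent=True), spotlight_trial(congruent=False))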

  14. The Use Of Computational Human Performance Modeling As Task Analysis Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacques Hugo; David Gertman

    2012-07-01

    During a review of the Advanced Test Reactor safety basis at the Idaho National Laboratory, human factors engineers identified ergonomic and human reliability risks involving the inadvertent exposure of a fuel element to the air during manual fuel movement and inspection in the canal. There were clear indications that these risks increased the probability of human error and possible severe physical outcomes to the operator. In response to this concern, a detailed study was conducted to determine the probability of the inadvertent exposure of a fuel element. Due to practical and safety constraints, the task network analysis technique was employed to study the work procedures at the canal. Discrete-event simulation software was used to model the entire procedure as well as the salient physical attributes of the task environment, such as distances walked, the effect of dropped tools, the effect of hazardous body postures, and physical exertion due to strenuous tool handling. The model also allowed analysis of the effect of cognitive processes such as visual perception demands, auditory information and verbal communication. The model made it possible to obtain reliable predictions of operator performance and workload estimates. It was also found that operator workload as well as the probability of human error in the fuel inspection and transfer task were influenced by the concurrent nature of certain phases of the task and the associated demand on cognitive and physical resources. More importantly, it was possible to determine with reasonable accuracy the stages as well as physical locations in the fuel handling task where operators would be most at risk of losing their balance and falling into the canal. The model also provided sufficient information for a human reliability analysis that indicated that the postulated fuel exposure accident was less than credible.
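
    The Monte Carlo flavor of such a task-network analysis can be sketched as a simple sequence in which each task draws a stochastic duration and an error probability; the task names and numbers below are hypothetical, not the Advanced Test Reactor data.

        import random

        TASKS = [                  # (name, mean duration s, sd, P(error)) - hypothetical
            ("walk to canal",     40.0,  5.0, 0.000),
            ("position tool",     30.0,  8.0, 0.002),
            ("grip element",      20.0,  6.0, 0.005),
            ("transfer element",  60.0, 12.0, 0.010),
            ("inspect element",   90.0, 20.0, 0.003),
        ]

        def run_once(rng):
            """One pass through the task sequence: total time and error flag."""
            t, err = 0.0, False
            for _name, mu, sd, p in TASKS:
                t += max(0.0, rng.gauss(mu, sd))
                err = err or (rng.random() < p)
            return t, err

        rng = random.Random(42)
        runs = [run_once(rng) for _ in range(100_000)]
        mean_t = sum(t for t, _ in runs) / len(runs)
        p_err = sum(e for _, e in runs) / len(runs)
        print(f"mean completion {mean_t:.0f} s, P(at least one error) = {p_err:.4f}")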

  15. Analytical modeling of flash-back phenomena. [premixed/prevaporized combustion system

    NASA Technical Reports Server (NTRS)

    Feng, C. C.

    1979-01-01

    To understand the flame flash-back phenomenon more extensively, an analytical model was formulated, and a numerical program was written and tested to solve the set of differential equations describing the model. Results show that, under a given set of conditions, a flame propagates in the boundary layer on a flat plate when the free stream is at or below 1.8 m/s.

  16. ADRA2B Deletion Variant and Enhanced Cognitive Processing of Emotional Information: A Meta-Analytical Review.

    PubMed

    Xie, Weizhen; Cappiello, Marcus; Meng, Ming; Rosenthal, Robert; Zhang, Weiwei

    2018-05-08

    This meta-analytical review examines whether a deletion variant in ADRA2B, a gene that encodes the α2B adrenoceptor involved in the regulation of norepinephrine availability, influences cognitive processing of emotional information in human observers. Using a multilevel modeling approach, this meta-analysis of 16 published studies with a total of 2,752 participants showed that the ADRA2B deletion variant was significantly associated with enhanced perceptual and cognitive task performance for emotional stimuli. In contrast, this genetic effect did not manifest in overall task performance when non-emotional content was used. Furthermore, various study-level factors, such as the targeted cognitive processes (memory vs. attention/perception) and task procedures (recall vs. recognition), could moderate the size of this genetic effect. Overall, with increased statistical power and standardized analytical procedures, this meta-analysis has established the contributions of ADRA2B to the interactions between emotion and cognition, adding to the growing literature on individual differences in attention, perception, and memory for emotional information in the general population. Copyright © 2018 Elsevier Ltd. All rights reserved.
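
    The standard random-effects aggregation step underlying a meta-analysis of this kind, inverse-variance weighting with a DerSimonian-Laird between-study variance, can be sketched as follows; the multilevel model actually used in the review generalizes this, and the effect sizes below are fabricated.

        import numpy as np

        def dersimonian_laird(effects, variances):
            """Pooled effect, its SE, and tau^2 from per-study effects/variances."""
            y, v = np.asarray(effects, float), np.asarray(variances, float)
            w = 1.0 / v                               # fixed-effect weights
            fixed = np.sum(w * y) / np.sum(w)
            Q = np.sum(w * (y - fixed) ** 2)          # heterogeneity statistic
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (Q - (len(y) - 1)) / c)   # between-study variance
            w_re = 1.0 / (v + tau2)                   # random-effects weights
            pooled = np.sum(w_re * y) / np.sum(w_re)
            return pooled, np.sqrt(1.0 / np.sum(w_re)), tau2

        print(dersimonian_laird([0.30, 0.12, 0.45, 0.20], [0.02, 0.01, 0.05, 0.03]))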

  17. Research on bathymetry estimation by Worldview-2 based with the semi-analytical model

    NASA Astrophysics Data System (ADS)

    Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.

    2015-04-01

    The South Sea Islands of China are far from the mainland; reefs account for more than 95% of the South Sea, and most of them are scattered over sensitive disputed areas of interest. Accurate methods for obtaining reef bathymetry are therefore urgently needed. Commonly used methods, including sonar, airborne laser, and remote sensing estimation, are limited by the long distances, large areas, and sensitive locations involved. Remote sensing data provide an effective way to estimate bathymetry over large areas without physical contact, exploiting the relationship between spectral information and water depth. Considering the water quality of the South Sea of China, our paper develops a bathymetry estimation method that requires no measured water depths. First, a semi-analytical optimization model of the theoretical interpretation models was studied, using a genetic algorithm to optimize the model. Meanwhile, an OpenMP parallel computing algorithm was introduced to greatly increase the speed of the semi-analytical optimization model. One island of the South Sea of China is selected as our study area, and measured water depths are used to evaluate the accuracy of the bathymetry estimated from Worldview-2 multispectral images. The results show that the semi-analytical optimization model based on the genetic algorithm performs well in our study area and that the accuracy of the estimated bathymetry in the 0-20 m shallow-water zone is acceptable. The semi-analytical optimization model based on the genetic algorithm thus solves the problem of bathymetry estimation without water depth measurements. Overall, our paper provides a new bathymetry estimation method for sensitive reefs far from the mainland.
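
    The optimize-by-genetic-algorithm idea can be sketched on a deliberately toy one-band reflectance model, r(depth) = r_inf + (r_bottom - r_inf)*exp(-2k*depth); the real semi-analytical model has more parameters and spectral bands, and the constants below are invented.

        import math
        import random

        def reflectance(depth, k=0.12, r_inf=0.02, r_bottom=0.30):
            """Toy one-band reflectance of optically shallow water."""
            return r_inf + (r_bottom - r_inf) * math.exp(-2.0 * k * depth)

        def ga_depth(observed, pop=60, gens=80, rng=random.Random(1)):
            """Evolve candidate depths (0-20 m) minimizing the reflectance residual."""
            cand = [rng.uniform(0.0, 20.0) for _ in range(pop)]
            for _ in range(gens):
                cand.sort(key=lambda d: abs(reflectance(d) - observed))
                elite = cand[:pop // 4]     # keep the best quarter, mutate the rest
                cand = elite + [min(20.0, max(0.0,
                                rng.choice(elite) + rng.gauss(0.0, 0.5)))
                                for _ in range(pop - len(elite))]
            return cand[0]

        print(ga_depth(reflectance(7.5)))   # should recover roughly 7.5 m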

  18. An Analytic Hierarchy Process for School Quality and Inspection: Model Development and Application

    ERIC Educational Resources Information Center

    Al Qubaisi, Amal; Badri, Masood; Mohaidat, Jihad; Al Dhaheri, Hamad; Yang, Guang; Al Rashedi, Asma; Greer, Kenneth

    2016-01-01

    Purpose: The purpose of this paper is to develop an analytic hierarchy planning-based framework to establish criteria weights and to develop a school performance system commonly called school inspections. Design/methodology/approach: The analytic hierarchy process (AHP) model uses pairwise comparisons and a measurement scale to generate the…

  19. An analytic model for buoyancy resonances in protoplanetary disks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lubow, Stephen H.; Zhu, Zhaohuan, E-mail: lubow@stsci.edu, E-mail: zhzhu@astro.princeton.edu

    2014-04-10

    Zhu et al. found in three-dimensional shearing box simulations a new form of planet-disk interaction that they attributed to a vertical buoyancy resonance in the disk. We describe an analytic linear model for this interaction. We adopt a simplified model involving azimuthal forcing that produces the resonance and permits an analytic description of its structure. We derive an analytic expression for the buoyancy torque and show that the vertical torque distribution agrees well with the results of the Athena simulations and a Fourier method for linear numerical calculations carried out with the same forcing. The buoyancy resonance differs from the classic Lindblad and corotation resonances in that the resonance lies along tilted planes. Its width depends on damping effects and is independent of the gas sound speed. The resonance does not excite propagating waves. At a given large azimuthal wavenumber k_y > h^(-1) (for disk thickness h), the buoyancy resonance exerts a torque over a region that lies radially closer to the corotation radius than the Lindblad resonance. Because the torque is localized to the region of excitation, it is potentially subject to the effects of nonlinear saturation. In addition, the torque can be reduced by the effects of radiative heat transfer between the resonant region and its surroundings. For each azimuthal wavenumber, the resonance establishes a large scale density wave pattern in a plane within the disk.

  20. An Analytical Thermal Model for Autonomous Soaring Research

    NASA Technical Reports Server (NTRS)

    Allen, Michael

    2006-01-01

    A viewgraph presentation describing an analytical thermal model used to enable research on autonomous soaring for a small UAV aircraft is given. The topics include: 1) Purpose; 2) Approach; 3) SURFRAD Data; 4) Convective Layer Thickness; 5) Surface Heat Budget; 6) Surface Virtual Potential Temperature Flux; 7) Convective Scaling Velocity; 8) Other Calculations; 9) Yearly trends; 10) Scale Factors; 11) Scale Factor Test Matrix; 12) Statistical Model; 13) Updraft Strength Calculation; 14) Updraft Diameter; 15) Updraft Shape; 16) Smoothed Updraft Shape; 17) Updraft Spacing; 18) Environment Sink; 19) Updraft Lifespan; 20) Autonomous Soaring Research; 21) Planned Flight Test; and 22) Mixing Ratio.

  1. Fitting Meta-Analytic Structural Equation Models with Complex Datasets

    ERIC Educational Resources Information Center

    Wilson, Sandra Jo; Polanin, Joshua R.; Lipsey, Mark W.

    2016-01-01

    A modification of the first stage of the standard procedure for two-stage meta-analytic structural equation modeling for use with large complex datasets is presented. This modification addresses two common problems that arise in such meta-analyses: (a) primary studies that provide multiple measures of the same construct and (b) the correlation…

  2. Clinical Complexity in Medicine: A Measurement Model of Task and Patient Complexity.

    PubMed

    Islam, R; Weir, C; Del Fiol, G

    2016-01-01

    Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. The objective of this paper is to develop an integrated approach to understanding and measuring clinical complexity by incorporating both task and patient complexity components, focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Three clinical infectious disease teams were observed, audio-recorded, and transcribed. Each team included an infectious diseases expert, one infectious diseases fellow, one physician assistant, and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial set of coding processes and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds in the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. The measurement model for complexity, encompassing both task and patient complexity, will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare.
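
    The inter-rater agreement statistic used to validate the coding can be computed directly; here is a minimal sketch of Cohen's kappa on fabricated labels.

        from collections import Counter

        def cohens_kappa(rater_a, rater_b):
            """Observed vs. chance agreement between two coders' labels."""
            n = len(rater_a)
            po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
            ca, cb = Counter(rater_a), Counter(rater_b)
            pe = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
            return (po - pe) / (1.0 - pe)

        a = ["task", "patient", "task", "task", "patient", "task"]
        b = ["task", "patient", "patient", "task", "patient", "task"]
        print(cohens_kappa(a, b))   # ~0.67 on these fabricated labels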

  3. Residential Saudi load forecasting using analytical model and Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Al-Harbi, Ahmad Abdulaziz

    In recent years, load forecasting has become one of the main fields of study and research. Short Term Load Forecasting (STLF) is an important part of electrical power system operation and planning. This work investigates the applicability of two approaches, Artificial Neural Networks (ANNs) and hybrid analytical models, for forecasting residential load in the Kingdom of Saudi Arabia (KSA). Both techniques are based on formulating models of human behavior modes, which capture the impact of social, religious, and official occasions as well as environmental parameters. The analysis is carried out on residential areas in three regions in two countries with distinct human activities and weather conditions: data were collected for Al-Khubar and Yanbu industrial city in KSA, and for Seattle, USA, to show the validity of the proposed models for residential load. For each region, two models are proposed: the first forecasts next-hour load, and the second forecasts next-day load. Both models are analyzed using the two techniques. The ANN next-hour models yield very accurate results for all areas, while the hybrid analytical model achieves reasonably good results. For next-day load forecasting, both approaches yield satisfactory results. Comparative studies were conducted to prove the effectiveness of the proposed models.
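
    A minimal sketch of the next-hour setup: lagged loads plus a calendar feature feed a small feedforward network. The synthetic data and features below are fabricated; the study's human-mode encoding (social, religious, and official occasions plus weather) would replace the toy hour-of-day feature.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        hours = np.arange(24 * 60)           # 60 synthetic days of hourly load
        load = 500 + 120 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 10, hours.size)

        # Features: load at t-1, t-2, t-24 and hour-of-day; target: load at t
        X = np.column_stack([load[23:-1], load[22:-2], load[:-24], hours[24:] % 24])
        y = load[24:]
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        model.fit(X[:-24], y[:-24])          # hold out the last day for testing
        print("test MAE:", np.abs(model.predict(X[-24:]) - y[-24:]).mean())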

  4. The role of decision analytic modeling in the health economic assessment of spinal intervention.

    PubMed

    Edwards, Natalie C; Skelly, Andrea C; Ziewacz, John E; Cahill, Kevin; McGirt, Matthew J

    2014-10-15

    Narrative review. To review the common tenets, strengths, and weaknesses of decision modeling for health economic assessment and to review the use of decision modeling in the spine literature to date. For the majority of spinal interventions, well-designed prospective, randomized, pragmatic cost-effectiveness studies that address the specific decision-in-need are lacking. Decision analytic modeling allows for the estimation of cost-effectiveness based on the data available to date. Given the rising demands for proven value in spine care, the use of decision analytic modeling by clinicians and policy makers is rapidly increasing. This narrative review discusses the general components of decision analytic models, how decision analytic models are populated, and the trade-offs entailed; makes recommendations for how users of spine intervention decision models might go about appraising the models; and presents an overview of published spine economic models. A proper, integrated, clinical and economic critical appraisal is necessary in the evaluation of the strength of evidence provided by a modeling evaluation. As is the case with clinical research, all options for collecting health economic or value data have their limitations and flaws. There is substantial heterogeneity across the 20 spine intervention health economic modeling studies summarized here with respect to study design, models used, reporting, and general quality. There is sparse evidence for populating spine intervention models. Results mostly showed that interventions were cost-effective based on a $100,000/quality-adjusted life-year threshold. Spine care providers, as partners with their health economic colleagues, have unique clinical expertise and perspectives that are critical for interpreting the strengths and weaknesses of health economic models. Health economic models must be critically appraised for both clinical validity and economic quality before altering health care policy, payment strategies, or

  5. Unexpected Dual Task Benefits on Cycling in Parkinson Disease and Healthy Adults: A Neuro-Behavioral Model

    PubMed Central

    Altmann, Lori J. P.; Stegemöller, Elizabeth; Hazamy, Audrey A.; Wilson, Jonathan P.; Okun, Michael S.; McFarland, Nikolaus R.; Shukla, Aparna Wagle; Hass, Chris J.

    2015-01-01

    Background When performing two tasks at once, a dual task, performance on one or both tasks typically suffers. People with Parkinson’s disease (PD) usually experience larger dual task decrements on motor tasks than healthy older adults (HOA). Our objective was to investigate the decrements in cycling caused by performing cognitive tasks with a range of difficulty in people with PD and HOAs. Methods Twenty-eight participants with Parkinson’s disease and 20 healthy older adults completed a baseline cycling task with no secondary tasks and then completed dual task cycling while performing 12 tasks from six cognitive domains representing a wide range of difficulty. Results Cycling was faster during dual task conditions than at baseline, and was significantly faster for six tasks (all p<.02) across both groups. Cycling speed improved the most during the easiest cognitive tasks, and cognitive performance was largely unaffected. Cycling improvement was predicted by task difficulty (p<.001). People with Parkinson’s disease cycled more slowly (p<.03) and showed smaller dual task benefits (p<.01) than healthy older adults. Conclusions Unexpectedly, participants’ motor performance improved during cognitive dual tasks, which cannot be explained by current models of dual task performance. To account for these findings, we propose a model integrating dual task and acute exercise approaches which posits that cognitive arousal during dual tasks increases resources to facilitate motor and cognitive performance, which is subsequently modulated by motor and cognitive task difficulty. This model can explain both the improvement observed on dual tasks in the current study and more typical dual task findings in other studies. PMID:25970607

  6. The Effect of Task Duration on Event-Based Prospective Memory: A Multinomial Modeling Approach

    PubMed Central

    Zhang, Hongxia; Tang, Weihai; Liu, Xiping

    2017-01-01

    Remembering to perform an action when a specific event occurs is referred to as Event-Based Prospective Memory (EBPM). This study investigated how EBPM performance is affected by task duration by having university students (n = 223) perform an EBPM task embedded within an ongoing computer-based color-matching task. For this experiment, we separated the overall task duration into the filler task duration and the ongoing task duration. The filler task duration is the length of time between forming the intention and the beginning of the ongoing task, and the ongoing task duration is the length of time between the beginning of the ongoing task and the appearance of the first Prospective Memory (PM) cue. Each duration was divided into three levels: 3, 6, and 9 min. The two factors were orthogonally manipulated between subjects, and a multinomial processing tree model was used to separate the effects of the different task durations on the two EBPM components. A mediation model was then created to verify whether task duration influences EBPM via self-reminding or discrimination. The results reveal three points. (1) Lengthening the ongoing task had a negative effect on EBPM performance, while lengthening the filler task had no significant effect on it. (2) As the filler task was lengthened, both the prospective and retrospective components showed a decreasing and then increasing trend; when the ongoing task was lengthened, the prospective component decreased while the retrospective component significantly increased. (3) The mediating effect of discrimination between task duration and EBPM performance was significant. We conclude that different task durations influence EBPM performance through different components, with discrimination mediating between task duration and EBPM performance. PMID:29163277
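
    To make the two-component logic concrete, the sketch below shows the generic MPT idea in Python (not the exact tree used in the study): a PM hit on a cue trial requires both the prospective component P and the retrospective component R to succeed. All parameter values and trial counts are invented for illustration.

    ```python
    # A minimal multinomial-processing-tree (MPT) sketch for EBPM, not the
    # exact tree used in the study: a PM hit requires the prospective
    # component P (remembering that an intention exists) AND the
    # retrospective component R (recognizing the cue). With only hit/miss
    # counts the product P*R is identified, which is why MPT studies add
    # response categories and conditions to separate the two components.
    import numpy as np

    def pm_hit_prob(P, R):
        return P * R  # both components must succeed on a PM-cue trial

    rng = np.random.default_rng(7)
    P_true, R_true, n_cue_trials = 0.8, 0.9, 100   # illustrative values
    hits = rng.binomial(n_cue_trials, pm_hit_prob(P_true, R_true))
    print(f"simulated hits: {hits}/{n_cue_trials}; "
          f"identified product P*R = {hits / n_cue_trials:.2f}")
    ```

    The identifiability caveat in the comments is the reason real MPT analyses model several response categories across conditions rather than hit rates alone.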

  7. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi

    2015-09-02

    This paper presents a nonlinear analytical model of a novel double-sided flux-concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine, including air gaps, permanent magnets (PM), stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry, which makes it a good alternative for evaluating prospective TFM designs compared with finite element solvers, which are numerically intensive and require more computation time. A single-phase, 1 kW, 400 rpm machine is analytically modeled and its resulting flux distribution, no-load EMF, and torque are verified with Finite Element Analysis (FEA). The results are found to be in agreement, with less than 5% error, while reducing the computation time by a factor of 25.
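
    As a rough illustration of the MEC idea (not the machine in the paper), the sketch below combines flux tubes as reluctances in series and parallel and drives them with a permanent-magnet MMF source. All dimensions and material values are invented, and saturation would be handled by iterating the relative permeability as a function of flux density.

    ```python
    # Magnetic-equivalent-circuit sketch: flux tubes as reluctances,
    # a PM as an MMF source. All geometry/material numbers are invented.
    import math

    MU0 = 4e-7 * math.pi

    def reluctance(length, area, mu_r=1.0):
        """Flux-tube reluctance R = l / (mu0 * mu_r * A)."""
        return length / (MU0 * mu_r * area)

    def series(*Rs):
        return sum(Rs)

    def parallel(*Rs):
        return 1.0 / sum(1.0 / R for R in Rs)

    R_gap    = reluctance(1e-3, 4e-4)             # one air gap
    R_stator = reluctance(5e-2, 4e-4, mu_r=2000)  # stator core path
    R_rotor  = reluctance(3e-2, 4e-4, mu_r=2000)  # rotor core path
    R_leak   = reluctance(2e-3, 1e-4)             # leakage path

    F_pm = 8.9e5 * 1e-3  # MMF ~ coercivity (A/m) times magnet length (m)

    # Two air gaps in series on the useful path, leakage in parallel.
    R_total = series(R_stator, R_rotor,
                     parallel(series(R_gap, R_gap), R_leak))
    flux = F_pm / R_total  # total magnet flux (Wb)
    print(f"magnet flux ~ {flux:.2e} Wb")
    # Saturation would be handled by iterating mu_r(B) on the core tubes.
    ```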

  8. An efficient analytical model for baffled, multi-celled membrane-type acoustic metamaterial panels

    NASA Astrophysics Data System (ADS)

    Langfeldt, F.; Gleine, W.; von Estorff, O.

    2018-03-01

    A new analytical model for the oblique incidence sound transmission loss prediction of baffled panels with multiple subwavelength sized membrane-type acoustic metamaterial (MAM) unit cells is proposed. The model employs a novel approach via the concept of the effective surface mass density and approximates the unit cell vibrations in the form of piston-like displacements. This yields a coupled system of linear equations that can be solved efficiently using well-known solution procedures. A comparison with results from finite element model simulations for both normal and diffuse field incidence shows that the analytical model delivers accurate results as long as the edge length of the MAM unit cells is smaller than half the acoustic wavelength. The computation times for the analytical calculations are 100 times smaller than for the numerical simulations. In addition to that, the effect of flexible MAM unit cell edges compared to the fixed edges assumed in the analytical model is studied numerically. It is shown that the compliance of the edges has only a small impact on the transmission loss of the panel, except at very low frequencies in the stiffness-controlled regime. The proposed analytical model is applied to investigate the effect of variations of the membrane prestress, added mass, and mass eccentricity on the diffuse transmission loss of a MAM panel with 120 unit cells. Unlike most previous investigations of MAMs, these results provide a better understanding of the acoustic performance of MAMs under more realistic conditions. For example, it is shown that by varying these parameters deliberately in a checkerboard pattern, a new anti-resonance with large transmission loss values can be introduced. A random variation of these parameters, on the other hand, is shown to have only little influence on the diffuse transmission loss, as long as the standard deviation is not too large. For very large random variations, it is shown that the peak transmission loss
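
    A minimal numerical sketch of the effective-surface-mass-density idea is given below, assuming a limp baffled panel whose transmission follows the oblique-incidence mass law and a toy base-mass-plus-resonator unit cell; the divergence of the effective mass produces the anti-resonance transmission-loss peak mentioned above. All values are illustrative, and this is not the authors' coupled piston model.

    ```python
    # Mass-law transmission loss with a frequency-dependent effective
    # surface mass density; the attached-resonator m_eff is a toy stand-in
    # for a MAM unit cell. All values are illustrative.
    import numpy as np

    RHO_C = 1.21 * 343.0  # characteristic impedance of air (kg m^-2 s^-1)

    def tl_oblique(m_eff, omega, theta):
        """TL of a limp baffled panel:
        tau = |1 + i*w*m*cos(theta)/(2*rho*c)|^-2."""
        tau = np.abs(1.0 + 1j * omega * m_eff * np.cos(theta)
                     / (2 * RHO_C)) ** -2
        return -10.0 * np.log10(tau)

    m1, m2, f_res = 0.2, 0.1, 200.0           # kg/m^2, kg/m^2, Hz (made up)
    f = np.linspace(50.0, 1000.0, 500)
    omega = 2 * np.pi * f
    m_eff = m1 + m2 / (1 - (f / f_res) ** 2)  # diverges at the anti-resonance

    tl = tl_oblique(m_eff, omega, theta=0.0)  # normal incidence
    print(f"peak TL ~ {tl.max():.0f} dB near {f[tl.argmax()]:.0f} Hz")
    ```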

  9. Applying dynamic priority scheduling scheme to static systems of pinwheel task model in power-aware scheduling.

    PubMed

    Seol, Ye-In; Kim, Young-Kuk

    2014-01-01

    Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). In this paper, we deal with the pinwheel task model, a static and predictable task model that can be applied to various embedded or ubiquitous systems. In the pinwheel task model, each task's priority is static and its execution sequence can be predetermined. There have been many static approaches to power-aware scheduling in the pinwheel task model. In this paper, however, we show that results from dynamic priority scheduling in power-aware scheduling can be applied to the pinwheel task model. This method saves more energy than the previous static priority scheduling methods and, because the system remains static, it stays tractable and applicable to small-sized embedded or ubiquitous computing. We also introduce a novel power-aware scheduling algorithm which exploits all slacks under preemptive earliest-deadline-first scheduling, which is optimal on a uniprocessor. The dynamic priority method presented in this paper can be applied directly to static systems of the pinwheel task model. The simulation results show that the proposed algorithm, with algorithmic complexity O(n), reduces energy consumption by 10-80% compared with existing algorithms.
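
    The sketch below illustrates the two ingredients named above, static utilization-based speed setting under EDF and dynamic slack reclaiming, on a made-up task set. It is a toy, not the paper's O(n) algorithm, and a real scheduler must verify deadlines before stretching a job.

    ```python
    # A minimal DVS-under-EDF sketch: run at the static utilization-based
    # speed, then stretch the next job over slack reclaimed from an early
    # completion. Task parameters and execution times are made up.
    tasks = [(2.0, 8.0), (3.0, 12.0), (1.0, 4.0)]  # (WCET, period)

    U = sum(c / p for c, p in tasks)  # utilization; EDF feasible iff U <= 1
    f_static = min(1.0, U)            # any normalized speed >= U meets deadlines
    print(f"static speed: {f_static:.2f} of f_max")

    # Job 1 (WCET 2.0) runs at f_static but only needs 1.2 units of work:
    leftover = (2.0 - 1.2) / f_static          # wall-clock slack it frees
    # Job 2 (WCET 3.0) may stretch over its own budget plus that slack
    # (a real algorithm must check that no deadline is violated):
    f_dynamic = 3.0 / (3.0 / f_static + leftover)
    print(f"reclaimed slack {leftover:.2f}, "
          f"job-2 speed {f_dynamic:.2f} of f_max")
    # CMOS dynamic power scales roughly with f^3, so energy per cycle
    # drops sharply as the speed falls.
    ```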

  10. Applying Dynamic Priority Scheduling Scheme to Static Systems of Pinwheel Task Model in Power-Aware Scheduling

    PubMed Central

    2014-01-01

    Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). In this paper, we deal with the pinwheel task model, a static and predictable task model that can be applied to various embedded or ubiquitous systems. In the pinwheel task model, each task's priority is static and its execution sequence can be predetermined. There have been many static approaches to power-aware scheduling in the pinwheel task model. In this paper, however, we show that results from dynamic priority scheduling in power-aware scheduling can be applied to the pinwheel task model. This method saves more energy than the previous static priority scheduling methods and, because the system remains static, it stays tractable and applicable to small-sized embedded or ubiquitous computing. We also introduce a novel power-aware scheduling algorithm which exploits all slacks under preemptive earliest-deadline-first scheduling, which is optimal on a uniprocessor. The dynamic priority method presented in this paper can be applied directly to static systems of the pinwheel task model. The simulation results show that the proposed algorithm, with algorithmic complexity O(n), reduces energy consumption by 10–80% compared with existing algorithms. PMID:25121126

  11. Anatomy of an error: a bidirectional state model of task engagement/disengagement and attention-related errors.

    PubMed

    Allan Cheyne, J; Solman, Grayden J F; Carriere, Jonathan S A; Smilek, Daniel

    2009-04-01

    We present arguments and evidence for a three-state attentional model of task engagement/disengagement. The model postulates three states of mind-wandering: occurrent task inattention, generic task inattention, and response disengagement. We hypothesize that all three states are both causes and consequences of task performance outcomes and apply across a variety of experimental and real-world tasks. We apply this model to the analysis of a widely used GO/NOGO task, the Sustained Attention to Response Task (SART). We identify three performance characteristics of the SART that map onto the three states of the model: RT variability, anticipations, and omissions. Predictions based on the model are tested, and largely corroborated, via regression and lag-sequential analyses of both successful and unsuccessful withholding on NOGO trials as well as self-reported mind-wandering and everyday cognitive errors. The results revealed theoretically consistent temporal associations among the state indicators and between these and SART errors as well as with self-report measures. Lag analysis was consistent with the hypotheses that temporal transitions among states are often extremely abrupt and that the association between mind-wandering and performance is bidirectional. The bidirectional effects suggest that errors constitute important occasions for reactive mind-wandering. The model also enables concrete phenomenological, behavioral, and physiological predictions for future research.
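
    The three SART indicators named above are straightforward to compute from a trial log. The sketch below uses a conventional anticipation cutoff (RT < 100 ms), which is an assumption rather than a value taken from the paper, on a few hypothetical trials.

    ```python
    # Computing the three SART performance indicators from a trial log:
    # RT variability, anticipations, and omissions (plus NOGO errors).
    # The 100-ms anticipation threshold is a conventional assumption.
    import statistics

    # (trial_type, rt_ms or None for no response) -- hypothetical data
    trials = [("GO", 412), ("GO", 380), ("GO", 95), ("NOGO", 350),
              ("GO", None), ("GO", 441), ("NOGO", None), ("GO", 72)]

    go_rts = [rt for t, rt in trials if t == "GO" and rt is not None]
    valid = [rt for rt in go_rts if rt >= 100]  # exclude anticipations

    indicators = {
        "rt_cv": statistics.stdev(valid) / statistics.mean(valid),
        "anticipations": sum(rt < 100 for rt in go_rts),
        "omissions": sum(1 for t, rt in trials if t == "GO" and rt is None),
        "nogo_errors": sum(1 for t, rt in trials
                           if t == "NOGO" and rt is not None),
    }
    print(indicators)
    ```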

  12. Goals and Characteristics of Long-Term Care Programs: An Analytic Model.

    ERIC Educational Resources Information Center

    Braun, Kathryn L.; Rose, Charles L.

    1989-01-01

    Used medico-social analytic model to compare five long-term care programs: Skilled Nursing Facility-Intermediate Care Facility (SNF-ICF) homes, ICF homes, foster homes, day hospitals, and home care. Identified similarities and differences among programs. Preliminary findings suggest that model is useful in the evaluation and design of long-term…

  13. Analytical ground state for the Jaynes-Cummings model with ultrastrong coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yuanwei; Chen, Gang

    2011-06-15

    We present a generalized variational method to analytically obtain the ground-state properties of the Jaynes-Cummings model with ultrastrong coupling. An explicit expression for the ground-state energy, which agrees well with numerical simulation over a wide range of the experimental parameters, is given. In particular, the introduced method can successfully solve the Jaynes-Cummings model with positive detuning (the atomic resonant level is larger than the photon frequency), which cannot be treated in the adiabatic approximation or the generalized rotating-wave approximation. Finally, we also demonstrate analytically how to control the mean photon number by means of the current experimental parameters, including the photon frequency, the coupling strength, and especially the atomic resonant level.

  14. Approximate analytical modeling of leptospirosis infection

    NASA Astrophysics Data System (ADS)

    Ismail, Nur Atikah; Azmi, Amirah; Yusof, Fauzi Mohamed; Ismail, Ahmad Izani

    2017-11-01

    Leptospirosis is an infectious disease carried by rodents which can cause death in humans. The disease spreads directly through contact with the feces or urine of infected rodents or through their bites, and indirectly via water contaminated with their urine and droppings. A significant increase in the number of leptospirosis cases in Malaysia, caused by recent severe floods, was recorded during the heavy rainfall season. Therefore, to understand the dynamics of leptospirosis infection, a mathematical model based on fractional differential equations has been developed and analyzed. In this paper an approximate analytical method, the multi-step Laplace Adomian decomposition method, has been used to conduct numerical simulations so as to gain insight into the spread of leptospirosis infection.

  15. Simplified Analytical Model of a Six-Degree-of-Freedom Large-Gap Magnetic Suspension System

    NASA Technical Reports Server (NTRS)

    Groom, Nelson J.

    1997-01-01

    A simplified analytical model of a six-degree-of-freedom large-gap magnetic suspension system is presented. The suspended element is a cylindrical permanent magnet that is magnetized in a direction which is perpendicular to its axis of symmetry. The actuators are air core electromagnets mounted in a planar array. The analytical model consists of an open-loop representation of the magnetic suspension system with electromagnet currents as inputs.

  16. Modeling aging effects on two-choice tasks: response signal and response time data.

    PubMed

    Ratcliff, Roger

    2008-12-01

    In the response signal paradigm, a test stimulus is presented, and then at one of a number of experimenter-determined times, a signal to respond is presented. Response signal, standard response time (RT), and accuracy data were collected from 19 college-age and 19 60- to 75-year-old participants in a numerosity discrimination task. The data were fit with 2 versions of the diffusion model. Response signal data were modeled by assuming a mixture of processes, those that have terminated before the signal and those that have not terminated; in the latter case, decisions are based on either partial information or guessing. The effects of aging on performance in the regular RT task were explained the same way in the models, with a 70- to 100-ms increase in the nondecision component of processing, more conservative decision criteria, and more variability across trials in drift and the nondecision component of processing, but little difference in drift rate (evidence). In the response signal task, the primary reason for a slower rise in the response signal functions for older participants was variability in the nondecision component of processing. Overall, the results were consistent with earlier fits of the diffusion model to the standard RT task for college-age participants and to the data from aging studies using this task in the standard RT procedure. Copyright (c) 2009 APA, all rights reserved.
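
    The mixture account described above is easy to reproduce in simulation: the sketch below runs a basic diffusion process to a boundary and, on signal trials, forces a response from partial information (the sign of the accumulated evidence) if no boundary has been reached. Parameters follow the usual diffusion-model scaling conventions, but the specific values are illustrative and nondecision time is omitted.

    ```python
    # A minimal diffusion-model simulation of the response signal
    # paradigm: terminated processes respond from the boundary they hit;
    # unterminated ones respond from partial information at the signal.
    # Parameter values are illustrative (Ratcliff-style scaling, s = 0.1).
    import random

    def diffusion_trial(v, a, z, dt=0.001, s=0.1, signal_t=None):
        """Drift v, boundary separation a, start z; returns (choice, t)."""
        x, t = z, 0.0
        while 0.0 < x < a:
            if signal_t is not None and t >= signal_t:
                return (1 if x >= a / 2 else 0), t  # partial information
            x += v * dt + s * random.gauss(0.0, dt ** 0.5)
            t += dt
        return (1 if x >= a else 0), t

    random.seed(1)
    acc = [diffusion_trial(v=0.2, a=0.1, z=0.05, signal_t=0.3)[0]
           for _ in range(2000)]
    print("accuracy at a 300-ms signal:", sum(acc) / len(acc))
    ```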

  17. Modeling Aging Effects on Two-Choice Tasks: Response Signal and Response Time Data

    PubMed Central

    Ratcliff, Roger

    2009-01-01

    In the response signal paradigm, a test stimulus is presented, and then at one of a number of experimenter-determined times, a signal to respond is presented. Response signal, standard response time (RT), and accuracy data were collected from 19 college-age and 19 60- to 75-year-old participants in a numerosity discrimination task. The data were fit with 2 versions of the diffusion model. Response signal data were modeled by assuming a mixture of processes, those that have terminated before the signal and those that have not terminated; in the latter case, decisions are based on either partial information or guessing. The effects of aging on performance in the regular RT task were explained the same way in the models, with a 70- to 100-ms increase in the nondecision component of processing, more conservative decision criteria, and more variability across trials in drift and the nondecision component of processing, but little difference in drift rate (evidence). In the response signal task, the primary reason for a slower rise in the response signal functions for older participants was variability in the nondecision component of processing. Overall, the results were consistent with earlier fits of the diffusion model to the standard RT task for college-age participants and to the data from aging studies using this task in the standard RT procedure. PMID:19140659

  18. The Influence of Feedback on Task-Switching Performance: A Drift Diffusion Modeling Account.

    PubMed

    Cohen Hoffing, Russell; Karvelis, Povilas; Rupprechter, Samuel; Seriès, Peggy; Seitz, Aaron R

    2018-01-01

    Task-switching is an important cognitive skill that facilitates our ability to choose appropriate behavior in a varied and changing environment. Task-switching training studies have sought to improve this ability by practicing switching between multiple tasks. However, an efficacious training paradigm has been difficult to develop in part due to findings that small differences in task parameters influence switching behavior in a non-trivial manner. Here, for the first time we employ the Drift Diffusion Model (DDM) to understand the influence of feedback on task-switching and investigate how drift diffusion parameters change over the course of task switch training. We trained 316 participants on a simple task where they alternated sorting stimuli by color or by shape. Feedback differed in six different ways between subjects groups, ranging from No Feedback (NFB) to a variety of manipulations addressing trial-wise vs. Block Feedback (BFB), rewards vs. punishments, payment bonuses and different payouts depending upon the trial type (switch/non-switch). While overall performance was found to be affected by feedback, no effect of feedback was found on task-switching learning. Drift Diffusion Modeling revealed that the reductions in reaction time (RT) switch cost over the course of training were driven by a continually decreasing decision boundary. Furthermore, feedback effects on RT switch cost were also driven by differences in decision boundary, but not in drift rate. These results reveal that participants systematically modified their task-switching performance without yielding an overall gain in performance.

  19. WHAEM: PROGRAM DOCUMENTATION FOR THE WELLHEAD ANALYTIC ELEMENT MODEL (EPA/600/SR-94/210)

    EPA Science Inventory

    A new computer program has been developed to determine time-of-travel capture zones in relatively simple geohydrological settings. The WhAEM package contains an analytic element model that uses superposition of (many) closed form analytical solutions to generate a groundwater flo...
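
    The core analytic-element mechanism, superposition of closed-form solutions, can be shown in a few lines. The sketch below combines a uniform flow with a single extraction well and recovers the standard stagnation-point and capture-zone-width results; the discharge values are invented, and this is a textbook building block, not the WhAEM code.

    ```python
    # Superposition of closed-form elements: uniform flow + one pumping
    # well. Illustrative values; a textbook building block, not WhAEM.
    import numpy as np

    Q0 = 0.5         # uniform-flow discharge per unit width (m^2/d)
    Qw = 500.0       # well extraction rate (m^3/d)
    zw = 0.0 + 0.0j  # well location (complex plane)

    def omega(z):
        """Complex discharge potential of the superposed elements."""
        return -Q0 * z + Qw / (2 * np.pi) * np.log(z - zw)

    Om = omega(100.0 + 50.0j)
    print(f"Phi = {Om.real:.1f}, stream function Psi = {Om.imag:.1f}")

    # Stagnation point (where the total discharge vanishes) and the
    # asymptotic capture-zone half-width follow directly:
    print(f"stagnation point x = {(zw + Qw / (2 * np.pi * Q0)).real:.1f} m")
    print(f"capture-zone half-width = {Qw / (2 * Q0):.1f} m")
    ```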

  20. FACTOR ANALYTIC MODELS OF CLUSTERED MULTIVARIATE DATA WITH INFORMATIVE CENSORING

    EPA Science Inventory

    This paper describes a general class of factor analytic models for the analysis of clustered multivariate data in the presence of informative missingness. We assume that there are distinct sets of cluster-level latent variables related to the primary outcomes and to the censorin...

  1. Do we always prioritize balance when walking? Towards an integrated model of task prioritization.

    PubMed

    Yogev-Seligmann, Galit; Hausdorff, Jeffrey M; Giladi, Nir

    2012-05-01

    Previous studies suggest that strategies such as "posture first" are implicitly employed to regulate safety when healthy adults walk while simultaneously performing another task, whereas "posture second" may be inappropriately applied in the presence of neurological disease. However, recent understandings raise questions about the traditional resource allocation concept during walking while dual tasking. We propose a task prioritization model of walking while dual tasking that integrates motor and cognitive capabilities, focusing on postural reserve, hazard estimation, and other individual intrinsic factors. The proposed prioritization model provides a theoretical foundation for future studies and a framework for the development of interventions designed to reduce the profound negative impacts of dual tasking on gait and fall risk in patients with neurological diseases. Copyright © 2012 Movement Disorder Society.

  2. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi

    2015-08-24

    This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets, stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry that makes it a good alternative for evaluating prospective designs of TFM compared to finite element solvers that are numerically intensive and require more computation time. A single-phase, 1-kW, 400-rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF, and torque are verified with finite element analysis. The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.

  3. Analytical modeling of electron energy loss spectroscopy of graphene: Ab initio study versus extended hydrodynamic model.

    PubMed

    Djordjević, Tijana; Radović, Ivan; Despoja, Vito; Lyon, Keenan; Borka, Duško; Mišković, Zoran L

    2018-01-01

    We present an analytical modeling of the electron energy loss (EEL) spectroscopy data for free-standing graphene obtained by a scanning transmission electron microscope. The probability density for energy loss of fast electrons traversing graphene under normal incidence is evaluated using an optical approximation based on the conductivity of graphene given in the local, i.e., frequency-dependent form derived by both a two-dimensional, two-fluid extended hydrodynamic (eHD) model and an ab initio method. We compare the results for the real and imaginary parts of the optical conductivity in graphene obtained by these two methods. The calculated probability density is directly compared with the EEL spectra from three independent experiments and we find very good agreement, especially in the case of the eHD model. Furthermore, we point out that the subtraction of the zero-loss peak from the experimental EEL spectra has a strong influence on the analytical model for the EEL spectroscopy data. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Holistic versus Analytic Evaluation of EFL Writing: A Case Study

    ERIC Educational Resources Information Center

    Ghalib, Thikra K.; Al-Hattami, Abdulghani A.

    2015-01-01

    This paper investigates the performance of holistic and analytic scoring rubrics in the context of EFL writing. Specifically, the paper compares EFL students' scores on a writing task using holistic and analytic scoring rubrics. The data for the study was collected from 30 participants attending an English undergraduate program in a Yemeni…

  5. Incorporating photon recycling into the analytical drift-diffusion model of high efficiency solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumb, Matthew P.; Steiner, Myles A.

    The analytical drift-diffusion formalism is able to accurately simulate a wide range of solar cell architectures and was recently extended to include those with back surface reflectors. However, as solar cells approach the limits of material quality, photon recycling effects become increasingly important in predicting the behavior of these cells. In particular, the minority carrier diffusion length is significantly affected by the photon recycling, with consequences for the solar cell performance. In this paper, we outline an approach to account for photon recycling in the analytical Hovel model and compare analytical model predictions to GaAs-based experimental devices operating close to the fundamental efficiency limit.

  6. International Space Station ECLSS Technical Task Agreement Summary Report

    NASA Technical Reports Server (NTRS)

    Ray, C. D. (Compiler); Salyer, B. H. (Compiler)

    1999-01-01

    This Technical Memorandum provides a summary of current work accomplished under Technical Task Agreement (TTA) by the Marshall Space Flight Center (MSFC) regarding the International Space Station (ISS) Environmental Control and Life Support System (ECLSS). Current activities include ECLSS component design and development, computer model development, subsystem/integrated system testing, life testing, and general test support provided to the ISS program. Under ECLSS design, MSFC was responsible for the six major ECLSS functions, specifications and standards, and component design and development, and was the architectural control agent for the ISS ECLSS. MSFC was responsible for ECLSS analytical model development. In-house subsystem and system level analysis and testing were conducted in support of the design process, including testing of air revitalization, water reclamation and management hardware, and certain nonregenerative systems. The activities described herein were approved in task agreements between MSFC, the NASA Headquarters Space Station Program Management Office, and their prime contractor for the ISS, Boeing. These MSFC activities are in line with the design, development, testing, and flight of ECLSS equipment planned by Boeing. MSFC's unique capabilities for performing integrated systems testing and analyses, and its ability to perform some tasks more cheaply and quickly to support ISS program needs, are the basis for the TTA activities.

  7. Cerebellarlike corrective model inference engine for manipulation tasks.

    PubMed

    Luque, Niceto Rafael; Garrido, Jesús Alberto; Carrillo, Richard Rafael; Coenen, Olivier J-M D; Ros, Eduardo

    2011-10-01

    This paper presents how a simple cerebellumlike architecture can infer corrective models in the framework of a control task when manipulating objects that significantly affect the dynamics model of the system. The main motivation of this paper is to evaluate a simplified bio-mimetic approach in the framework of a manipulation task. More concretely, the paper focuses on how the model inference process takes place within a feedforward control loop based on the cerebellar structure and on how these internal models are built up by means of biologically plausible synaptic adaptation mechanisms. This kind of investigation may provide clues on how biology achieves accurate control of non-stiff-joint robots with low-power actuators, which involves controlling systems with high inertial components. This paper studies how a basic temporal-correlation kernel including long-term depression (LTD) and a constant long-term potentiation (LTP) at parallel fiber-Purkinje cell synapses can effectively infer corrective models. We evaluate how this spike-timing-dependent plasticity correlates sensorimotor activity arriving through the parallel fibers with teaching signals (dependent on error estimates) arriving through the climbing fibers from the inferior olive. This paper addresses the study of how these LTD and LTP components need to be well balanced with each other to achieve accurate learning. This is of interest for evaluating the relevant role of homeostatic mechanisms in biological systems where adaptation occurs in a distributed manner. Furthermore, we illustrate how the temporal-correlation kernel can also work in the presence of transmission delays in sensorimotor pathways. We use a cerebellumlike spiking neural network which stores the corrective models as well-structured weight patterns distributed among the parallel fiber to Purkinje cell connections.

  8. Decision-making under risk conditions is susceptible to interference by a secondary executive task.

    PubMed

    Starcke, Katrin; Pawlikowski, Mirko; Wolf, Oliver T; Altstötter-Gleich, Christine; Brand, Matthias

    2011-05-01

    Recent research suggests two ways of making decisions: an intuitive and an analytical one. The current study examines whether a secondary executive task interferes with advantageous decision-making in the Game of Dice Task (GDT), a decision-making task with explicit and stable rules that taps executive functioning. One group of participants performed the original GDT solely, two groups performed either the GDT and a 1-back or a 2-back working memory task as a secondary task simultaneously. Results show that the group which performed the GDT and the secondary task with high executive load (2-back) decided less advantageously than the group which did not perform a secondary executive task. These findings give further evidence for the view that decision-making under risky conditions taps into the rational-analytical system which acts in a serial and not parallel way as performance on the GDT is disturbed by a parallel task that also requires executive resources.

  9. Turbofan forced mixer lobe flow modeling. 1: Experimental and analytical assessment

    NASA Technical Reports Server (NTRS)

    Barber, T.; Paterson, R. W.; Skebe, S. A.

    1988-01-01

    A joint analytical and experimental investigation of three-dimensional flowfield development within the lobe region of turbofan forced mixer nozzles is described. The objective was to develop a method for predicting the lobe exit flowfield. In the analytical approach, a linearized inviscid aerodynamic theory was used to represent the axial and secondary flows within the three-dimensional convoluted mixer lobes, and a three-dimensional boundary layer analysis was applied thereafter to account for viscous effects. The experimental phase of the program employed three planar mixer lobe models having different waveform shapes and lobe heights, for which detailed measurements were made of the three-dimensional velocity field and total pressure field at the lobe exit plane. Velocity data were obtained using Laser Doppler Velocimetry (LDV), while total pressure probing and hot-wire anemometry were employed to define exit-plane total pressure and boundary layer development. Comparison of data and analysis was performed to assess the prediction accuracy of the analytical model. As a result of this study, a planar mixer geometry analysis was developed. A principal conclusion is that the global mixer lobe flowfield is inviscid and can be predicted from an inviscid analysis and a Kutta condition.

  10. Analytical model for describing ion guiding through capillaries in insulating polymers

    NASA Astrophysics Data System (ADS)

    Liu, Shi-Dong; Zhao, Yong-Tao; Wang, Yu-Yu; Stolterfoht, N.; Cheng, Rui; Zhou, Xian-Ming; Xu, Hu-Shan; Xiao, Guo-Qing

    2015-08-01

    An analytical description of the guiding of ions through nanocapillaries is given on the basis of previous work. The current entering the capillary is assumed to be divided into a fraction transmitted through the capillary, a fraction flowing away via the capillary conductivity, and a fraction remaining within the capillary, which is responsible for its charge-up. The discharging current is assumed to be governed by the Frenkel-Poole process. At higher conductivities the analytical model shows a blocking of the ion transmission, which is in agreement with recent simulations. Also, it is shown that the ion blocking observed in experiments is well reproduced by the analytical formula. Furthermore, the asymptotic fraction of transmitted ions is determined. Apart from the key controlling parameter (the charge-to-energy ratio), the ratio of the capillary conductivity to the incident current is included in the model. Differences resulting from the nonlinear and linear limits of the Frenkel-Poole discharge are pointed out. Project supported by the Major State Basic Research Development Program of China (Grant No. 2010CB832902) and the National Natural Science Foundation of China (Grant Nos. 11275241, 11275238, 11105192, and 11375034).
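
    The charge-balance picture described above can be mimicked with a toy ODE in which the injected current splits into transmission, Frenkel-Poole-like conduction, and charging. The functional forms and constants below are assumptions chosen only to show how a higher conductivity pulls the asymptotic transmitted fraction down (blocking).

    ```python
    # Toy charge balance for capillary guiding: the injected current
    # splits into a transmitted part (growing with deposited charge Q),
    # a Frenkel-Poole-like conduction part, and the remainder, which
    # charges the capillary. Functional forms and constants are
    # illustrative assumptions, not the paper's formulas.
    import numpy as np

    J_in, Qc = 1.0, 5.0        # injected current, charging scale
    sigma0, beta = 0.02, 0.8   # conductivity prefactor, field factor

    def transmitted(Q):
        return J_in * (1.0 - np.exp(-Q / Qc))  # guiding improves with Q

    def conducted(Q):
        return sigma0 * Q * np.exp(beta * np.sqrt(Q))  # Frenkel-Poole-like

    Q, dt = 0.0, 0.01
    for _ in range(20000):
        dQ = J_in - transmitted(Q) - conducted(Q)
        Q = max(0.0, Q + dQ * dt)

    print(f"asymptotic transmitted fraction ~ {transmitted(Q) / J_in:.2f}")
    # Raising sigma0 lowers this fraction: the blocking regime.
    ```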

  11. Accounting for Uncertainty in Decision Analytic Models Using Rank Preserving Structural Failure Time Modeling: Application to Parametric Survival Models.

    PubMed

    Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua

    2018-01-01

    Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of this study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method to published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
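
    A generic version of the resampling bootstrap comparator is sketched below on simulated survival times: resample the data, push each replicate through the decision-model summary, and read confidence limits off the bootstrap distribution. This is not the authors' code, and the restricted-mean summary stands in for a full decision analytic model.

    ```python
    # Resampling bootstrap around a decision-model output. Data are
    # simulated; the restricted mean is a stand-in for the full model.
    import numpy as np

    rng = np.random.default_rng(42)
    surv_times = rng.exponential(scale=24.0, size=200)  # months, simulated

    def life_expectancy(times):
        """Stand-in decision model: mean survival restricted to 60 months."""
        return np.mean(np.minimum(times, 60.0))

    boot = [life_expectancy(rng.choice(surv_times, size=len(surv_times),
                                       replace=True))
            for _ in range(2000)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"life expectancy {life_expectancy(surv_times):.1f} "
          f"(95% CI {lo:.1f} to {hi:.1f}) months")
    ```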

  12. Analytical investigation of the faster-is-slower effect with a simplified phenomenological model

    NASA Astrophysics Data System (ADS)

    Suzuno, K.; Tomoeda, A.; Ueyama, D.

    2013-11-01

    We investigate the mechanism of the phenomenon called the “faster-is-slower” effect in pedestrian flow studies analytically, using a simplified phenomenological model. It is well known that, in simulations of the discharge of self-driven particles through a bottleneck using the social force model, the flow rate is maximized at a certain strength of the driving force. In this study, we propose a phenomenological and analytical model, based on mechanics-based modeling, to reveal the mechanism of the phenomenon. We show that our reduced system, with only a few degrees of freedom, still has properties similar to the original many-particle system, and that the effect comes from the competition between the driving force and the nonlinear friction in the model. Moreover, we predict the parameter dependences of the effect from our model qualitatively, and they are confirmed numerically using the social force model.

  13. Emergence of Tables as First-Graders Cope with Modelling Tasks

    ERIC Educational Resources Information Center

    Peled, Irit; Keisar, Einav

    2015-01-01

    In this action research, first-graders were challenged to cope with a sequence of modelling tasks involving an analysis of given situations and choices of mathematical tools. In the course of the sequence, they underwent a change in the nature of their problem-solving processes and developed modelling competencies. Moreover, during the task…

  14. A model of the human in a cognitive prediction task.

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.

    1973-01-01

    The human decision maker's behavior when predicting future states of discrete linear dynamic systems driven by zero-mean Gaussian processes is modeled. The task is on a slow enough time scale that physiological constraints are insignificant compared with cognitive limitations. The model is basically a linear regression system identifier with a limited memory and noisy observations. Experimental data are presented and compared to the model.
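
    The model described above reduces to a familiar computation: ordinary least squares over a sliding window of noisy observations, extrapolated one step ahead. The sketch below implements that reading on a simulated second-order linear system; the dynamics, noise levels, and memory length are invented.

    ```python
    # A limited-memory linear-regression identifier with noisy
    # observations, in the spirit of the model above. Values are made up.
    import numpy as np

    rng = np.random.default_rng(0)

    # True discrete dynamics: x[k+1] = a*x[k] + b*x[k-1] + process noise.
    a_true, b_true = 1.6, -0.8
    x = [0.5, 0.4]
    for _ in range(100):
        x.append(a_true * x[-1] + b_true * x[-2] + rng.normal(0, 0.05))
    x = np.array(x)

    MEMORY = 20                                   # limited memory
    obs = x + rng.normal(0, 0.1, size=x.size)     # noisy observations

    window = obs[-MEMORY:]
    A = np.column_stack([window[1:-1], window[:-2]])  # regressors
    y = window[2:]                                    # targets
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = coef @ window[[-1, -2]]                    # one-step prediction
    print(f"identified (a, b) ~ ({coef[0]:.2f}, {coef[1]:.2f}), "
          f"next-state prediction {pred:.2f}")
    ```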

  15. Self-consistent semi-analytic models of the first stars

    NASA Astrophysics Data System (ADS)

    Visbal, Eli; Haiman, Zoltán; Bryan, Greg L.

    2018-04-01

    We have developed a semi-analytic framework to model the large-scale evolution of the first Population III (Pop III) stars and the transition to metal-enriched star formation. Our model follows dark matter haloes from cosmological N-body simulations, utilizing their individual merger histories and three-dimensional positions, and applies physically motivated prescriptions for star formation and feedback from Lyman-Werner (LW) radiation, hydrogen ionizing radiation, and external metal enrichment due to supernovae winds. This method is intended to complement analytic studies, which do not include clustering or individual merger histories, and hydrodynamical cosmological simulations, which include detailed physics, but are computationally expensive and have limited dynamic range. Utilizing this technique, we compute the cumulative Pop III and metal-enriched star formation rate density (SFRD) as a function of redshift at z ≥ 20. We find that varying the model parameters leads to significant qualitative changes in the global star formation history. The Pop III star formation efficiency and the delay time between Pop III and subsequent metal-enriched star formation are found to have the largest impact. The effect of clustering (i.e. including the three-dimensional positions of individual haloes) on various feedback mechanisms is also investigated. The impact of clustering on LW and ionization feedback is found to be relatively mild in our fiducial model, but can be larger if external metal enrichment can promote metal-enriched star formation over large distances.

  16. An Analytical-Numerical Model for Two-Phase Slug Flow through a Sudden Area Change in Microchannels

    DOE PAGES

    Momen, A. Mehdizadeh; Sherif, S. A.; Lear, W. E.

    2016-01-01

    In this article, two new analytical models have been developed to calculate two-phase slug flow pressure drop in microchannels through a sudden contraction. Even though many studies have been reported on two-phase flow in microchannels, considerable discrepancies still exist, mainly due to the difficulties in experimental setup and measurements. Numerical simulations were performed to support the new analytical models and to explore in more detail the physics of the flow in microchannels with a sudden contraction. Both analytical and numerical results were compared to the available experimental data and other empirical correlations. Results show that models, which were developed basedmore » on the slug and semi-slug assumptions, agree well with experiments in microchannels. Moreover, in contrast to the previous empirical correlations which were tuned for a specific geometry, the new analytical models are capable of taking geometrical parameters as well as flow conditions into account.« less

  17. Numerical modeling and analytical evaluation of light absorption by gold nanostars

    NASA Astrophysics Data System (ADS)

    Zarkov, Sergey; Akchurin, Georgy; Yakunin, Alexander; Avetisyan, Yuri; Akchurin, Garif; Tuchin, Valery

    2018-04-01

    In this paper, the regularity of local light absorption by a gold nanostar (AuNSt) model is studied by numerical simulation. The mutual diffraction influence of individual geometric fragments of AuNSts is analyzed. A comparison is made with an approximate analytical approach for estimating the average bulk density of absorbed power and the total power absorbed by individual geometric fragments of AuNSts. It is shown that the results of the approximate analytical estimate are in qualitative agreement with the numerical calculations of light absorption by AuNSts.

  18. Rethinking Visual Analytics for Streaming Data Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crouser, R. Jordan; Franklin, Lyndsey; Cook, Kris

    In the age of data science, the use of interactive information visualization techniques has become increasingly ubiquitous. From online scientific journals to the New York Times graphics desk, the utility of interactive visualization for both storytelling and analysis has become ever more apparent. As these techniques have become more readily accessible, the appeal of combining interactive visualization with computational analysis continues to grow. Arising out of a need for scalable, human-driven analysis, the primary objective of visual analytics systems is to capitalize on the complementary strengths of human and machine analysis, using interactive visualization as a medium for communication between the two. These systems leverage developments from the fields of information visualization, computer graphics, machine learning, and human-computer interaction to support insight generation in areas where purely computational analyses fall short. Over the past decade, visual analytics systems have generated remarkable advances in many historically challenging analytical contexts. These include areas such as modeling political systems [Crouser et al. 2012], detecting financial fraud [Chang et al. 2008], and cybersecurity [Harrison et al. 2012]. In each of these contexts, domain expertise and human intuition are a necessary component of the analysis. This intuition is essential to building trust in the analytical products, as well as supporting the translation of evidence into actionable insight. In addition, each of these examples also highlights the need for scalable analysis. In each case, it is infeasible for a human analyst to manually assess the raw information unaided, and the communication overhead of dividing the task among a large number of analysts makes simple parallelism intractable. Regardless of the domain, visual analytics tools strive to optimize the allocation of human analytical resources, and to streamline the sensemaking process on data that is

  19. The Indecision Model of Psychophysical Performance in Dual-Presentation Tasks: Parameter Estimation and Comparative Analysis of Response Formats

    PubMed Central

    García-Pérez, Miguel A.; Alcalá-Quintana, Rocío

    2017-01-01

    Psychophysical data from dual-presentation tasks are often collected with the two-alternative forced-choice (2AFC) response format, asking observers to guess when uncertain. For an analytical description of performance, psychometric functions are then fitted to data aggregated across the two orders/positions in which stimuli were presented. Yet, order effects make aggregated data uninterpretable, and the bias with which observers guess when uncertain precludes separating sensory from decisional components of performance. A ternary response format in which observers are also allowed to report indecision should fix these problems, but a comparative analysis with the 2AFC format has never been conducted. In addition, fitting ternary data separated by presentation order poses serious challenges. To address these issues, we extended the indecision model of psychophysical performance to accommodate the ternary, 2AFC, and same–different response formats in detection and discrimination tasks. Relevant issues for parameter estimation are also discussed along with simulation results that document the superiority of the ternary format. These advantages are demonstrated by fitting the indecision model to published detection and discrimination data collected with the ternary, 2AFC, or same–different formats, which had been analyzed differently in the sources. These examples also show that 2AFC data are unsuitable for testing certain types of hypotheses. MATLAB and R routines written for our purposes are available as Supplementary Material, which should help spread the use of the ternary format for dependable collection and interpretation of psychophysical data. PMID:28747893

  20. Task Inhibition and Response Inhibition in Older vs. Younger Adults: A Diffusion Model Analysis

    PubMed Central

    Schuch, Stefanie

    2016-01-01

    Differences in inhibitory ability between older (64–79 years, N = 24) and younger adults (18–26 years, N = 24) were investigated using a diffusion model analysis. Participants performed a task-switching paradigm that allows assessing n−2 task repetition costs, reflecting inhibitory control on the level of tasks, as well as n−1 response-repetition costs, reflecting inhibitory control on the level of responses. N−2 task repetition costs were of similar size in both age groups. Diffusion model analysis revealed that for both younger and older adults, drift rate parameters were smaller in the inhibition condition relative to the control condition, consistent with the idea that persisting task inhibition slows down response selection. Moreover, there was preliminary evidence for task inhibition effects in threshold separation and non-decision time in the older, but not the younger adults, suggesting that older adults might apply different strategies when dealing with persisting task inhibition. N−1 response-repetition costs in mean RT were larger in older than younger adults, but in mean error rates tended to be larger in younger than older adults. Diffusion-model analysis revealed longer non-decision times in response repetitions than response switches in both age groups, consistent with the idea that motor processes take longer in response repetitions than response switches due to persisting response inhibition of a previously executed response. The data also revealed age-related differences in overall performance: Older adults responded more slowly and more accurately than young adults, which was reflected by a higher threshold separation parameter in diffusion model analysis. Moreover, older adults showed larger non-decision times and higher variability in non-decision time than young adults, possibly reflecting slower and more variable motor processes. In contrast, overall drift rate did not differ between older and younger adults. Taken together

  1. Cultivating Institutional Capacities for Learning Analytics

    ERIC Educational Resources Information Center

    Lonn, Steven; McKay, Timothy A.; Teasley, Stephanie D.

    2017-01-01

    This chapter details the process the University of Michigan developed to build institutional capacity for learning analytics. A symposium series, faculty task force, fellows program, research grants, and other initiatives are discussed, with lessons learned for future efforts and how other institutions might adapt such efforts to spur cultural…

  2. Silica exposure during construction activities: statistical modeling of task-based measurements from the literature.

    PubMed

    Sauvé, Jean-François; Beaudry, Charles; Bégin, Denis; Dion, Chantal; Gérin, Michel; Lavoué, Jérôme

    2013-05-01

    Many construction activities can put workers at risk of breathing silica containing dusts, and there is an important body of literature documenting exposure levels using a task-based strategy. In this study, statistical modeling was used to analyze a data set containing 1466 task-based, personal respirable crystalline silica (RCS) measurements gathered from 46 sources to estimate exposure levels during construction tasks and the effects of determinants of exposure. Monte Carlo simulation was used to recreate individual exposures from summary parameters, and the statistical modeling involved multimodel inference with Tobit models containing combinations of the following exposure variables: sampling year, sampling duration, construction sector, project type, workspace, ventilation, and controls. Exposure levels by task were predicted based on the median reported duration by activity, the year 1998, absence of source control methods, and an equal distribution of the other determinants of exposure. The model containing all the variables explained 60% of the variability and was identified as the best approximating model. Of the 27 tasks contained in the data set, abrasive blasting, masonry chipping, scabbling concrete, tuck pointing, and tunnel boring had estimated geometric means above 0.1 mg m(-3) based on the exposure scenario developed. Water-fed tools and local exhaust ventilation were associated with a reduction of 71 and 69% in exposure levels compared with no controls, respectively. The predictive model developed can be used to estimate RCS concentrations for many construction activities in a wide range of circumstances.
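
    The Tobit models referred to above handle measurements reported below a detection limit by maximizing a likelihood that mixes a density term for observed values with a CDF term for censored ones. The sketch below fits a one-covariate Tobit on simulated log-exposures; the design, detection limit, and parameter values are all illustrative.

    ```python
    # Tobit (left-censored) regression by maximum likelihood on simulated
    # log-exposures. Design, detection limit, and parameters are invented.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    n = 300
    X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + covariate
    beta_true, sigma_true, LOD = np.array([-1.0, 0.6]), 0.8, -1.5

    y_star = X @ beta_true + rng.normal(0.0, sigma_true, n)  # latent values
    cens = y_star <= LOD                                     # below detection
    y = np.where(cens, LOD, y_star)                          # observed data

    def neg_ll(params):
        beta, s = params[:2], np.exp(params[2])
        mu = X @ beta
        ll_obs = norm.logpdf(y[~cens], mu[~cens], s)   # observed part
        ll_cen = norm.logcdf((LOD - mu[cens]) / s)     # P(y* <= LOD)
        return -(ll_obs.sum() + ll_cen.sum())

    fit = minimize(neg_ll, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
    print("beta:", fit.x[:2], "sigma:", np.exp(fit.x[2]))
    ```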

  3. Empirical and semi-analytical models for predicting peak outflows caused by embankment dam failures

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Chen, Yunliang; Wu, Chao; Peng, Yong; Song, Jiajun; Liu, Wenjun; Liu, Xin

    2018-07-01

    Prediction of the peak discharge of floods has attracted great attention from researchers and engineers. In the present study, nine typical nonlinear mathematical models are established based on a database of 40 historical dam failures. The first eight models, developed with a series of regression analyses, are purely empirical, while the last one is a semi-analytical approach derived from an analytical solution for dam-break floods in a trapezoidal channel. Water depth above breach invert (Hw), volume of water stored above breach invert (Vw), embankment length (El), and average embankment width (Ew) are used as independent variables to develop empirical formulas for estimating the peak outflow from breached embankment dams. The multiple regression analysis indicates that a function using the former two variables (i.e., Hw and Vw) produces considerably more accurate results than one using the latter two (i.e., El and Ew). The semi-analytical approach works best in terms of both prediction accuracy and uncertainty, and the established empirical models produce reasonably accurate results except for the model using only El. Moreover, the present models have been compared with other models available in the literature for estimating peak discharge.
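
    The empirical branch of such studies typically reduces to fitting a power law Qp = k * Hw^a * Vw^b by log-linear least squares. The sketch below shows that mechanic on a few made-up rows, not the study's 40-case database.

    ```python
    # Log-linear least-squares fit of Qp = k * Hw^a * Vw^b.
    # The rows are hypothetical placeholders, not the study's data.
    import numpy as np

    # columns: Hw (m), Vw (10^6 m^3), Qp (m^3/s)
    data = np.array([[10.0,   1.2,   650.0],
                     [25.0,  30.0,  7200.0],
                     [ 6.0,   0.3,   180.0],
                     [15.0,  14.0,  3100.0],
                     [40.0, 310.0, 66000.0]])

    Hw, Vw, Qp = data.T
    A = np.column_stack([np.ones(len(Qp)), np.log(Hw), np.log(Vw)])
    coef, *_ = np.linalg.lstsq(A, np.log(Qp), rcond=None)
    k, a, b = np.exp(coef[0]), coef[1], coef[2]
    print(f"Qp ~ {k:.2f} * Hw^{a:.2f} * Vw^{b:.2f}")
    ```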

  4. Pilot-model analysis and simulation study of effect of control task desired control response

    NASA Technical Reports Server (NTRS)

    Adams, J. J.; Gera, J.; Jaudon, J. B.

    1978-01-01

    A pilot model analysis was performed that relates pilot control compensation, pilot aircraft system response, and aircraft response characteristics for longitudinal control. The results show that a higher aircraft short period frequency is required to achieve superior pilot aircraft system response in an altitude control task than is required in an attitude control task. These results were confirmed by a simulation study of target tracking. It was concluded that the pilot model analysis provides a theoretical basis for determining the effect of control task on pilot opinions.

  5. Hemispheric specialization and creative thinking: a meta-analytic review of lateralization of creativity.

    PubMed

    Mihov, Konstantin M; Denzler, Markus; Förster, Jens

    2010-04-01

    In the last two decades research on the neurophysiological processes of creativity has found contradictory results. Whereas most research suggests right-hemisphere dominance in creative thinking, left-hemisphere dominance has also been reported. The present research is a meta-analytic review of the literature to establish how creative thinking relates to relative hemispheric dominance. The analysis was performed on the basis of a non-parametric vote-counting approach, and effect-size calculations of Cramer's phi suggest relative dominance of the right hemisphere during creative thinking. Moderator analyses revealed no difference in predominant right-hemispheric activation for verbal vs. figural tasks, holistic vs. analytical tasks, and context-dependent vs. context-independent tasks. Suggestions for further investigations with meta-analytic and neuroscience methodologies to answer the questions of left-hemispheric activation and further moderation of the effects are discussed. Copyright 2009 Elsevier Inc. All rights reserved.

  6. Analytical properties of a three-compartmental dynamical demographic model

    NASA Astrophysics Data System (ADS)

    Postnikov, E. B.

    2015-07-01

    The three-compartmental demographic model by Korotayev-Malkov-Khaltourina, connecting population size, economic surplus, and education level, is considered from the point of view of dynamical systems theory. It is shown that there exist two integrals of motion, which enables the system to be reduced to one nonlinear ordinary differential equation. The study of its structure provides analytical criteria for the dominance ranges of the dynamics of Malthus and Kremer. Additionally, particular parameter ranges enable the derived general ordinary differential equation to be reduced to the models of Gompertz and Tsoularis-Wallace.

  7. Instinctive analytics for coalition operations (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    de Mel, Geeth R.; La Porta, Thomas; Pham, Tien; Pearson, Gavin

    2017-05-01

    The success of future military coalition operations—be they combat or humanitarian—will increasingly depend on a system's ability to share data and processing services (e.g. aggregation, summarization, fusion), and to automatically compose services in support of complex tasks at the network edge. We call such an infrastructure instinctive—i.e., an infrastructure that reacts instinctively to address the analytics task at hand. However, developing such an infrastructure is made complex for the coalition environment by its dynamism, both in terms of user requirements and service availability. In order to address the above challenge, in this paper we highlight our research vision and sketch some initial solutions in the problem domain. Specifically, we propose means to (1) automatically infer formal task requirements from mission specifications; (2) discover data, services, and their features automatically to satisfy the identified requirements; (3) create and augment shared domain models automatically; (4) efficiently offload services to the network edge and across coalition boundaries adhering to their computational properties and costs; and (5) optimally allocate and adjust services while respecting the constraints of the operating environment and service fit. We envision that the research will result in a framework which enables self-description, discovery, and assembly capabilities for both data and services in support of coalition mission goals.

  8. Analysis of structural dynamic data from Skylab. Volume 2: Skylab analytical and test model data

    NASA Technical Reports Server (NTRS)

    Demchak, L.; Harcrow, H.

    1976-01-01

    The orbital configuration test modal data, analytical test correlation modal data, and analytical flight configuration modal data are presented. Tables showing the generalized mass contributions (GMCs) for each of the thirty test modes are given, along with the two-dimensional mode shape plots and tables of GMCs for the test-correlated analytical modes. The two-dimensional mode shape plots for the analytical modes and the uncoupled and coupled modes of the orbital flight configuration at three development phases of the model are included.

  9. Modeling Alzheimer's disease cognitive scores using multi-task sparse group lasso.

    PubMed

    Liu, Xiaoli; Goncalves, André R; Cao, Peng; Zhao, Dazhe; Banerjee, Arindam

    2018-06-01

    Alzheimer's disease (AD) is a severe neurodegenerative disorder characterized by loss of memory and reduction in cognitive functions due to progressive degeneration of neurons and their connections, eventually leading to death. In this paper, we consider the problem of simultaneously predicting several different cognitive scores associated with categorizing subjects as normal, mild cognitive impairment (MCI), or Alzheimer's disease (AD) in a multi-task learning framework, using features extracted from brain images obtained from ADNI (the Alzheimer's Disease Neuroimaging Initiative). To solve the problem, we present a multi-task sparse group lasso (MT-SGL) framework, which estimates sparse features coupled across tasks and can work with loss functions associated with any Generalized Linear Model. Through comparisons with a variety of baseline models using multiple evaluation metrics, we illustrate the promising predictive performance of MT-SGL on ADNI, along with its ability to identify brain regions more likely to help characterize Alzheimer's disease progression. Copyright © 2017 Elsevier Ltd. All rights reserved.
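
    The computational core of a sparse-group-lasso estimator is the proximal operator of its penalty: an elementwise soft threshold followed by groupwise shrinkage, applied inside a proximal-gradient loop after each gradient step on the loss. The sketch below implements just that operator on a made-up coefficient matrix (rows as features, columns as tasks); it is a building block consistent with MT-SGL-style penalties, not the authors' solver.

    ```python
    # Proximal operator for lam1*||W||_1 + lam2*sum_g ||W_g||_2, the
    # sparse-group-lasso penalty: soft threshold, then group shrinkage.
    # In a full solver this follows a gradient step on the data loss.
    import numpy as np

    def prox_sparse_group(W, lam1, lam2, groups):
        V = np.sign(W) * np.maximum(np.abs(W) - lam1, 0.0)  # L1 part
        for g in groups:                                    # group part
            norm_g = np.linalg.norm(V[g])
            V[g] *= 0.0 if norm_g == 0 else max(0.0, 1 - lam2 / norm_g)
        return V

    # Made-up coefficients: 3 features (rows) shared across 2 tasks.
    W = np.array([[0.9, -0.2], [0.05, 0.1], [-1.3, 0.8]])
    groups = [np.array([0]), np.array([1, 2])]  # feature groupings
    print(prox_sparse_group(W, lam1=0.1, lam2=0.3, groups=groups))
    ```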

  10. IT vendor selection model by using structural equation model & analytical hierarchy process

    NASA Astrophysics Data System (ADS)

    Maitra, Sarit; Dominic, P. D. D.

    2012-11-01

Selecting and evaluating the right vendors is imperative for an organization's global marketplace competitiveness. Improper selection and evaluation of potential vendors can diminish an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research intends to develop a new hybrid model for the vendor selection process with better decision making. The proposed model provides a suitable tool for assisting decision makers and managers to make the right decisions and select the most suitable vendor. This paper proposes a hybrid model based on the Structural Equation Model (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five-step framework of the model was designed after a thorough literature study. The proposed hybrid model will be applied to a real-life case study to assess its effectiveness. In addition, a what-if analysis technique will be used for model validation purposes.

  11. Facilitating an L2 Book Club: A Conversation-Analytic Study of Task Management

    ERIC Educational Resources Information Center

    Ro, Eunseok

    2018-01-01

    This study employs conversation analysis to examine a facilitator's interactional practices in the post-expansion phase of students' presentations in the context of a book club for second language learning. The analysis shows how the facilitator establishes intersubjectivity with regard to the ongoing task and manages students' task performance.…

  12. An improved task-role-based access control model for G-CSCW applications

    NASA Astrophysics Data System (ADS)

    He, Chaoying; Chen, Jun; Jiang, Jie; Han, Gang

    2005-10-01

Access control is an important and popular security mechanism for multi-user applications. GIS-based Computer Supported Cooperative Work (G-CSCW) applications are one such class of applications. This paper presents an improved Task-Role-Based Access Control (X-TRBAC) model for G-CSCW applications. The new model inherits the basic concepts of the old ones, such as role and task. Moreover, it introduces two concepts, object hierarchy and operation hierarchy, together with corresponding rules, to improve the efficiency of permission definition in access control models. Experiments show that the method simplifies the definition of permissions and is well suited to G-CSCW applications.
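
    The abstract does not give the formal rules, but the efficiency gain from the two hierarchies can be illustrated as follows: a permission granted on a node of the object or operation hierarchy implicitly covers all of its descendants, so a single rule replaces many explicit ones. The class and example below are hypothetical, not the paper's notation.

      class Node:
          def __init__(self, name, parent=None):
              self.name, self.parent = name, parent

          def ancestors(self):
              n = self
              while n:              # the node itself, then its parents
                  yield n.name
                  n = n.parent

      def permitted(grants, role, obj, op):
          # grants: set of (role, object_node, operation_node) rules;
          # a rule on an ancestor covers every descendant node
          return any((role, o, p) in grants
                     for o in obj.ancestors() for p in op.ancestors())

      gis = Node("gis_data"); roads = Node("roads", gis)
      edit = Node("edit"); move = Node("move", edit)
      grants = {("editor", "gis_data", "edit")}
      print(permitted(grants, "editor", roads, move))  # True via both hierarchies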

  13. Modelling a flows in supply chain with analytical models: Case of a chemical industry

    NASA Astrophysics Data System (ADS)

    Benhida, Khalid; Azougagh, Yassine; Elfezazi, Said

    2016-02-01

This study addresses the modelling of logistics flows in a supply chain composed of production sites and a logistics platform. The contribution of this research is to develop an analytical model (an integrated linear programming model), based on a case study of a real company operating in the phosphate field, considering various constraints in this supply chain to resolve planning problems for better decision-making. The objective of this model is to determine the optimal quantities of the different products to route to and from the various entities in the supply chain studied.
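
    As a minimal sketch of what such an integrated linear programming model looks like in code (toy numbers, two sites and one product; not the company's actual model), scipy's linprog solves the routing-quantity problem directly:

      import numpy as np
      from scipy.optimize import linprog

      cost = np.array([4.0, 6.0])           # transport cost per ton from each site
      A_ub, b_ub = np.eye(2), np.array([120.0, 200.0])  # site capacities
      A_eq, b_eq = np.ones((1, 2)), np.array([250.0])   # platform demand
      res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                    bounds=[(0, None)] * 2, method="highs")
      print(res.x, res.fun)                 # optimal quantities and total cost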

  14. Chandra ACIS-I particle background: an analytical model

    NASA Astrophysics Data System (ADS)

    Bartalucci, I.; Mazzotta, P.; Bourdin, H.; Vikhlinin, A.

    2014-06-01

Aims: Imaging and spectroscopy of X-ray extended sources require a proper characterisation of a spatially unresolved background signal. This background includes sky and instrumental components, each of which is characterised by its own spatial and spectral behaviour. While the X-ray sky background has been extensively studied in previous work, here we analyse and model the instrumental background of the ACIS-I detector on board the Chandra X-ray observatory in very faint mode. Methods: Caused by the interaction of highly energetic particles with the detector, the ACIS-I instrumental background is spectrally characterised by the superimposition of several fluorescence emission lines onto a continuum. To isolate its flux from any sky component, we fitted an analytical model of the continuum to observations performed in very faint mode with the detector in the stowed position, shielded from the sky, and gathered over the eight-year period starting in 2001. The remaining emission lines were fitted to blank-sky observations of the same period. We found 11 emission lines. Analysing the spatial variation of the amplitude, energy and width of these lines has further allowed us to infer that three of these lines are presumably due to an energy correction artefact produced in the frame store. Results: We provide an analytical model that predicts the instrumental background with a precision of 2% in the continuum and 5% in the lines. We use this model to measure the flux of the unresolved cosmic X-ray background in the Chandra deep field south. We obtain a flux of 10.2 (+0.5/−0.4) × 10^−13 erg cm^−2 deg^−2 s^−1 for the [1-2] keV band and (3.8 ± 0.2) × 10^−12 erg cm^−2 deg^−2 s^−1 for the [2-8] keV band.

  15. A constructivist connectionist model of transitions on false-belief tasks.

    PubMed

    Berthiaume, Vincent G; Shultz, Thomas R; Onishi, Kristine H

    2013-03-01

    How do children come to understand that others have mental representations, e.g., of an object's location? Preschoolers go through two transitions on verbal false-belief tasks, in which they have to predict where an agent will search for an object that was moved in her absence. First, while three-and-a-half-year-olds usually fail at approach tasks, in which the agent wants to find the object, children just under four succeed. Second, only after four do children succeed at tasks in which the agent wants to avoid the object. We present a constructivist connectionist model that autonomously reproduces the two transitions and suggests that the transitions are due to increases in general processing abilities enabling children to (1) overcome a default true-belief attribution by distinguishing false- from true-belief situations, and to (2) predict search in avoidance situations, where there is often more than one correct, empty search location. Constructivist connectionist models are rigorous, flexible and powerful tools that can be analyzed before and after transitions to uncover novel and emergent mechanisms of cognitive development. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Modeling and simulation of dynamic ant colony's labor division for task allocation of UAV swarm

    NASA Astrophysics Data System (ADS)

    Wu, Husheng; Li, Hao; Xiao, Renbin; Liu, Jie

    2018-02-01

The problem of unmanned aerial vehicle (UAV) task allocation not only has intrinsically complex attributes, being highly nonlinear, dynamic, highly adversarial and multi-modal, but also has broad practical relevance to various multi-agent systems, which has made it increasingly attractive recently. In this paper, based on the classic fixed response threshold model (FRTM), under the idea of "problem centered + evolutionary solution" and in a bottom-up way, new dynamic environmental stimulus, response threshold and transition probability are designed, and a dynamic ant colony's labor division (DACLD) model is proposed. DACLD allows a swarm of agents with a relatively low level of intelligence to perform complex tasks, and features a distributed framework, multiple tasks with execution order, multiple states, adaptive response thresholds and multi-individual response. With the proposed model, numerical simulations are performed to illustrate the effectiveness of the distributed task allocation scheme in two situations of UAV swarm combat (dynamic task allocation with a certain number of enemy targets, and task re-allocation due to unexpected threats). Results show that our model can obtain both the heterogeneous UAVs' real-time positions and states at the same time, and has a high degree of self-organization, flexibility and real-time response to dynamic environments.
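
    For orientation, the classic fixed response threshold rule underlying FRTM assigns each agent a probability of engaging a task that grows with the stimulus s and falls with the agent's threshold θ, P = s^n/(s^n + θ^n) with n = 2 in the standard formulation; the paper's contribution is to make the stimulus, thresholds, and transition probabilities dynamic. A minimal static sketch:

      import numpy as np

      rng = np.random.default_rng(0)

      def response_prob(s, theta, n=2):
          # classic fixed-threshold rule; DACLD makes s and theta dynamic
          return s**n / (s**n + theta**n)

      thetas = rng.uniform(0.2, 1.0, size=(20, 3))   # 20 agents, 3 tasks
      stimuli = np.array([0.3, 0.6, 0.9])            # task stimulus intensities
      engaged = rng.random((20, 3)) < response_prob(stimuli, thetas)
      print(engaged.sum(axis=0))                     # agents engaging each task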

  17. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment.

    PubMed

    Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel

    2016-08-30

Distributed Computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collecting and analysis models, e.g., Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data on each task were collected and examined in a detailed analysis report. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs.
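
    The abstract leaves the internals of TPR unspecified; one plausible reading, sketched below, fits two linear phases with a breakpoint chosen to minimize total squared error over a task's progress/time samples. Purely illustrative, not the paper's algorithm:

      import numpy as np

      def two_phase_fit(x, y):
          # scan candidate breakpoints; fit a line to each phase and
          # keep the split with the smallest total squared error
          order = np.argsort(x); x, y = x[order], y[order]
          best = None
          for k in range(3, len(x) - 3):        # at least 3 points per phase
              sse, fits = 0.0, []
              for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
                  coef = np.polyfit(xs, ys, 1)
                  sse += np.sum((np.polyval(coef, xs) - ys) ** 2)
                  fits.append(coef)
              if best is None or sse < best[0]:
                  best = (sse, x[k], fits)
          return best  # (error, breakpoint, [phase-1 line, phase-2 line])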

  18. Time series modeling of human operator dynamics in manual control tasks

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.

    1984-01-01

    A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency responses of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that has not been previously modeled to demonstrate the strengths of the method.

  19. Time Series Modeling of Human Operator Dynamics in Manual Control Tasks

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.

    1984-01-01

    A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency response of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that was previously modeled to demonstrate the strengths of the method.

  20. HPAEC-PAD for oligosaccharide analysis-novel insights into analyte sensitivity and response stability.

    PubMed

    Mechelke, Matthias; Herlet, Jonathan; Benz, J Philipp; Schwarz, Wolfgang H; Zverlov, Vladimir V; Liebl, Wolfgang; Kornberger, Petra

    2017-12-01

The rising importance of accurately detecting oligosaccharides in biomass hydrolyzates or as ingredients in food, such as in beverages and infant milk products, demands tools to sensitively analyze the broad range of available oligosaccharides. Over the last decades, HPAEC-PAD has been developed into one of the major technologies for this task and represents a popular alternative to state-of-the-art LC-MS oligosaccharide analysis. This work presents the first comprehensive study giving an overview of the separation of 38 analytes as well as enzymatic hydrolyzates of six different polysaccharides, focusing on oligosaccharides. The high sensitivity of the PAD comes at the cost of stability, owing to recession of the gold electrode. By an in-depth analysis of the sensitivity drop over time for 35 analytes, including xylo- (XOS), arabinoxylo- (AXOS), laminari- (LOS), manno- (MOS), glucomanno- (GMOS), and cellooligosaccharides (COS), we developed an analyte-specific one-phase decay model for this effect over time. Using this model resulted in significantly improved data normalization when using an internal standard. Our results thereby allow a quantification approach which takes the inevitable and analyte-specific PAD response drop into account. Graphical abstract: HPAEC-PAD analysis of oligosaccharides and determination of PAD response drop, leading to improved data normalization.
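
    A one-phase decay of the PAD response can be fitted per analyte with a three-parameter exponential and then divided out before internal-standard normalization. The sketch below uses hypothetical response-factor data; the paper's actual parameter values are not reproduced here.

      import numpy as np
      from scipy.optimize import curve_fit

      def one_phase_decay(t, r0, plateau, k):
          # response drifts from r0 toward a plateau as the gold
          # electrode recedes; k is the analyte-specific decay rate
          return plateau + (r0 - plateau) * np.exp(-k * t)

      t = np.array([0, 5, 10, 20, 40, 80], float)        # h, illustrative
      resp = np.array([1.00, 0.93, 0.88, 0.81, 0.76, 0.74])
      popt, _ = curve_fit(one_phase_decay, t, resp, p0=(1.0, 0.7, 0.05))
      corrected = resp / one_phase_decay(t, *popt)       # drift-normalized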

  1. Analytic and heuristic processes in the detection and resolution of conflict.

    PubMed

    Ferreira, Mário B; Mata, André; Donkin, Christopher; Sherman, Steven J; Ihmels, Max

    2016-10-01

Previous research with the ratio-bias task found larger response latencies for conflict trials where the heuristic- and analytic-based responses are assumed to be in opposition (e.g., choosing between 1/10 and 9/100 ratios of success) when compared to no-conflict trials where both processes converge on the same response (e.g., choosing between 1/10 and 11/100). This pattern is consistent with parallel dual-process models, which assume that there is effective, rather than lax, monitoring of the output of heuristic processing. It is, however, unclear why conflict resolution sometimes fails. Ratio-biased choices may increase because of a decline in analytical reasoning (leaving heuristic-based responses unopposed) or a rise in heuristic processing (making it more difficult for analytic processes to override the heuristic preferences). Using the process-dissociation procedure, we found that instructions to respond logically and response speed affected analytic (controlled) processing (C), leaving heuristic processing (H) unchanged, whereas the intuitive preference for large numerators (as assessed by responses to equal-ratio trials) affected H but not C. These findings create new challenges for the debate between dual-process and single-process accounts, which are discussed.

  2. A Simple Analytical Model for Magnetization and Coercivity of Hard/Soft Nanocomposite Magnets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Jihoon; Hong, Yang-Ki; Lee, Woncheol

Here, we present a simple analytical model to estimate the magnetization (σs) and intrinsic coercivity (Hci) of a hard/soft nanocomposite magnet using the mass fraction. Previously proposed models are based on the volume fraction of the hard phase of the composite, but it is difficult to measure the volume of the hard or soft phase material of a composite. We synthesized Sm2Co7/Fe-Co, MnAl/Fe-Co, MnBi/Fe-Co, and BaFe12O19/Fe-Co composites for characterization of their σs and Hci. The experimental results are in good agreement with the present model. Therefore, this analytical model can be extended to predict the maximum energy product (BH)max of hard/soft composites.

  3. A Simple Analytical Model for Magnetization and Coercivity of Hard/Soft Nanocomposite Magnets

    DOE PAGES

    Park, Jihoon; Hong, Yang-Ki; Lee, Woncheol; ...

    2017-07-10

Here, we present a simple analytical model to estimate the magnetization (σs) and intrinsic coercivity (Hci) of a hard/soft nanocomposite magnet using the mass fraction. Previously proposed models are based on the volume fraction of the hard phase of the composite, but it is difficult to measure the volume of the hard or soft phase material of a composite. We synthesized Sm2Co7/Fe-Co, MnAl/Fe-Co, MnBi/Fe-Co, and BaFe12O19/Fe-Co composites for characterization of their σs and Hci. The experimental results are in good agreement with the present model. Therefore, this analytical model can be extended to predict the maximum energy product (BH)max of hard/soft composites.

  4. A quantitative model of optimal data selection in Wason's selection task.

    PubMed

    Hattori, Masasi

    2002-10-01

    The optimal data selection model proposed by Oaksford and Chater (1994) successfully formalized Wason's selection task (Wason, 1966). The model, however, involved some questionable assumptions and was also not sufficient as a model of the task because it could not provide quantitative predictions of the card selection frequencies. In this paper, the model was revised to provide quantitative fits to the data. The model can predict the selection frequencies of cards based on a selection tendency function (STF), or conversely, it enables the estimation of subjective probabilities from data. Past experimental data were first re-analysed based on the model. In Experiment 1, the superiority of the revised model was shown. However, when the relationship between antecedent and consequent was forced to deviate from the biconditional form, the model was not supported. In Experiment 2, it was shown that sufficient emphasis on probabilistic information can affect participants' performance. A detailed experimental method to sort participants by probabilistic strategies was introduced. Here, the model was supported by a subgroup of participants who used the probabilistic strategy. Finally, the results were discussed from the viewpoint of adaptive rationality.

  5. Cue-Independent Task-Specific Representations in Task Switching: Evidence from Backward Inhibition

    ERIC Educational Resources Information Center

    Altmann, Erik M.

    2007-01-01

    The compound-cue model of cognitive control in task switching explains switch cost in terms of a switch of task cues rather than of a switch of tasks. The present study asked whether the model generalizes to Lag 2 repetition cost (also known as backward inhibition), a related effect in which the switch from B to A in ABA task sequences is costlier…

  6. An analytic current-voltage model for quasi-ballistic III-nitride high electron mobility transistors

    NASA Astrophysics Data System (ADS)

    Li, Kexin; Rakheja, Shaloo

    2018-05-01

We present an analytic model to describe the DC current-voltage (I-V) relationship in scaled III-nitride high electron mobility transistors (HEMTs) in which transport within the channel is quasi-ballistic in nature. Following Landauer's transport theory and a charge calculation based on two-dimensional electrostatics that incorporates negative-momentum states from the drain terminal, an analytic expression for current as a function of terminal voltages is developed. The model captures the non-linearity of the access regions in non-self-aligned HEMTs. Effects of Joule heating with temperature-dependent thermal conductivity are incorporated in the model in a self-consistent manner. With a total of 26 input parameters, the analytic model offers reduced empiricism compared to existing GaN HEMT models. To verify the model, experimental I-V data of InAlN/GaN with InGaN back-barrier HEMTs with channel lengths of 42 and 105 nm are considered. Additionally, the model is validated against numerical I-V data obtained from DC hydrodynamic simulations of an unintentionally doped AlGaN-on-GaN HEMT with 50-nm gate length. The model is also verified against pulsed I-V measurements of a 150-nm T-gate GaN HEMT. Excellent agreement between the model and experimental and numerical results for output current, transconductance, and output conductance is demonstrated over a broad range of bias and temperature conditions.

  7. Magnetically-driven medical robots: An analytical magnetic model for endoscopic capsules design

    NASA Astrophysics Data System (ADS)

    Li, Jing; Barjuei, Erfan Shojaei; Ciuti, Gastone; Hao, Yang; Zhang, Peisen; Menciassi, Arianna; Huang, Qiang; Dario, Paolo

    2018-04-01

Magnetic-based approaches are highly promising for providing innovative solutions in the design of medical devices for diagnostic and therapeutic procedures, such as in endoluminal districts. Due to their intrinsic magnetic properties (no current needed) and high strength-to-size ratio compared with electromagnetic solutions, permanent magnets are usually embedded in medical devices. In this paper, a set of analytical formulas has been derived to model the magnetic forces and torques exerted by an arbitrary external magnetic field on a permanent magnetic source embedded in a medical robot. In particular, the authors modelled cylindrical permanent magnets, a general solution often embedded in magnetically-driven medical devices. The analytical model can be applied to axially and diametrically magnetized, solid and annular cylindrical permanent magnets without severe calculation complexity. Using a cylindrical permanent magnet as the selected solution, the model has been applied to a robotic endoscopic capsule as a pilot study in the design of magnetically-driven robots.
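
    The paper's closed forms for cylindrical magnets are not reproduced in the abstract; for orientation, the point-dipole limit already captures the structure of the quantities involved (τ = m × B, F = ∇(m·B)). A numerical sketch under that approximation, with illustrative values:

      import numpy as np

      def torque(m, B):
          return np.cross(m, B)                 # tau = m x B

      def force(m, B_func, x, h=1e-6):
          # F = grad(m . B), via central differences on the field
          g = np.zeros(3)
          for i in range(3):
              dx = np.zeros(3); dx[i] = h
              g[i] = (m @ B_func(x + dx) - m @ B_func(x - dx)) / (2 * h)
          return g

      m = np.array([0.0, 0.0, 1.2])                # A*m^2, hypothetical capsule magnet
      B = lambda x: np.array([0.02, 0.0, 0.01])    # T, uniform field => zero net force
      print(torque(m, B(np.zeros(3))), force(m, B, np.zeros(3)))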

  8. A queueing model of pilot decision making in a multi-task flight management situation

    NASA Technical Reports Server (NTRS)

    Walden, R. S.; Rouse, W. B.

    1977-01-01

    Allocation of decision making responsibility between pilot and computer is considered and a flight management task, designed for the study of pilot-computer interaction, is discussed. A queueing theory model of pilot decision making in this multi-task, control and monitoring situation is presented. An experimental investigation of pilot decision making and the resulting model parameters are discussed.

  9. Analytic models of ducted turbomachinery tone noise sources. Volume 1: Analysis

    NASA Technical Reports Server (NTRS)

    Clark, T. L.; Ganz, U. W.; Graf, G. A.; Westall, J. S.

    1974-01-01

    The analytic models developed for computing the periodic sound pressure of subsonic fans and compressors in an infinite, hardwall annular duct with uniform flow are described. The basic sound-generating mechanism is the scattering into sound waves of velocity disturbances appearing to the rotor or stator blades as a series of harmonic gusts. The models include component interactions and rotor alone.

  10. Adaptive effort investment in cognitive and physical tasks: a neurocomputational model

    PubMed Central

    Verguts, Tom; Vassena, Eliana; Silvetti, Massimo

    2015-01-01

    Despite its importance in everyday life, the computational nature of effort investment remains poorly understood. We propose an effort model obtained from optimality considerations, and a neurocomputational approximation to the optimal model. Both are couched in the framework of reinforcement learning. It is shown that choosing when or when not to exert effort can be adaptively learned, depending on rewards, costs, and task difficulty. In the neurocomputational model, the limbic loop comprising anterior cingulate cortex (ACC) and ventral striatum in the basal ganglia allocates effort to cortical stimulus-action pathways whenever this is valuable. We demonstrate that the model approximates optimality. Next, we consider two hallmark effects from the cognitive control literature, namely proportion congruency and sequential congruency effects. It is shown that the model exerts both proactive and reactive cognitive control. Then, we simulate two physical effort tasks. In line with empirical work, impairing the model's dopaminergic pathway leads to apathetic behavior. Thus, we conceptually unify the exertion of cognitive and physical effort, studied across a variety of literatures (e.g., motivation and cognitive control) and animal species. PMID:25805978

  11. Visual Attention Allocation Between Robotic Arm and Environmental Process Control: Validating the STOM Task Switching Model

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher; Vieanne, Alex; Clegg, Benjamin; Sebok, Angelia; Janes, Jessica

    2015-01-01

Fifty-six participants time-shared a spacecraft environmental control system task with a realistic space robotic arm control task in either a manual or highly automated version. The former could suffer minor failures, whose diagnosis and repair were supported by a decision aid. At the end of the experiment this decision aid unexpectedly failed. We measured visual attention allocation and switching between the two tasks in each of the eight conditions formed by manual-automated arm X expected-unexpected failure X monitoring-failure management. We also used our multi-attribute task switching model, based on the task attributes of priority, interest, difficulty and salience, which were self-rated by participants, to predict allocation. An unweighted model based on the attributes of difficulty, interest and salience accounted for 96 percent of the task allocation variance across the 8 different conditions. Task difficulty served as an attractor, with more difficult tasks increasing the tendency to stay on task.

  12. An explicit closed-form analytical solution for European options under the CGMY model

    NASA Astrophysics Data System (ADS)

    Chen, Wenting; Du, Meiyu; Xu, Xiang

    2017-01-01

In this paper, we consider the analytical pricing of European path-independent options under the CGMY model, which is a particular type of pure-jump Lévy process, and agrees well with many observed properties of real market data by allowing the diffusions and jumps to have both finite and infinite activity and variation. It is shown that, under this model, the option price is governed by a fractional partial differential equation (FPDE) with both left-side and right-side spatial-fractional derivatives. In comparison to derivatives of integer order, fractional derivatives at a point not only involve properties of the function at that particular point, but also the information of the function in a certain subset of the entire domain of definition. This "globalness" of the fractional derivatives adds an additional degree of difficulty when either analytical methods or numerical solutions are attempted. Albeit difficult, we still have managed to derive an explicit closed-form analytical solution for European options under the CGMY model. Based on our solution, the asymptotic behaviors of the option price and the put-call parity under the CGMY model are further discussed. Practically, a reliable numerical evaluation technique for the current formula is proposed. With the numerical results, some analyses of the impacts of the four key parameters of the CGMY model on European option prices are also provided.
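
    Any closed-form price for this model can be cross-checked through the standard CGMY characteristic function (valid for Y outside {0, 1}), which underlies transform-based numerical methods. A minimal sketch, with the risk-neutral drift correction omitted:

      import numpy as np
      from scipy.special import gamma

      def cgmy_cf(u, t, C, G, M, Y):
          # E[exp(i u X_t)] for a CGMY process, Y not in {0, 1}
          psi = C * gamma(-Y) * ((M - 1j * u) ** Y - M ** Y
                                 + (G + 1j * u) ** Y - G ** Y)
          return np.exp(t * psi)

      print(cgmy_cf(0.0, 1.0, C=1.0, G=5.0, M=5.0, Y=0.5))  # equals 1 at u = 0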

  13. Analytical modeling and numerical simulation of the short-wave infrared electron-injection detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Movassaghi, Yashar; Fathipour, Morteza; Fathipour, Vala

    2016-03-21

This paper describes comprehensive analytical and simulation models for the design and optimization of electron-injection based detectors. The electron-injection detectors evaluated here operate in the short-wave infrared range and utilize a type-II band alignment in the InP/GaAsSb/InGaAs material system. The unique geometry of the detectors, along with an inherent negative-feedback mechanism in the device, allows high internal avalanche-free amplification to be achieved without any excess noise. Physics-based closed-form analytical models are derived for the detector rise time and dark current. Our optical gain model takes into account the drop in the optical gain at high optical power levels. Furthermore, numerical simulation studies of the electrical characteristics of the device show good agreement with our analytical models as well as experimental data. Performance comparison between devices with different injector sizes shows that enhancement in the gain and speed is anticipated by reducing the injector size. Sensitivity analysis for the key detector parameters shows the relative importance of each parameter. The results of this study may provide useful information and guidelines for the development of future electron-injection based detectors as well as other heterojunction photodetectors.

  14. An opportunity cost model of subjective effort and task performance

    PubMed Central

    Kurzban, Robert; Duckworth, Angela; Kable, Joseph W.; Myers, Justus

    2013-01-01

    Why does performing certain tasks cause the aversive experience of mental effort and concomitant deterioration in task performance? One explanation posits a physical resource that is depleted over time. We propose an alternate explanation that centers on mental representations of the costs and benefits associated with task performance. Specifically, certain computational mechanisms, especially those associated with executive function, can be deployed for only a limited number of simultaneous tasks at any given moment. Consequently, the deployment of these computational mechanisms carries an opportunity cost – that is, the next-best use to which these systems might be put. We argue that the phenomenology of effort can be understood as the felt output of these cost/benefit computations. In turn, the subjective experience of effort motivates reduced deployment of these computational mechanisms in the service of the present task. These opportunity cost representations, then, together with other cost/benefit calculations, determine effort expended and, everything else equal, result in performance reductions. In making our case for this position, we review alternate explanations both for the phenomenology of effort associated with these tasks and for performance reductions over time. Likewise, we review the broad range of relevant empirical results from across subdisciplines, especially psychology and neuroscience. We hope that our proposal will help to build links among the diverse fields that have been addressing similar questions from different perspectives, and we emphasize ways in which alternate models might be empirically distinguished. PMID:24304775

  15. Bridging Numerical and Analytical Models of Transient Travel Time Distributions: Challenges and Opportunities

    NASA Astrophysics Data System (ADS)

    Danesh Yazdi, M.; Klaus, J.; Condon, L. E.; Maxwell, R. M.

    2017-12-01

Recent advancements in analytical solutions to quantify water and solute time-variant travel time distributions (TTDs) and the related StorAge Selection (SAS) functions synthesize catchment complexity into a simplified, lumped representation. While these analytical approaches are easy and efficient in application, they require high-frequency hydrochemical data for parameter estimation. Alternatively, integrated hydrologic models coupled to Lagrangian particle-tracking approaches can directly simulate age under different catchment geometries and complexity, at a greater computational expense. Here, we compare and contrast the two approaches by exploring the influence of the spatial distribution of subsurface heterogeneity, interactions between distinct flow domains, diversity of flow pathways, and recharge rate on the shape of TTDs and the related SAS functions. To this end, we use a parallel three-dimensional variably saturated groundwater model, ParFlow, to solve for the velocity fields in the subsurface. A particle-tracking model, SLIM, is then implemented to determine the age distributions at every real time and domain location, facilitating a direct characterization of the SAS functions, as opposed to analytical approaches requiring calibration of such functions. Steady-state results reveal that the assumption of a random age sampling scheme might only hold in the saturated region of homogeneous catchments, resulting in an exponential TTD. This assumption is however violated when the vadose zone is included, as the underlying SAS function gives a higher preference to older ages. The dynamical variability of the true SAS functions is also shown to be largely masked by the smooth analytical SAS functions. As the variability of subsurface spatial heterogeneity increases, the shape of the TTD approaches a power-law distribution function, including a broader distribution of shorter and longer travel times. We further found that larger (smaller) magnitude of effective

  16. Analytical and multibody modeling for the power analysis of standing jumps.

    PubMed

    Palmieri, G; Callegari, M; Fioretti, S

    2015-01-01

Two methods for the power analysis of standing jumps are proposed and compared in this article. The first method is based on a simple analytical formulation which requires as input the coordinates of the center of gravity at three specified instants of the jump. The second method is based on a multibody model that simulates the jumps by processing the data obtained from a three-dimensional (3D) motion capture system and the dynamometric measurements obtained from the force platforms. The multibody model is developed with OpenSim, an open-source software package which provides tools for the kinematic and dynamic analyses of 3D human body models. The study focuses on two of the typical tests used to evaluate the muscular activity of the lower limbs, the countermovement jump and the standing long jump. The comparison between the results obtained by the two methods confirms that the proposed analytical formulation is correct and represents a simple tool suitable for a preliminary analysis of the total mechanical work and mean power exerted in standing jumps.
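
    For the analytical method, one plausible reading (not the paper's exact equations) takes the three instants to be the lowest countermovement point, takeoff, and the flight apex, and estimates mean power from the energy balance of the push-off; all numbers below are illustrative:

      G = 9.81  # m/s^2

      def mean_push_power(mass, z_low, z_off, z_apex, t_push):
          # push-off work raises the COG from its lowest point to takeoff
          # and supplies the kinetic energy that carries it to the apex
          work = mass * G * (z_off - z_low) + mass * G * (z_apex - z_off)
          return work / t_push   # mean mechanical power over the push phase

      print(mean_push_power(mass=70, z_low=0.65, z_off=0.95,
                            z_apex=1.30, t_push=0.30))  # W, toy numbers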

  17. An Analytical Model for Two-Order Asperity Degradation of Rock Joints Under Constant Normal Stiffness Conditions

    NASA Astrophysics Data System (ADS)

    Li, Yingchun; Wu, Wei; Li, Bo

    2018-05-01

Jointed rock masses encountered during underground excavation are commonly under the constant normal stiffness (CNS) condition. This paper presents an analytical formulation to predict the shear behaviour of rough rock joints under the CNS condition. The dilatancy and deterioration of two-order asperities are quantified by considering the variation of normal stress. We separately consider the dilation angles of waviness and unevenness, which decrease to zero as the normal stress approaches the transitional stress. A sinusoidal function naturally yields the decay of the dilation angle as a function of relative normal stress. We assume that the magnitude of the transitional stress is proportional to the square root of the asperity geometric area. The comparison between the analytical prediction and experimental data shows the reliability of the analytical model. All the parameters involved in the analytical model possess explicit physical meanings and are measurable from laboratory tests. The proposed model is potentially practicable for assessing the stability of underground structures at various field scales.

  18. Opportunities and challenges in developing deep learning models using electronic health records data: a systematic review.

    PubMed

    Xiao, Cao; Choi, Edward; Sun, Jimeng

    2018-06-08

    To conduct a systematic review of deep learning models for electronic health record (EHR) data, and illustrate various deep learning architectures for analyzing different data sources and their target applications. We also highlight ongoing research and identify open challenges in building deep learning models of EHRs. We searched PubMed and Google Scholar for papers on deep learning studies using EHR data published between January 1, 2010, and January 31, 2018. We summarize them according to these axes: types of analytics tasks, types of deep learning model architectures, special challenges arising from health data and tasks and their potential solutions, as well as evaluation strategies. We surveyed and analyzed multiple aspects of the 98 articles we found and identified the following analytics tasks: disease detection/classification, sequential prediction of clinical events, concept embedding, data augmentation, and EHR data privacy. We then studied how deep architectures were applied to these tasks. We also discussed some special challenges arising from modeling EHR data and reviewed a few popular approaches. Finally, we summarized how performance evaluations were conducted for each task. Despite the early success in using deep learning for health analytics applications, there still exist a number of issues to be addressed. We discuss them in detail including data and label availability, the interpretability and transparency of the model, and ease of deployment.

  19. Development of Aeroservoelastic Analytical Models and Gust Load Alleviation Control Laws of a SensorCraft Wind-Tunnel Model Using Measured Data

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Vartio, Eric; Shimko, Anthony; Kvaternik, Raymond G.; Eure, Kenneth W.; Scott,Robert C.

    2007-01-01

Aeroservoelastic (ASE) analytical models of a SensorCraft wind-tunnel model are generated using measured data. The data were acquired during the ASE wind-tunnel test of the HiLDA (High Lift-to-Drag Active) Wing model, tested in the NASA Langley Transonic Dynamics Tunnel (TDT) in late 2004. Two time-domain system identification techniques are applied to the development of the ASE analytical models: the impulse response (IR) method and the Generalized Predictive Control (GPC) method. Using measured control surface inputs (frequency sweeps) and associated sensor responses, the IR method is used to extract corresponding input/output impulse response pairs. These impulse responses are then transformed into state-space models for use in ASE analyses. Similarly, the GPC method transforms measured random control surface inputs and associated sensor responses into an AutoRegressive with eXogenous input (ARX) model. The ARX model is then used to develop the gust load alleviation (GLA) control law. For the IR method, comparisons of measured and simulated responses are presented to investigate the accuracy of the ASE analytical models developed. For the GPC method, comparisons of simulated open-loop and closed-loop (GLA) time histories are presented.

  20. Development of Aeroservoelastic Analytical Models and Gust Load Alleviation Control Laws of a SensorCraft Wind-Tunnel Model Using Measured Data

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Shimko, Anthony; Kvaternik, Raymond G.; Eure, Kenneth W.; Scott, Robert C.

    2006-01-01

Aeroservoelastic (ASE) analytical models of a SensorCraft wind-tunnel model are generated using measured data. The data were acquired during the ASE wind-tunnel test of the HiLDA (High Lift-to-Drag Active) Wing model, tested in the NASA Langley Transonic Dynamics Tunnel (TDT) in late 2004. Two time-domain system identification techniques are applied to the development of the ASE analytical models: the impulse response (IR) method and the Generalized Predictive Control (GPC) method. Using measured control surface inputs (frequency sweeps) and associated sensor responses, the IR method is used to extract corresponding input/output impulse response pairs. These impulse responses are then transformed into state-space models for use in ASE analyses. Similarly, the GPC method transforms measured random control surface inputs and associated sensor responses into an AutoRegressive with eXogenous input (ARX) model. The ARX model is then used to develop the gust load alleviation (GLA) control law. For the IR method, comparisons of measured and simulated responses are presented to investigate the accuracy of the ASE analytical models developed. For the GPC method, comparisons of simulated open-loop and closed-loop (GLA) time histories are presented.

  1. A Conceptual Analytics Model for an Outcome-Driven Quality Management Framework as Part of Professional Healthcare Education.

    PubMed

    Hervatis, Vasilis; Loe, Alan; Barman, Linda; O'Donoghue, John; Zary, Nabil

    2015-10-06

    Preparing the future health care professional workforce in a changing world is a significant undertaking. Educators and other decision makers look to evidence-based knowledge to improve quality of education. Analytics, the use of data to generate insights and support decisions, have been applied successfully across numerous application domains. Health care professional education is one area where great potential is yet to be realized. Previous research of Academic and Learning analytics has mainly focused on technical issues. The focus of this study relates to its practical implementation in the setting of health care education. The aim of this study is to create a conceptual model for a deeper understanding of the synthesizing process, and transforming data into information to support educators' decision making. A deductive case study approach was applied to develop the conceptual model. The analytics loop works both in theory and in practice. The conceptual model encompasses the underlying data, the quality indicators, and decision support for educators. The model illustrates how a theory can be applied to a traditional data-driven analytics approach, and alongside the context- or need-driven analytics approach.

  2. A Conceptual Analytics Model for an Outcome-Driven Quality Management Framework as Part of Professional Healthcare Education

    PubMed Central

    Loe, Alan; Barman, Linda; O'Donoghue, John; Zary, Nabil

    2015-01-01

Background: Preparing the future health care professional workforce in a changing world is a significant undertaking. Educators and other decision makers look to evidence-based knowledge to improve quality of education. Analytics, the use of data to generate insights and support decisions, have been applied successfully across numerous application domains. Health care professional education is one area where great potential is yet to be realized. Previous research of Academic and Learning analytics has mainly focused on technical issues. The focus of this study relates to its practical implementation in the setting of health care education. Objective: The aim of this study is to create a conceptual model for a deeper understanding of the synthesizing process, and transforming data into information to support educators' decision making. Methods: A deductive case study approach was applied to develop the conceptual model. Results: The analytics loop works both in theory and in practice. The conceptual model encompasses the underlying data, the quality indicators, and decision support for educators. Conclusions: The model illustrates how a theory can be applied to a traditional data-driven analytics approach, and alongside the context- or need-driven analytics approach. PMID:27731840

  3. Task-Based Information Searching.

    ERIC Educational Resources Information Center

    Vakkari, Pertti

    2003-01-01

    Reviews studies on the relationship between task performance and information searching by end-users, focusing on information searching in electronic environments and information retrieval systems. Topics include task analysis; task characteristics; search goals; modeling information searching; modeling search goals; information seeking behavior;…

  4. Children's Task Engagement during Challenging Puzzle Tasks

    ERIC Educational Resources Information Center

    Wang, Feihong; Algina, James; Snyder, Patricia; Cox, Martha

    2017-01-01

    We examined children's task engagement during a challenging puzzle task in the presence of their primary caregivers by using a representative sample of rural children from six high-poverty counties across two states. Weighted longitudinal confirmatory factor analysis and structural equation modeling were used to identify a task engagement factor…

  5. Application of Characterization, Modeling, and Analytics Towards Understanding Process Structure Linkages in Metallic 3D Printing (Postprint)

    DTIC Science & Technology

    2017-08-01

of metallic additive manufacturing processes and show that combining experimental data with modelling and advanced data processing and analytics methods will accelerate that... geometries, we develop a methodology that couples experimental data and modelling to convert the scan paths into spatially resolved local thermal histories

  6. Training apartment upkeep skills to rehabilitation clients: a comparison of task analytic strategies.

    PubMed Central

    Williams, G E; Cuvo, A J

    1986-01-01

    The research was designed to validate procedures to teach apartment upkeep skills to severely handicapped clients with various categorical disabilities. Methodological features of this research included performance comparisons between general and specific task analyses, effect of an impasse correction baseline procedure, social validation of training goals, natural environment assessments and contingencies, as well as long-term follow-up. Subjects were taught to perform upkeep responses on their air conditioner-heating unit, electric range, refrigerator, and electrical appliances within the context of a multiple-probe across subjects experimental design. The results showed acquisition, long-term maintenance, and generalization of the upkeep skills to a nontraining apartment. General task analyses were recommended for assessment and specific task analyses for training. The impasse correction procedure generally did not produce acquisition. PMID:3710947

  7. "Photographing money" task pricing

    NASA Astrophysics Data System (ADS)

    Jia, Zhongxiang

    2018-05-01

    "Photographing money" [1]is a self-service model under the mobile Internet. The task pricing is reasonable, related to the success of the commodity inspection. First of all, we analyzed the position of the mission and the membership, and introduced the factor of membership density, considering the influence of the number of members around the mission on the pricing. Multivariate regression of task location and membership density using MATLAB to establish the mathematical model of task pricing. At the same time, we can see from the life experience that membership reputation and the intensity of the task will also affect the pricing, and the data of the task success point is more reliable. Therefore, the successful point of the task is selected, and its reputation, task density, membership density and Multiple regression of task positions, according to which a nhew task pricing program. Finally, an objective evaluation is given of the advantages and disadvantages of the established model and solution method, and the improved method is pointed out.

  8. A Bayesian hierarchical diffusion model decomposition of performance in Approach–Avoidance Tasks

    PubMed Central

    Krypotos, Angelos-Miltiadis; Beckers, Tom; Kindt, Merel; Wagenmakers, Eric-Jan

    2015-01-01

    Common methods for analysing response time (RT) tasks, frequently used across different disciplines of psychology, suffer from a number of limitations such as the failure to directly measure the underlying latent processes of interest and the inability to take into account the uncertainty associated with each individual's point estimate of performance. Here, we discuss a Bayesian hierarchical diffusion model and apply it to RT data. This model allows researchers to decompose performance into meaningful psychological processes and to account optimally for individual differences and commonalities, even with relatively sparse data. We highlight the advantages of the Bayesian hierarchical diffusion model decomposition by applying it to performance on Approach–Avoidance Tasks, widely used in the emotion and psychopathology literature. Model fits for two experimental data-sets demonstrate that the model performs well. The Bayesian hierarchical diffusion model overcomes important limitations of current analysis procedures and provides deeper insight in latent psychological processes of interest. PMID:25491372
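
    The hierarchical Bayesian machinery is beyond a short sketch, but the underlying diffusion process is easy to simulate, which is also how such model fits are usually sanity-checked. A minimal Euler-scheme simulator (parameter values illustrative):

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate_ddm(n, drift, bound, ndt, dt=1e-3, sigma=1.0):
          # Wiener diffusion from 0 to +/-bound; returns response times
          # and choices (1 = upper boundary, 0 = lower boundary)
          rts, choices = np.empty(n), np.empty(n, int)
          for i in range(n):
              x, t = 0.0, 0.0
              while abs(x) < bound:
                  x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                  t += dt
              rts[i], choices[i] = t + ndt, int(x > 0)
          return rts, choices

      rts, ch = simulate_ddm(200, drift=1.5, bound=1.0, ndt=0.3)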

  9. A Decision Model for Supporting Task Allocation Processes in Global Software Development

    NASA Astrophysics Data System (ADS)

    Lamersdorf, Ansgar; Münch, Jürgen; Rombach, Dieter

    Today, software-intensive systems are increasingly being developed in a globally distributed way. However, besides its benefit, global development also bears a set of risks and problems. One critical factor for successful project management of distributed software development is the allocation of tasks to sites, as this is assumed to have a major influence on the benefits and risks. We introduce a model that aims at improving management processes in globally distributed projects by giving decision support for task allocation that systematically regards multiple criteria. The criteria and causal relationships were identified in a literature study and refined in a qualitative interview study. The model uses existing approaches from distributed systems and statistical modeling. The article gives an overview of the problem and related work, introduces the empirical and theoretical foundations of the model, and shows the use of the model in an example scenario.

  10. A hidden analytic structure of the Rabi model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moroz, Alexander, E-mail: wavescattering@yahoo.com

    2014-01-15

The Rabi model describes the simplest interaction between a cavity mode with a frequency ω_c and a two-level system with a resonance frequency ω_0. It is shown here that the spectrum of the Rabi model coincides with the support of the discrete Stieltjes integral measure in the orthogonality relations of recently introduced orthogonal polynomials. The exactly solvable limit of the Rabi model corresponding to Δ = ω_0/(2ω_c) = 0, which describes a displaced harmonic oscillator, is characterized by the discrete Charlier polynomials in normalized energy ε, which are orthogonal on an equidistant lattice. A non-zero value of Δ leads to non-classical discrete orthogonal polynomials φ_k(ε) and induces a deformation of the underlying equidistant lattice. The results provide a basis for a novel analytic method of solving the Rabi model. The number of ca. 1350 calculable energy levels per parity subspace obtained in double precision (ca. 16 digits) by an elementary stepping algorithm is up to two orders of magnitude higher than is possible to obtain by Braak's solution. Any first n eigenvalues of the Rabi model arranged in increasing order can be determined as zeros of φ_N(ε) of at least the degree N = n + n_t. The value of n_t > 0, which is slowly increasing with n, depends on the required precision. For instance, n_t ≃ 26 for n = 1000 and dimensionless interaction constant κ = 0.2, if double precision is required. Given that the sequence of the l-th zeros x_nl of the φ_n(ε) defines a monotonically decreasing discrete flow with increasing n, the Rabi model is indistinguishable from an algebraically solvable model in any finite precision. Although we can rigorously prove our results only for dimensionless interaction constant κ < 1, numerics and an exactly solvable example suggest that the main conclusions remain valid also for κ ≥ 1.

  11. Analytical model for advective-dispersive transport involving flexible boundary inputs, initial distributions and zero-order productions

    NASA Astrophysics Data System (ADS)

    Chen, Jui-Sheng; Li, Loretta Y.; Lai, Keng-Hsin; Liang, Ching-Ping

    2017-11-01

A novel solution method is presented which leads to an analytical model for advective-dispersive transport in a semi-infinite domain involving a wide spectrum of boundary inputs, initial distributions, and zero-order productions. The novel solution method applies the Laplace transform in combination with the generalized integral transform technique (GITT) to obtain the generalized analytical solution. Based on this generalized analytical expression, we derive a comprehensive set of special-case solutions for some time-dependent boundary distributions and zero-order productions, described by the Dirac delta, constant, Heaviside, exponentially-decaying, or periodically sinusoidal functions, as well as some position-dependent initial conditions and zero-order productions specified by the Dirac delta, constant, Heaviside, or exponentially-decaying functions. The developed solutions are tested against an analytical solution from the literature. The excellent agreement between the analytical solutions confirms that the new model can serve as an effective tool for investigating transport behaviors under different scenarios. Several examples of applications are given to explore transport behaviors which are rarely noted in the literature. The results show that the concentration waves resulting from the periodically sinusoidal input are sensitive to the dispersion coefficient. The implication of this new finding is that a tracer test with a periodic input may provide additional information for identifying the dispersion coefficients. Moreover, the solution strategy presented in this study can be extended to derive analytical models for handling more complicated problems of solute transport in multi-dimensional media subjected to sequential decay chain reactions, for which analytical solutions are not currently available.
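
    The generalized solution itself is too long to reproduce here, but its best-known special case (constant-concentration boundary, zero initial condition, no production) is the classic Ogata-Banks formula, a useful reference point for the solutions the paper derives:

      import numpy as np
      from scipy.special import erfc

      def ogata_banks(x, t, v, D, c0=1.0):
          # 1-D advection-dispersion, semi-infinite domain, constant
          # inlet concentration c0; beware exp overflow for large v*x/D
          a = (x - v * t) / (2.0 * np.sqrt(D * t))
          b = (x + v * t) / (2.0 * np.sqrt(D * t))
          return 0.5 * c0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

      print(ogata_banks(x=np.linspace(0.5, 5.0, 4), t=2.0, v=1.0, D=0.5))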

  12. Task Prioritization in Dual-Tasking: Instructions versus Preferences

    PubMed Central

    Jansen, Reinier J.; van Egmond, René; de Ridder, Huib

    2016-01-01

    The role of task prioritization in performance tradeoffs during multi-tasking has received widespread attention. However, little is known on whether people have preferences regarding tasks, and if so, whether these preferences conflict with priority instructions. Three experiments were conducted with a high-speed driving game and an auditory memory task. In Experiment 1, participants did not receive priority instructions. Participants performed different sequences of single-task and dual-task conditions. Task performance was evaluated according to participants’ retrospective accounts on preferences. These preferences were reformulated as priority instructions in Experiments 2 and 3. The results showed that people differ in their preferences regarding task prioritization in an experimental setting, which can be overruled by priority instructions, but only after increased dual-task exposure. Additional measures of mental effort showed that performance tradeoffs had an impact on mental effort. The interpretation of these findings was used to explore an extension of Threaded Cognition Theory with Hockey’s Compensatory Control Model. PMID:27391779

  13. Numerical and analytic models of spontaneous frequency sweeping for energetic particle-driven Alfven eigenmodes

    NASA Astrophysics Data System (ADS)

    Wang, Ge; Berk, H. L.

    2011-10-01

The frequency chirping signal arising from a spontaneously excited toroidal Alfvén eigenmode (TAE) driven by energetic particles is studied with both numerical and analytic models. The time-dependent numerical model is based on the 1D Vlasov equation. We use a sophisticated tracking method to lock onto the resonant structure, which keeps the chirping frequency nearly constant in the calculation frame. The accuracy of the adiabatic approximation is tested during the simulation, which justifies the appropriateness of our analytic model. The analytic model uses the adiabatic approximation, which allows us to solve the wave evolution equation in frequency space. Then, the resonant interactions between energetic particles and the TAE yield predictions for the chirping rate, wave frequency and amplitude vs. time. Here, an adiabatic invariant J is defined on the separatrix of a chirping mode to determine the region of confinement of the wave-trapped distribution function. We examine the asymptotic behavior of the chirping signal over its long-time evolution and find agreement in essential features with the results of the simulation. Work supported by Department of Energy contract DE-FC02-08ER54988.

  14. Conditioning of Model Identification Task in Immune Inspired Optimizer SILO

    NASA Astrophysics Data System (ADS)

    Wojdan, K.; Swirski, K.; Warchol, M.; Maciorowski, M.

    2009-10-01

    Methods that provide good conditioning of the model identification task in SILO (Stochastic Immune Layer Optimizer), an immune-inspired steady-state controller, are presented in this paper. These methods are implemented in a model-based optimization algorithm. The first method uses a safe model to ensure that the gains of the process model can be estimated. The second method is responsible for eliminating potential linear dependences between columns of the observation matrix. Moreover, new results from a SILO implementation in a Polish power plant are presented; they confirm the high efficiency of the presented solution in solving technical problems.

  15. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment

    PubMed Central

    Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel

    2016-01-01

    Distributed computing has developed tremendously since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., the Internet of Things, cyber-physical systems, Big Data analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of its core components, MapReduce facilitates the allocation, processing, and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurately estimating the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to accurately predict the finishing time of each task. Detailed data for each task were also collected and analyzed in depth. According to the results, the prediction accuracy of concurrent tasks’ execution time can be improved, in particular for some regular jobs. PMID:27589753
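
    The regression step can be sketched compactly. Below is a minimal, hypothetical two-phase (piecewise-linear) regression that extrapolates a task's finishing time from its progress reports; the progress/elapsed-time representation and the brute-force breakpoint search are assumptions for this example, not the authors' implementation.

    ```python
    """Illustrative sketch of a two-phase (piecewise-linear) regression for
    predicting a task's finishing time, in the spirit of the TPR method
    described above."""
    import numpy as np

    def fit_two_phase(progress, elapsed):
        """Fit elapsed ~ progress with two linear segments, choosing the
        breakpoint that minimises the total squared error."""
        best_sse, best_fit = np.inf, None
        for k in range(2, len(progress) - 1):        # candidate breakpoints
            p1 = np.polyfit(progress[:k], elapsed[:k], 1)
            p2 = np.polyfit(progress[k:], elapsed[k:], 1)
            sse = (np.sum((np.polyval(p1, progress[:k]) - elapsed[:k]) ** 2)
                   + np.sum((np.polyval(p2, progress[k:]) - elapsed[k:]) ** 2))
            if sse < best_sse:
                best_sse, best_fit = sse, (p1, p2)
        return best_fit

    def predict_finish(progress, elapsed):
        """Extrapolate the second-phase fit to progress = 1.0 (task done)."""
        _, p2 = fit_two_phase(progress, elapsed)
        return float(np.polyval(p2, 1.0))

    # usage: progress reports from a single (hypothetical) running map task
    prog = np.array([0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7])
    t = np.array([4.0, 7.0, 12.0, 16.0, 25.0, 33.0, 41.0, 49.0])  # seconds
    print(f"estimated finishing time: {predict_finish(prog, t):.1f} s")
    ```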

  16. Individualized Cognitive Modeling for Close-Loop Task Mitigation

    NASA Technical Reports Server (NTRS)

    Zhang, Guangfan; Xu, Roger; Wang, Wei; Li, Jiang; Schnell, Tom; Keller, Mike

    2010-01-01

    An accurate real-time operator functional state assessment makes it possible to perform task management, minimize risks, and improve mission performance. In this paper, we discuss the development of an individualized operator functional state assessment model that identifies states likely to lead to operational errors. To address large individual variations, we use two different approaches to build a model for each individual, using that individual's data as well as data from subjects with similar responses. If a subject's response is similar to that of the individual of interest in a specific functional state, all the training data from this subject are used to build the individual model. The individualization methods have been successfully verified and validated with a driving test data set provided by the University of Iowa. With the individualized models, the mean squared error can be decreased significantly (by around 20%).

  17. Untangling Slab Dynamics Using 3-D Numerical and Analytical Models

    NASA Astrophysics Data System (ADS)

    Holt, A. F.; Royden, L.; Becker, T. W.

    2016-12-01

    Increasingly sophisticated numerical models have enabled us to make significant strides in identifying the key controls on how subducting slabs deform. For example, 3-D models have demonstrated that subducting plate width, and the related strength of toroidal flow around the plate edge, exerts a strong control on both the curvature and the rate of migration of the trench. However, the results of numerical subduction models can be difficult to interpret, and many first order dynamics issues remain at least partially unresolved. Such issues include the dominant controls on trench migration, the interdependence of asthenospheric pressure and slab dynamics, and how nearby slabs influence each other's dynamics. We augment 3-D, dynamically evolving finite element models with simple, analytical force-balance models to distill the physics associated with subduction into more manageable parts. We demonstrate that for single, isolated subducting slabs much of the complexity of our fully numerical models can be encapsulated by simple analytical expressions. Rates of subduction and slab dip correlate strongly with the asthenospheric pressure difference across the subducting slab. For double subduction, an additional slab gives rise to more complex mantle pressure and flow fields, and significantly extends the range of plate kinematics (e.g., convergence rate, trench migration rate) beyond those present in single slab models. Despite these additional complexities, we show that much of the dynamics of such multi-slab systems can be understood using the physics illuminated by our single slab study, and that a force-balance method can be used to relate intra-plate stress to viscous pressure in the asthenosphere and coupling forces at plate boundaries. This method has promise for rapid modeling of large systems of subduction zones on a global scale.

  18. Numerical and analytical modeling of the end-loaded split (ELS) test specimens made of multi-directional coupled composite laminates

    NASA Astrophysics Data System (ADS)

    Samborski, Sylwester; Valvo, Paolo S.

    2018-01-01

    The paper deals with the numerical and analytical modelling of the end-loaded split test for multi-directional laminates affected by the typical elastic couplings. Numerical analysis of three-dimensional finite element models was performed with the Abaqus software exploiting the virtual crack closure technique (VCCT). The results show possible asymmetries in the widthwise deflections of the specimen, as well as in the strain energy release rate (SERR) distributions along the delamination front. Analytical modelling based on a beam-theory approach was also conducted in simpler cases, where only bending-extension coupling is present, but no out-of-plane effects. The analytical results matched the numerical ones, thus demonstrating that the analytical models are feasible for test design and experimental data reduction.

  19. Extraction, isolation, and purification of analytes from samples of marine origin--a multivariate task.

    PubMed

    Liguori, Lucia; Bjørsvik, Hans-René

    2012-12-01

    The development of a multivariate study for the quantitative analysis of six different polybrominated diphenyl ethers (PBDEs) in tissue of Atlantic Salmo salar L. is reported. An extraction, isolation, and purification process based on an accelerated solvent extraction system was designed, investigated, and optimized by means of statistical experimental design and multivariate data analysis and regression. An accompanying gas chromatography-mass spectrometry analytical method was developed for the identification and quantification of the analytes BDE 28, BDE 47, BDE 99, BDE 100, BDE 153, and BDE 154. These PBDEs have been used in commercial blends employed as flame retardants for a variety of materials, including electronic devices, synthetic polymers, and textiles. The present study revealed that an extracting solvent mixture composed of hexane and CH₂Cl₂ (10:90) provided excellent recoveries of all six PBDEs studied herein. A somewhat lower polarity of the extracting solvent, hexane and CH₂Cl₂ (40:60), decreased the analyte recoveries, which nevertheless remained acceptable and satisfactory. The study demonstrates the necessity of investigating the extraction and purification process in detail in order to achieve quantitative isolation of the analytes from the specific matrix. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. A Comparison of Reinforcement Learning Models for the Iowa Gambling Task Using Parameter Space Partitioning

    ERIC Educational Resources Information Center

    Steingroever, Helen; Wetzels, Ruud; Wagenmakers, Eric-Jan

    2013-01-01

    The Iowa gambling task (IGT) is one of the most popular tasks used to study decision-making deficits in clinical populations. In order to decompose performance on the IGT in its constituent psychological processes, several cognitive models have been proposed (e.g., the Expectancy Valence (EV) and Prospect Valence Learning (PVL) models). Here we…

  1. A latent discriminative model-based approach for classification of imaginary motor tasks from EEG data.

    PubMed

    Saa, Jaime F Delgado; Çetin, Müjdat

    2012-04-01

    We consider the problem of classification of imaginary motor tasks from electroencephalography (EEG) data for brain-computer interfaces (BCIs) and propose a new approach based on hidden conditional random fields (HCRFs). HCRFs are discriminative graphical models that are attractive for this problem because they (1) exploit the temporal structure of EEG; (2) include latent variables that can be used to model different brain states in the signal; and (3) involve learned statistical models matched to the classification task, avoiding some of the limitations of generative models. Our approach involves spatial filtering of the EEG signals and estimation of power spectra based on autoregressive modeling of temporal segments of the EEG signals. Given this time-frequency representation, we select certain frequency bands that are known to be associated with execution of motor tasks. These selected features constitute the data that are fed to the HCRF, parameters of which are learned from training data. Inference algorithms on the HCRFs are used for the classification of motor tasks. We experimentally compare this approach to the best performing methods in BCI competition IV as well as a number of more recent methods and observe that our proposed method yields better classification accuracy.
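
    The feature-extraction stage described here lends itself to a short sketch: autoregressive (Yule-Walker) spectral estimation of an EEG segment, reduced to alpha- and beta-band power. The sampling rate, AR order, and band edges are assumptions for illustration; the HCRF classifier itself is not reproduced here.

    ```python
    """Sketch of AR-based band-power features for EEG, assuming a 250 Hz
    sampling rate and an order-8 AR model (both invented for this example)."""
    import numpy as np

    def ar_psd(x, order=8, nfreq=128, fs=250.0):
        """AR power spectrum: solve the Yule-Walker equations for the AR
        coefficients, then evaluate sigma^2 / |1 - sum a_k e^{-i2pi f k}|^2."""
        x = np.asarray(x, dtype=float)
        x = x - x.mean()
        r = np.correlate(x, x, mode="full")[len(x) - 1:][:order + 1] / len(x)
        R = np.array([[r[abs(i - j)] for j in range(order)]
                      for i in range(order)])       # Toeplitz autocovariance
        a = np.linalg.solve(R, r[1:order + 1])      # AR coefficients
        sigma2 = r[0] - a @ r[1:order + 1]          # innovation variance
        f = np.linspace(0.0, fs / 2.0, nfreq)
        z = np.exp(-2j * np.pi * np.outer(f / fs, np.arange(1, order + 1)))
        return f, sigma2 / np.abs(1.0 - z @ a) ** 2

    def band_power(x, band, fs=250.0):
        """Mean AR-spectral power inside a (low, high) frequency band in Hz."""
        f, psd = ar_psd(x, fs=fs)
        lo, hi = band
        return float(psd[(f >= lo) & (f <= hi)].mean())

    # usage on a fake 1-s EEG segment: alpha (8-13 Hz) and beta (14-30 Hz)
    seg = np.random.randn(250)
    features = [band_power(seg, (8, 13)), band_power(seg, (14, 30))]
    ```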

  2. Analytical performance specifications for external quality assessment - definitions and descriptions.

    PubMed

    Jones, Graham R D; Albarede, Stephanie; Kesseler, Dagmar; MacKenzie, Finlay; Mammen, Joy; Pedersen, Morten; Stavelin, Anne; Thelen, Marc; Thomas, Annette; Twomey, Patrick J; Ventura, Emma; Panteghini, Mauro

    2017-06-27

    External Quality Assurance (EQA) is vital to ensure acceptable analytical quality in medical laboratories. A key component of an EQA scheme is an analytical performance specification (APS) for each measurand, which a laboratory can use to assess the extent of deviation of its results from the target value. A consensus conference held in Milan in 2014 proposed three models for setting APS, and these can be applied to setting APS for EQA. A goal arising from this conference is the harmonisation of EQA APS between different schemes, to deliver consistent quality messages to laboratories irrespective of location and choice of EQA provider. At this time there are wide differences in the APS used in different EQA schemes for the same measurands. Contributing factors to this variation are that the APS in different schemes are established using different criteria, applied to different types of data (e.g. single data points, multiple data points), used for different goals (e.g. improvement of analytical quality; licensing), and with the aim of eliciting different responses from participants. This paper provides recommendations on clear terminology for EQA APS from the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Task and Finish Group on Performance Specifications for External Quality Assurance Schemes (TFG-APSEQA). The recommended terminology covers six elements required to understand APS: 1) a statement on the EQA material matrix and its commutability; 2) the method used to assign the target value; 3) the data set to which APS are applied; 4) the applicable analytical property being assessed (i.e. total error, bias, imprecision, uncertainty); 5) the rationale for the selection of the APS; and 6) the type of Milan model(s) used to set the APS. The terminology is required for EQA participants and other interested parties to understand the meaning of meeting or not meeting APS.

  3. Heterogeneous fractionation profiles of meta-analytic coactivation networks.

    PubMed

    Laird, Angela R; Riedel, Michael C; Okoe, Mershack; Jianu, Radu; Ray, Kimberly L; Eickhoff, Simon B; Smith, Stephen M; Fox, Peter T; Sutherland, Matthew T

    2017-04-01

    Computational cognitive neuroimaging approaches can be leveraged to characterize the hierarchical organization of distributed, functionally specialized networks in the human brain. To this end, we performed large-scale mining across the BrainMap database of coordinate-based activation locations from over 10,000 task-based experiments. Meta-analytic coactivation networks were identified by jointly applying independent component analysis (ICA) and meta-analytic connectivity modeling (MACM) across a wide range of model orders (i.e., d=20-300). We then iteratively computed pairwise correlation coefficients for consecutive model orders to compare spatial network topologies, ultimately yielding fractionation profiles delineating how "parent" functional brain systems decompose into constituent "child" sub-networks. Fractionation profiles differed dramatically across canonical networks: some exhibited complex and extensive fractionation into a large number of sub-networks across the full range of model orders, whereas others exhibited little to no decomposition as model order increased. Hierarchical clustering was applied to evaluate this heterogeneity, yielding three distinct groups of network fractionation profiles: high, moderate, and low fractionation. BrainMap-based functional decoding of resultant coactivation networks revealed a multi-domain association regardless of fractionation complexity. Rather than emphasize a cognitive-motor-perceptual gradient, these outcomes suggest the importance of inter-lobar connectivity in functional brain organization. We conclude that high fractionation networks are complex and comprised of many constituent sub-networks reflecting long-range, inter-lobar connectivity, particularly in fronto-parietal regions. In contrast, low fractionation networks may reflect persistent and stable networks that are more internally coherent and exhibit reduced inter-lobar communication. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Heterogeneous fractionation profiles of meta-analytic coactivation networks

    PubMed Central

    Laird, Angela R.; Riedel, Michael C.; Okoe, Mershack; Jianu, Radu; Ray, Kimberly L.; Eickhoff, Simon B.; Smith, Stephen M.; Fox, Peter T.; Sutherland, Matthew T.

    2017-01-01

    Computational cognitive neuroimaging approaches can be leveraged to characterize the hierarchical organization of distributed, functionally specialized networks in the human brain. To this end, we performed large-scale mining across the BrainMap database of coordinate-based activation locations from over 10,000 task-based experiments. Meta-analytic coactivation networks were identified by jointly applying independent component analysis (ICA) and meta-analytic connectivity modeling (MACM) across a wide range of model orders (i.e., d = 20 to 300). We then iteratively computed pairwise correlation coefficients for consecutive model orders to compare spatial network topologies, ultimately yielding fractionation profiles delineating how “parent” functional brain systems decompose into constituent “child” sub-networks. Fractionation profiles differed dramatically across canonical networks: some exhibited complex and extensive fractionation into a large number of sub-networks across the full range of model orders, whereas others exhibited little to no decomposition as model order increased. Hierarchical clustering was applied to evaluate this heterogeneity, yielding three distinct groups of network fractionation profiles: high, moderate, and low fractionation. BrainMap-based functional decoding of resultant coactivation networks revealed a multi-domain association regardless of fractionation complexity. Rather than emphasize a cognitive-motor-perceptual gradient, these outcomes suggest the importance of inter-lobar connectivity in functional brain organization. We conclude that high fractionation networks are complex and comprised of many constituent sub-networks reflecting long-range, inter-lobar connectivity, particularly in fronto-parietal regions. In contrast, low fractionation networks may reflect persistent and stable networks that are more internally coherent and exhibit reduced inter-lobar communication. PMID:28222386

  5. Field Test of a Hybrid Finite-Difference and Analytic Element Regional Model.

    PubMed

    Abrams, D B; Haitjema, H M; Feinstein, D T; Hunt, R J

    2016-01-01

    Regional finite-difference models often have cell sizes that are too large to sufficiently model well-stream interactions. Here, a steady-state hybrid model is applied whereby the upper layer or layers of a coarse MODFLOW model are replaced by the analytic element model GFLOW, which represents surface waters and wells as line and point sinks. The two models are coupled by transferring cell-by-cell leakage obtained from the original MODFLOW model to the bottom of the GFLOW model. A real-world test of the hybrid model approach is applied on a subdomain of an existing model of the Lake Michigan Basin. The original (coarse) MODFLOW model consists of six layers, the top four of which are aggregated into GFLOW as a single layer, while the bottom two layers remain part of MODFLOW in the hybrid model. The hybrid model and a refined "benchmark" MODFLOW model simulate similar baseflows. The hybrid and benchmark models also simulate similar baseflow reductions due to nearby pumping when the well is located within the layers represented by GFLOW. However, the benchmark model requires refinement of the model grid in the local area of interest, while the hybrid approach uses a gridless top layer and is thus unaffected by grid discretization errors. The hybrid approach is well suited to facilitate cost-effective retrofitting of existing coarse grid MODFLOW models commonly used for regional studies because it leverages the strengths of both finite-difference and analytic element methods for predictions in mildly heterogeneous systems that can be simulated with steady-state conditions. © 2015, National Ground Water Association.

  6. Model Analysis and Model Creation: Capturing the Task-Model Structure of Quantitative Item Domains. Research Report. ETS RR-06-11

    ERIC Educational Resources Information Center

    Deane, Paul; Graf, Edith Aurora; Higgins, Derrick; Futagi, Yoko; Lawless, René

    2006-01-01

    This study focuses on the relationship between item modeling and evidence-centered design (ECD); it considers how an appropriately generalized item modeling software tool can support systematic identification and exploitation of task-model variables, and then examines the feasibility of this goal, using linear-equation items as a test case. The…

  7. Semi-analytic modeling and simulation of magnetized liner inertial fusion

    NASA Astrophysics Data System (ADS)

    McBride, R. D.; Slutz, S. A.; Hansen, S. B.

    2013-10-01

    Presented is a semi-analytic model of magnetized liner inertial fusion (MagLIF). This model accounts for several key aspects of MagLIF, including: (1) pre-heat of the fuel; (2) pulsed-power-driven liner implosion; (3) liner compressibility with an analytic equation of state, artificial viscosity, and internal magnetic pressure and heating; (4) adiabatic compression and heating of the fuel; (5) radiative losses and fuel opacity; (6) magnetic flux compression with Nernst thermoelectric losses; (7) magnetized electron and ion thermal conduction losses; (8) deuterium-deuterium and deuterium-tritium primary fusion reactions; and (9) magnetized alpha-particle heating. We will first show that this simplified model, with its transparent and accessible physics, can be used to reproduce the general 1D behavior presented throughout the original MagLIF paper. We will then use this model to illustrate the MagLIF parameter space, energetics, and efficiencies, and to show the experimental challenges that we will likely be facing as we begin testing MagLIF using the infrastructure presently available at the Z facility. Finally, we will demonstrate how this scenario could likely change as various facility upgrades are made over the next three to five years and beyond. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  8. Collaborative Web-Enabled GeoAnalytics Applied to OECD Regional Data

    NASA Astrophysics Data System (ADS)

    Jern, Mikael

    Recent advances in web-enabled graphics technologies have the potential to make a dramatic impact on developing collaborative geovisual analytics (GeoAnalytics). In this paper, tools are introduced that help establish progress initiatives at international and sub-national levels, aimed at measuring economic, social and environmental developments through statistical indicators and at engaging both statisticians and the public in such activities. Given the global dimension of such a task, the “dream” of building a repository of progress indicators, where experts and public users can use collaborative GeoAnalytics tools to compare situations for two or more countries, regions or local communities, could be accomplished. While the benefits of GeoAnalytics tools are many, it remains a challenge to adapt these dynamic visual tools to the Internet: for example, dynamic web-enabled animation that enables statisticians to explore temporal, spatial and multivariate demographic data from multiple perspectives, discover interesting relationships, share their incremental discoveries with colleagues, and finally communicate selected relevant knowledge to the public. Such discoveries often emerge through the diverse backgrounds and experiences of domain experts and are precious in a creative analytic reasoning process. In this context, we introduce a demonstrator, “OECD eXplorer”, a customized tool for interactively analyzing and sharing gained insights and discoveries, based on a novel story mechanism that captures, re-uses and shares task-related explorative events.

  9. The Relation between Students' Epistemological Understanding of Computer Models and Their Cognitive Processing on a Modelling Task

    ERIC Educational Resources Information Center

    Sins, Patrick H. M.; Savelsbergh, Elwin R.; van Joolingen, Wouter R.; van Hout-Wolters, Bernadette H. A. M.

    2009-01-01

    While many researchers in science education have argued that students' epistemological understanding of models and of modelling processes would influence their cognitive processing on a modelling task, there has been little direct evidence for such an effect. Therefore, this study aimed to investigate the relation between students' epistemological…

  10. An analytical model for regular respiratory signals derived from the probability density function of Rayleigh distribution.

    PubMed

    Li, Xin; Li, Ye

    2015-01-01

    Regular respiratory signals (RRSs) acquired with physiological sensing systems (e.g., the life-detection radar system) can be used to locate survivors trapped in debris in disaster rescue, or to predict breathing motion to allow beam delivery under free-breathing conditions in external-beam radiotherapy. Among the existing analytical models for RRSs, the harmonic-based random model (HRM) is shown to be the most accurate, which, however, is found to be subject to considerable error if the RRS has a slowly descending end-of-exhale (EOE) phase. This defect of the HRM motivates us to construct a more accurate analytical model for the RRS. In this paper, we derive a new analytical RRS model from the probability density function of the Rayleigh distribution. We evaluate the derived model by using it to fit a real-life RRS in the least-squares sense; the evaluation shows that our model exhibits lower error and fits the slowly descending EOE phases of the real-life RRS better than the HRM.
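
    For reference, the Rayleigh probability density function from which the model is derived is standard (sigma denotes the scale parameter):

    ```latex
    % Rayleigh probability density function, the assumed starting point:
    f(x;\sigma) = \frac{x}{\sigma^{2}}
                  \exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right),
                  \qquad x \ge 0 .
    ```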

  11. One-dimensional Analytical Modelling of Floating Seed Dispersal in Tidal Channels

    NASA Astrophysics Data System (ADS)

    Shi, W.; Purnama, A.; Shao, D.; Cui, B.; Gao, W.

    2017-12-01

    Seed dispersal is a primary factor influencing plant community development, and thus plays a critical role in maintaining wetland ecosystem functioning. However, compared with fluvial seed dispersal of riparian plants, dispersal of saltmarsh plant seeds in tidal channels is much less studied due to its complex behavior, and relevant mathematical modelling is particularly lacking. In this study, we developed a one-dimensional advection-dispersion model to explore the patterns of tidal seed dispersal. An oscillatory tidal current and water depth were assumed to represent the tidal effects. An exponential decay coefficient λ was introduced to account for seed deposition and retention. An analytical solution in integral form was derived using Green's function and further evaluated using numerical integration. The developed model was applied to simulate Spartina densiflora seed dispersal in a tidal channel located at the Mad River Slough in North Humboldt Bay, California, USA, to demonstrate its practical applicability. Model predictions agree satisfactorily with field observations and with simulation results from the Delft3D numerical model. Sensitivity analyses were also conducted to evaluate the effects of varying calibrated parameters on model predictions. The range of seed dispersion as well as the distribution of seed concentration were further analyzed through statistical parameters such as the centroid displacement and variance of the seed cloud, together with seed concentration contours. Implications of the modelling results for tidal marsh restoration and protection, e.g., revegetation through seed addition, were also discussed through scenario analysis. The developed analytical model provides a useful tool for ecological management of tidal marshes.
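
    One plausible schematic of the governing equation, written here for illustration rather than quoted from the paper, combines oscillatory tidal advection with dispersion and the exponential-decay loss term:

    ```latex
    % C: depth-averaged seed concentration; u(t): oscillatory tidal
    % velocity; D: dispersion coefficient; lambda: deposition/retention
    % decay rate (all symbols assumed for this sketch).
    \frac{\partial C}{\partial t} + u(t) \frac{\partial C}{\partial x}
      = D \frac{\partial^{2} C}{\partial x^{2}} - \lambda C,
      \qquad u(t) = U_{0} \sin(\omega t).
    ```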

  12. A Model for Developing Clinical Analytics Capacity: Closing the Loops on Outcomes to Optimize Quality.

    PubMed

    Eggert, Corinne; Moselle, Kenneth; Protti, Denis; Sanders, Dale

    2017-01-01

    Closed Loop Analytics© is receiving growing interest in healthcare as a term referring to information technology, local data and clinical analytics working together to generate evidence for improvement. The Closed Loop Analytics model consists of three loops corresponding to the decision-making levels of an organization and the associated data within each loop - Patients, Protocols, and Populations. The authors propose that each of these levels should utilize the same ecosystem of electronic health record (EHR) and enterprise data warehouse (EDW) enabled data, in a closed-loop fashion, with that data being repackaged and delivered to suit the analytic and decision support needs of each level, in support of better outcomes.

  13. Goal orientation, perceived task outcome and task demands in mathematics tasks: effects on students' attitude in actual task settings.

    PubMed

    Seegers, Gerard; van Putten, Cornelis M; de Brabander, Cornelis J

    2002-09-01

    In earlier studies, it has been found that students' domain-specific cognitions and personal learning goals (goal orientation) influence task-specific appraisals of actual learning tasks. The relations between domain-specific and task-specific variables have been specified in the model of adaptive learning. In this study, additional influences, i.e., perceived task outcome on a former occasion and variations in task demands, were investigated. The purpose of this study was to identify personality and situational variables that mediate students' attitude when confronted with a mathematics task. Students worked on a mathematics task in two successive sessions. Effects of perceived task outcome at the first session on students' attitude at the second session were investigated. In addition, we investigated how differences in task demands influenced students' attitude. Variations in task demands were provoked by different conditions in task instruction. In one condition, students were told that the result on the test would add to their mark on mathematics. This outcome-orienting condition was contrasted with a task-orienting condition where students were told that the results on the test would not be used to give individual grades. Participants were sixth-grade students (N = 345; aged 11-12 years) from 14 primary schools. Multivariate and univariate analyses of (co)variance were applied to the data. Independent variables were goal orientation, task demands, and perceived task outcome, with task-specific variables (estimated competence for the task, task attraction, task relevance, and willingness to invest effort) as the dependent variables. The results showed that previous perceived task outcome had a substantial impact on students' attitude. Additional but smaller effects were found for variation in task demands. Furthermore, effects of previous perceived task outcome and task demands were related to goal orientation. The resulting pattern confirmed that, in general

  14. Applicability of a 1D Analytical Model for Pulse Thermography of Laterally Heterogeneous Semitransparent Materials

    NASA Astrophysics Data System (ADS)

    Bernegger, R.; Altenburg, S. J.; Röllig, M.; Maierhofer, C.

    2018-03-01

    Pulse thermography (PT) has proven to be a valuable non-destructive testing method for identifying and quantifying defects in fiber-reinforced polymers. To perform a quantitative defect characterization, the heat diffusion within the material as well as the material parameters must be known. The heterogeneous material structure of glass fiber-reinforced polymers (GFRP), as well as the semitransparency of the material to the optical excitation sources used in PT, remains challenging. For homogeneous semitransparent materials, 1D analytical models describing the temperature distribution are available. Here, we present an analytical approach to model PT for laterally inhomogeneous semitransparent materials. We show the validity of the model by considering different configurations of the optical heating source, the IR camera, and the differently coated GFRP sample. The model accounts for the lateral inhomogeneity of the semitransparency through an additional absorption coefficient, and it includes further effects such as thermal losses at the sample surfaces, multilayer systems with thermal contact resistance, and a finite duration of the heating pulse. With an analytical model of sufficient complexity, similar material-parameter values were found for all six investigated configurations by numerical fitting.

  15. An Analytical Framework for Evaluating E-Commerce Business Models and Strategies.

    ERIC Educational Resources Information Center

    Lee, Chung-Shing

    2001-01-01

    Considers electronic commerce as a paradigm shift, or a disruptive innovation, and presents an analytical framework based on the theories of transaction costs and switching costs. Topics include business transformation process; scale effect; scope effect; new sources of revenue; and e-commerce value creation model and strategy. (LRW)

  16. Evaluation of gamma dose effect on PIN photodiode using analytical model

    NASA Astrophysics Data System (ADS)

    Jafari, H.; Feghhi, S. A. H.; Boorboor, S.

    2018-03-01

    PIN silicon photodiodes are widely used in applications in radiation environments, such as space missions, medical imaging and non-destructive testing. Radiation-induced damage in these devices degrades the photodiode parameters. In this work, we have used a new approach to evaluate gamma dose effects on a commercial PIN photodiode (BPX65) based on an analytical model. In this approach, the NIEL parameter was calculated for gamma rays from a 60Co source using GEANT4. The radiation damage mechanisms were considered by numerically solving the Poisson and continuity equations with the appropriate boundary conditions, parameters and physical models. Defects caused by radiation in silicon were formulated in terms of the damage coefficient for the minority carriers' lifetime. The gamma-induced degradation parameters of the silicon PIN photodiode were analyzed in detail, and the results were compared with experimental measurements as well as with results from the ATLAS semiconductor simulator to verify and parameterize the analytical model calculations. The results showed reasonable agreement for the BPX65 silicon photodiode irradiated by a 60Co gamma source at total doses up to 5 kGy under different reverse voltages.
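
    A common way to parameterize lifetime damage of this kind, given here for orientation (the paper's exact formulation may differ), is:

    ```latex
    % Minority-carrier lifetime tau after irradiation: tau_0 is the
    % pre-irradiation lifetime, Phi the gamma dose (or fluence), and
    % K_tau the damage coefficient.
    \frac{1}{\tau} = \frac{1}{\tau_{0}} + K_{\tau} \Phi .
    ```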

  17. Selecting a Response in Task Switching: Testing a Model of Compound Cue Retrieval

    ERIC Educational Resources Information Center

    Schneider, Darryl W.; Logan, Gordon D.

    2009-01-01

    How can a task-appropriate response be selected for an ambiguous target stimulus in task-switching situations? One answer is to use compound cue retrieval, whereby stimuli serve as joint retrieval cues to select a response from long-term memory. In the present study, the authors tested how well a model of compound cue retrieval could account for a…

  18. Bending elasticity of macromolecules: analytic predictions from the wormlike chain model.

    PubMed

    Polley, Anirban; Samuel, Joseph; Sinha, Supurna

    2013-01-01

    We present a study of the bend angle distribution of semiflexible polymers of short and intermediate lengths within the wormlike chain model. This enables us to calculate the elastic response of a stiff molecule to a bending moment. Our results go beyond the Hookean regime and explore the nonlinear elastic behavior of a single molecule. We present analytical formulas for the bend angle distribution and for the moment-angle relation. Our analytical study is compared against numerical Monte Carlo simulations. The functional forms derived here can be applied to fluorescence microscopic studies on actin and DNA. Our results are relevant to recent studies of "kinks" and cyclization in short and intermediate length DNA strands.
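
    The starting point of such calculations is the standard wormlike-chain bending energy, reproduced here for reference:

    ```latex
    % A: bending stiffness; \hat{t}(s): unit tangent along the contour of
    % length L. The persistence length is L_P = A / (k_B T).
    E[\hat{t}] = \frac{A}{2} \int_{0}^{L}
                 \left| \frac{d\hat{t}}{ds} \right|^{2} ds .
    ```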

  19. Analytical Tools to Improve Optimization Procedures for Lateral Flow Assays

    PubMed Central

    Hsieh, Helen V.; Dantzler, Jeffrey L.; Weigl, Bernhard H.

    2017-01-01

    Immunochromatographic or lateral flow assays (LFAs) are inexpensive, easy to use, point-of-care medical diagnostic tests that are found in arenas ranging from a doctor’s office in Manhattan to a rural medical clinic in low resource settings. The simplicity in the LFA itself belies the complex task of optimization required to make the test sensitive, rapid and easy to use. Currently, the manufacturers develop LFAs by empirical optimization of material components (e.g., analytical membranes, conjugate pads and sample pads), biological reagents (e.g., antibodies, blocking reagents and buffers) and the design of delivery geometry. In this paper, we will review conventional optimization and then focus on the latter and outline analytical tools, such as dynamic light scattering and optical biosensors, as well as methods, such as microfluidic flow design and mechanistic models. We are applying these tools to find non-obvious optima of lateral flow assays for improved sensitivity, specificity and manufacturing robustness. PMID:28555034

  20. On the Decay of Correlations in Non-Analytic SO(n)-Symmetric Models

    NASA Astrophysics Data System (ADS)

    Naddaf, Ali

    We extend the method of complex translations, originally employed by McBryan-Spencer [2], to obtain a decay rate for the two-point function in two-dimensional SO(n)-symmetric models with non-analytic Hamiltonians.

  1. Correlating locations in ipsilateral breast tomosynthesis views using an analytical hemispherical compression model

    NASA Astrophysics Data System (ADS)

    van Schie, Guido; Tanner, Christine; Snoeren, Peter; Samulski, Maurice; Leifland, Karin; Wallis, Matthew G.; Karssemeijer, Nico

    2011-08-01

    To improve cancer detection in mammography, breast examinations usually consist of two views per breast. In order to combine information from both views, corresponding regions in the views need to be matched. In 3D digital breast tomosynthesis (DBT), this may be a difficult and time-consuming task for radiologists, because many slices have to be inspected individually. For multiview computer-aided detection (CAD) systems, matching corresponding regions is an essential step that needs to be automated. In this study, we developed an automatic method to quickly estimate corresponding locations in ipsilateral tomosynthesis views by applying a spatial transformation. First we match a model of a compressed breast to the tomosynthesis view containing a point of interest. Then we estimate the location of the corresponding point in the ipsilateral view by assuming that this model was decompressed, rotated, and compressed again. In this study, we use a relatively simple, elastically deformable sphere model to obtain an analytical solution for the transformation in a given DBT case. We investigate three different methods to match the compression model to the data, using automatic segmentation of the pectoral muscle, breast tissue, and nipple. For validation, we annotated 208 landmarks in both views for a total of 146 imaged breasts of 109 different patients and applied our method to each location. The best results are obtained by using the centre of gravity of the breast to define the central axis of the model, around which the breast is assumed to rotate between views. Results show a median 3D distance between the actual and estimated locations of 14.6 mm, a good starting point for a registration method or a feature-based local search method to link suspicious regions in a multiview CAD system. Approximately half of the estimated locations are at most one slice away from the actual location, which makes the method useful as a mammographic workstation tool for

  2. A practical model of thin disk regenerative amplifier based on analytical expression of ASE lifetime

    NASA Astrophysics Data System (ADS)

    Zhou, Huang; Chyla, Michal; Nagisetty, Siva Sankar; Chen, Liyuan; Endo, Akira; Smrz, Martin; Mocek, Tomas

    2017-12-01

    In this paper, a practical model of a thin disk regenerative amplifier is developed based on an analytical approach in which Drew A. Copeland [1] evaluated the loss rate of the upper laser level due to amplified spontaneous emission (ASE) and derived an analytical expression for the effective lifetime of the upper laser level, taking the Lorentzian stimulated-emission line shape and total internal reflection into account. By adopting this analytical expression for the effective lifetime in the rate equations, we have developed a less numerically intensive model for predicting and analyzing the performance of a thin disk regenerative amplifier. Thanks to the model, optimized combinations of various parameters can be obtained prior to experiments to avoid saturation, period-doubling bifurcation or first-pulse suppression. The effective lifetime due to ASE is also analyzed against various parameters. The simulated results fit the experimental data well. By fitting more experimental results with the numerical model, we can improve its parameters, such as the reflection factor that weights boundary reflection in the ASE contribution. This practical model will be used to explore the ASE-imposed scaling limits of the thin disk regenerative amplifier being developed at the HiLASE Centre.
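
    In generic form (our notation, not necessarily the authors'), a population rate equation with an ASE-shortened effective lifetime reads:

    ```latex
    % N: upper-level population density; R_p: pump rate; tau_eff(N): the
    % ASE-dependent effective lifetime; sigma: stimulated-emission
    % cross-section; I: circulating intensity; h*nu: photon energy.
    \frac{dN}{dt} = R_{p} - \frac{N}{\tau_{\mathrm{eff}}(N)}
                    - \frac{\sigma I}{h\nu} N .
    ```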

  3. Semi-analytical solutions of the Schnakenberg model of a reaction-diffusion cell with feedback

    NASA Astrophysics Data System (ADS)

    Al Noufaey, K. S.

    2018-06-01

    This paper considers the application of a semi-analytical method to the Schnakenberg model of a reaction-diffusion cell. The semi-analytical method is based on the Galerkin method which approximates the original governing partial differential equations as a system of ordinary differential equations. Steady-state curves, bifurcation diagrams and the region of parameter space in which Hopf bifurcations occur are presented for semi-analytical solutions and the numerical solution. The effect of feedback control, via altering various concentrations in the boundary reservoirs in response to concentrations in the cell centre, is examined. It is shown that increasing the magnitude of feedback leads to destabilization of the system, whereas decreasing this parameter to negative values of large magnitude stabilizes the system. The semi-analytical solutions agree well with numerical solutions of the governing equations.
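
    As a schematic of the method (notation assumed here, not quoted from the paper), the Schnakenberg kinetics and a one-term Galerkin ansatz might be written as:

    ```latex
    % Schnakenberg reaction-diffusion system (a, b: reservoir/source
    % parameters, D: relative diffusivity) on a 1-D cell x in [0, 1]:
    u_t = a - u + u^{2}v + u_{xx}, \qquad
    v_t = b - u^{2}v + D\,v_{xx},
    \\[4pt]
    % one-term trial functions satisfying the boundary values u_b, v_b:
    u \simeq u_{b} + u_{1}(t)\sin(\pi x), \qquad
    v \simeq v_{b} + v_{1}(t)\sin(\pi x).
    ```

    Substituting the trial functions and projecting onto sin(πx) then yields a pair of ODEs for u₁ and v₁, whose steady states and Hopf points can be traced with standard bifurcation tools.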

  4. Analytical model for a laminated shape memory alloy beam with piezoelectric layers

    NASA Astrophysics Data System (ADS)

    Viet, N. V.; Zaki, W.; Umer, R.

    2018-03-01

    We propose an analytical model for a laminated beam consisting of a superelastic shape memory alloy (SMA) core layer bonded to two piezoelectric layers on its top and bottom surfaces. The model accounts for forward and reverse phase transformation between austenite and martensite during a full isothermal loading-unloading cycle starting from full austenite in the SMA layer. In particular, the laminated composite beam has a rectangular cross section and is fixed at one end, while the other end is subjected to a concentrated transverse force acting at the tip. The moment-curvature relation is analytically derived. The generated electric displacement output from the piezoelectric layers is then determined using linear piezoelectric theory. The results are compared to 3D simulations using finite element analysis (FEA), and the comparison shows generally good agreement in electric displacement throughout the loading cycle.

  5. Towards an Analytical Age-Dependent Model of Contrast Sensitivity Functions for an Ageing Society

    PubMed Central

    Joulan, Karine; Brémond, Roland

    2015-01-01

    The Contrast Sensitivity Function (CSF) describes how the visibility of a grating depends on the stimulus spatial frequency. Many published CSF data have demonstrated that contrast sensitivity declines with age. However, an age-dependent analytical model of the CSF is not available to date. In this paper, we propose such an analytical CSF model based on visual mechanisms, taking into account the age factor. To this end, we have extended an existing model from Barten (1999), taking into account the dependencies of this model's optical and physiological parameters on age. Age-dependent models of the cones and ganglion cells densities, the optical and neural MTF, and optical and neural noise are proposed, based on published data. The proposed age-dependent CSF is finally tested against available experimental data, with fair results. Such an age-dependent model may be beneficial when designing real-time age-dependent image coding and display applications. PMID:26078994

  6. A Bayesian Multi-Level Factor Analytic Model of Consumer Price Sensitivities across Categories

    ERIC Educational Resources Information Center

    Duvvuri, Sri Devi; Gruca, Thomas S.

    2010-01-01

    Identifying price sensitive consumers is an important problem in marketing. We develop a Bayesian multi-level factor analytic model of the covariation among household-level price sensitivities across product categories that are substitutes. Based on a multivariate probit model of category incidence, this framework also allows the researcher to…

  7. Study of monopropellants for electrothermal thrusters: Analytical task summary report

    NASA Technical Reports Server (NTRS)

    Kuenzly, J. D.; Grabbi, R.

    1973-01-01

    The feasibility of operating small-thrust-level electrothermal thrusters with monopropellants other than MIL-grade hydrazine is determined. The work scope includes an analytical study, design and fabrication of demonstration thrusters, and an evaluation test program in which monopropellants with freezing points lower than that of MIL-grade hydrazine are evaluated and characterized to determine their applicability to electrothermal thrusters for spacecraft attitude control. Results of propellant chemistry studies and performance analyses indicated that the most promising candidate monopropellants are monomethylhydrazine, Aerozine-50, a 77% hydrazine-23% hydrazine azide blend, and the TRW-formulated mixed hydrazine monopropellant (MHM) consisting of 35% hydrazine, 50% monomethylhydrazine and 15% ammonia.

  8. Analytical model for three-dimensional Mercedes-Benz water molecules.

    PubMed

    Urbic, T

    2012-06-01

    We developed a statistical model which describes the thermal and volumetric properties of water-like molecules. A molecule is presented as a three-dimensional sphere with four hydrogen-bonding arms. Each water molecule interacts with its neighboring waters through a van der Waals interaction and an orientation-dependent hydrogen-bonding interaction. This model, which is largely analytical, is a variant of a model developed before for a two-dimensional Mercedes-Benz model of water. We explored properties such as molar volume, density, heat capacity, thermal expansion coefficient, and isothermal compressibility as a function of temperature and pressure. We found that the volumetric and thermal properties follow the same trends with temperature as in real water and are in good general agreement with Monte Carlo simulations, including the density anomaly, the minimum in the isothermal compressibility, and the decreased number of hydrogen bonds upon increasing the temperature.

  9. Analytical model for three-dimensional Mercedes-Benz water molecules

    NASA Astrophysics Data System (ADS)

    Urbic, T.

    2012-06-01

    We developed a statistical model which describes the thermal and volumetric properties of water-like molecules. A molecule is presented as a three-dimensional sphere with four hydrogen-bonding arms. Each water molecule interacts with its neighboring waters through a van der Waals interaction and an orientation-dependent hydrogen-bonding interaction. This model, which is largely analytical, is a variant of a model developed before for a two-dimensional Mercedes-Benz model of water. We explored properties such as molar volume, density, heat capacity, thermal expansion coefficient, and isothermal compressibility as a function of temperature and pressure. We found that the volumetric and thermal properties follow the same trends with temperature as in real water and are in good general agreement with Monte Carlo simulations, including the density anomaly, the minimum in the isothermal compressibility, and the decreased number of hydrogen bonds upon increasing the temperature.

  10. Analytical model for three-dimensional Mercedes-Benz water molecules

    PubMed Central

    Urbic, T.

    2013-01-01

    We developed a statistical model which describes the thermal and volumetric properties of water-like molecules. A molecule is presented as a three-dimensional sphere with four hydrogen-bonding arms. Each water molecule interacts with its neighboring waters through a van der Waals interaction and an orientation-dependent hydrogen-bonding interaction. This model, which is largely analytical, is a variant of a model developed before for a two-dimensional Mercedes-Benz model of water. We explored properties such as molar volume, density, heat capacity, thermal expansion coefficient, and isothermal compressibility as a function of temperature and pressure. We found that the volumetric and thermal properties follow the same trends with temperature as in real water and are in good general agreement with Monte Carlo simulations, including the density anomaly, the minimum in the isothermal compressibility, and the decreased number of hydrogen bonds upon increasing the temperature. PMID:23005100

  11. Predictive classification of self-paced upper-limb analytical movements with EEG.

    PubMed

    Ibáñez, Jaime; Serrano, J I; del Castillo, M D; Minguez, J; Pons, J L

    2015-11-01

    The extent to which electroencephalographic activity allows the characterization of upper-limb movements is an open question. This paper describes the design and validation of a classifier of upper-limb analytical movements based on electroencephalographic activity extracted from intervals preceding self-initiated movement tasks. Features selected for the classification are subject-specific and associated with the movement tasks. Further tests are performed to reject the hypothesis that information other than the task-related cortical activity is being used by the classifiers. Six healthy subjects were measured performing self-initiated upper-limb analytical movements. A Bayesian classifier was used to classify among seven different kinds of movements. Features considered covered the alpha and beta bands. A genetic algorithm was used to optimally select a subset of features for the classification. An average accuracy of 62.9 ± 7.5% was reached, which was above the baseline level observed with the proposed methodology (30.2 ± 4.3%). The study shows that electroencephalographic activity carries information about the type of analytical movement performed with the upper limb and that this information can be decoded before the movement begins. In neurorehabilitation environments, this information could be used for monitoring and assisting purposes.
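
    The classification stage can be illustrated with a compact stand-in: a Gaussian naive Bayes classifier on band-power features, with sklearn's SequentialFeatureSelector replacing the paper's genetic-algorithm feature selection. All array shapes, names, and data here are assumptions for this sketch.

    ```python
    """Stand-in for the Bayesian classification stage described above;
    forward feature selection crudely replaces the paper's GA search."""
    import numpy as np
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 40))    # trials x (channel, band) power features
    y = rng.integers(0, 7, size=300)  # seven analytical movement classes

    # forward selection of 10 features, a simple stand-in for the GA search
    selector = SequentialFeatureSelector(
        GaussianNB(), n_features_to_select=10, direction="forward", cv=5)
    X_sel = selector.fit_transform(X, y)

    acc = cross_val_score(GaussianNB(), X_sel, y, cv=5).mean()
    print(f"cross-validated accuracy: {acc:.2f}")  # ~1/7 for this random data
    ```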

  12. Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop.

    PubMed

    Legg, Philip A; Chung, David H S; Parry, Matthew L; Bown, Rhodri; Jones, Mark W; Griffiths, Iwan W; Chen, Min

    2013-12-01

    Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical since videos may not be semantically annotated sufficiently, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidate search results to support rapid interaction for active learning while minimizing the need to watch videos, and visualizing aggregated information of the search results. We demonstrate the system for searching spatiotemporal attributes in sports video to identify key instances of team and player performance.

  13. Analytic Modeling of Pressurization and Cryogenic Propellant Conditions for Lunar Landing Vehicle

    NASA Technical Reports Server (NTRS)

    Corpening, Jeremy

    2010-01-01

    This slide presentation reviews the development, validation, and application of the model to the Lunar Landing Vehicle. The model, the Computational Propellant and Pressurization Program -- One Dimensional (CPPPO), is used here to model cryogenic propellant conditions for the Altair lunar lander. CPPPO was validated via comparison to an existing analytic model (i.e., ROCETS), a flight experiment, and ground experiments. The model was used to perform a parametric analysis of pressurant conditions for the Lunar Landing Vehicle and to examine the results of unequal tank pressurization and draining for multiple tank designs.

  14. Spatial memory tasks in rodents: what do they model?

    PubMed

    Morellini, Fabio

    2013-10-01

    The analysis of spatial learning and memory in rodents is commonly used to investigate the mechanisms underlying certain forms of human cognition and to model their dysfunction in neuropsychiatric and neurodegenerative diseases. Proper interpretation of rodent behavior in terms of spatial memory and as a model of human cognitive functions is only possible if various navigation strategies and factors controlling the performance of the animal in a spatial task are taken into consideration. The aim of this review is to describe the experimental approaches that are being used for the study of spatial memory in rats and mice and the way that they can be interpreted in terms of general memory functions. After an introduction to the classification of memory into various categories and respective underlying neuroanatomical substrates, I explain the concept of spatial memory and its measurement in rats and mice by analysis of their navigation strategies. Subsequently, I describe the most common paradigms for spatial memory assessment with specific focus on methodological issues relevant for the correct interpretation of the results in terms of cognitive function. Finally, I present recent advances in the use of spatial memory tasks to investigate episodic-like memory in mice.

  15. Factors Affecting Higher Order Thinking Skills of Students: A Meta-Analytic Structural Equation Modeling Study

    ERIC Educational Resources Information Center

    Budsankom, Prayoonsri; Sawangboon, Tatsirin; Damrongpanit, Suntorapot; Chuensirimongkol, Jariya

    2015-01-01

    The purpose of the research is to develop and identify the validity of factors affecting higher order thinking skills (HOTS) of students. The thinking skills can be divided into three types: analytical, critical, and creative thinking. This analysis is done by applying the meta-analytic structural equation modeling (MASEM) based on a database of…

  16. A dual-loop model of the human controller in single-axis tracking tasks

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1977-01-01

    A dual loop model of the human controller in single axis compensatory tracking tasks is introduced. This model possesses an inner-loop closure which involves feeding back that portion of the controlled element output rate which is due to control activity. The sensory inputs to the human controller are assumed to be system error and control force. The former is assumed to be sensed via visual, aural, or tactile displays while the latter is assumed to be sensed in kinesthetic fashion. A nonlinear form of the model is briefly discussed. This model is then linearized and parameterized. A set of general adaptive characteristics for the parameterized model is hypothesized. These characteristics describe the manner in which the parameters in the linearized model will vary with such things as display quality. It is demonstrated that the parameterized model can produce controller describing functions which closely approximate those measured in laboratory tracking tasks for a wide variety of controlled elements.

  17. Analytical Modeling of a Double-Sided Flux Concentrating E-Core Transverse Flux Machine with Pole Windings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muljadi, Eduard; Hasan, Iftekhar; Husain, Tausif

    In this paper, a nonlinear analytical model based on the Magnetic Equivalent Circuit (MEC) method is developed for a double-sided E-Core Transverse Flux Machine (TFM). The proposed TFM has a cylindrical rotor sandwiched between E-core stators on both sides. Ferrite magnets are used in the rotor with a flux-concentrating design to attain high airgap flux density, better magnet utilization, and higher torque density. The MEC model was developed using a series-parallel combination of flux tubes to estimate the reluctance network for different parts of the machine, including air gaps, permanent magnets, and the stator and rotor ferromagnetic materials, in a two-dimensional (2-D) frame. An iterative Gauss-Seidel method is integrated with the MEC model to capture the effects of magnetic saturation. A single-phase, 1 kW, 400 rpm E-Core TFM is analytically modeled, and its results for flux linkage, no-load EMF, and generated torque are verified with Finite Element Analysis (FEA). The analytical model significantly reduces the computation time while estimating results with less than 10 percent error.
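
    The saturation-handling idea can be sketched in a few lines: a tiny series ladder of flux tubes is solved by Gauss-Seidel sweeps, with permeances re-evaluated from the latest solution so that saturation feeds back into the network. The network topology, toy B-H curve, and dimensions are invented for illustration; a real TFM model has far more flux tubes.

    ```python
    """Minimal sketch of a nonlinear MEC solve with Gauss-Seidel iteration,
    in the spirit of the model described above (all values invented)."""
    import numpy as np

    MU0 = 4e-7 * np.pi

    def mu_r(H):
        """Toy B-H characteristic: relative permeability collapsing as the
        field intensity H (A/m) drives the core toward saturation."""
        return 1.0 + 5000.0 / (1.0 + (np.abs(H) / 1e4) ** 4)

    def solve_mec(F=500.0, A=1e-4, l=5e-3, tol=1e-9, max_iter=200):
        """Three free nodes in a series ladder between an MMF source F and
        ground; permeances are updated from the latest solution each sweep."""
        u = np.zeros(3)                           # nodal magnetic potentials
        for _ in range(max_iter):
            pots = np.concatenate(([F], u, [0.0]))
            H = np.abs(np.diff(pots)) / l         # field intensity per tube
            P = mu_r(H) * MU0 * A / l             # nonlinear permeances
            u_old = u.copy()
            for i in range(3):                    # one Gauss-Seidel sweep
                left = F if i == 0 else u[i - 1]
                right = 0.0 if i == 2 else u[i + 1]
                u[i] = (P[i] * left + P[i + 1] * right) / (P[i] + P[i + 1])
            if np.max(np.abs(u - u_old)) < tol:
                break
        flux = P * np.abs(np.diff(np.concatenate(([F], u, [0.0]))))
        return u, flux                            # potentials (A), fluxes (Wb)

    u, phi = solve_mec()
    print(u, phi)
    ```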

  18. An evidence accumulation model for conflict detection performance in a simulated air traffic control task.

    PubMed

    Neal, Andrew; Kwantes, Peter J

    2009-04-01

    The aim of this article is to develop a formal model of conflict detection performance. Our model assumes that participants iteratively sample evidence regarding the state of the world and accumulate it over time. A decision is made when the evidence reaches a threshold that changes over time in response to the increasing urgency of the task. Two experiments were conducted to examine the effects of conflict geometry and timing on response proportions and response time. The model is able to predict the observed pattern of response times, including a nonmonotonic relationship between distance at point of closest approach and response time, as well as effects of angle of approach and relative velocity. The results demonstrate that evidence accumulation models provide a good account of performance on a conflict detection task. Evidence accumulation models are a form of dynamic signal detection theory, allowing for the analysis of response times as well as response proportions, and can be used for simulating human performance on dynamic decision tasks.
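
    A minimal sketch of the evidence-accumulation idea described in the abstract: evidence is sampled iteratively and a decision is made when it crosses a threshold that collapses as task urgency grows. All parameter values are hypothetical and not taken from the paper.

      import random

      def detect_conflict(drift=0.02, noise=0.3, threshold=5.0,
                          collapse=0.005, floor=0.5, max_steps=2000):
          """Accumulate noisy evidence for 'conflict'; respond once the total
          crosses a bound that collapses over time with task urgency."""
          evidence = 0.0
          for t in range(1, max_steps + 1):
              evidence += drift + random.gauss(0.0, noise)   # iterative sampling
              bound = max(threshold - collapse * t, floor)   # urgency-driven collapse
              if evidence >= bound:
                  return "conflict", t
              if evidence <= -bound:
                  return "no conflict", t
          return "no response", max_steps

      print(detect_conflict())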

  19. Performance Enhancements Under Dual-task Conditions

    NASA Technical Reports Server (NTRS)

    Kramer, A. F.; Wickens, C. D.; Donchin, E.

    1984-01-01

    Research on dual-task performance has been concerned with delineating the antecedent conditions which lead to dual-task decrements. Capacity models of attention, which propose that a hypothetical resource structure underlies performance, have been employed as predictive devices. These models predict that tasks which require different processing resources can be more successfully time-shared than tasks which require common resources. The conditions under which such dual-task integrality can be fostered were assessed in a study in which three factors likely to influence the integrality between tasks were manipulated: inter-task redundancy, the physical proximity of tasks, and the task-relevant objects. Twelve subjects participated in three experimental sessions in which they performed both single and dual tasks. The primary task was a pursuit step tracking task. The secondary tasks required discrimination between different intensities or different spatial positions of a stimulus. The results are discussed in terms of a model of dual-task integrality.

  20. Decision analytic models for Alzheimer's disease: state of the art and future directions.

    PubMed

    Cohen, Joshua T; Neumann, Peter J

    2008-05-01

    Decision analytic policy models for Alzheimer's disease (AD) enable researchers and policy makers to investigate questions about the costs and benefits of a wide range of existing and potential screening, testing, and treatment strategies. Such models permit analysts to compare existing alternatives, explore hypothetical scenarios, and test the strength of underlying assumptions in an explicit, quantitative, and systematic way. Decision analytic models can best be viewed as complementing clinical trials both by filling knowledge gaps not readily addressed by empirical research and by extrapolating beyond the surrogate markers recorded in a trial. We identified and critiqued 13 distinct AD decision analytic policy models published since 1997. Although existing models provide useful insights, they also have a variety of limitations. (1) They generally characterize disease progression in terms of cognitive function and do not account for other distinguishing features, such as behavioral symptoms, functional performance, and the emotional well-being of AD patients and caregivers. (2) Many describe disease progression in terms of a limited number of discrete states, thus constraining the level of detail that can be used to characterize both changes in patient status and the relationships between disease progression and other factors, such as residential status, that influence outcomes of interest. (3) They have focused almost exclusively on evaluating drug treatments, thus neglecting other disease management strategies and combinations of pharmacologic and nonpharmacologic interventions. Future AD models should facilitate more realistic and compelling evaluations of various interventions to address the disease. An improved model will allow decision makers to better characterize the disease, to better assess the costs and benefits of a wide range of potential interventions, and to better evaluate the incremental costs and benefits of specific interventions used in

  1. Identifying optimum performance trade-offs using a cognitively bounded rational analysis model of discretionary task interleaving.

    PubMed

    Janssen, Christian P; Brumby, Duncan P; Dowell, John; Chater, Nick; Howes, Andrew

    2011-01-01

    We report the results of a dual-task study in which participants performed a tracking and typing task under various experimental conditions. An objective payoff function was used to provide explicit feedback on how participants should trade off performance between the tasks. Results show that participants' dual-task interleaving strategy was sensitive to changes in the difficulty of the tracking task and resulted in differences in overall task performance. To test the hypothesis that people select strategies that maximize payoff, a Cognitively Bounded Rational Analysis model was developed. This analysis evaluated a variety of dual-task interleaving strategies to identify the optimal strategy for maximizing payoff in each condition. The model predicts that the region of optimum performance is different between experimental conditions. The correspondence between human data and the prediction of the optimal strategy is found to be remarkably high across a number of performance measures. This suggests that participants were honing their behavior to maximize payoff. Limitations are discussed. Copyright © 2011 Cognitive Science Society, Inc.

  2. ClimateSpark: An in-memory distributed computing framework for big climate data analytics

    NASA Astrophysics Data System (ADS)

    Hu, Fei; Yang, Chaowei; Schnase, John L.; Duffy, Daniel Q.; Xu, Mengchao; Bowen, Michael K.; Lee, Tsengdar; Song, Weiwei

    2018-06-01

    The unprecedented growth of climate data creates new opportunities for climate studies, yet it poses a grand challenge to climatologists, who must manage and analyze these data efficiently. The complexity of climate data content and analytical algorithms increases the difficulty of implementing algorithms on high performance computing systems. This paper proposes an in-memory, distributed computing framework, ClimateSpark, to facilitate complex big data analytics and time-consuming computational tasks. A chunked data structure improves parallel I/O efficiency, while a spatiotemporal index is built for the chunks to avoid unnecessary data reading and preprocessing. An integrated, multi-dimensional, array-based data model (ClimateRDD) and ETL operations are developed to address big climate data variety by integrating the processing components of the climate data lifecycle. ClimateSpark utilizes Spark SQL and Apache Zeppelin to provide a web portal that facilitates the interaction among climatologists, climate data, analytic operations and computing resources (e.g., using SQL queries and Scala/Python notebooks). Experimental results show that ClimateSpark conducts different spatiotemporal data queries/analytics with high efficiency and data locality. ClimateSpark is easily adaptable to other big multi-dimensional, array-based datasets in various geoscience domains.
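
    ClimateSpark's own API is not given in the abstract. The following generic PySpark sketch (the table path and column names are hypothetical) only illustrates the kind of chunk-filtered spatiotemporal query and aggregation the framework is described as supporting:

      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F

      spark = SparkSession.builder.appName("climate-query").getOrCreate()

      # Hypothetical chunked climate table with (time, lat, lon, temperature).
      df = spark.read.parquet("/data/climate_chunks/")

      monthly = (df.filter(F.col("time").between("2000-01-01", "2000-12-31"))
                   .filter(F.col("lat").between(30, 60) &
                           F.col("lon").between(-130, -60))
                   .groupBy(F.date_trunc("month", "time").alias("month"))
                   .agg(F.avg("temperature").alias("mean_t")))
      monthly.show()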

  3. Developing Healthcare Data Analytics APPs with Open Data Science Tools.

    PubMed

    Hao, Bibo; Sun, Wen; Yu, Yiqin; Xie, Guotong

    2017-01-01

    Recent advances in big data analytics provide more flexible, efficient, and open tools for researchers to gain insight from healthcare data. However, many of these tools require researchers to develop programs in languages such as Python or R, a skill set that many researchers in the healthcare data analytics area have not mastered. To make data science more approachable, we explored existing tools and developed a practice that helps data scientists convert existing analytics pipelines into user-friendly analytics APPs with rich interactions and real-time analysis features. With this practice, data scientists can develop customized analytics pipelines as APPs in Jupyter Notebook and disseminate them to other researchers easily, and researchers can use the shared notebooks to perform analysis tasks or reproduce research results much more easily.
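
    As a hedged sketch of the notebook-to-app practice described (the pipeline, the load_cohort helper, and the column names are hypothetical placeholders, not the authors' code), a Jupyter cell can expose a pipeline step through an interactive widget:

      # In a Jupyter Notebook cell: wrap an existing pipeline step in a widget
      # so non-programmers can re-run it interactively.
      import pandas as pd
      from ipywidgets import interact

      def load_cohort():
          # Stand-in for a real data-loading step of an analytics pipeline.
          return pd.DataFrame({"age": [34, 51, 62, 45],
                               "risk": [0.2, 0.6, 0.8, 0.4]})

      @interact(min_age=(18, 90, 1))
      def high_risk_patients(min_age=50):
          df = load_cohort()
          return df[df["age"] >= min_age].sort_values("risk", ascending=False)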

  4. Relating Adler's Life Tasks to Schutz's Interpersonal Model and the FIRO-B.

    ERIC Educational Resources Information Center

    Prendergast, Kathleen; Stone, Mark

    This paper integrates the interpersonal model of Schutz (1966) and Schutz's (1978) instrument for evaluating interpersonal relationships, FIRO-B (Fundamental Interpersonal Relationship Orientation-Behavior), with Adler's life tasks and typology. The paper begins with a description of Schutz's Interpersonal model in which Schutz, like Adler, views…

  5. Changes in Visual/Spatial and Analytic Strategy Use in Organic Chemistry with the Development of Expertise

    ERIC Educational Resources Information Center

    Vlacholia, Maria; Vosniadou, Stella; Roussos, Petros; Salta, Katerina; Kazi, Smaragda; Sigalas, Michael; Tzougraki, Chryssa

    2017-01-01

    We present two studies that investigated the adoption of visual/spatial and analytic strategies by individuals at different levels of expertise in the area of organic chemistry, using the Visual Analytic Chemistry Task (VACT). The VACT allows the direct detection of analytic strategy use without drawing inferences about underlying mental…

  6. Two dimensional analytical model for a reconfigurable field effect transistor

    NASA Astrophysics Data System (ADS)

    Ranjith, R.; Jayachandran, Remya; Suja, K. J.; Komaragiri, Rama S.

    2018-02-01

    This paper presents two-dimensional potential and current models for a reconfigurable field effect transistor (RFET). Two potential models, which describe the subthreshold and above-threshold channel potentials, are developed by solving the two-dimensional (2D) Poisson's equation. In the first potential model, the 2D Poisson's equation is solved by considering constant/zero charge density in the channel region of the device to obtain the subthreshold potential characteristics. In the second model, the accumulation charge density is considered to obtain the above-threshold potential characteristics of the device. The proposed models are applicable to devices with a lightly doped or intrinsic channel. In deriving the mathematical model, the whole body area is divided into two regions: a gated region and an un-gated region. The analytical models are compared with technology computer-aided design (TCAD) simulation results and are in complete agreement for different lengths of the gated regions as well as at various supply voltage levels.

  7. Cost and Schedule Analytical Techniques Development

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This Final Report summarizes the activities performed by Science Applications International Corporation (SAIC) under contract NAS 8-40431 "Cost and Schedule Analytical Techniques Development Contract" (CSATD) during Option Year 3 (December 1, 1997 through November 30, 1998). This Final Report is in compliance with Paragraph 5 of Section F of the contract. This CSATD contract provides technical products and deliverables in the form of parametric models, databases, methodologies, studies, and analyses to the NASA Marshall Space Flight Center's (MSFC) Engineering Cost Office (PP03) and the Program Plans and Requirements Office (PP02) and other user organizations. Detailed Monthly Reports were submitted to MSFC in accordance with the contract's Statement of Work, Section IV "Reporting and Documentation". These reports spelled out each month's specific work performed, deliverables submitted, major meetings conducted, and other pertinent information. Therefore, this Final Report will summarize these activities at a higher level. During this contract Option Year, SAIC expended 25,745 hours in the performance of tasks called out in the Statement of Work. This represents approximately 14 full-time EPs. Included are the Huntsville-based team, plus SAIC specialists in San Diego, Ames Research Center, Tampa, and Colorado Springs performing specific tasks for which they are uniquely qualified.

  8. Quantitative evaluation of analyte transport on microfluidic paper-based analytical devices (μPADs).

    PubMed

    Ota, Riki; Yamada, Kentaro; Suzuki, Koji; Citterio, Daniel

    2018-02-07

    The transport efficiency during capillary flow-driven sample transport on microfluidic paper-based analytical devices (μPADs) made from filter paper has been investigated for a selection of model analytes (Ni²⁺, Zn²⁺, Cu²⁺, PO₄³⁻, bovine serum albumin, sulforhodamine B, amaranth) representing metal cations, complex anions, proteins and anionic molecules. For the first time, the transport of the analytical target compounds, rather than the sample liquid, has been quantitatively evaluated by means of colorimetry and absorption spectrometry-based methods. The experiments have revealed that small paperfluidic channel dimensions, additional user operation steps (e.g. control of sample volume, sample dilution, washing step) as well as the introduction of sample liquid wicking areas make it possible to increase analyte transport efficiency. It is also shown that the interaction of analytes with the negatively charged cellulosic paper substrate surface is strongly influenced by the physico-chemical properties of the model analyte and can in some cases (Cu²⁺) result in nearly complete analyte depletion during sample transport. The quantitative information gained through these experiments is expected to contribute to the development of more sensitive μPADs.

  9. Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali

    Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel's second-generation Xeon Phi architecture, code-named Knights Landing (KNL), for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project the performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.

  10. Different macaque models of cognitive aging exhibit task-dependent behavioral disparities.

    PubMed

    Comrie, Alison E; Gray, Daniel T; Smith, Anne C; Barnes, Carol A

    2018-05-15

    Deficits in cognitive functions that rely on the integrity of the frontal and temporal lobes are characteristic of normative human aging. Due to similar aging phenotypes and homologous cortical organization between nonhuman primates and humans, several species of macaque monkeys are used as models to explore brain senescence. These macaque species are typically regarded as equivalent models of cognitive aging, yet no direct comparisons have been made to support this assumption. Here we used adult and aged rhesus and bonnet macaques (Macaca mulatta and Macaca radiata) to characterize the effect of age on acquisition and retention of information across delays in a battery of behavioral tasks that rely on prefrontal cortex and medial temporal lobe networks. The cognitive functions that were tested include visuospatial short-term memory, object recognition memory, and object-reward association memory. In general, bonnet macaques at all ages outperformed rhesus macaques on tasks thought to rely primarily on the prefrontal cortex, and were more resilient to age-related deficits in these behaviors. On the other hand, both species were comparably impaired by age on tasks thought to preferentially engage the medial temporal lobe. Together, these results suggest that rhesus and bonnet macaques are not equivalent models of cognitive aging and highlight the value of cross-species comparisons. These observations should enable improved design and interpretation of future experiments aimed at understanding changes in cognition across the lifespan. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Analytic Thermoelectric Couple Modeling: Variable Material Properties and Transient Operation

    NASA Technical Reports Server (NTRS)

    Mackey, Jonathan A.; Sehirlioglu, Alp; Dynys, Fred

    2015-01-01

    To gain a deeper understanding of the operation of a thermoelectric couple, a set of analytic solutions has been derived for a variable-material-property couple and a transient couple. Using an analytic approach, as opposed to commonly used numerical techniques, results in a set of useful design guidelines. These guidelines can serve as useful starting conditions for further numerical studies, or as design rules for lab-built couples. The analytic modeling considers two cases, accounting for (1) material properties which vary with temperature and (2) transient operation of a couple. The variable material property case was handled by means of an asymptotic expansion, which allows insight into the influence of temperature dependence on different material properties. The variable-property work demonstrated the important fact that materials with identical average figures of merit can lead to different conversion efficiencies due to the temperature dependence of the properties. The transient couple was investigated through a Green's function approach; several transient boundary conditions were investigated. The transient work introduces several new design considerations which are not captured by the classic steady-state analysis. The work assists in designing couples for optimal performance and in material selection.

  12. Task-based modeling and optimization of a cone-beam CT scanner for musculoskeletal imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prakash, P.; Zbijewski, W.; Gang, G. J.

    2011-10-15

    Purpose: This work applies a cascaded systems model for cone-beam CT imaging performance to the design and optimization of a system for musculoskeletal extremity imaging. The model provides a quantitative guide to the selection of system geometry, source and detector components, acquisition techniques, and reconstruction parameters. Methods: The model is based on cascaded systems analysis of the 3D noise-power spectrum (NPS) and noise-equivalent quanta (NEQ) combined with factors of system geometry (magnification, focal spot size, and scatter-to-primary ratio) and anatomical background clutter. The model was extended to task-based analysis of detectability index (d') for tasks ranging in contrast and frequency content, and d' was computed as a function of system magnification, detector pixel size, focal spot size, kVp, dose, electronic noise, voxel size, and reconstruction filter to examine trade-offs and optima among such factors in multivariate analysis. The model was tested quantitatively versus the measured NPS and qualitatively in cadaver images as a function of kVp, dose, pixel size, and reconstruction filter under conditions corresponding to the proposed scanner. Results: The analysis quantified trade-offs among factors of spatial resolution, noise, and dose. System magnification (M) was a critical design parameter with a strong effect on spatial resolution, dose, and x-ray scatter, and a fairly robust optimum was identified at M ≈ 1.3 for the imaging tasks considered. The results suggested kVp selection in the range of ≈65-90 kVp, the lower end (65 kVp) maximizing subject contrast and the upper end maximizing NEQ (90 kVp). The analysis quantified fairly intuitive results, e.g., ≈0.1-0.2 mm pixel size (and a sharp reconstruction filter) optimal for high-frequency tasks (bone detail) compared to ≈0.4 mm pixel size (and a smooth reconstruction filter) for low-frequency (soft-tissue) tasks. This result suggests a specific

  13. Analytical model for investigation of interior noise characteristics in aircraft with multiple propellers including synchrophasing

    NASA Technical Reports Server (NTRS)

    Fuller, C. R.

    1986-01-01

    A simplified analytical model of transmission of noise into the interior of propeller-driven aircraft has been developed. The analysis includes directivity and relative phase effects of the propeller noise sources, and leads to a closed form solution for the coupled motion between the interior and exterior fields via the shell (fuselage) vibrational response. Various situations commonly encountered in considering sound transmission into aircraft fuselages are investigated analytically and the results obtained are compared to measurements in real aircraft. In general the model has proved successful in identifying basic mechanisms behind noise transmission phenomena.

  14. Enabling Big Geoscience Data Analytics with a Cloud-Based, MapReduce-Enabled and Service-Oriented Workflow Framework

    PubMed Central

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing- and data-intensive, and the analytics require complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed that leverage cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. A service-oriented workflow architecture is built to support on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists. PMID:25742012

  15. Enabling big geoscience data analytics with a cloud-based, MapReduce-enabled and service-oriented workflow framework.

    PubMed

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing- and data-intensive, and the analytics require complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed that leverage cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. A service-oriented workflow architecture is built to support on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists.

  16. Can We Practically Bring Physics-based Modeling Into Operational Analytics Tools?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granderson, Jessica; Bonvini, Marco; Piette, Mary Ann

    Analytics software is increasingly used to improve and maintain operational efficiency in commercial buildings. Energy managers, owners, and operators are using a diversity of commercial offerings, often referred to as Energy Information Systems, Fault Detection and Diagnostic (FDD) systems, or, more broadly, Energy Management and Information Systems, to cost-effectively enable savings on the order of ten to twenty percent. Most of these systems use data from meters and sensors, with rule-based and/or data-driven models to characterize system and building behavior. In contrast, physics-based modeling uses first-principles and engineering models (e.g., efficiency curves) to characterize system and building behavior. Historically, these physics-based approaches have been used in the design phase of the building life cycle or in retrofit analyses. Researchers have begun exploring the benefits of integrating physics-based models with operational data analytics tools, bridging the gap between design and operations. In this paper, we detail the development and operator use of a software tool that uses hybrid data-driven and physics-based approaches to cooling plant FDD and optimization. Specifically, we describe the system architecture, models, and FDD and optimization algorithms; advantages and disadvantages with respect to purely data-driven approaches; and practical implications for scaling and replicating these techniques. Finally, we conclude with an evaluation of the future potential for such tools and future research opportunities.

  17. Analytic model for ultrasound energy receivers and their optimal electric loads II: Experimental validation

    NASA Astrophysics Data System (ADS)

    Gorostiaga, M.; Wapler, M. C.; Wallrabe, U.

    2017-10-01

    In this paper, we verify the two optimal electric load concepts based on the zero reflection condition and on the power maximization approach for ultrasound energy receivers. We test a high loss 1-3 composite transducer, and find that the measurements agree very well with the predictions of the analytic model for plate transducers that we have developed previously. Additionally, we also confirm that the power maximization and zero reflection loads are very different when the losses in the receiver are high. Finally, we compare the optimal load predictions by the KLM and the analytic models with frequency dependent attenuation to evaluate the influence of the viscosity.

  18. The Effects of Describing Antecedent Stimuli and Performance Criteria in Task Analysis Instruction for Graphing

    ERIC Educational Resources Information Center

    Tyner, Bryan C.; Fienup, Daniel M.

    2016-01-01

    Task analyses are ubiquitous to applied behavior analysis interventions, yet little is known about the factors that make them effective. Numerous task analyses have been published in behavior analytic journals for constructing single-subject design graphs; however, learner outcomes using these task analyses may fall short of what could be…

  19. A CCA+ICA based model for multi-task brain imaging data fusion and its application to schizophrenia.

    PubMed

    Sui, Jing; Adali, Tülay; Pearlson, Godfrey; Yang, Honghui; Sponheim, Scott R; White, Tonya; Calhoun, Vince D

    2010-05-15

    Collection of multiple-task brain imaging data from the same subject has now become common practice in medical imaging studies. In this paper, we propose a simple yet effective model, "CCA+ICA", as a powerful tool for multi-task data fusion. This joint blind source separation (BSS) model takes advantage of two multivariate methods, canonical correlation analysis and independent component analysis, to achieve high estimation accuracy and to provide the correct connection between two datasets whose sources can have either common or distinct between-dataset correlation. In both simulated and real fMRI applications, we compare the proposed scheme with other joint BSS models and examine the different modeling assumptions. The contrast images of two tasks, sensorimotor (SM) and Sternberg working memory (SB), derived from a general linear model (GLM), provided the real multi-task fMRI data, collected from 50 schizophrenia patients and 50 healthy controls. When examining the relationship with duration of illness, CCA+ICA revealed a significant negative correlation with temporal lobe activation. Furthermore, CCA+ICA located sensorimotor cortex as the group-discriminative region for both tasks and identified the superior temporal gyrus in SM and prefrontal cortex in SB as task-specific group-discriminative brain networks. In summary, we compared the new approach to competitive methods with different assumptions, and found consistent results regarding each of their hypotheses on connecting the two tasks. Such an approach fills a gap in existing multivariate methods for identifying biomarkers from brain imaging data.
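
    One plausible reading of the two-stage "CCA+ICA" idea, sketched with standard scikit-learn components; the authors' joint BSS formulation differs in detail, so this is illustrative only (input arrays X1, X2 are subjects-by-features matrices, one per task):

      import numpy as np
      from sklearn.cross_decomposition import CCA
      from sklearn.decomposition import FastICA

      def cca_then_ica(X1, X2, n_components=5):
          """Link two feature sets with CCA, then push the linked canonical
          variates toward independence with ICA."""
          U, V = CCA(n_components=n_components).fit_transform(X1, X2)
          ica = FastICA(n_components=n_components, random_state=0)
          S1 = ica.fit_transform(U)   # independent components, dataset 1
          S2 = ica.transform(V)       # matched components, dataset 2
          return S1, S2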

  20. Electroencephalographic monitoring of complex mental tasks

    NASA Technical Reports Server (NTRS)

    Guisado, Raul; Montgomery, Richard; Montgomery, Leslie; Hickey, Chris

    1992-01-01

    Outlined here is the development of neurophysiological procedures to monitor operators during the performance of cognitive tasks. Our approach included the use of electroencephalographic (EEG) and rheoencephalographic (REG) techniques to determine changes in cortical function associated with cognition and with the operator's state. A two-channel tetrapolar REG, a single-channel forearm impedance plethysmograph, a Lead I electrocardiogram (ECG) and a 21-channel EEG were used to measure subject responses to various visual-motor cognitive tasks. Testing, analytical, and display procedures for EEG and REG monitoring were developed that extend the state of the art and provide a valuable tool for the study of cerebral circulatory and neural activity during cognition.

  1. Learning Tasks, Peer Interaction, and Cognition Process: An Online Collaborative Design Model

    ERIC Educational Resources Information Center

    Du, Jianxia; Durrington, Vance A.

    2013-01-01

    This paper illustrates a model for Online Group Collaborative Learning. The authors based the foundation of the Online Collaborative Design Model upon Piaget's concepts of assimilation and accommodation, and Vygotsky's theory of social interaction. The four components of online collaborative learning include: individual processes, the task(s)…

  2. Class-modelling in food analytical chemistry: Development, sampling, optimisation and validation issues - A tutorial.

    PubMed

    Oliveri, Paolo

    2017-08-22

    Qualitative data modelling is a fundamental branch of pattern recognition, with many applications in analytical chemistry, and embraces two main families: discriminant and class-modelling methods. The first strategy is appropriate when at least two classes are meaningfully defined in the problem under study, while the second strategy is the right choice when the focus is on a single class. For this reason, class-modelling methods are also referred to as one-class classifiers. Although, in the food analytical field, most of the issues would be properly addressed by class-modelling strategies, the use of such techniques is rather limited and, in many cases, discriminant methods are forcedly used for one-class problems, introducing a bias in the outcomes. Key aspects related to the development, optimisation and validation of suitable class models for the characterisation of food products are critically analysed and discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. The Partners in Prevention Program: The Evaluation and Evolution of the Task-Centered Case Management Model

    ERIC Educational Resources Information Center

    Colvin, Julanne; Lee, Mingun; Magnano, Julienne; Smith, Valerie

    2008-01-01

    This article reports on the further development of the task-centered model for difficulties in school performance. We used Bailey-Dempsey and Reid's (1996) application of Rothman and Thomas's (1994) design and development framework and annual evaluations of the Partners in Prevention (PIP) Program to refine the task-centered case management model.…

  4. Big Data Analytics for Modelling and Forecasting of Geomagnetic Field Indices

    NASA Astrophysics Data System (ADS)

    Wei, H. L.

    2016-12-01

    A massive amount of data is produced and stored in the research areas of space weather and space climate. However, the value of a vast majority of the data acquired every day may not be effectively or efficiently exploited when we try to forecast solar wind parameters and geomagnetic field indices from these recorded measurements, largely due to the challenges of dealing with big data, which are characterized by the 4V features: volume (a massively large amount of data), variety (a great number of different types of data), velocity (a requirement of quick processing of the data), and veracity (the trustworthiness and usability of the data). Obtaining more reliable and accurate predictive models for geomagnetic field indices requires that models be developed from a big data analytics perspective (or at least benefit from such a perspective). This study proposes several data-based modelling frameworks that aim to produce more efficient predictive models for space weather parameter forecasting by means of system identification and big data analytics. More specifically, it aims to build more reliable mathematical models that characterise the relationship between solar wind parameters and geomagnetic field indices, for example the dependence of the Dst and Kp indices on a few solar wind parameters and magnetic field indices, namely, solar wind velocity (V), southward interplanetary magnetic field (Bs), solar wind rectified electric field (VBs), and dynamic flow pressure (P). Examples are provided to illustrate how the proposed modelling approaches are applied to Dst and Kp index prediction.
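
    As a minimal sketch of the kind of data-based predictive model discussed (a linear autoregressive model with exogenous solar wind inputs, fitted by least squares; this simple model form is an assumption for illustration, not the author's framework):

      import numpy as np

      def fit_arx(dst, vbs, pressure):
          """Fit Dst(t) = a*Dst(t-1) + b*VBs(t-1) + c*P(t-1) + d
          by ordinary least squares on hourly index/parameter series."""
          X = np.column_stack([dst[:-1], vbs[:-1], pressure[:-1],
                               np.ones(len(dst) - 1)])
          y = dst[1:]
          coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
          return coeffs

      def predict_next(coeffs, dst_t, vbs_t, p_t):
          a, b, c, d = coeffs
          return a * dst_t + b * vbs_t + c * p_t + d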

  5. On the analytical modeling of the nonlinear vibrations of pretensioned space structures

    NASA Technical Reports Server (NTRS)

    Housner, J. M.; Belvin, W. K.

    1983-01-01

    Pretensioned structures are receiving considerable attention as candidate large space structures. A typical example is a hoop-column antenna. The large number of preloaded members requires efficient analytical methods for concept validation and design. Validation through analysis is especially important since ground testing may be limited by gravity effects and structural size. The objective of the present investigation is to examine the analytical modeling of pretensioned members undergoing nonlinear vibrations. Two approximate nonlinear analyses are developed to model general structural arrangements which include beam-columns and pretensioned cables attached to a common nucleus, such as may occur at a joint of a pretensioned structure. Attention is given to structures undergoing nonlinear steady-state oscillations due to sinusoidal excitation forces. Three analyses (linear, quasi-linear, and nonlinear) are conducted and applied to study the response of a relatively simple cable-stiffened structure.

  6. A set for relational reasoning: Facilitation of algebraic modeling by a fraction task.

    PubMed

    DeWolf, Melissa; Bassok, Miriam; Holyoak, Keith J

    2016-12-01

    Recent work has identified correlations between early mastery of fractions and later math achievement, especially in algebra. However, causal connections between aspects of reasoning with fractions and improved algebra performance have yet to be established. The current study investigated whether relational reasoning with fractions facilitates subsequent algebraic reasoning using both pre-algebra students and adult college students. Participants were first given either a relational reasoning fractions task or a fraction algebra procedures control task. Then, all participants solved word problems and constructed algebraic equations in either multiplication or division format. The word problems and the equation construction tasks involved simple multiplicative comparison statements such as "There are 4 times as many students as teachers in a classroom." Performance on the algebraic equation construction task was enhanced for participants who had previously completed the relational fractions task compared with those who completed the fraction algebra procedures task. This finding suggests that relational reasoning with fractions can establish a relational set that promotes students' tendency to model relations using algebraic expressions. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Functionality of empirical model-based predictive analytics for the early detection of hemodynamic instability.

    PubMed

    Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C

    2014-01-01

    Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient's pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling ("SBM") was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology, or "QCP") which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient's physiologic state. This platform allows for the use of predictive analytic techniques to identify early changes in a patient's condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz) with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. With the slow development of the tamponade, the SBM model is seen to disagree while the
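
    SBM itself is a proprietary method; as a generic, illustrative analog of kernel-based similarity modeling (not the authors' implementation), one can reconstruct an expected "normal" state from archived healthy observations and flag large residuals as possible deterioration:

      import numpy as np

      def similarity_estimate(X_healthy, x_now, bandwidth=1.0):
          """Kernel-weighted reconstruction of the expected 'normal' state of
          the current observation from archived healthy states; a large
          residual signals departure from learned physiologic behavior."""
          d = np.linalg.norm(X_healthy - x_now, axis=1)  # distance to archive
          w = np.exp(-(d / bandwidth) ** 2)              # similarity kernel
          w /= w.sum()
          estimate = w @ X_healthy
          return estimate, x_now - estimate              # (expected, residual)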

  8. Analytical model of diffuse reflectance spectrum of skin tissue

    NASA Astrophysics Data System (ADS)

    Lisenko, S. A.; Kugeiko, M. M.; Firago, V. A.; Sobchuk, A. N.

    2014-01-01

    We have derived simple analytical expressions that enable highly accurate calculation of diffusely reflected light signals of skin in the spectral range from 450 to 800 nm at a distance from the region of delivery of exciting radiation. The expressions, taking into account the dependence of the detected signals on the refractive index, transport scattering coefficient, absorption coefficient and anisotropy factor of the medium, have been obtained in the approximation of a two-layer medium model (epidermis and dermis) with the same light-scattering parameters but different absorption coefficients for the layers. Numerical experiments on the retrieval of skin biophysical parameters from diffuse reflectance spectra simulated by the Monte Carlo method show that commercially available fibre-optic spectrophotometers with a fixed distance between the radiation source and detector can reliably determine the concentration of bilirubin, oxy- and deoxyhaemoglobin in the dermis tissues and the tissue structure parameter characterising the size of its effective scatterers. We present examples of quantitative analysis of the experimental data, confirming the correctness of estimates of biophysical parameters of skin using the obtained analytical expressions.

  9. An analytical model of iceberg drift

    NASA Astrophysics Data System (ADS)

    Eisenman, I.; Wagner, T. J. W.; Dell, R.

    2017-12-01

    Icebergs transport freshwater from glaciers and ice shelves, releasing the freshwater into the upper ocean thousands of kilometers from the source. This influences ocean circulation through its effect on seawater density. A standard empirical rule-of-thumb for estimating iceberg trajectories is that they drift at the ocean surface current velocity plus 2% of the atmospheric surface wind velocity. This relationship has been observed in empirical studies for decades, but it has never previously been physically derived or justified. In this presentation, we consider the momentum balance for an individual iceberg, which includes nonlinear drag terms. Applying a series of approximations, we derive an analytical solution for the iceberg velocity as a function of time. In order to validate the model, we force it with surface velocity and temperature data from an observational state estimate and compare the results with iceberg observations in both hemispheres. We show that the analytical solution reduces to the empirical 2% relationship in the asymptotic limit of small icebergs (or strong winds), which approximately applies for typical Arctic icebergs. We find that the 2% value arises due to a term involving the drag coefficients for water and air and the densities of the iceberg, ocean, and air. In the opposite limit of large icebergs (or weak winds), which approximately applies for typical Antarctic icebergs with horizontal length scales greater than about 12 km, we find that the 2% relationship is not applicable and that icebergs instead move with the ocean current, unaffected by the wind. The two asymptotic regimes can be understood by considering how iceberg size influences the relative importance of the wind and ocean current drag terms compared with the Coriolis and pressure gradient force terms in the iceberg momentum balance.
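
    A minimal sketch of the empirical rule the paper derives as the small-iceberg asymptotic limit; the 2% wind factor is quoted from the abstract, and the sample velocity vectors are illustrative:

      import numpy as np

      def iceberg_velocity(v_ocean, v_wind, wind_factor=0.02):
          """Small-iceberg limit from the abstract: drift equals the ocean
          surface current plus ~2% of the surface wind. Inputs are 2-D
          velocity vectors; per the abstract, the 2% factor follows from the
          air/water drag coefficients and densities, and large icebergs
          instead move with the ocean current, unaffected by the wind."""
          return np.asarray(v_ocean) + wind_factor * np.asarray(v_wind)

      print(iceberg_velocity([0.1, 0.05], [10.0, -5.0]))  # m/s, illustrative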

  10. An analytical drain current model for symmetric double-gate MOSFETs

    NASA Astrophysics Data System (ADS)

    Yu, Fei; Huang, Gongyi; Lin, Wei; Xu, Chuanzhong

    2018-04-01

    An analytical surface-potential-based drain current model of symmetric double-gate (sDG) MOSFETs is described as a SPICE-compatible model in this paper. The continuous surface and central potentials, from the accumulation to the strong-inversion regions, are solved from the 1-D Poisson's equation in sDG MOSFETs. Furthermore, the drain current is derived from the charge sheet model as a function of the surface potential. Over a wide range of terminal voltages, doping concentrations, and device geometries, the surface potential calculation scheme and drain current model are verified by solving the 1-D Poisson's equation based on the least-squares method and using Silvaco Atlas simulation results and experimental data, respectively. Such a model can be adopted as a useful platform for developing circuit simulators and provides a clear understanding of sDG MOSFET device physics.

  11. Task conflict and proactive control: A computational theory of the Stroop task.

    PubMed

    Kalanthroff, Eyal; Davelaar, Eddy J; Henik, Avishai; Goldfarb, Liat; Usher, Marius

    2018-01-01

    The Stroop task is a central experimental paradigm used to probe cognitive control by measuring the ability of participants to selectively attend to task-relevant information and inhibit automatic task-irrelevant responses. Research has revealed variability in both experimental manipulations and individual differences. Here, we focus on a particular source of Stroop variability, the reverse facilitation (RF; faster responses to nonword neutral stimuli than to congruent stimuli), which has recently been suggested as a signature of task conflict. We first review the literature that shows RF variability in the Stroop task, both with regard to experimental manipulations and to individual differences. We suggest that task conflict variability can be understood as resulting from the degree of proactive control that subjects recruit in advance of the Stroop stimulus. When proactive control is high, task conflict does not arise (or is resolved very quickly), resulting in regular Stroop facilitation. When proactive control is low, task conflict emerges, leading to a slow-down in congruent and incongruent (but not in neutral) trials and thus to Stroop RF. To support this suggestion, we present a computational model of the Stroop task, which includes the resolution of task conflict and its modulation by proactive control. Results show that our model (a) accounts for the variability in Stroop RF reported in the experimental literature, and (b) addresses a challenge faced by previous Stroop models: accounting for reaction-time distributional properties. Finally, we discuss theoretical implications for Stroop measures and control deficits observed in some psychopathologies. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
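
    A toy illustration (not the authors' model) of how proactive control can produce reverse facilitation: task conflict delays any word-bearing stimulus, proactive control scales that delay away, and all millisecond values are hypothetical:

      def stroop_rt(trial, proactive=1.0, base=500.0,
                    info_conflict=100.0, task_conflict=80.0, facilitation=30.0):
          """Return a schematic reaction time (ms) for one Stroop trial type."""
          rt = base
          if trial in ("congruent", "incongruent"):      # a word is present
              rt += (1.0 - proactive) * task_conflict    # unresolved task conflict
          if trial == "incongruent":
              rt += info_conflict                        # informational conflict
          if trial == "congruent":
              rt -= facilitation                         # read-word facilitation
          return rt

      # High proactive control gives regular facilitation (congruent < neutral);
      # low proactive control gives reverse facilitation (congruent > neutral).
      for p in (1.0, 0.0):
          print(p, {t: stroop_rt(t, p)
                    for t in ("neutral", "congruent", "incongruent")})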

  12. Hybrid experimental/analytical models of structural dynamics - Creation and use for predictions

    NASA Technical Reports Server (NTRS)

    Balmes, Etienne

    1993-01-01

    An original complete methodology for the construction of predictive models of damped structural vibrations is introduced. A consistent definition of normal and complex modes is given which leads to an original method to accurately identify non-proportionally damped normal mode models. A new method to create predictive hybrid experimental/analytical models of damped structures is introduced, and the ability of hybrid models to predict the response to system configuration changes is discussed. Finally a critical review of the overall methodology is made by application to the case of the MIT/SERC interferometer testbed.

  13. Development of an analytical-numerical model to predict radiant emission or absorption

    NASA Technical Reports Server (NTRS)

    Wallace, Tim L.

    1994-01-01

    The development of an analytical-numerical model to predict radiant emission or absorption is discussed. A Voigt profile is assumed to predict the spectral qualities of a singlet atomic transition line for atomic species of interest to the OPAD program. The present state of this model is described in each progress report required under the contract. Model and code development is guided by experimental data where available. When completed, the model will be used to provide estimates of species erosion rates from spectral data collected from rocket exhaust plumes or other sources.

  14. Development of collaborative-creative learning model using virtual laboratory media for instrumental analytical chemistry lectures

    NASA Astrophysics Data System (ADS)

    Zurweni, Wibawa, Basuki; Erwin, Tuti Nurian

    2017-08-01

    The framework for teaching and learning in the 21st century was prepared with the 4Cs criteria. Implementing collaborative learning provides students with the opportunity to develop their creative skills optimally: learners are challenged to compete, to work independently, to bring individual or group excellence, and to master the learning material. A virtual laboratory is used as the medium for Instrumental Analytical Chemistry (Vis, UV-Vis-AAS, etc.) lectures through simulated computer applications, and serves as a substitute for the laboratory when equipment and instruments are not available. This research aims to design and develop a collaborative-creative learning model using virtual laboratory media for Instrumental Analytical Chemistry lectures, and to determine the effectiveness of this design, which adapts the Dick & Carey and Hannafin & Peck models. The development steps of this model are: needs analysis, design of collaborative-creative learning, virtual laboratory media using Macromedia Flash, formative evaluation, and a test of the learning model's effectiveness. The stages of the collaborative-creative learning model are: apperception, exploration, collaboration, creation, evaluation, and feedback. The collaborative-creative learning model with virtual laboratory media can be used to improve the quality of learning in the classroom and to overcome the shortage of lab instruments for real instrumental analysis. Formative test results show that the developed collaborative-creative learning model meets the requirements. The effectiveness test comparing students' pretest and posttest scores is significant at the 95% confidence level (the computed t exceeds the critical t). It can be concluded that this learning model is effective for Instrumental Analytical Chemistry lectures.

  15. Hierarchical analytical and simulation modelling of human-machine systems with interference

    NASA Astrophysics Data System (ADS)

    Braginsky, M. Ya; Tarakanov, D. V.; Tsapko, S. G.; Tsapko, I. V.; Baglaeva, E. A.

    2017-01-01

    The article considers the principles of building an analytical and simulation model of the human operator and of the industrial control system hardware and software. E-networks, an extension of Petri nets, are used as the mathematical apparatus. This approach allows simulating complex parallel distributed processes in human-machine systems. A structural, hierarchical approach is used to build the mathematical model of the human operator. The upper level of the human-operator model is a logical-dynamic model of decision making based on E-networks. The lower level reflects the psychophysiological characteristics of the human operator.

  16. Comprehensive analytical model for locally contacted rear surface passivated solar cells

    NASA Astrophysics Data System (ADS)

    Wolf, Andreas; Biro, Daniel; Nekarda, Jan; Stumpp, Stefan; Kimmerle, Achim; Mack, Sebastian; Preu, Ralf

    2010-12-01

    For optimum performance of solar cells featuring a locally contacted rear surface, the metallization fraction as well as the size and distribution of the local contacts are crucial, since Ohmic and recombination losses have to be balanced. In this work we present a set of equations which make it possible to calculate this trade-off without numerical simulations. Our model combines established analytical and empirical equations to predict the energy conversion efficiency of a locally contacted device. For experimental verification, we fabricate devices from float-zone silicon wafers of different resistivity using the laser-fired contact technology for forming the local rear contacts. The detailed characterization of test structures enables the determination of important physical parameters, such as the surface recombination velocity at the contacted area and the spreading resistance of the contacts. Our analytical model reproduces the experimental results very well and correctly predicts the optimum contact spacing without the use of free fitting parameters. We use our model to estimate the optimum bulk resistivity for locally contacted devices fabricated from conventional Czochralski-grown silicon material. These calculations use literature values for the stable minority carrier lifetime to account for the bulk recombination caused by the formation of boron-oxygen complexes under carrier injection.
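
    The paper's closed-form loss equations are not reproduced in the abstract; the following sketch uses hypothetical stand-in loss terms merely to illustrate the Ohmic-versus-recombination trade-off that fixes an optimum contact pitch:

      import numpy as np

      def total_loss(pitch_mm, k_ohmic=2e-4, k_recomb=3e-3):
          """Ohmic spreading loss grows with contact pitch; contact
          recombination loss shrinks with it. Functional forms and constants
          are hypothetical stand-ins for the paper's expressions."""
          return k_ohmic * pitch_mm**2 + k_recomb / pitch_mm

      pitches = np.linspace(0.1, 3.0, 300)
      best = pitches[np.argmin(total_loss(pitches))]
      print(f"optimum contact pitch ~ {best:.2f} mm (illustrative)")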

  17. Galactic conformity measured in semi-analytic models

    NASA Astrophysics Data System (ADS)

    Lacerna, I.; Contreras, S.; González, R. E.; Padilla, N.; Gonzalez-Perez, V.

    2018-03-01

    We study the correlation between the specific star formation rate of central galaxies and neighbour galaxies, also known as 'galactic conformity', out to 20 h⁻¹ Mpc using three semi-analytic models (SAMs, one from L-GALAXIES and the other two from GALFORM). The aim is to establish whether SAMs are able to show galactic conformity using different models and selection criteria. In all the models, when the selection of primary galaxies is based on an isolation criterion in real space, the mean fraction of quenched (Q) galaxies around Q primary galaxies is higher than that around star-forming primary galaxies of the same stellar mass. The overall signal of conformity decreases when we remove satellites selected as primary galaxies, but the effect is much stronger in the GALFORM models than in the L-GALAXIES model. We find this difference is partially explained by the fact that in GALFORM, once a galaxy becomes a satellite it remains one, whereas satellites can become centrals at a later time in L-GALAXIES. The signal of conformity decreases down to 60 per cent in the L-GALAXIES model after removing central galaxies that were ejected from their host halo in the past. Galactic conformity is also influenced by primary galaxies at fixed stellar mass that reside in dark matter haloes of different masses. Finally, we explore a proxy of conformity between distinct haloes. In this case, the conformity is weak beyond ~3 h⁻¹ Mpc (<3 per cent in L-GALAXIES, <1-2 per cent in the GALFORM models). Therefore, it seems unlikely that conformity is directly related to a long-range effect.

  18. A simple analytical model for signal amplification by reversible exchange (SABRE) process.

    PubMed

    Barskiy, Danila A; Pravdivtsev, Andrey N; Ivanov, Konstantin L; Kovtunov, Kirill V; Koptyug, Igor V

    2016-01-07

    We demonstrate an analytical model for the description of the signal amplification by reversible exchange (SABRE) process. The model relies on a combined analysis of chemical kinetics and the evolution of the nuclear spin system during the hyperpolarization process. For the first time, the model provides a rationale for deciding which system parameters (i.e., J-couplings, relaxation rates, reaction rate constants) have to be optimized in order to achieve higher signal enhancement for a substrate of interest in SABRE experiments.

  19. Familiarity Vs Trust: A Comparative Study of Domain Scientists' Trust in Visual Analytics and Conventional Analysis Methods.

    PubMed

    Dasgupta, Aritra; Lee, Joon-Yong; Wilson, Ryan; Lafrance, Robert A; Cramer, Nick; Cook, Kristin; Payne, Samuel

    2017-01-01

    Combining interactive visualization with automated analytical methods like statistics and data mining facilitates data-driven discovery. These visual analytic methods are beginning to be instantiated within mixed-initiative systems, where humans and machines collaboratively influence evidence-gathering and decision-making. But an open research question is whether, when domain experts analyze their data, they can completely trust the outputs and operations on the machine side. Visualization potentially leads to a transparent analysis process, but do domain experts always trust what they see? To address these questions, we present results from the design and evaluation of a mixed-initiative, visual analytics system for biologists, focusing on the relationship between the familiarity of an analysis medium and domain experts' trust. We propose a trust-augmented design of the visual analytics system that explicitly takes into account domain-specific tasks, conventions, and preferences. For evaluating the system, we present the results of a controlled user study with 34 biologists in which we compare the variation of the level of trust across conventional and visual analytic mediums and explore the influence of familiarity and task complexity on trust. We find that despite being unfamiliar with a visual analytic medium, scientists seem to have an average level of trust comparable with that in a conventional analysis medium. In fact, for complex sense-making tasks, we find that the visual analytic system is able to inspire greater trust than other mediums. We summarize the implications of our findings with directions for future research on the trustworthiness of visual analytic systems.

  20. Analytical model for atomic resonant attosecond transient absorption

    NASA Astrophysics Data System (ADS)

    Cariker, C.; Kjellson, T.; Lindroth, E.; Argenti, L.

    2017-04-01

    Recent advancements in ultrafast laser technology have made it possible to probe electron dynamics in highly excited atomic states that autoionize on a femtosecond timescale, thus giving insight into the dynamics of Auger decay and its interference with the continuum. These experiments provide a stringent test for time-resolved analytical models of autoionization. Here we present a finite-pulse, multi-photon perturbative model which is used in conjunction with ab initio structure calculations to predict the attosecond transient absorption spectrum (ATAS) of an atom above the ionization threshold. We apply this model to compute the ATAS of argon in the vicinity of the 3s^{-1}4p resonance as a function of the time delay between an extreme ultraviolet (XUV) and an infrared (IR) pulse, as well as of the angle between their polarizations. We show that by modulating the parameters of the IR pulse it is possible to control the dipolar coupling between neighboring states and hence the lineshape of the 3s^{-1}4p resonance. NSF Grant No. 1607588.
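
    A minimal way to see how a dressing pulse reshapes a resonance is the generalized Fano picture, in which the IR-induced phase of the resonant dipole response morphs a Lorentzian into an asymmetric profile; the sketch below assumes this simplified single-resonance picture rather than the paper's finite-pulse multi-photon model.

    ```python
    import numpy as np

    def absorption_profile(energy, E_r, gamma, phi):
        """Resonant absorption lineshape with dipole-response phase phi.

        phi = 0 gives a Lorentzian; a nonzero phase (e.g. imprinted by the
        IR pulse) yields asymmetric, Fano-like profiles."""
        eps = (energy - E_r) / (gamma / 2.0)      # reduced detuning
        response = np.exp(1j * phi) / (eps + 1j)  # complex resonant amplitude
        return -response.imag                     # absorptive part

    E = np.linspace(-10.0, 10.0, 501)             # detuning grid (arb. units)
    lines = {phi: absorption_profile(E, 0.0, 2.0, phi)
             for phi in (0.0, np.pi / 4, np.pi / 2)}
    ```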

  1. A developed nearly analytic discrete method for forward modeling in the frequency domain

    NASA Astrophysics Data System (ADS)

    Liu, Shaolin; Lang, Chao; Yang, Hui; Wang, Wenshuai

    2018-02-01

    High-efficiency forward modeling methods play a fundamental role in full waveform inversion (FWI). In this paper, the developed nearly analytic discrete (DNAD) method is proposed to accelerate frequency-domain forward modeling. We first derive the discretization of the frequency-domain wave equations via numerical schemes based on the nearly analytic discrete (NAD) method to obtain a linear system. The coefficients of the numerical stencils are optimized to make the linear system easier to solve and to minimize computing time. Wavefield simulation and numerical dispersion analysis are performed to compare the numerical behavior of the DNAD method with that of the conventional NAD method, and the results demonstrate the superiority of the proposed method. Finally, the DNAD method is implemented in frequency-domain FWI, and high-resolution inversion results are obtained.
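
    For orientation, frequency-domain forward modeling of this kind reduces to assembling and solving one linear system per frequency. The sketch below does this for the 1D Helmholtz equation with ordinary second-order differences and one-way radiating boundaries; the DNAD stencil coefficients themselves are not given in the abstract, so plain differences stand in for them.

    ```python
    import numpy as np

    def helmholtz_1d(omega, c, src_idx, n=401, dx=10.0):
        """Solve u'' + (omega/c)^2 u = -s on a homogeneous 1D grid.

        Plain second-order interior stencils (a stand-in for NAD/DNAD)
        with first-order one-way boundary conditions."""
        k = omega / c
        A = np.zeros((n, n), dtype=complex)
        b = np.zeros(n, dtype=complex)
        for i in range(1, n - 1):
            A[i, i - 1] = A[i, i + 1] = 1.0 / dx**2
            A[i, i] = -2.0 / dx**2 + k**2
        A[0, 0], A[0, 1] = -1.0 / dx + 1j * k, 1.0 / dx      # u' + iku = 0
        A[-1, -1], A[-1, -2] = 1.0 / dx - 1j * k, -1.0 / dx  # u' - iku = 0
        b[src_idx] = -1.0                                    # point source
        return np.linalg.solve(A, b)

    u = helmholtz_1d(omega=2 * np.pi * 5.0, c=2000.0, src_idx=200)
    ```

    In 2D or 3D the matrix is sparse and ill-conditioned, which is why optimizing the stencil coefficients to ease the linear solve, as DNAD does, matters for FWI throughput.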

  2. Method of and apparatus for determining the similarity of a biological analyte from a model constructed from known biological fluids

    DOEpatents

    Robinson, Mark R.; Ward, Kenneth J.; Eaton, Robert P.; Haaland, David M.

    1990-01-01

    The characteristics of a biological fluid sample containing an analyte are determined from a model constructed from plural known biological fluid samples. The model expresses the concentration of materials in the known fluid samples as a function of the absorption of wideband infrared energy. The wideband infrared energy is coupled to the analyte-containing sample so that there is differential absorption of the infrared energy as a function of the wavelength of the energy incident on the sample. The differential absorption causes intensity variations of the infrared energy as a function of wavelength, and the concentration of the unknown analyte is determined from these intensity variations using the model's absorption-versus-wavelength function.
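
    The calibration idea can be illustrated with a least-squares toy: fit a linear map from known samples' absorption spectra to their reference concentrations, then apply it to an unknown spectrum. The arrays below are synthetic placeholders; the patent's actual chemometric procedure (e.g. a partial-least-squares variant) is richer than this sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A_known = rng.random((30, 120))     # 30 known samples x 120 wavelengths
    c_known = rng.random(30)            # reference analyte concentrations

    # Mean-centre and fit a linear calibration by least squares.
    A_mean, c_mean = A_known.mean(axis=0), c_known.mean()
    coef, *_ = np.linalg.lstsq(A_known - A_mean, c_known - c_mean, rcond=None)

    def predict(spectrum):
        """Predict the concentration of an unknown sample from its spectrum."""
        return c_mean + (spectrum - A_mean) @ coef

    print(predict(A_known[0]))          # close to c_known[0] on training data
    ```

    With more wavelengths than samples the fit is underdetermined, which is why practical calibration methods regularize (PLS, PCR) rather than use plain least squares.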

  3. Analytical and experimental investigation of a 1/8-scale dynamic model of the shuttle orbiter. Volume 2: Technical report

    NASA Technical Reports Server (NTRS)

    Mason, P. W.; Harris, H. G.; Zalesak, J.; Bernstein, M.

    1974-01-01

    The methods and procedures used in the analysis and testing of the scale model are reported together with the correlation of the analytical and experimental results. The model, the NASTRAN finite element analysis, and results are discussed. Tests and analytical investigations are also reported.

  4. A Semi-Analytical Model for Dispersion Modelling Studies in the Atmospheric Boundary Layer

    NASA Astrophysics Data System (ADS)

    Gupta, A.; Sharan, M.

    2017-12-01

    The severe impact of harmful air pollutants has long been a concern in a wide variety of air quality analyses. Analytical models based on the solution of the advection-diffusion equation remain a convenient way to model air pollutant dispersion, since the dispersion parameters and the related physics are easy to handle within them. A mathematical model describing the crosswind-integrated concentration is presented. Analytical solutions of the resulting advection-diffusion equation are limited to constant or simple profiles of eddy diffusivity and wind speed. In practice, the wind speed depends on the vertical height above the ground, and the eddy diffusivity profiles depend on the downwind distance from the source as well as the vertical height. In the present model, a method of eigenfunction expansion is used to solve the resulting partial differential equation with the appropriate boundary conditions. This leads to a system of first-order ordinary differential equations with a coefficient matrix depending on the downwind distance. The solution of this system can, in general, be expressed in terms of the Peano-Baker series, which is not easy to compute, particularly when the coefficient matrices at different distances do not commute (Martin et al., 1967). An approach based on a Taylor series expansion is introduced to find the numerical solution of the first-order system, and the method is applied to various profiles of wind speed and eddy diffusivities. The solution computed from the proposed methodology is found to be efficient and accurate in comparison to those available in the literature. The performance of the model is evaluated with the diffusion data sets from Copenhagen (Gryning et al., 1987) and Hanford (Doran et al., 1985). In addition, the proposed method is used to deduce three-dimensional concentrations by assuming a Gaussian distribution in the crosswind direction, which is also evaluated with diffusion data corresponding to a continuous point source.
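
    The marching scheme at the heart of this approach can be sketched compactly: advance the first-order system dC/dx = A(x)C with a truncated Taylor-series propagator evaluated at step midpoints. The step size, truncation order, and coefficient matrix A(x) below are illustrative assumptions.

    ```python
    import numpy as np

    def march(A_of_x, c0, x0, x1, nsteps=1000, order=4):
        """Advance dC/dx = A(x) C using the truncated propagator
        C(x+h) ~ [I + hA + (hA)^2/2! + ...] C(x), with A at midpoints."""
        h = (x1 - x0) / nsteps
        c = np.array(c0, dtype=float)
        x = x0
        for _ in range(nsteps):
            hA = h * A_of_x(x + 0.5 * h)
            prop = term = np.eye(len(c))
            for k in range(1, order + 1):
                term = term @ hA / k       # accumulates (hA)^k / k!
                prop = prop + term
            c = prop @ c
            x += h
        return c
    ```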

  5. ENVIRONMENTAL RESEARCH BRIEF : ANALYTIC ELEMENT MODELING OF GROUND-WATER FLOW AND HIGH PERFORMANCE COMPUTING

    EPA Science Inventory

    Several advances in the analytic element method have been made to enhance its performance and facilitate three-dimensional ground-water flow modeling in a regional aquifer setting. First, a new public domain modular code (ModAEM) has been developed for modeling ground-water flow ...

  6. A model for dynamic allocation of human attention among multiple tasks

    NASA Technical Reports Server (NTRS)

    Sheridan, T. B.; Tulga, M. K.

    1978-01-01

    The problem of multi-task attention allocation, with special reference to aircraft piloting, is discussed along with the experimental paradigm used to characterize this situation and the experimental results obtained in the first phase of the research. A qualitative description of an approach to mathematical modeling and some results obtained with it are also presented to indicate which aspects of the model are most promising. Two appendices (1) discuss the model in relation to graph theory and optimization and (2) specify the optimization algorithm of the model.

  7. A simple analytical infiltration model for short-duration rainfall

    NASA Astrophysics Data System (ADS)

    Wang, Kaiwen; Yang, Xiaohua; Liu, Xiaomang; Liu, Changming

    2017-12-01

    Many infiltration models have been proposed to simulate the infiltration process. Different initial soil conditions and non-uniform initial water content can lead to infiltration simulation errors, especially for short-duration rainfall (SHR). Few infiltration models are specifically derived to eliminate the errors caused by complex initial soil conditions. We present a simple analytical infiltration model for SHR infiltration simulation, the Short-duration Infiltration Process model (SHIP model). The infiltration simulated by 5 models (the SHIP (high), SHIP (middle), SHIP (low), Philip, and Parlange models) was compared based on numerical experiments and soil column experiments. In the numerical experiments, the SHIP (middle) and Parlange models had robust solutions for SHR infiltration simulation of 12 typical soils under different initial soil conditions: the absolute values of percent bias were less than 12% and the values of Nash-Sutcliffe efficiency were greater than 0.83. Additionally, in the soil column experiments, the infiltration rate fluctuated over a range because of non-uniform initial water content. The SHIP (high) and SHIP (low) models simulated an infiltration range that successfully covered the fluctuation range of the observed infiltration rate. Given the robustness of its solutions and its coverage of the fluctuation range of the infiltration rate, the SHIP model can be integrated into hydrologic models to simulate the SHR infiltration process and benefit flood forecasting.
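
    The two goodness-of-fit measures quoted above are standard and easy to state; a direct implementation, with the abstract's acceptance thresholds noted in comments, follows.

    ```python
    import numpy as np

    def percent_bias(sim, obs):
        """PBIAS = 100 * sum(sim - obs) / sum(obs); here |PBIAS| < 12% passes."""
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        return 100.0 * (sim - obs).sum() / obs.sum()

    def nash_sutcliffe(sim, obs):
        """NSE = 1 - SSE / var(obs); 1 is perfect, here NSE > 0.83 passes."""
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        return 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()
    ```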

  8. Star formation in Herschel's Monsters versus semi-analytic models

    NASA Astrophysics Data System (ADS)

    Gruppioni, C.; Calura, F.; Pozzi, F.; Delvecchio, I.; Berta, S.; De Lucia, G.; Fontanot, F.; Franceschini, A.; Marchetti, L.; Menci, N.; Monaco, P.; Vaccari, M.

    2015-08-01

    We present a direct comparison between the observed star formation rate functions (SFRFs) and the state-of-the-art predictions of semi-analytic models (SAMs) of galaxy formation and evolution. We use the PACS Evolutionary Probe Survey and Herschel Multi-tiered Extragalactic Survey data sets in the COSMOS and GOODS-South fields, combined with broad-band photometry from the UV to the sub-mm, to obtain total (IR+UV) instantaneous star formation rates (SFRs) for individual Herschel galaxies up to z ~ 4, after subtracting possible active galactic nucleus (AGN) contamination. The comparison with model predictions shows that SAMs broadly reproduce the observed SFRFs up to z ~ 2, when the observational errors on the SFR are taken into account. However, all the models seem to underpredict the bright end of the SFRF at z ≳ 2. The cause of this underprediction could lie in improper modelling of several model ingredients, such as too strong (AGN or stellar) feedback in the brighter objects or too little gas fallback, caused by weak feedback and outflows at earlier epochs.

  9. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions.

    PubMed

    Donahue, William; Newhauser, Wayne D; Ziegler, James F

    2016-09-07

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u^{-1} to 450 MeV u^{-1} or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.

  10. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions

    NASA Astrophysics Data System (ADS)

    Donahue, William; Newhauser, Wayne D.; Ziegler, James F.

    2016-09-01

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u^{-1} to 450 MeV u^{-1} or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.
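
    The abstract does not give the functional form of the 6-parameter model, but the flavor of a simple, broadly applicable analytical range model can be conveyed with the classic Bragg-Kleeman power law R(E) = alpha * E^p; the fit below and the proton-in-water constants are illustrative only.

    ```python
    import numpy as np

    def fit_range_model(E, R):
        """Fit R = alpha * E^p in log-log space; returns (alpha, p)."""
        p, log_alpha = np.polyfit(np.log(E), np.log(R), 1)
        return np.exp(log_alpha), p

    def range_cm(E, alpha, p):
        return alpha * E ** p

    # Roughly representative of protons in water: R ~ 0.0022 * E^1.77 (cm, MeV).
    E_MeV = np.array([10.0, 50.0, 100.0, 200.0])
    alpha, p = fit_range_model(E_MeV, 0.0022 * E_MeV ** 1.77)
    print(range_cm(150.0, alpha, p))   # ~15.6 cm, near the tabulated value
    ```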

  11. Automated Trait Scores for "GRE"® Writing Tasks. Research Report. ETS RR-15-15

    ERIC Educational Resources Information Center

    Attali, Yigal; Sinharay, Sandip

    2015-01-01

    The "e-rater"® automated essay scoring system is used operationally in the scoring of the argument and issue tasks that form the Analytical Writing measure of the "GRE"® General Test. For each of these tasks, this study explored the value added of reporting 4 trait scores for each of these 2 tasks over the total e-rater score.…

  12. Does overgeneral autobiographical memory result from poor memory for task instructions?

    PubMed

    Yanes, Paula K; Roberts, John E; Carlos, Erica L

    2008-10-01

    Considerable previous research has shown that retrieval of overgeneral autobiographical memories (OGM) is elevated among individuals suffering from various emotional disorders and those with a history of trauma. Although previous theories suggest that OGM serves the function of regulating acute negative affect, it is also possible that OGM results from difficulties in keeping the instruction set for the Autobiographical Memory Test (AMT) in working memory, or what has been coined "secondary goal neglect" (Dalgleish, 2004). The present study tested whether OGM is associated with poor memory for the task's instruction set, and whether an instruction set reminder would improve memory specificity over repeated trials. Multilevel modelling data-analytic techniques demonstrated a significant relationship between poor recall of instruction set and probability of retrieving OGMs. Providing an instruction set reminder for the AMT relative to a control task's instruction set improved memory specificity immediately afterward.

  13. Multiple piezo-patch energy harvesters integrated to a thin plate with AC-DC conversion: analytical modeling and numerical validation

    NASA Astrophysics Data System (ADS)

    Aghakhani, Amirreza; Basdogan, Ipek; Erturk, Alper

    2016-04-01

    Plate-like components are widely used in numerous automotive, marine, and aerospace applications, where they can serve as host structures for vibration-based energy harvesting. Piezoelectric patch harvesters can be easily attached to these structures to convert vibrational energy into electrical energy. Power output investigations of these harvesters require accurate models for energy harvesting performance evaluation and optimization. Equivalent circuit modeling of cantilever-based vibration energy harvesters for estimation of the electrical response has been proposed in recent years. However, equivalent circuit formulation and analytical modeling of multiple piezo-patch energy harvesters integrated to thin plates, including nonlinear circuits, has not been studied. In this study, an equivalent circuit model of multiple parallel piezoelectric patch harvesters together with a resistive load is built in the electronic circuit simulation software SPICE, and the voltage frequency response functions (FRFs) are validated using the analytical distributed-parameter model. An analytical formulation for the DC voltage output of the piezoelectric patches in parallel configuration, connected to a standard AC-DC circuit, is derived. The analytic model is based on the equivalent load impedance approach for the piezoelectric capacitance and AC-DC circuit elements. The analytic results are validated numerically via SPICE simulations. Finally, the DC power outputs of the harvesters are computed and compared with the peak power amplitudes in the AC output case.
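
    The core of the equivalent-circuit picture for a single branch is a current source in parallel with the patch capacitance feeding a load; its voltage FRF has a closed form, sketched below with illustrative component values (the paper's full multi-patch, plate-coupled SPICE model is far richer).

    ```python
    import numpy as np

    def voltage_frf(freq_hz, C_p, R_load, i_amp=1.0):
        """|V| across R_load for a source current i_amp in parallel with
        the patch capacitance C_p: V = i * Z, with Z = R / (1 + j*w*R*C_p)."""
        w = 2.0 * np.pi * np.asarray(freq_hz, float)
        Z = R_load / (1.0 + 1j * w * R_load * C_p)
        return np.abs(i_amp * Z)

    f = np.logspace(1, 4, 200)                 # 10 Hz to 10 kHz
    v = voltage_frf(f, C_p=50e-9, R_load=100e3)
    # For N identical patches in parallel, capacitances add (N * C_p) and
    # source currents sum, shifting the optimal load resistance downward.
    ```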

  14. Predictive Analytical Model for Isolator Shock-Train Location in a Mach 2.2 Direct-Connect Supersonic Combustion Tunnel

    NASA Astrophysics Data System (ADS)

    Lingren, Joe; Vanstone, Leon; Hashemi, Kelley; Gogineni, Sivaram; Donbar, Jeffrey; Akella, Maruthi; Clemens, Noel

    2016-11-01

    This study develops an analytical model for predicting the location of the leading shock of a shock-train in the constant-area isolator section of a Mach 2.2 direct-connect scramjet simulation tunnel. The effective geometry of the isolator is assumed to be a weakly converging duct owing to boundary-layer growth. For a given pressure rise across the isolator, quasi-1D equations for isentropic and normal-shock flows can be used to predict the normal shock location in the isolator. The surface pressure distribution through the isolator was measured during experiments, from which both the actual and predicted shock locations can be determined. Three methods of finding the shock-train location are examined: one based on the measured pressure rise, one using a non-physics-based control model, and one using the physics-based analytical model. It is shown that the analytical model performs better than the non-physics-based model in all cases. The analytic model is less accurate than the pressure-threshold method but requires significantly less information to compute. In contrast to other methods for predicting shock-train location, this method is relatively accurate and requires as little as a single pressure measurement, which makes it potentially useful for unstart control applications.
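
    A possible reading of the quasi-1D recipe, with all geometry and numbers illustrative: invert the normal-shock pressure relation for the pre-shock Mach number, then place the shock where the isentropic Mach distribution of the boundary-layer-corrected duct matches it.

    ```python
    import numpy as np

    GAMMA = 1.4

    def mach_before_shock(p_ratio):
        """Invert p2/p1 = 1 + 2g/(g+1) * (M^2 - 1) for the pre-shock Mach."""
        g = GAMMA
        return np.sqrt((p_ratio - 1.0) * (g + 1.0) / (2.0 * g) + 1.0)

    def area_ratio(M):
        """Isentropic A/A* as a function of Mach number."""
        g = GAMMA
        return (1.0 / M) * ((2.0 + (g - 1.0) * M**2) / (g + 1.0)) ** (
            (g + 1.0) / (2.0 * (g - 1.0)))

    def shock_location(p_ratio, x, A_eff, M_in=2.2):
        """x, A_eff: effective (boundary-layer-corrected) duct geometry."""
        target = area_ratio(mach_before_shock(p_ratio))
        A_star = A_eff[0] / area_ratio(M_in)   # sonic area from the inflow
        return x[np.argmin(np.abs(A_eff / A_star - target))]

    x = np.linspace(0.0, 0.5, 200)             # isolator axis (m)
    A_eff = 0.01 * (1.0 - 0.12 * x / x[-1])    # weakly converging area (m^2)
    print(shock_location(p_ratio=5.0, x=x, A_eff=A_eff))
    ```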

  15. Fourier modeling of the BOLD response to a breath-hold task: Optimization and reproducibility.

    PubMed

    Pinto, Joana; Jorge, João; Sousa, Inês; Vilela, Pedro; Figueiredo, Patrícia

    2016-07-15

    Cerebrovascular reactivity (CVR) reflects the capacity of blood vessels to adjust their caliber in order to maintain a steady supply of brain perfusion, and it may provide a sensitive disease biomarker. Measurement of the blood oxygen level dependent (BOLD) response to a hypercapnia-inducing breath-hold (BH) task has been frequently used to map CVR noninvasively using functional magnetic resonance imaging (fMRI). However, the best modeling approach for the accurate quantification of CVR maps remains an open issue. Here, we compare and optimize Fourier models of the BOLD response to a BH task with a preparatory inspiration, and assess the test-retest reproducibility of the associated CVR measurements, in a group of 10 healthy volunteers studied over two fMRI sessions. Linear combinations of sine-cosine pairs at the BH task frequency and its successive harmonics were added sequentially in a nested-models approach and were compared in terms of the adjusted coefficient of determination and the corresponding variance explained (VE) of the BOLD signal, as well as the number of voxels exhibiting significant BOLD responses, the estimated CVR values, and their test-retest reproducibility. The brain-average VE increased significantly with the Fourier model order, up to the 3rd order. However, the number of responsive voxels increased significantly only up to the 2nd order, and started to decrease from the 3rd order onwards. Moreover, no significant relative underestimation of CVR values was observed beyond the 2nd order. Hence, the 2nd-order model was concluded to be the optimal choice for the studied paradigm. This model also yielded the best test-retest reproducibility results, with intra-subject coefficients of variation of 12% and 16% and an intra-class correlation coefficient of 0.74. In conclusion, our results indicate that a Fourier series set consisting of a sine-cosine pair at the BH task frequency and its two harmonics is a suitable model for BOLD-fMRI CVR measurements.
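
    The regression itself is compact: a design matrix of sine-cosine pairs at the BH frequency and its harmonics, fit by least squares and scored with the adjusted coefficient of determination. The sketch below assumes a generic time axis and task frequency.

    ```python
    import numpy as np

    def fourier_design(t, f_task, order):
        """Intercept plus sine/cosine pairs at f_task and its harmonics
        up to `order` (order=2 was optimal in the study)."""
        cols = [np.ones_like(t)]
        for k in range(1, order + 1):
            cols += [np.sin(2 * np.pi * k * f_task * t),
                     np.cos(2 * np.pi * k * f_task * t)]
        return np.column_stack(cols)

    def adjusted_r2(y, X):
        """Adjusted coefficient of determination of the least-squares fit."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        n, p = X.shape
        ss_res = ((y - X @ beta) ** 2).sum()
        ss_tot = ((y - y.mean()) ** 2).sum()
        return 1.0 - (ss_res / (n - p)) / (ss_tot / (n - 1))

    # Nested-model comparison: increase `order` and track adjusted R^2 and
    # the number of voxels with significant responses.
    ```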

  16. Analytic model for low-frequency noise in nanorod devices.

    PubMed

    Lee, Jungil; Yu, Byung Yong; Han, Ilki; Choi, Kyoung Jin; Ghibaudo, Gerard

    2008-10-01

    In this work, an analytic model for the generation of excess low-frequency noise in nanorod devices such as field-effect transistors is developed. In back-gate field-effect transistors, where most of the surface area of the nanorod is exposed to the ambient, surface states could be the major source of low-frequency (1/f) noise via the random walk of electrons. In dual-gate transistors, interface states and oxide traps can compete as the main noise source via random walk and tunneling, respectively.
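
    The standard route from trap kinetics to a 1/f spectrum (the McWhorter picture) is a superposition of Lorentzians whose time constants are distributed log-uniformly; the sketch below verifies the ~1/f slope numerically and is a generic illustration, not the authors' derivation.

    ```python
    import numpy as np

    def trap_noise_spectrum(f, tau_min=1e-6, tau_max=1e2, n_traps=200):
        """Sum of Lorentzians S ~ tau / (1 + (2*pi*f*tau)^2) over trap time
        constants spaced log-uniformly in [tau_min, tau_max]."""
        taus = np.logspace(np.log10(tau_min), np.log10(tau_max), n_traps)
        f = np.asarray(f, float)[:, None]
        return (taus / (1.0 + (2.0 * np.pi * f * taus) ** 2)).sum(axis=1)

    f = np.logspace(-1, 4, 100)
    S = trap_noise_spectrum(f)
    slope = np.polyfit(np.log10(f[20:80]), np.log10(S[20:80]), 1)[0]
    print(slope)   # close to -1 between the corner frequencies
    ```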

  17. Dynamics of Atmospheric Boundary Layers: Large-Eddy Simulations and Reduced Analytical Models

    NASA Astrophysics Data System (ADS)

    Momen, Mostafa

    Real-world atmospheric and oceanic boundary layers (ABLs) involve many inherent complexities, the understanding and modeling of which manifestly exceed our current capabilities. Previous studies largely focused on the "textbook ABL", which is (quasi-)steady and barotropic. However, it is evident that the "real-world ABL", even over flat terrain, rarely meets such simplifying assumptions. The present thesis aims to illustrate and model four complicating features of ABLs that have been overlooked thus far despite their ubiquity: 1) unsteady pressure gradients in neutral ABLs (Chapters 2 and 3), 2) interacting effects of unsteady pressure gradients and static stability in diabatic ABLs (Chapter 4), 3) time-variable buoyancy fluxes (Chapter 5), and 4) impacts of baroclinicity in neutral and diabatic ABLs (Chapter 6). State-of-the-art large-eddy simulations are used as a tool to explain the underlying physics and to validate the analytical models we develop for these features. Chapter 2 focuses on turbulence equilibrium: when the forcing time scale is comparable to the turbulence time scale, the turbulence is shown to be out of equilibrium and the velocity profiles depart from the log-law; however, for longer, and surprisingly for shorter, forcing times, quasi-equilibrium is maintained. In Chapter 3, a reduced analytical model based on the Navier-Stokes equations is introduced and shown to be analogous to a damped oscillator, where the inertial, Coriolis, and friction forces mirror the mass, spring, and damper, respectively. When a steady buoyancy (stable or unstable) is superposed on the unsteady pressure gradient, the same model structure can be maintained, but the damping term, corresponding to friction forces and vertical coupling, needs to account for stability. However, for the reverse case with variable buoyancy flux and stability, the model needs to be extended to allow a time-variable damper coefficient. These extensions of the analytical model are
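
    The damped-oscillator analogy of Chapter 3 can be written down in a few lines: inertia, friction, and the Coriolis parameter play the mass, damper, and spring for the departure of the mean wind from equilibrium. The coefficients and forcing below are illustrative, not the thesis' calibrated values.

    ```python
    import numpy as np

    def damped_oscillator(t, f_cor, damping, forcing):
        """Integrate u'' + c u' + f^2 u = F(t) with semi-implicit Euler."""
        dt = t[1] - t[0]
        u = np.zeros_like(t)
        v = np.zeros_like(t)                   # v = u'
        for i in range(len(t) - 1):
            a = forcing(t[i]) - damping * v[i] - f_cor**2 * u[i]
            v[i + 1] = v[i] + dt * a
            u[i + 1] = u[i] + dt * v[i + 1]
        return u

    t = np.linspace(0.0, 48 * 3600.0, 20000)   # 48 h of seconds
    omega_f = 2.0 * np.pi / (6 * 3600.0)       # 6-h pressure-gradient forcing
    u = damped_oscillator(t, f_cor=1e-4, damping=5e-5,
                          forcing=lambda s: 1e-3 * np.cos(omega_f * s))
    ```

    A resonant-like response appears when the forcing period approaches the inertial period 2*pi/f_cor, which is the regime where quasi-steady assumptions fail.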

  18. Analytical model of cracking due to rebar corrosion expansion in concrete considering the structure internal force

    NASA Astrophysics Data System (ADS)

    Lin, Xiangyue; Peng, Minli; Lei, Fengming; Tan, Jiangxian; Shi, Huacheng

    2017-12-01

    Based on the assumptions of uniform corrosion and linear elastic expansion, an analytical model of cracking due to rebar corrosion expansion in concrete was established that is able to take the internal forces of the structure into account. By means of the complex variable function theory and series expansion techniques established by Muskhelishvili, the corresponding stress component functions of the concrete around the reinforcement were obtained. A comparative analysis was then conducted between a numerical simulation model and the present model. The results show that the calculation results of the two methods are consistent with each other, with a numerical deviation of less than 10%, demonstrating that the analytical model established in this paper is reliable.
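
    As a first-order check on such models, the classic thick-walled-cylinder (Lame) solution gives the hoop stress in the concrete cover around a corroding bar under internal pressure; the sketch below uses it to estimate the cracking pressure, with illustrative dimensions and tensile strength, and is not the paper's complex-variable solution.

    ```python
    def hoop_stress(r, a, b, p):
        """Lame hoop stress in a concrete ring: inner radius a (bar surface),
        outer radius b (cover), internal pressure p from corrosion expansion:
        sigma_theta(r) = p * a**2 / (b**2 - a**2) * (1 + b**2 / r**2)."""
        return p * a**2 / (b**2 - a**2) * (1.0 + b**2 / r**2)

    # Cracking onset when the inner-face hoop stress reaches the concrete
    # tensile strength f_t (illustrative numbers: 16 mm bar, 30 mm cover).
    a, b, f_t = 0.008, 0.038, 3.0e6            # m, m, Pa
    p_crack = f_t * (b**2 - a**2) / (a**2 + b**2)
    print(p_crack / 1e6, "MPa")
    ```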

  19. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila

    2015-03-10

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using an MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires an order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
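
    Since PSO itself is the named calibration engine, a minimal kernel is worth showing; the hyperparameters below are common textbook choices, and the toy loss stands in for the distance between SAG predictions and the observational constraints.

    ```python
    import numpy as np

    def pso(loss, lo, hi, n_particles=30, n_iters=200,
            w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm: velocities blend inertia, a pull toward
        each particle's own best point, and a pull toward the swarm best."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        x = rng.uniform(lo, hi, (n_particles, len(lo)))
        v = np.zeros_like(x)
        p_best, p_val = x.copy(), np.array([loss(p) for p in x])
        g_best = p_best[p_val.argmin()]
        for _ in range(n_iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
            x = np.clip(x + v, lo, hi)
            val = np.array([loss(p) for p in x])
            better = val < p_val
            p_best[better], p_val[better] = x[better], val[better]
            g_best = p_best[p_val.argmin()]
        return g_best, p_val.min()

    # Toy quadratic in place of the SAM-vs-observations likelihood.
    best, best_val = pso(lambda p: ((p - 0.3) ** 2).sum(), [0.0, 0.0], [1.0, 1.0])
    ```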

  20. An investigation of helicopter dynamic coupling using an analytical model

    NASA Technical Reports Server (NTRS)

    Keller, Jeffrey D.

    1995-01-01

    Many attempts have been made in recent years to predict the off-axis response of a helicopter to control inputs, and most have had little success. Since physical insight is limited by the complexity of numerical simulation models, this paper examines the off-axis response problem using an analytical model, with the goal of understanding the mechanics of the coupling. A new induced velocity model is extended to include the effects of wake distortion from pitch rate. It is shown that the inclusion of these effects results in a significant change in the lateral flapping response to a steady pitch rate. The proposed inflow model is coupled with the full rotor/body dynamics, and comparisons are made between the model and flight test data for a UH-60 in hover. Results show that including induced velocity variations due to shaft rate improves the correlation of the pitch response to lateral cyclic inputs.