The use of analytical models in human-computer interface design
NASA Technical Reports Server (NTRS)
Gugerty, Leo
1993-01-01
Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers with rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential to improve early designs and replace some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models currently being developed and evaluates the usefulness of analytical models for human-computer interface design. The focus is on computational, analytical models, such as the GOMS model, rather than on less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.
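To make the flavor of such models concrete, here is a minimal sketch of a keystroke-level model (KLM), the simplest member of the GOMS family named above. The operator times are the standard published KLM estimates; the task sequence itself is hypothetical.

```python
# Minimal keystroke-level model (KLM) sketch, one member of the GOMS family.
# Operator times (seconds) follow the standard Card, Moran, and Newell
# estimates; the example task sequence is hypothetical.
KLM_TIMES = {
    "K": 0.28,  # keystroke or button press, average skilled typist
    "P": 1.10,  # point with mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(operators: str) -> float:
    """Return the predicted execution time for a sequence of KLM operators."""
    return sum(KLM_TIMES[op] for op in operators)

# Hypothetical task: think, point to a field, click, home to keyboard,
# think, then type a 5-character code.
print(f"predicted time: {klm_estimate('MPKHMKKKKK'):.2f} s")
```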
Mixed Initiative Visual Analytics Using Task-Driven Recommendations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, Kristin A.; Cramer, Nicholas O.; Israel, David
2015-12-07
Visual data analysis is composed of a collection of cognitive actions and tasks to decompose, internalize, and recombine data to produce knowledge and insight. Visual analytic tools provide interactive visual interfaces to data to support tasks involved in discovery and sensemaking, including forming hypotheses, asking questions, and evaluating and organizing evidence. Myriad analytic models can be incorporated into visual analytic systems, at the cost of increasing complexity in the analytic discourse between user and system. Techniques exist to increase the usability of interacting with such analytic models, such as inferring data models from user interactions to steer the underlying models of the system via semantic interaction, shielding users from having to do so explicitly. Such approaches are often also referred to as mixed-initiative systems. Researchers studying the sensemaking process have called for development of tools that facilitate analytic sensemaking through a combination of human and automated activities. However, design guidelines do not exist for mixed-initiative visual analytic systems to support iterative sensemaking. In this paper, we present a candidate set of design guidelines and introduce the Active Data Environment (ADE) prototype, a spatial workspace supporting the analytic process via task recommendations invoked by inferences on user interactions within the workspace. ADE recommends data and relationships based on a task model, enabling users to co-reason with the system about their data in a single, spatial workspace. This paper provides an illustrative use case, a technical description of ADE, and a discussion of the strengths and limitations of the approach.
Slushy weightings for the optimal pilot model [considering visual tracking task]
NASA Technical Reports Server (NTRS)
Dillow, J. D.; Picha, D. G.; Anderson, R. O.
1975-01-01
A pilot model is described which accounts for the effect of motion cues in a well defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Second, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results, and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.
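The quadratic cost function at the heart of such optimal-control pilot models can be made concrete in a few lines. Below is a minimal sketch using SciPy's Riccati solver; the double-integrator plant and the Q/R weightings are illustrative choices, not the values used in this study.

```python
# Minimal sketch of the quadratic-cost core of an optimal-control pilot model.
# The double-integrator plant and the Q/R weightings are illustrative, not the
# values from the study above.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # state: tracking error and error rate
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.1])                   # weights on perceived error / error rate
R = np.array([[0.01]])                    # weight on control effort

P = solve_continuous_are(A, B, Q, R)      # solves A'P + PA - PB R^-1 B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)           # optimal pilot feedback gains
print("pilot gains:", K)
```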
Xiao, Cao; Choi, Edward; Sun, Jimeng
2018-06-08
To conduct a systematic review of deep learning models for electronic health record (EHR) data, and illustrate various deep learning architectures for analyzing different data sources and their target applications. We also highlight ongoing research and identify open challenges in building deep learning models of EHRs. We searched PubMed and Google Scholar for papers on deep learning studies using EHR data published between January 1, 2010, and January 31, 2018. We summarize them according to these axes: types of analytics tasks, types of deep learning model architectures, special challenges arising from health data and tasks and their potential solutions, as well as evaluation strategies. We surveyed and analyzed multiple aspects of the 98 articles we found and identified the following analytics tasks: disease detection/classification, sequential prediction of clinical events, concept embedding, data augmentation, and EHR data privacy. We then studied how deep architectures were applied to these tasks. We also discussed some special challenges arising from modeling EHR data and reviewed a few popular approaches. Finally, we summarized how performance evaluations were conducted for each task. Despite the early success in using deep learning for health analytics applications, there still exist a number of issues to be addressed. We discuss them in detail including data and label availability, the interpretability and transparency of the model, and ease of deployment.
Tracy, J I; Pinsk, M; Helverson, J; Urban, G; Dietz, T; Smith, D J
2001-08-01
The link between automatic and effortful processing and nonanalytic and analytic category learning was evaluated in a sample of 29 college undergraduates using declarative memory, semantic category search, and pseudoword categorization tasks. Automatic and effortful processing measures were hypothesized to be associated with nonanalytic and analytic categorization, respectively. Results suggested that contrary to prediction strong criterion-attribute (analytic) responding on the pseudoword categorization task was associated with strong automatic, implicit memory encoding of frequency-of-occurrence information. Data are discussed in terms of the possibility that criterion-attribute category knowledge, once established, may be expressed with few attentional resources. The data indicate that attention resource requirements, even for the same stimuli and task, vary depending on the category rule system utilized. Also, the automaticity emerging from familiarity with analytic category exemplars is very different from the automaticity arising from extensive practice on a semantic category search task. The data do not support any simple mapping of analytic and nonanalytic forms of category learning onto the automatic and effortful processing dichotomy and challenge simple models of brain asymmetries for such procedures. Copyright 2001 Academic Press.
ERIC Educational Resources Information Center
Jamieson, Joan; Poonpon, Kornwipa
2013-01-01
Research and development of a new type of scoring rubric for the integrated speaking tasks of "TOEFL iBT"® are described. These "analytic rating guides" could be helpful if tasks modeled after those in TOEFL iBT were used for formative assessment, a purpose which is different from TOEFL iBT's primary use for admission…
NASA Technical Reports Server (NTRS)
Degani, Asaf; Mitchell, Christine M.; Chappell, Alan R.; Shafto, Mike (Technical Monitor)
1995-01-01
Task-analytic models structure essential information about operator interaction with complex systems, in this case pilot interaction with the autoflight system. Such models serve two purposes: (1) they allow researchers and practitioners to understand pilots' actions; and (2) they provide a compact, computational representation needed to design 'intelligent' aids, e.g., displays, assistants, and training systems. This paper demonstrates the use of the operator function model to trace the process of mode engagements while a pilot is controlling an aircraft via the autoflight system. The operator function model is a normative and nondeterministic model of how a well-trained, well-motivated operator manages multiple concurrent activities for effective real-time control. For each function, the model links the pilot's actions with the required information. Using the operator function model, this paper describes several mode engagement scenarios. These scenarios were observed and documented during a field study that focused on mode engagements and mode transitions during normal line operations. Data, including time, ATC clearances, altitude, system states, active modes and sub-modes, and mode engagements, were recorded during sixty-six flights. Using these data, seven prototypical mode engagement scenarios were extracted. One scenario details the decision of the crew to disengage a fully automatic mode in favor of a semi-automatic mode, and the consequences of this action. Another describes a mode error involving updating aircraft speed following the engagement of a speed submode. Other scenarios detail mode confusion at various phases of the flight. This analysis uses the operator function model to identify three aspects of mode engagement: (1) the progress of pilot-aircraft-autoflight system interaction; (2) control/display information required to perform mode management activities; and (3) the potential cause(s) of mode confusion. The goal of this paper is twofold: (1) to demonstrate the use of the operator function model methodology to describe pilot-system interaction while engaging modes and monitoring the system, and (2) to initiate a discussion of how task-analytic models might inform design processes. While the operator function model is only one type of task-analytic representation, the hypothesis of this paper is that some type of task-analytic structure is a prerequisite for the design of effective human-automation interaction.
Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R.; Stewart, Walter F.; Malin, Bradley; Sun, Jimeng
2014-01-01
Objective: Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: 1) cohort construction, 2) feature construction, 3) cross-validation, 4) feature selection, and 5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. Methods: To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which 1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, 2) schedules the tasks in a topological ordering of the graph, and 3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. Results: We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000 patient data set in 3 hours in parallel compared to 9 days if running sequentially. Conclusion: This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed-up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines that are specialized for health data researchers. PMID:24370496
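The scheduling machinery described here (dependency graph, topological ordering, parallel execution) can be sketched with Python's standard library alone. The task names and bodies below are placeholders standing in for PARAMO's modeling steps, not its actual code or API.

```python
# Sketch of PARAMO's scheduling idea: build a task dependency graph, take a
# topological ordering, and run tasks whose predecessors are done in parallel.
# Task names and bodies are placeholders, not PARAMO code.
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

def run(name):
    print("running", name)  # stand-in for a real pipeline step

# Hypothetical pipeline: cohort -> features -> CV -> selection -> classification
deps = {
    "features": {"cohort"},
    "cross_validation": {"features"},
    "feature_selection": {"cross_validation"},
    "classification": {"feature_selection"},
}

ts = TopologicalSorter(deps)
ts.prepare()
with ThreadPoolExecutor() as pool:
    while ts.is_active():
        ready = list(ts.get_ready())              # all tasks whose deps are met
        for name, _ in zip(ready, pool.map(run, ready)):
            ts.done(name)                         # unblock downstream tasks
```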
Study of helicopter roll control effectiveness criteria
NASA Technical Reports Server (NTRS)
Heffley, Robert K.; Bourne, Simon M.; Curtiss, Howard C., Jr.; Hindson, William S.; Hess, Ronald A.
1986-01-01
A study of helicopter roll control effectiveness based on closed-loop task performance measurement and modeling is presented. Roll control criteria are based on task margin, the excess of vehicle task performance capability over the pilot's task performance demand. Appropriate helicopter roll axis dynamic models are defined for use with analytic models for task performance. Both near-earth and up-and-away large-amplitude maneuvering phases are considered. The results of in-flight and moving-base simulation measurements are presented to support the roll control effectiveness criteria offered. This volume contains the theoretical analysis, simulation results, and criteria development.
Automation effects in a multiloop manual control system
NASA Technical Reports Server (NTRS)
Hess, R. A.; Mcnally, B. D.
1986-01-01
An experimental and analytical study was undertaken to investigate human interaction with a simple multiloop manual control system in which the human's activity was systematically varied by changing the level of automation. The system simulated was the longitudinal dynamics of a hovering helicopter. The automation systems stabilized vehicle responses from attitude to velocity to position, and also provided for display automation in the form of a flight director. The control-loop structure resulting from the task definition can be considered a simple stereotype of a hierarchical control system. The experimental study was complemented by an analytical modeling effort which utilized simple crossover models of the human operator. It was shown that such models can be extended to the description of multiloop tasks involving preview and precognitive human operator behavior. The existence of time-optimal manual control behavior was established for these tasks, and the role which internal models may play in establishing human-machine performance was discussed.
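The crossover model referenced here asserts that the combined pilot-vehicle open loop behaves like Y_p*Y_c ~ (omega_c/s)*exp(-tau*s) near the crossover frequency. The sketch below evaluates that frequency response; the crossover frequency and time delay are typical textbook values, not estimates from this study.

```python
# Frequency response of McRuer's crossover model, Yp*Yc ~ (w_c/s) e^{-tau s}.
# Crossover frequency and effective time delay are illustrative values only.
import numpy as np

omega_c = 4.0   # crossover frequency, rad/s (assumed)
tau = 0.25      # effective pilot time delay, s (assumed)

w = np.logspace(-1, 1, 5)          # evaluation frequencies, rad/s
s = 1j * w
Y = (omega_c / s) * np.exp(-tau * s)

for wi, yi in zip(w, Y):
    print(f"w={wi:5.2f}  |Y|={abs(yi):6.2f}  phase={np.degrees(np.angle(yi)):7.1f} deg")

# At crossover |Y| = 1 and phase = -90 deg - tau*w_c, so:
pm = 90.0 - np.degrees(tau * omega_c)
print(f"phase margin at crossover: {pm:.1f} deg")
```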
NASA Technical Reports Server (NTRS)
Phatak, A. V.
1980-01-01
A systematic analytical approach to the determination of helicopter IFR precision approach requirements is formulated. The approach is based upon the hypothesis that pilot acceptance level or opinion rating of a given system is inversely related to the degree of pilot involvement in the control task. A nonlinear simulation of the helicopter approach-to-landing task, incorporating appropriate models for the UH-1H aircraft, the environmental disturbances, and the human pilot, was developed as a tool for evaluating the pilot acceptance hypothesis. The simulated pilot model is generic in nature and includes analytical representation of the human information acquisition, processing, and control strategies. Simulation analyses in the flight director mode indicate that the pilot model used is reasonable. Results of the simulation are used to identify candidate pilot workload metrics and to test the well-known performance-workload relationship. A pilot acceptance analytical methodology is formulated as a basis for further investigation, development, and validation.
ERIC Educational Resources Information Center
Zhang, Zhidong
2016-01-01
This study explored an alternative assessment procedure to examine learning trajectories of matrix multiplication. It took rule-based analytical and cognitive task analysis methods specifically to break down operation rules for a given matrix multiplication. Based on the analysis results, a hierarchical Bayesian network, an assessment model,…
Introduction to the IWA task group on biofilm modeling.
Noguera, D R; Morgenroth, E
2004-01-01
An International Water Association (IWA) Task Group on Biofilm Modeling was created with the purpose of comparatively evaluating different biofilm modeling approaches. The task group developed three benchmark problems for this comparison, and used a diversity of modeling techniques that included analytical, pseudo-analytical, and numerical solutions to the biofilm problems. Models in one, two, and three dimensional domains were also compared. The first benchmark problem (BM1) described a monospecies biofilm growing in a completely mixed reactor environment and had the purpose of comparing the ability of the models to predict substrate fluxes and concentrations for a biofilm system of fixed total biomass and fixed biomass density. The second problem (BM2) represented a situation in which substrate mass transport by convection was influenced by the hydrodynamic conditions of the liquid in contact with the biofilm. The third problem (BM3) was designed to compare the ability of the models to simulate multispecies and multisubstrate biofilms. These three benchmark problems allowed identification of the specific advantages and disadvantages of each modeling approach. A detailed presentation of the comparative analyses for each problem is provided elsewhere in these proceedings.
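For the BM1-style geometry, the first-order-kinetics case has a closed-form solution against which numerical biofilm models can be checked. The sketch below evaluates it; the parameter values are illustrative, not those of the benchmark problems.

```python
# Closed-form benchmark in the spirit of BM1: steady substrate diffusion into a
# flat biofilm with first-order consumption, D*S'' = k1*S, zero flux at the
# substratum. Parameter values are illustrative, not the Task Group's.
import math

D = 1.0e-9      # substrate diffusivity in the biofilm, m^2/s (assumed)
k1 = 0.05       # first-order consumption rate constant, 1/s (assumed)
L = 300e-6      # biofilm thickness, m (assumed)
Ss = 5.0        # substrate concentration at the biofilm surface, g/m^3 (assumed)

# Solution: S(x) = Ss * cosh((L - x)/lam) / cosh(L/lam), lam = sqrt(D/k1),
# giving a surface flux J = D*Ss*tanh(L/lam)/lam.
lam = math.sqrt(D / k1)                    # penetration depth, m
flux = D * Ss * math.tanh(L / lam) / lam   # flux into the biofilm, g/(m^2 s)
print(f"penetration depth {lam*1e6:.1f} um, surface flux {flux:.3e} g m^-2 s^-1")
```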
NASA Technical Reports Server (NTRS)
Hess, R. A.
1976-01-01
Paramount to proper utilization of electronic displays is a method for determining pilot-centered display requirements. Display design should be viewed fundamentally as a guidance and control problem which has interactions with the designer's knowledge of human psychomotor activity. From this standpoint, reliable analytical models of human pilots as information processors and controllers can provide valuable insight into the display design process. A relatively straightforward, nearly algorithmic procedure for deriving model-based, pilot-centered display requirements was developed and is presented. The optimal or control theoretic pilot model serves as the backbone of the design methodology, which is specifically directed toward the synthesis of head-down, electronic, cockpit display formats. Some novel applications of the optimal pilot model are discussed. An analytical design example is offered which defines a format for the electronic display to be used in a UH-1H helicopter in a landing approach task involving longitudinal and lateral degrees of freedom.
Assessing Measurement Invariance for Spanish Sentence Repetition and Morphology Elicitation Tasks.
Kapantzoglou, Maria; Thompson, Marilyn S; Gray, Shelley; Restrepo, M Adelaida
2016-04-01
The purpose of this study was to evaluate evidence supporting the construct validity of two grammatical tasks (sentence repetition, morphology elicitation) included in the Spanish Screener for Language Impairment in Children (Restrepo, Gorin, & Gray, 2013). We evaluated if the tasks measured the targeted grammatical skills in the same way across predominantly Spanish-speaking children with typical language development and those with primary language impairment. A multiple-group, confirmatory factor analytic approach was applied to examine factorial invariance in a sample of 307 predominantly Spanish-speaking children (177 with typical language development; 130 with primary language impairment). The 2 newly developed grammatical tasks were modeled as measures in a unidimensional confirmatory factor analytic model along with 3 well-established grammatical measures from the Clinical Evaluation of Language Fundamentals-Fourth Edition, Spanish (Wiig, Semel, & Secord, 2006). Results suggest that both new tasks measured the construct of grammatical skills for both language-ability groups in an equivalent manner. There was no evidence of bias related to children's language status for the Spanish Screener for Language Impairment in Children Sentence Repetition or Morphology Elicitation tasks. Results provide support for the validity of the new tasks as measures of grammatical skills.
Fent, Kenneth W.; Gaines, Linda G. Trelles; Thomasen, Jennifer M.; Flack, Sheila L.; Ding, Kai; Herring, Amy H.; Whittaker, Stephen G.; Nylander-French, Leena A.
2009-01-01
We conducted a repeated exposure-assessment survey for task-based breathing-zone concentrations (BZCs) of monomeric and polymeric 1,6-hexamethylene diisocyanate (HDI) during spray painting on 47 automotive spray painters from North Carolina and Washington State. We report here the use of linear mixed modeling to identify the primary determinants of the measured BZCs. Both one-stage (N = 98 paint tasks) and two-stage (N = 198 paint tasks) filter sampling was used to measure concentrations of HDI, uretidone, biuret, and isocyanurate. The geometric mean (GM) level of isocyanurate (1410 μg m−3) was higher than all other analytes (i.e. GM < 7.85 μg m−3). The mixed models were unique to each analyte and included factors such as analyte-specific paint concentration, airflow in the paint booth, and sampler type. The effect of sampler type was corroborated by side-by-side one- and two-stage personal air sampling (N = 16 paint tasks). According to paired t-tests, significantly higher concentrations of HDI (P = 0.0363) and isocyanurate (P = 0.0035) were measured using one-stage samplers. Marginal R2 statistics were calculated for each model; significant fixed effects were able to describe 25, 52, 54, and 20% of the variability in BZCs of HDI, uretidone, biuret, and isocyanurate, respectively. Mixed models developed in this study characterize the processes governing individual polyisocyanate BZCs. In addition, the mixed models identify ways to reduce polyisocyanate BZCs and, hence, protect painters from potential adverse health effects. PMID:19622637
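A linear mixed model of the kind used here (fixed effects for exposure determinants, a random effect per painter for the repeated measures) can be sketched with statsmodels. The data frame, column names, and coefficients below are synthetic placeholders, not the survey's variables.

```python
# Minimal linear-mixed-model sketch of the determinants analysis: log BZC
# modeled on fixed effects with a random intercept per painter. All data and
# column names are synthetic/illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 198
df = pd.DataFrame({
    "painter": rng.integers(0, 47, n),        # 47 painters, repeated tasks
    "paint_conc": rng.uniform(0.1, 2.0, n),   # analyte-specific paint concentration
    "airflow": rng.uniform(0.2, 1.5, n),      # booth airflow
    "one_stage": rng.integers(0, 2, n),       # sampler type indicator
})
df["log_bzc"] = (1.0 + 0.8 * df.paint_conc - 0.5 * df.airflow
                 + 0.3 * df.one_stage + rng.normal(0, 0.4, n))

model = smf.mixedlm("log_bzc ~ paint_conc + airflow + one_stage",
                    df, groups=df["painter"])
print(model.fit().summary())
```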
NASA Technical Reports Server (NTRS)
Hess, Ronald A.
1999-01-01
This paper presents an analytical and experimental methodology for studying flight simulator fidelity. The original task was a rotorcraft bob-up/down maneuver in which vertical acceleration constituted the motion cue. The task considered here is a side-step maneuver that differs from the bob-up in one important way: both roll and lateral acceleration cues are available to the pilot. It has been communicated to the author that in some Vertical Motion Simulator (VMS) studies, the lateral acceleration cue has been found to be the most important. It is of some interest to hypothesize how this motion cue, associated with "outer-loop" lateral translation, fits into the modeling procedure, where only "inner-loop" motion cues were considered. This Note is an attempt at formulating such a hypothesis and analytically comparing a large-motion simulator, e.g., the VMS, with a small-motion simulator, e.g., a hexapod.
Load sharing in distributed real-time systems with state-change broadcasts
NASA Technical Reports Server (NTRS)
Shin, Kang G.; Chang, Yi-Chieh
1989-01-01
A decentralized dynamic load-sharing (LS) method based on state-change broadcasts is proposed for a distributed real-time system. Whenever the state of a node changes from underloaded to fully loaded and vice versa, the node broadcasts this change to a set of nodes, called a buddy set, in the system. The performance of the method is evaluated with both analytic modeling and simulation. It is modeled first by an embedded Markov chain for which numerical solutions are derived. The model solutions are then used to calculate the distribution of queue lengths at the nodes and the probability of meeting task deadlines. The analytical results show that buddy sets of 10 nodes outperform those of less than 10 nodes, and the incremental benefit gained from increasing the buddy set size beyond 15 nodes is insignificant. These and other analytical results are verified by simulation. The proposed LS method is shown to meet task deadlines with a very high probability.
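The broadcast-and-transfer policy is easy to prototype. Below is a toy discrete-time simulation of the buddy-set idea; the arrival and service probabilities, threshold, and buddy-set size are illustrative, not the paper's parameters.

```python
# Toy discrete-time simulation of buddy-set load sharing: an overloaded node
# ships an arriving task to a buddy whose last broadcast said "underloaded".
# All parameters are illustrative.
import random

random.seed(1)
N, BUDDIES, THRESH = 30, 10, 3          # nodes, buddy-set size, overload threshold
P_ARRIVE, P_SERVE = 0.7, 0.8            # per-node arrival/service prob per tick
queues = [0] * N
buddy = {i: [(i + k) % N for k in range(1, BUDDIES + 1)] for i in range(N)}

transfers = 0
for _ in range(20_000):
    for i in range(N):
        if random.random() < P_ARRIVE:  # task arrives at node i
            j = i
            if queues[i] >= THRESH:     # node i is fully loaded...
                under = [b for b in buddy[i] if queues[b] < THRESH]
                if under:               # ...ship the task to an underloaded buddy
                    j = random.choice(under)
                    transfers += 1
            queues[j] += 1
    for i in range(N):                  # each node serves its queue
        if queues[i] and random.random() < P_SERVE:
            queues[i] -= 1
print("transfers:", transfers, "max queue length:", max(queues))
```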
NASA Technical Reports Server (NTRS)
Hess, R. A.
1977-01-01
A brief review of some of the more pertinent applications of analytical pilot models to the prediction of aircraft handling qualities is undertaken. The relative ease with which multiloop piloting tasks can be modeled via the optimal control formulation makes the use of optimal pilot models particularly attractive for handling qualities research. To this end, a rating hypothesis is introduced which relates the numerical pilot opinion rating assigned to a particular vehicle and task to the numerical value of the index of performance resulting from an optimal pilot modeling procedure as applied to that vehicle and task. This hypothesis is tested using data from piloted simulations and is shown to be reasonable. An example concerning a helicopter landing approach is introduced to outline the predictive capability of the rating hypothesis in multiaxis piloting tasks.
NASA Technical Reports Server (NTRS)
Gallardo, V. C.; Storace, A. S.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.
1981-01-01
The component element method was used to develop a transient dynamic analysis computer program which is essentially based on modal synthesis combined with a central, finite difference, numerical integration scheme. The methodology leads to a modular or building-block technique that is amenable to computer programming. To verify the analytical method, the turbine engine transient response analysis (TETRA) program was applied to two blade-out test vehicles that had been previously instrumented and tested. Comparison of the time-dependent test data with those predicted by TETRA led to recommendations for refinement or extension of the analytical method to improve its accuracy and overcome its shortcomings. The development of the working equations, their discretization, the numerical solution scheme, the modular concept of engine modelling, the program's logical structure, and some illustrative results are discussed. The blade-loss test vehicles (rig and full engine), the type of measured data, and the engine structural model are described.
fMRI activation patterns in an analytic reasoning task: consistency with EEG source localization
NASA Astrophysics Data System (ADS)
Li, Bian; Vasanta, Kalyana C.; O'Boyle, Michael; Baker, Mary C.; Nutter, Brian; Mitra, Sunanda
2010-03-01
Functional magnetic resonance imaging (fMRI) is used to model brain activation patterns associated with various perceptual and cognitive processes as reflected by the hemodynamic (BOLD) response. While many sensory and motor tasks are associated with relatively simple activation patterns in localized regions, higher-order cognitive tasks may produce activity in many different brain areas involving complex neural circuitry. We applied a recently proposed probabilistic independent component analysis technique (PICA) to determine the true dimensionality of the fMRI data and used EEG localization to identify the common activated patterns (mapped as Brodmann areas) associated with a complex cognitive task like analytic reasoning. Our preliminary study suggests that a hybrid GLM/PICA analysis may reveal additional regions of activation (beyond simple GLM) that are consistent with electroencephalography (EEG) source localization patterns.
Foster, Scott D; Feutry, Pierre; Grewe, Peter M; Berry, Oliver; Hui, Francis K C; Davies, Campbell R
2018-06-26
Delineating naturally occurring and self-sustaining sub-populations (stocks) of a species is an important task, especially for species harvested from the wild. Despite its central importance to natural resource management, analytical methods used to delineate stocks are often, and increasingly, borrowed from superficially similar analytical tasks in human genetics, even though models specifically for stock identification have been previously developed. Unfortunately, the analytical tasks in resource management and human genetics are not identical: questions about humans are typically aimed at inferring ancestry (often referred to as 'admixture') rather than breeding stocks. In this article, we argue, and show through simulation experiments and an analysis of yellowfin tuna data, that ancestral analysis methods are not always appropriate for stock delineation. In this work, we advocate a variant of a previously introduced and simpler model that identifies stocks directly. We also highlight that the computational aspects of the analysis, irrespective of the model, are difficult. We introduce some alternative computational methods and quantitatively compare these methods to each other and to established methods. We also present a method for quantifying uncertainty in model parameters and in assignment probabilities. In doing so, we demonstrate that point estimates can be misleading. One of the computational strategies presented here, based on an expectation-maximisation algorithm with judiciously chosen starting values, is robust and has a modest computational cost. This article is protected by copyright. All rights reserved.
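The recommended strategy (expectation-maximisation restarted from judiciously spread starting values, keeping the best solution) can be illustrated on a simple finite mixture. The one-dimensional Gaussian mixture below stands in for the genetic stock model, which shares the same EM structure; all data and values are synthetic.

```python
# EM with multiple judiciously spread starting values, keeping the best fit.
# A 1-D two-component Gaussian mixture stands in for the stock model; all
# data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 200)])

def em(x, mu, n_iter=200):
    w, sd = np.array([0.5, 0.5]), np.array([1.0, 1.0])
    for _ in range(n_iter):
        # E-step: posterior probability that each point belongs to each stock
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means, and spreads
        nk = r.sum(axis=0)
        w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    ll = np.log(dens.sum(axis=1)).sum()
    return ll, mu, r

fits = [em(x, np.array(m)) for m in [(-3.0, 3.0), (-1.0, 1.0), (0.0, 4.0)]]
ll, mu, r = max(fits, key=lambda f: f[0])   # keep the best log-likelihood
print(f"best log-lik {ll:.1f}; stock means {np.round(mu, 2)};",
      f"assignments {np.bincount(r.argmax(axis=1))}")
```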
A methodology for the assessment of manned flight simulator fidelity
NASA Technical Reports Server (NTRS)
Hess, Ronald A.; Malsbury, Terry N.
1989-01-01
A relatively simple analytical methodology for assessing the fidelity of manned flight simulators for specific vehicles and tasks is offered. The methodology is based upon an application of a structural model of the human pilot, including motion cue effects. In particular, predicted pilot/vehicle dynamic characteristics are obtained with and without simulator limitations. A procedure for selecting model parameters can be implemented, given a probable pilot control strategy. In analyzing a pair of piloting tasks for which flight and simulation data are available, the methodology correctly predicted the existence of simulator fidelity problems. The methodology permitted the analytical evaluation of a change in simulator characteristics and indicated that a major source of the fidelity problems was a visual time delay in the simulation.
Modelling of non-equilibrium flow in the branched pipeline systems
NASA Astrophysics Data System (ADS)
Sumskoi, S. I.; Sverchkov, A. M.; Lisanov, M. V.; Egorov, A. F.
2016-09-01
This article presents a mathematical model and a numerical method for solving the problem of water hammer in a branched pipeline system. The problem is considered in a one-dimensional, non-stationary formulation taking into account such realities as changes in the diameter of the pipeline and its branches. By comparison with an existing analytic solution, it is shown that the proposed method possesses good accuracy. With the help of the developed model and numerical method, the problem of the transmission of a complex of compression waves in a branching pipeline system when several shut-off valves operate has been solved. It should be noted that the proposed model and method may readily be applied to a number of other problems, for example, describing the flow of blood in vessels.
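Solvers of this kind are usually built on the method of characteristics. The sketch below implements the frictionless single-pipe case with instantaneous valve closure; the branched-network treatment in the paper adds junction compatibility equations on top of this. All numbers are illustrative.

```python
# Method-of-characteristics sketch for water hammer in a single frictionless
# pipe: reservoir upstream, instantaneous valve closure downstream. Branching
# and diameter changes add junction equations. All values are illustrative.
import numpy as np

a, g = 1000.0, 9.81           # wave speed (m/s), gravity (m/s^2)
L, N = 1000.0, 20             # pipe length (m), number of reaches
H0, V0 = 50.0, 1.0            # initial head (m) and flow velocity (m/s)
dx = L / N
dt = dx / a                   # Courant condition: characteristics hit grid nodes
B = a / g

H = np.full(N + 1, H0)
V = np.full(N + 1, V0)
peak = H0
for _ in range(200):
    Hn, Vn = H.copy(), V.copy()
    for i in range(1, N):             # interior nodes: intersect C+ and C-
        cp = H[i - 1] + B * V[i - 1]  # invariant along C+: H + B*V
        cm = H[i + 1] - B * V[i + 1]  # invariant along C-: H - B*V
        Hn[i], Vn[i] = (cp + cm) / 2, (cp - cm) / (2 * B)
    Hn[0] = H0                                  # upstream reservoir: fixed head
    Vn[0] = (Hn[0] - (H[1] - B * V[1])) / B     # C- characteristic from node 1
    Vn[N] = 0.0                                 # closed valve: zero flow
    Hn[N] = H[N - 1] + B * V[N - 1]             # C+ characteristic from node N-1
    H, V = Hn, Vn
    peak = max(peak, Hn[N])
print(f"peak head at valve: {peak:.1f} m (Joukowsky: {H0 + a * V0 / g:.1f} m)")
```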
Analytic Guided-Search Model of Human Performance Accuracy in Target-Localization Search Tasks
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.
2000-01-01
Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time-consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
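The signal-detection account against which the extended model is compared has a compact closed form for localization accuracy: with one target and M-1 distractors, the observer is correct when the target's noisy response exceeds every distractor's, so P(correct) = integral of phi(x - d') * Phi(x)^(M-1) dx. The sketch below evaluates it numerically.

```python
# Signal-detection localization accuracy: the observer picks the location with
# the largest noisy response, so with one target and M-1 distractors
# P(correct) = integral of phi(x - d') * Phi(x)^(M-1) dx.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def p_correct(d_prime, m):
    f = lambda x: norm.pdf(x - d_prime) * norm.cdf(x) ** (m - 1)
    return quad(f, -np.inf, np.inf)[0]

for d in (0.5, 1.0, 2.0):
    print(f"d'={d}:", [round(p_correct(d, m), 3) for m in (2, 4, 8)])
```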
The heuristic-analytic theory of reasoning: extension and evaluation.
Evans, Jonathan St B T
2006-06-01
An extensively revised heuristic-analytic theory of reasoning is presented incorporating three principles of hypothetical thinking. The theory assumes that reasoning and judgment are facilitated by the formation of epistemic mental models that are generated one at a time (singularity principle) by preconscious heuristic processes that contextualize problems in such a way as to maximize relevance to current goals (relevance principle). Analytic processes evaluate these models but tend to accept them unless there is good reason to reject them (satisficing principle). At a minimum, analytic processing of models is required so as to generate inferences or judgments relevant to the task instructions, but more active intervention may result in modification or replacement of default models generated by the heuristic system. Evidence for this theory is provided by a review of a wide range of literature on thinking and reasoning.
Incorporation of RAM techniques into simulation modeling
NASA Astrophysics Data System (ADS)
Nelson, S. C., Jr.; Haire, M. J.; Schryver, J. C.
1995-01-01
This work concludes that reliability, availability, and maintainability (RAM) analytical techniques can be incorporated into computer network simulation modeling to yield an important new analytical tool. This paper describes the incorporation of failure and repair information into network simulation to build a stochastic computer model to represent the RAM performance of two vehicles being developed for the US Army: the Advanced Field Artillery System (AFAS) and the Future Armored Resupply Vehicle (FARV). The AFAS is the US Army's next-generation self-propelled cannon artillery system. The FARV is a resupply vehicle for the AFAS. Both vehicles utilize automation technologies to improve the operational performance of the vehicles and reduce manpower. The network simulation model used in this work is task based. The model programmed in this application represents a typical battle mission and the failures and repairs that occur during that battle. Each task that the FARV performs--upload, travel to the AFAS, refuel, perform tactical/survivability moves, return to logistic resupply, etc.--is modeled. Such a model reproduces operational phenomena (e.g., failures and repairs) that are likely to occur in actual performance. Simulation tasks are modeled as discrete chronological steps; after the completion of each task, decisions are programmed that determine the next path to be followed. The result is a complex logic diagram or network. The network simulation model is developed within a hierarchy of vehicle systems, subsystems, and equipment, and includes failure management subnetworks. RAM information and other performance measures are collected which have impact on design requirements. Design changes are evaluated through 'what if' questions, sensitivity studies, and battle scenario changes.
Human performance modeling for system of systems analytics: combat performance-shaping factors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawton, Craig R.; Miller, Dwight Peter
The US military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives. To support this goal, Sandia National Laboratories (SNL) has undertaken a program of HPM as an integral augmentation to its system-of-systems (SoS) analytics capabilities. The previous effort, reported in SAND2005-6569, evaluated the effects of soldier cognitive fatigue on SoS performance. The current effort began with a very broad survey of any performance-shaping factors (PSFs) that also might affect soldiers' performance in combat situations. The work included consideration of three different approaches to cognition modeling and how appropriate they would be for application to SoS analytics. The bulk of this report categorizes 47 PSFs into three groups (internal, external, and task-related) and provides brief descriptions of how each affects combat performance, according to the literature. The PSFs were then assembled into a matrix with 22 representative military tasks and assigned one of four levels of estimated negative impact on task performance, based on the literature. Blank versions of the matrix were then sent to two ex-military subject-matter experts to be filled out based on their personal experiences. Data analysis was performed to identify the consensus most influential PSFs. Results indicate that combat-related injury, cognitive fatigue, inadequate training, physical fatigue, thirst, stress, poor perceptual processing, and presence of chemical agents are among the PSFs with the most negative impact on combat performance.
A Petri-net coordination model for an intelligent mobile robot
NASA Technical Reports Server (NTRS)
Wang, F.-Y.; Kyriakopoulos, K. J.; Tsolkas, A.; Saridis, G. N.
1990-01-01
The authors present a Petri net model of the coordination level of an intelligent mobile robot system (IMRS). The purpose of this model is to specify the integration of the individual efforts on path planning, supervisory motion control, and vision systems that are necessary for the autonomous operation of the mobile robot in a structured dynamic environment. This is achieved by analytically modeling the various units of the system as Petri net transducers and explicitly representing the task precedence and information dependence among them. The model can also be used to simulate the task processing and to evaluate the efficiency of operations and the responsibility of decisions in the coordination level of the IMRS. Some simulation results on the task processing and learning are presented.
Effect(s) of Language Tasks on Severity of Disfluencies in Preschool Children with Stuttering
ERIC Educational Resources Information Center
Zamani, Peyman; Ravanbakhsh, Majid; Weisi, Farzad; Rashedi, Vahid; Naderi, Sara; Hosseinzadeh, Ayub; Rezaei, Mohammad
2017-01-01
Speech disfluency in children can be increased or decreased depending on the type of linguistic task presented to them. In this study, the effect of sentence imitation and sentence modeling on severity of speech disfluencies in preschool children with stuttering is investigated. In this cross-sectional descriptive analytical study, 58 children…
The Case for Adopting Server-side Analytics
NASA Astrophysics Data System (ADS)
Tino, C.; Holmes, C. P.; Feigelson, E.; Hurlburt, N. E.
2017-12-01
The standard method for accessing Earth and space science data relies on a scheme developed decades ago: data residing in one or many data stores must be parsed out and shipped via internet lines or physical transport to the researcher who in turn locally stores the data for analysis. The analyses tasks are varied and include visualization, parameterization, and comparison with or assimilation into physics models. In many cases this process is inefficient and unwieldy as the data sets become larger and demands on the analysis tasks become more sophisticated and complex. For about a decade, several groups have explored a new paradigm to this model. The names applied to the paradigm include "data analytics", "climate analytics", and "server-side analytics". The general concept is that in close network proximity to the data store there will be a tailored processing capability appropriate to the type and use of the data served. The user of the server-side analytics will operate on the data with numerical procedures. The procedures can be accessed via canned code, a scripting processor, or an analysis package such as Matlab, IDL or R. Results of the analytics processes will then be relayed via the internet to the user. In practice, these results will be at a much lower volume, easier to transport to and store locally by the user and easier for the user to interoperate with data sets from other remote data stores. The user can also iterate on the processing call to tailor the results as needed. A major component of server-side analytics could be to provide sets of tailored results to end users in order to eliminate the repetitive preconditioning that is both often required with these data sets and which drives much of the throughput challenges. NASA's Big Data Task Force studied this issue. This paper will present the results of this study including examples of SSAs that are being developed and demonstrated and suggestions for architectures that might be developed for future applications.
Holistic rubric vs. analytic rubric for measuring clinical performance levels in medical students.
Yune, So Jung; Lee, Sang Yeoup; Im, Sun Ju; Kam, Bee Sung; Baek, Sun Yong
2018-06-05
Task-specific checklists, holistic rubrics, and analytic rubrics are often used for performance assessments. We examined what factors evaluators consider important in holistic scoring of clinical performance assessment, and compared the usefulness of applying holistic and analytic rubrics respectively, and analytic rubrics in addition to task-specific checklists based on traditional standards. We compared the usefulness of a holistic rubric versus an analytic rubric in effectively measuring the clinical skill performances of 126 third-year medical students who participated in a clinical performance assessment conducted by Pusan National University School of Medicine. We conducted a questionnaire survey of 37 evaluators who used all three evaluation methods (holistic rubric, analytic rubric, and task-specific checklist) for each student. The relationship between the scores on the three evaluation methods was analyzed using Pearson's correlation. Inter-rater agreement was analyzed by Kappa index. The effect of holistic and analytic rubric scores on the task-specific checklist score was analyzed using multiple regression analysis. Evaluators perceived accuracy and proficiency to be major factors in objective structured clinical examinations evaluation, and history taking and physical examination to be major factors in clinical performance examinations evaluation. Holistic rubric scores were highly related to the scores of the task-specific checklist and analytic rubric. Relatively low agreement was found in clinical performance examinations compared to objective structured clinical examinations. Meanwhile, the holistic and analytic rubric scores explained 59.1% of the task-specific checklist score in objective structured clinical examinations and 51.6% in clinical performance examinations. The results show the usefulness of holistic and analytic rubrics in clinical performance assessment, which can be used in conjunction with task-specific checklists for more efficient evaluation.
Student Writing Accepted as High-Quality Responses to Analytic Text-Based Writing Tasks
ERIC Educational Resources Information Center
Wang, Elaine; Matsumura, Lindsay Clare; Correnti, Richard
2018-01-01
Literacy standards increasingly emphasize the importance of analytic text-based writing. Little consensus exists, however, around what high-quality student responses should look like in this genre. In this study, we investigated fifth-grade students' writing in response to analytic text-based writing tasks (15 teachers, 44 writing tasks, 88 pieces…
Kokis, Judite V; Macpherson, Robyn; Toplak, Maggie E; West, Richard F; Stanovich, Keith E
2002-09-01
Developmental and individual differences in the tendency to favor analytic responses over heuristic responses were examined in children of two different ages (10- and 11-year-olds versus 13-year-olds), and of widely varying cognitive ability. Three tasks were examined that all required analytic processing to override heuristic processing: inductive reasoning, deductive reasoning under conditions of belief bias, and probabilistic reasoning. Significant increases in analytic responding with development were observed on the first two tasks. Cognitive ability was associated with analytic responding on all three tasks. Cognitive style measures such as actively open-minded thinking and need for cognition explained variance in analytic responding on the tasks after variance shared with cognitive ability had been controlled. The implications for dual-process theories of cognition and cognitive development are discussed.
The Ophidia Stack: Toward Large Scale, Big Data Analytics Experiments for Climate Change
NASA Astrophysics Data System (ADS)
Fiore, S.; Williams, D. N.; D'Anca, A.; Nassisi, P.; Aloisio, G.
2015-12-01
The Ophidia project is a research effort on big data analytics facing scientific data analysis challenges in multiple domains (e.g. climate change). It provides a "datacube-oriented" framework responsible for atomically processing and manipulating scientific datasets, by providing a common way to run distributed tasks on large sets of data fragments (chunks). Ophidia provides declarative, server-side, and parallel data analysis, jointly with an internal storage model able to efficiently deal with multidimensional data and a hierarchical data organization to manage large data volumes. The project relies on a strong background on high performance database management and On-Line Analytical Processing (OLAP) systems to manage large scientific datasets. The Ophidia analytics platform provides several data operators to manipulate datacubes (about 50), and array-based primitives (more than 100) to perform data analysis on large scientific data arrays. To address interoperability, Ophidia provides multiple server interfaces (e.g. OGC-WPS). From a client standpoint, a Python interface enables the exploitation of the framework into Python-based ecosystems/applications (e.g. IPython) and the straightforward adoption of a strong set of related libraries (e.g. SciPy, NumPy). The talk will highlight a key feature of the Ophidia framework stack: the "Analytics Workflow Management System" (AWfMS). The Ophidia AWfMS coordinates, orchestrates, optimises and monitors the execution of multiple scientific data analytics and visualization tasks, thus supporting "complex analytics experiments". Some real use cases related to the CMIP5 experiment will be discussed. In particular, with regard to the "Climate models intercomparison data analysis" case study proposed in the EU H2020 INDIGO-DataCloud project, workflows related to (i) anomalies, (ii) trend, and (iii) climate change signal analysis will be presented. Such workflows will be distributed across multiple sites, according to the datasets distribution, and will include intercomparison, ensemble, and outlier analysis. The two-level workflow solution envisioned in INDIGO (coarse-grained for distributed task orchestration, and fine-grained at the level of a single data analytics cluster instance) will be presented and discussed.
Perspectives on bioanalytical mass spectrometry and automation in drug discovery.
Janiszewski, John S; Liston, Theodore E; Cole, Mark J
2008-11-01
The use of high speed synthesis technologies has resulted in a steady increase in the number of new chemical entities active in the drug discovery research stream. Large organizations can have thousands of chemical entities in various stages of testing and evaluation across numerous projects on a weekly basis. Qualitative and quantitative measurements made using LC/MS are integrated throughout this process from early stage lead generation through candidate nomination. Nearly all analytical processes and procedures in modern research organizations are automated to some degree. This includes both hardware and software automation. In this review we discuss bioanalytical mass spectrometry and automation as components of the analytical chemistry infrastructure in pharma. Analytical chemists are presented as members of distinct groups with similar skillsets that build automated systems, manage test compounds, assays and reagents, and deliver data to project teams. The ADME-screening process in drug discovery is used as a model to highlight the relationships between analytical tasks in drug discovery. Emerging software and process automation tools are described that can potentially address gaps and link analytical chemistry related tasks. The role of analytical chemists and groups in modern 'industrialized' drug discovery is also discussed.
Solving the optimal attention allocation problem in manual control
NASA Technical Reports Server (NTRS)
Kleinman, D. L.
1976-01-01
Within the context of the optimal control model of human response, analytic expressions for the gradients of closed-loop performance metrics with respect to human operator attention allocation are derived. These derivatives serve as the basis for a gradient algorithm that determines the optimal attention that a human should allocate among several display indicators in a steady-state manual control task. Application of the human modeling techniques is made to study the hover control task for a CH-46 VTOL flight-tested by NASA.
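The structure of such a gradient algorithm (iterate on attention fractions, which must stay nonnegative and sum to one) can be sketched generically. The cost function below, observation noise growing as attention shrinks, is an illustrative stand-in, not Kleinman's closed-loop expressions.

```python
# Generic projected-gradient sketch of attention allocation: minimize a
# synthetic cost J(f) over fractions f_i >= 0 with sum(f) = 1. The cost is an
# illustrative stand-in, not the closed-loop metric derived in the paper.
import numpy as np

sigma2 = np.array([1.0, 2.0, 4.0])        # display-channel noise scales (assumed)

def cost(f):
    return np.sum(sigma2 / np.maximum(f, 1e-9))   # cost grows as attention shrinks

def project_simplex(v):
    """Euclidean projection onto {f : f >= 0, sum f = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(v - theta, 0)

f = np.full(3, 1 / 3)                     # start from equal attention
for _ in range(500):
    grad = -sigma2 / f ** 2               # dJ/df_i
    f = project_simplex(f - 0.01 * grad)  # gradient step, then back to simplex
print("optimal attention split:", f.round(3), "cost:", round(cost(f), 2))
```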
Should metacognition be measured by logistic regression?
Rausch, Manuel; Zehetleitner, Michael
2017-03-01
Are logistic regression slopes suitable to quantify metacognitive sensitivity, i.e. the efficiency with which subjective reports differentiate between correct and incorrect task responses? We analytically show that logistic regression slopes are independent from rating criteria in one specific model of metacognition, which assumes (i) that rating decisions are based on sensory evidence generated independently of the sensory evidence used for primary task responses and (ii) that the distributions of evidence are logistic. Given a hierarchical model of metacognition, logistic regression slopes depend on rating criteria. According to all considered models, regression slopes depend on the primary task criterion. A reanalysis of previous data revealed that massive numbers of trials are required to distinguish between hierarchical and independent models with tolerable accuracy. It is argued that researchers who wish to use logistic regression as measure of metacognitive sensitivity need to control the primary task criterion and rating criteria. Copyright © 2017 Elsevier Inc. All rights reserved.
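The measure under discussion is simply the slope from a logistic regression of response correctness on confidence, fitted per subject. The sketch below fits it to synthetic data generated from a simple evidence model in the spirit of the paper's setup; all parameters and the generating process are illustrative.

```python
# Logistic-regression slope as a candidate metacognitive sensitivity measure:
# regress trial correctness on confidence rating. Synthetic data from a simple
# evidence model; all parameters are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, d = 2000, 1.0
stim = rng.integers(0, 2, n)                      # which alternative was shown
evid = rng.logistic(d * (2 * stim - 1), 1.0, n)   # noisy decision evidence
correct = ((evid > 0).astype(int) == stim).astype(int)
# confidence rating 1-4, driven by evidence strength plus extra noise
rating = np.clip(np.round(np.abs(evid) + rng.normal(0, 0.5, n)), 1, 4)

fit = sm.Logit(correct, sm.add_constant(rating)).fit(disp=0)
print(f"logistic slope (candidate metacognitive sensitivity): {fit.params[1]:.3f}")
```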
An Integrative Wave Model for the Marginal Ice Zone Based on a Rheological Parameterization
2015-09-30
(2015) Characterizing the behavior of gravity wave propagation into a floating or submerged viscous layer, 2015 AGU Joint Assembly Meeting, May 3-7... are the PI and a PhD student. Task 1: Use an analytical method to determine the propagation of waves through a floating viscoelastic mat for a wide... and Ben Holt. Task 3: Assemble all existing laboratory and field data of wave propagation in ice covers. Task 4: Determine if all existing...
Xie, Weizhen; Cappiello, Marcus; Meng, Ming; Rosenthal, Robert; Zhang, Weiwei
2018-05-08
This meta-analytical review examines whether a deletion variant in ADRA2B, a gene that encodes the α2B adrenoceptor involved in regulating norepinephrine availability, influences cognitive processing of emotional information in human observers. Using a multilevel modeling approach, this meta-analysis of 16 published studies with a total of 2,752 participants showed that the ADRA2B deletion variant was significantly associated with enhanced perceptual and cognitive task performance for emotional stimuli. In contrast, this genetic effect did not manifest in overall task performance when non-emotional content was used. Furthermore, various study-level factors, such as targeted cognitive processes (memory vs. attention/perception) and task procedures (recall vs. recognition), could moderate the size of this genetic effect. Overall, with increased statistical power and standardized analytical procedures, this meta-analysis has established the contributions of ADRA2B to the interactions between emotion and cognition, adding to the growing literature on individual differences in attention, perception, and memory for emotional information in the general population. Copyright © 2018 Elsevier Ltd. All rights reserved.
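The pooling step in such a meta-analysis can be sketched in a few lines. Below is the classic DerSimonian-Laird random-effects estimator, a simpler relative of the multilevel model the review actually fits; the effect sizes and variances are made up for illustration.

```python
# DerSimonian-Laird random-effects meta-analysis, a simple relative of the
# multilevel approach used in the review. Effect sizes/variances are synthetic.
import numpy as np

y = np.array([0.30, 0.15, 0.45, 0.05, 0.25])   # per-study effect sizes (made up)
v = np.array([0.02, 0.03, 0.05, 0.01, 0.04])   # per-study sampling variances

w = 1 / v                                       # fixed-effect weights
y_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fe) ** 2)                 # heterogeneity statistic
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / c)         # between-study variance

w_re = 1 / (v + tau2)                           # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled effect {y_re:.3f} (SE {se:.3f}), tau^2 = {tau2:.4f}")
```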
Twelfth Annual Conference on Manual Control
NASA Technical Reports Server (NTRS)
Wempe, T. E.
1976-01-01
Main topics discussed cover multi-task decision making, attention allocation and workload measurement, displays and controls, nonvisual displays, tracking and other psychomotor tasks, automobile driving, handling qualities and pilot ratings, remote manipulation, system identification, control models, and motion and visual cues. Sixty-five papers are included, with presentations on results of analytical studies to develop and evaluate human operator models for a range of control tasks, vehicle dynamics, and display situations; results of tests of physiological control systems and applications to medical problems; and results of simulator and flight tests to determine display, control, and dynamics effects on operator performance and workload for aircraft, automobile, and remote control systems.
The use of decision analysis to examine ethical decision making by critical care nurses.
Hughes, K K; Dvorak, E M
1997-01-01
To examine the extent to which critical care staff nurses make ethical decisions that coincide with those recommended by a decision analytic model. Nonexperimental, ex post facto. Midwestern university-affiliated 500-bed tertiary care medical center. One hundred critical care staff nurses randomly selected from seven critical care units. Complete responses were obtained from 82 nurses (for a final response rate of 82%). The dependent variable--consistent decision making--was measured as staff nurses' abilities to make ethical decisions that coincided with those prescribed by the decision model. Subjects completed two instruments: the Ethical Decision Analytic Model, a computer-administered instrument designed to measure staff nurses' abilities to make consistent decisions about a chemically-impaired colleague; and a Background Inventory. The results indicate marked consensus among nurses when informal methods were used. However, there was little consistency between the nurses' informal decisions and those recommended by the decision analytic model. Although 50% (n = 41) of all nurses chose a course of action that coincided with the model's least optimal alternative, few nurses agreed with the model as to the most optimal course of action. The findings also suggest that consistency was unrelated (p > 0.05) to the nurses' educational background or years of clinical experience; that most subjects reported receiving little or no education in decision making during their basic nursing education programs; but that exposure to decision-making strategies was related to years of nursing experience (p < 0.05). The findings differ from related studies that have found a moderate degree of consistency between nurses and decision analytic models for strictly clinical decision tasks, especially when those tasks were less complex. However, the findings partially coincide with other findings that decision analysis may not be particularly well-suited to the critical care environment. Additional research is needed to determine whether critical care nurses use the same decision-making methods as do other nurses, and to clarify the effects of decision task (clinical versus ethical) on nurses' decision making. It should not be assumed that methods used to study nurses' clinical decision making are applicable for all nurses or all types of decisions, including ethical decisions.
Semantic Interaction for Sensemaking: Inferring Analytical Reasoning for Model Steering.
Endert, A; Fiaux, P; North, C
2012-12-01
Visual analytic tools aim to support the cognitively demanding task of sensemaking. Their success often depends on the ability to leverage capabilities of mathematical models, visualization, and human intuition through flexible, usable, and expressive interactions. Spatially clustering data is one effective metaphor for users to explore similarity and relationships between information, adjusting the weighting of dimensions or characteristics of the dataset to observe the change in the spatial layout. Semantic interaction is an approach to user interaction in such spatializations that couples these parametric modifications of the clustering model with users' analytic operations on the data (e.g., direct document movement in the spatialization, highlighting text, search, etc.). In this paper, we present results of a user study exploring the ability of semantic interaction in a visual analytic prototype, ForceSPIRE, to support sensemaking. We found that semantic interaction captures the analytical reasoning of the user through keyword weighting, and aids the user in co-creating a spatialization based on the user's reasoning and intuition.
Role of optimization in the human dynamics of task execution
NASA Astrophysics Data System (ADS)
Cajueiro, Daniel O.; Maldonado, Wilfredo L.
2008-03-01
In order to explain the empirical evidence that the dynamics of human activity may not be well modeled by Poisson processes, a model based on queuing processes was built in the literature [A. L. Barabasi, Nature (London) 435, 207 (2005)]. The main assumption behind that model is that people execute their tasks using a protocol that always executes the highest-priority item first. In this context, the purpose of this paper is to analyze the validity of that hypothesis, assuming that people are rational agents who make their decisions in order to minimize the cost of keeping nonexecuted tasks on the list. Therefore, we build and analytically solve a dynamic programming model with two priority types of tasks and show that the validity of this hypothesis depends strongly on the structure of the instantaneous costs that a person faces if a given task is kept on the list for more than one period. Moreover, one interesting finding is that in one of the situations the protocol used to execute the tasks generates complex one-dimensional dynamics.
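The flavor of such an analysis can be conveyed with a small value-iteration model. Everything below is a stylized stand-in for the authors' formulation: two task types accumulate on a list, one task is executed per period, one new task arrives per period, and the per-period holding costs and discount factor are hypothetical.

    import itertools

    CAP = 5                # cap on queued tasks of each type (truncated state space)
    P_HIGH = 0.5           # probability that the arriving task is high priority
    C_H, C_L = 3.0, 1.0    # hypothetical per-period holding costs
    BETA = 0.95            # discount factor

    states = list(itertools.product(range(CAP + 1), repeat=2))  # (high, low) queued

    def step(h, l, action):
        """Execute one task of the chosen type, if available."""
        if action == "h" and h > 0:
            return h - 1, l
        if action == "l" and l > 0:
            return h, l - 1
        return h, l

    def q_value(V, h, l, action):
        h2, l2 = step(h, l, action)
        cost = C_H * h2 + C_L * l2          # holding cost on the remaining list
        cont = (P_HIGH * V[min(h2 + 1, CAP), l2]
                + (1 - P_HIGH) * V[h2, min(l2 + 1, CAP)])  # expected next arrival
        return cost + BETA * cont

    V = {s: 0.0 for s in states}
    for _ in range(500):                    # value iteration to a near fixed point
        V = {(h, l): min(q_value(V, h, l, a) for a in "hl") for h, l in states}

    # In a mixed state, is the highest-priority-first protocol optimal?
    for a in "hl":
        print(f"action {a} from state (2, 2): cost-to-go {q_value(V, 2, 2, a):.2f}")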
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-18
... during conference calls and via email discussions. Member duties include prioritizing topics, designing... their expertise in methodological issues such as meta-analysis, analytic modeling or clinical...
De Neys, Wim
2006-06-01
Human reasoning has been shown to overly rely on intuitive, heuristic processing instead of a more demanding analytic inference process. Four experiments tested the central claim of current dual-process theories that analytic operations involve time-consuming executive processing whereas the heuristic system would operate automatically. Participants solved conjunction fallacy problems and indicative and deontic selection tasks. Experiment 1 established that making correct analytic inferences demanded more processing time than did making heuristic inferences. Experiment 2 showed that burdening the executive resources with an attention-demanding secondary task decreased correct, analytic responding and boosted the rate of conjunction fallacies and indicative matching card selections. Results were replicated in Experiments 3 and 4 with a different secondary-task procedure. Involvement of executive resources for the deontic selection task was less clear. Findings validate basic processing assumptions of the dual-process framework and complete the correlational research programme of K. E. Stanovich and R. F. West (2000).
Ross, Robert M; Pennycook, Gordon; McKay, Ryan; Gervais, Will M; Langdon, Robyn; Coltheart, Max
2016-07-01
It has been proposed that deluded and delusion-prone individuals gather less evidence before forming beliefs than those who are not deluded or delusion-prone. The primary source of evidence for this "jumping to conclusions" (JTC) bias is provided by research that utilises the "beads task" data-gathering paradigm. However, the cognitive mechanisms subserving data gathering in this task are poorly understood. In the largest published beads task study to date (n = 558), we examined data gathering in the context of influential dual-process theories of reasoning. Analytic cognitive style (the willingness or disposition to critically evaluate outputs from intuitive processing and engage in effortful analytic processing) predicted data gathering in a non-clinical sample, but delusional ideation did not. The relationship between data gathering and analytic cognitive style suggests that dual-process theories of reasoning can contribute to our understanding of the beads task. It is not clear why delusional ideation was not found to be associated with data gathering or analytic cognitive style.
Models of unit operations used for solid-waste processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savage, G.M.; Glaub, J.C.; Diaz, L.F.
1984-09-01
This report documents the unit operations models that have been developed for typical refuse-derived-fuel (RDF) processing systems. These models, which represent the mass balances, energy requirements, and economics of the unit operations, are derived, where possible, from basic principles. Empiricism has been invoked where a governing theory has yet to be developed. Field test data and manufacturers' information, where available, supplement the analytical development of the models. A literature review has also been included for the purpose of compiling and discussing in one document the available information pertaining to the modeling of front-end unit operations. Separate analyses have been done for each task.
Lessons Learned from Deploying an Analytical Task Management Database
NASA Technical Reports Server (NTRS)
O'Neil, Daniel A.; Welch, Clara; Arceneaux, Joshua; Bulgatz, Dennis; Hunt, Mitch; Young, Stephen
2007-01-01
Defining requirements, missions, technologies, and concepts for space exploration involves multiple levels of organizations, teams of people with complementary skills, and analytical models and simulations. Analytical activities range from filling a To-Be-Determined (TBD) in a requirement to creating animations and simulations of exploration missions. In a program as large as returning to the Moon, there are hundreds of simultaneous analysis activities. A way to manage and integrate efforts of this magnitude is to deploy a centralized database that provides the capability to define tasks, identify resources, describe products, schedule deliveries, and generate a variety of reports. This paper describes a web-accessible task management system and explains the lessons learned during the development and deployment of the database. Through the database, managers and team leaders can define tasks, establish review schedules, assign teams, link tasks to specific requirements, identify products, and link the task data records to external repositories that contain the products. Data filters and spreadsheet export utilities provide a powerful capability to create custom reports. Import utilities provide a means to populate the database from previously filled form files. Within a four-month period, a small team analyzed requirements, developed a prototype, conducted multiple system demonstrations, and deployed a working system supporting hundreds of users across the aerospace community. Open-source technologies and agile software development techniques, applied by a skilled team, enabled this impressive achievement. Topics in the paper cover the web application technologies, agile software development, an overview of the system's functions and features, dealing with increasing scope, and deploying new versions of the system.
Predicting the size of individual and group differences on speeded cognitive tasks.
Chen, Jing; Hale, Sandra; Myerson, Joel
2007-06-01
An a priori test of the difference engine model (Myerson, Hale, Zheng, Jenkins, & Widaman, 2003) was conducted using a large, diverse sample of individuals who performed three speeded verbal tasks and three speeded visuospatial tasks. Results demonstrated that, as predicted by the model, the group standard deviation (SD) on any task was proportional to the amount of processing required by that task. Both individual performances and those of fast and slow subgroups could be accurately predicted by the model using no free parameters, just an individual's or subgroup's mean z-score and the values of theoretical constructs estimated from fits to the group SDs. Taken together, these results are consistent with post hoc analyses reported by Myerson et al. and provide even stronger supporting evidence. In particular, the ability to make quantitative predictions without using any free parameters provides the clearest demonstration to date of the power of an analytic approach based on the difference engine.
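The model's parameter-free prediction style can be shown in a few lines: if the group SD on each task is proportional to the processing the task requires (indexed here by how far the group mean lies above a sensorimotor baseline), an individual's predicted latency follows from a single overall z-score. The baseline, proportionality constant, and task means below are hypothetical, not values fitted in the study.

    # Difference-engine-style prediction: RT(i, t) = mean_t + z_i * SD_t,
    # with SD_t assumed proportional to the processing required by task t.
    BASELINE = 300.0   # hypothetical sensorimotor time (ms)
    K = 0.20           # hypothetical proportionality constant for SD_t

    task_means = {"verbal-easy": 550.0, "verbal-hard": 900.0, "spatial-hard": 1200.0}

    def predict_rt(z_score, task):
        sd_t = K * (task_means[task] - BASELINE)  # SD grows with processing demand
        return task_means[task] + z_score * sd_t

    for task in task_means:
        print(f"{task:>12}: fast subgroup (z=-1) {predict_rt(-1.0, task):6.0f} ms, "
              f"slow subgroup (z=+1) {predict_rt(+1.0, task):6.0f} ms")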
Multiloop Manual Control of Dynamic Systems
NASA Technical Reports Server (NTRS)
Hess, R. A.; Mcnally, B. D.
1984-01-01
Human interaction with a simple, multiloop dynamic system in which the human's activity was systematically varied by changing the levels of automation was studied. The control loop structure resulting from the task definition parallels that of any multiloop manual control system and can be considered a stereotype. Emphasis was placed on developing simple models of the human in the task and on extending a technique for describing the manner in which the human subjectively quantifies his opinion of task difficulty. A man-in-the-loop simulation that provides data to support and direct the analytical effort is also presented.
Issues in Developing a Normative Descriptive Model for Dyadic Decision Making
NASA Technical Reports Server (NTRS)
Serfaty, D.; Kleinman, D. L.
1984-01-01
Most research in modelling human information processing and decision making has been devoted to the case of the single human operator. In the present effort, concepts from the fields of organizational behavior, engineering psychology, team theory, and mathematical modelling are merged in an attempt to consider first the case of two cooperating decision makers (the Dyad) in a multi-task environment. Rooted in the well-known Dynamic Decision Model (DDM), the normative-descriptive approach brings basic cognitive and psychophysical characteristics inherent to human behavior into a team-theoretic analytic framework. An experimental paradigm, involving teams in dynamic decision making tasks, is designed to produce the data with which to build the theoretical model.
Dabek, Filip; Caban, Jesus J
2017-01-01
Despite the recent popularity of visual analytics focusing on big data, little is known about how to support users who use visualization techniques to explore multi-dimensional datasets and accomplish specific tasks. Our lack of models that can assist end-users during the data exploration process has made it challenging to learn from the user's interactive and analytical process. The ability to model how a user interacts with a specific visualization technique and what difficulties they face is paramount in supporting individuals with discovering new patterns within their complex datasets. This paper introduces the notion of visualization systems that understand and model user interactions with the intent of guiding users through a task, thereby enhancing visual data exploration. The challenges faced and the necessary future steps are discussed; to provide a working example, a grammar-based model is presented that can learn from user interactions, determine the common patterns among a number of subjects using a K-Reversible algorithm, build a set of rules, and apply those rules in the form of suggestions to new users with the goal of guiding them along their visual analytic process. A formal evaluation study with 300 subjects was performed, showing that our grammar-based model is effective at capturing the interactive process followed by users and that further research in this area has the potential to positively impact how users interact with a visualization system.
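The grammar-induction step can be evoked with a toy regular-inference routine. The sketch below implements zero-reversible learning in the style of Angluin, i.e., the k = 0 case of the k-reversible family named in the abstract: build a prefix-tree acceptor from positive traces, merge all accepting states, then merge states until the automaton and its reverse are both deterministic. The interaction-event traces are hypothetical, and this is not the paper's implementation.

    from collections import defaultdict

    def find(parent, x):
        root = x
        while parent.get(root, root) != root:
            root = parent[root]
        while parent.get(x, x) != root:        # path compression
            parent[x], x = root, parent[x]
        return root

    def learn_zero_reversible(traces):
        """Infer a 0-reversible automaton from positive interaction traces."""
        trans, finals, nxt = {}, set(), 1
        for seq in traces:                     # build the prefix-tree acceptor
            s = 0
            for sym in seq:
                if (s, sym) not in trans:
                    trans[(s, sym)] = nxt
                    nxt += 1
                s = trans[(s, sym)]
            finals.add(s)
        parent = {}
        flist = sorted(finals)
        for f in flist[1:]:                    # merge all accepting states
            parent[find(parent, f)] = find(parent, flist[0])
        changed = True
        while changed:                         # enforce forward/backward determinism
            changed = False
            groups = defaultdict(set)
            for (s, sym), t in trans.items():
                groups[("fwd", find(parent, s), sym)].add(find(parent, t))
                groups[("bwd", find(parent, t), sym)].add(find(parent, s))
            for members in groups.values():
                first, *rest = sorted(members)
                for g in rest:
                    a, b = find(parent, first), find(parent, g)
                    if a != b:
                        parent[a] = b
                        changed = True
        delta = {(find(parent, s), sym): find(parent, t) for (s, sym), t in trans.items()}
        return delta, find(parent, 0), {find(parent, f) for f in finals}

    # Hypothetical traces of interaction events in a visualization tool
    delta, start, accept = learn_zero_reversible(
        [["load", "filter", "plot"], ["load", "filter", "filter", "plot"]])
    print(delta)   # generalizes to: load, any number of filters, then plot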
Building analytical three-field cosmological models
NASA Astrophysics Data System (ADS)
Santos, J. R. L.; Moraes, P. H. R. S.; Ferreira, D. A.; Neta, D. C. Vilar
2018-02-01
A difficult task to deal with is the analytical treatment of models composed of three real scalar fields, as their equations of motion are in general coupled and hard to integrate. In order to overcome this problem we introduce a methodology to construct three-field models based on the so-called "extension method". The fundamental idea of the procedure is to combine three one-field systems in a non-trivial way, to construct an effective three scalar field model. An interesting scenario where the method can be implemented is with inflationary models, where the Einstein-Hilbert Lagrangian is coupled with the scalar field Lagrangian. We exemplify how a new model constructed from our method can lead to non-trivial behaviors for cosmological parameters.
Espy, K A; Kaufmann, P M; McDiarmid, M D; Glisky, M L
1999-11-01
The A-not-B (AB) task has been hypothesized to measure executive/frontal lobe function; however, the developmental and measurement characteristics of this task have not been investigated. Performance on AB and comparison tasks adapted from the developmental and neuroscience literatures was examined in 117 preschool children (ages 23-66 months). Age significantly predicted performance on AB, Delayed Alternation, Spatial Reversal, Color Reversal, and Self-Control tasks. A four-factor analytic model best fit the task performance data. AB task indices loaded on two factors with measures from the Self-Control and Delayed Alternation tasks, respectively. AB indices did not load with those from the reversal tasks despite similarities in task administration and presumed cognitive demand (working memory). These results indicate that AB is sensitive to individual differences in age-related performance in preschool children and suggest that AB performance is related to both working memory and inhibition processes in this age range.
Specialized data analysis of SSME and advanced propulsion system vibration measurements
NASA Technical Reports Server (NTRS)
Coffin, Thomas; Swanson, Wayne L.; Jong, Yen-Yi
1993-01-01
The basic objectives of this contract were to perform detailed analysis and evaluation of dynamic data obtained during Space Shuttle Main Engine (SSME) test and flight operations, including analytical/statistical assessment of component dynamic performance, and to continue the development and implementation of analytical/statistical models to effectively define nominal component dynamic characteristics, detect anomalous behavior, and assess machinery operational conditions. The study was intended to provide timely assessment of engine component operational status, identify probable causes of malfunction, and define feasible engineering solutions. The work was performed under three broad tasks: (1) Analysis, Evaluation, and Documentation of SSME Dynamic Test Results; (2) Data Base and Analytical Model Development and Application; and (3) Development and Application of Vibration Signature Analysis Techniques.
A flexible influence of affective feelings on creative and analytic performance.
Huntsinger, Jeffrey R; Ray, Cara
2016-09-01
Considerable research shows that positive affect improves performance on creative tasks and negative affect improves performance on analytic tasks. The present research entertained the idea that affective feelings have flexible, rather than fixed, effects on cognitive performance. Consistent with the idea that positive and negative affect signal the value of accessible processing inclinations, the influence of affective feelings on performance on analytic or creative tasks was found to be flexibly responsive to the relative accessibility of different styles of processing (i.e., heuristic vs. systematic, global vs. local). When a global processing orientation was accessible happy participants generated more creative uses for a brick (Experiment 1), successfully solved more remote associates and insight problems (Experiment 2) and displayed broader categorization (Experiment 3) than those in sad moods. When a local processing orientation was accessible this pattern reversed. When a heuristic processing style was accessible happy participants were more likely to commit the conjunction fallacy (Experiment 3) and showed less pronounced anchoring effects (Experiment 4) than sad participants. When a systematic processing style was accessible this pattern reversed. Implications of these results for relevant affect-cognition models are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
A parallel computing engine for a class of time critical processes.
Nabhan, T M; Zomaya, A Y
1997-01-01
This paper focuses on the efficient parallel implementation of systems of a numerically intensive nature over loosely coupled multiprocessor architectures. These analytical models are of significant importance to many real-time systems that have to meet severe time constraints. A parallel computing engine (PCE) has been developed in this work for the efficient simplification and the near-optimal scheduling of numerical models over the different cooperating processors of the parallel computer. First, the analytical system is efficiently coded in its general form. The model is then simplified by using any available information (e.g., constant parameters). A task graph representing the interconnections among the different components (or equations) is generated. The graph can then be compressed to control the computation/communication requirements. The task scheduler employs a graph-based iterative scheme, based on the simulated annealing algorithm, to map the vertices of the task graph onto a Multiple-Instruction-stream Multiple-Data-stream (MIMD) type of architecture. The algorithm uses a nonanalytical cost function that properly considers the computation capability of the processors, the network topology, the communication time, and congestion possibilities. Moreover, the proposed technique is simple, flexible, and computationally viable. The efficiency of the algorithm is demonstrated by two case studies with good results.
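The scheduling core described here can be sketched compactly with simulated annealing. The task graph, processor count, and cost function below are hypothetical simplifications: the cost charges each processor its summed compute load (a makespan proxy) plus a fixed penalty for every edge whose endpoints land on different processors, rather than the paper's fuller cost model with topology and congestion terms.

    import math
    import random

    random.seed(1)
    N_PROC = 4
    tasks = range(12)
    compute = {t: random.uniform(1.0, 5.0) for t in tasks}      # task compute costs
    edges = {(a, b) for a in tasks for b in tasks if a < b and random.random() < 0.2}
    COMM = 2.0                                                  # cost per cut edge

    def cost(mapping):
        load = [0.0] * N_PROC
        for t, p in mapping.items():
            load[p] += compute[t]
        cut = sum(COMM for a, b in edges if mapping[a] != mapping[b])
        return max(load) + cut       # makespan proxy plus communication penalty

    mapping = {t: random.randrange(N_PROC) for t in tasks}
    best = dict(mapping)
    temp = 10.0
    while temp > 1e-3:
        t = random.choice(list(tasks))
        old, c_old = mapping[t], cost(mapping)
        mapping[t] = random.randrange(N_PROC)       # propose moving one task
        c_new = cost(mapping)
        if c_new > c_old and random.random() >= math.exp((c_old - c_new) / temp):
            mapping[t] = old                        # reject the uphill move
        elif c_new < cost(best):
            best = dict(mapping)
        temp *= 0.995                               # geometric cooling schedule

    print(f"best mapping cost: {cost(best):.2f}")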
Pitting intuitive and analytical thinking against each other: the case of transitivity.
Rusou, Zohar; Zakay, Dan; Usher, Marius
2013-06-01
Identifying which thinking mode, intuitive or analytical, yields better decisions has been a major subject of inquiry by decision-making researchers. Yet studies show contradictory results. One possibility is that the ambiguity is due to the variability in experimental conditions across studies. Our hypothesis is that decision quality depends critically on the level of compatibility between the thinking mode employed in the decision and the nature of the decision-making task. In two experiments, we pitted intuition and analytical thinking against each other on tasks that were either mainly intuitive or mainly analytical. Thinking modes, as well as task characteristics, were manipulated in a factorial design, with choice transitivity as the dependent measure. Results showed higher choice consistency (transitivity) when thinking mode and the characteristics of the decision task were compatible.
Closed-loop, pilot/vehicle analysis of the approach and landing task
NASA Technical Reports Server (NTRS)
Schmidt, D. K.; Anderson, M. R.
1985-01-01
Optimal-control-theoretic modeling and frequency-domain analysis is the methodology proposed to evaluate analytically the handling qualities of higher-order manually controlled dynamic systems. Fundamental to the methodology is evaluating the interplay between pilot workload and closed-loop pilot/vehicle performance and stability robustness. The model-based metric for pilot workload is the required pilot phase compensation. Pilot/vehicle performance and loop stability is then evaluated using frequency-domain techniques. When these techniques were applied to the flight-test data for thirty-two highly-augmented fighter configurations, strong correlation was obtained between the analytical and experimental results.
ERIC Educational Resources Information Center
Kokis, Judite V.; Macpherson, Robyn; Toplak, Maggie E.; West, Richard F.; Stanovich, Keith E.
2002-01-01
Examined developmental and individual differences in tendencies to favor analytic over heuristic responses in three tasks (inductive reasoning, deduction under belief bias conditions, probabilistic reasoning) in children varying in age and cognitive ability. Found significant increases in analytic responding with development on first two tasks.…
Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A
2017-04-01
In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time-intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine was a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods for many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
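The shape of such an analytical performance model can be sketched as a simple compute-versus-transfer accounting. The constants below (single-GPU compute time, payload size, 1 Gbps interconnect) are hypothetical placeholders, not the paper's fitted parameters; the point is only that speedup approaches the GPU count while compute dominates the fixed transfer cost.

    GBPS = 1.0e9 / 8      # 1 Gbps interconnect, in bytes per second

    def predicted_time(n_gpus, compute_seconds_one_gpu, bytes_moved):
        t_compute = compute_seconds_one_gpu / n_gpus   # ideal GPU parallelization
        t_network = bytes_moved / GBPS                 # serialized transfer cost
        return t_compute + t_network

    base = predicted_time(1, 120.0, 200e6)
    for n in (1, 2, 4, 8, 14):
        t = predicted_time(n, 120.0, 200e6)
        print(f"{n:2d} GPUs: {t:7.2f} s, speedup {base / t:4.1f}x")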
Cognitive Task Analysis of En Route Air Traffic Control: Model Extension and Validation.
ERIC Educational Resources Information Center
Redding, Richard E.; And Others
Phase II of a project extended data collection and analytic procedures to develop a model of expertise and skill development for en route air traffic control (ATC). New data were collected by recording the Dynamic Simulator (DYSIM) performance of five experts with a work overload problem. Expert controllers were interviewed in depth for mental…
The generation of criteria for selecting analytical tools for landscape management
Marilyn Duffey-Armstrong
1979-01-01
This paper presents an approach to generating criteria for selecting the analytical tools used to assess visual resources for various landscape management tasks. The approach begins by first establishing the overall parameters for the visual assessment task, and follows by defining the primary requirements of the various sets of analytical tools to be used. Finally,...
NASA Astrophysics Data System (ADS)
Fan, Fan; Ma, Yong; Dai, Xiaobing; Mei, Xiaoguang
2018-04-01
Infrared image enhancement is an important and necessary task in infrared imaging systems. In this paper, by defining the contrast in terms of the area between adjacent non-zero histogram bins, a novel analytical model is proposed to enlarge these areas so that the contrast is increased. In addition, the analytical model is regularized by a penalty term based on saliency values so that salient regions are enhanced as well. Thus, both whole images and salient regions can be enhanced, and rank consistency is preserved. Comparisons on 8-bit images show that the proposed method can enhance infrared images with more detail.
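As a reference point for the histogram-area idea, the baseline below remaps only the occupied gray levels, spacing adjacent output levels in proportion to bin counts; this amounts to plain histogram equalization restricted to non-zero bins. It is a generic baseline under those assumptions, not the authors' model, which additionally regularizes with a saliency-based penalty.

    import numpy as np

    def spread_histogram(img, levels=256):
        """Remap occupied gray levels so gaps between adjacent output levels
        grow with bin counts (equalization over the non-zero bins)."""
        hist = np.bincount(img.ravel(), minlength=levels)
        cdf = np.cumsum(hist).astype(float)
        cdf_min = cdf[np.nonzero(hist)[0][0]]
        lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
        return lut.astype(img.dtype)[img]

    # Hypothetical low-contrast 8-bit frame concentrated in a few gray levels
    rng = np.random.default_rng(0)
    frame = rng.choice([100, 101, 103, 110], size=(64, 64),
                       p=[0.6, 0.2, 0.15, 0.05]).astype(np.uint8)
    out = spread_histogram(frame)
    print("input levels:", np.unique(frame), "-> output levels:", np.unique(out))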
Optimizing spectral CT parameters for material classification tasks
NASA Astrophysics Data System (ADS)
Rigie, D. S.; La Rivière, P. J.
2016-06-01
In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way with direct implications for radiation dose reduction. In light of this, we adapt Hotelling Observer formalisms typically applied to signal detection tasks to the spectral CT, material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POC’s) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POC’s predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies.
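The rapidly computable figure of merit at the center of this framework can be demonstrated on synthetic data. The two-class sketch below computes a Hotelling template and the associated separability from sample statistics; the channel means and covariance are hypothetical, standing in for basis-material estimates under candidate system settings.

    import numpy as np

    rng = np.random.default_rng(42)
    # Hypothetical 2-channel spectral measurements for two material classes
    mu0, mu1 = np.array([1.0, 0.8]), np.array([1.2, 0.5])
    cov = np.array([[0.05, 0.02], [0.02, 0.08]])
    g0 = rng.multivariate_normal(mu0, cov, size=2000)
    g1 = rng.multivariate_normal(mu1, cov, size=2000)

    # Hotelling template and class separability (the task-performance metric)
    m0, m1 = g0.mean(axis=0), g1.mean(axis=0)
    k = 0.5 * (np.cov(g0.T) + np.cov(g1.T))   # average within-class covariance
    w = np.linalg.solve(k, m1 - m0)           # template w = K^{-1} (mu1 - mu0)
    d2 = (m1 - m0) @ w                        # separability d^2
    print(f"Hotelling separability d^2 = {d2:.2f}")
    # Sweeping d^2 over candidate kVp pairs or bin thresholds traces out a POC.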
An Investigation of Large Aircraft Handling Qualities
NASA Astrophysics Data System (ADS)
Joyce, Richard D.
An analytical technique for investigating transport aircraft handling qualities is exercised in a study using models of two such vehicles, a Boeing 747 and a Lockheed C-5A. Two flight conditions are employed for climb and directional tasks, and a third is included for a flare task. The analysis technique is based upon a "structural model" of the human pilot developed by Hess. The associated analysis procedure has been discussed previously in the literature, but centered almost exclusively on the characteristics of high-performance fighter aircraft. The handling qualities rating level (HQRL) and pilot-induced oscillation tendencies rating level (PIORL) are predicted for nominal configurations of the aircraft and for "damaged" configurations where actuator rate limits are introduced as nonlinearities. It is demonstrated that the analysis can accommodate nonlinear pilot/vehicle behavior and do so in the context of specific flight tasks, yielding estimates of handling qualities, pilot-induced oscillation tendencies, and upper limits of task performance. A brief human-in-the-loop tracking study was performed to provide a limited validation of the pilot model employed.
Interdisciplinary Program for Quantitative Flaw Definition
1975-09-17
A flow chart diagram describes the equipment used in Task I. Flat-bottomed holes in two different types of aluminum samples were used to study scattering. Several studies were made using the analytical models for bondlines having various attenuative characteristics.
Is clinical cognition binary or continuous?
Norman, Geoffrey; Monteiro, Sandra; Sherbino, Jonathan
2013-08-01
A dominant theory of clinical reasoning is the so-called "dual processing theory," in which the diagnostic process may proceed through a rapid, unconscious, intuitive process (System 1) or a slow, conceptual, analytical process (System 2). Diagnostic errors are thought to arise primarily from cognitive biases originating in System 1. In this issue, Custers points out that this model is unnecessarily restrictive and that it is more likely that diagnostic tasks may proceed through a variety of mental strategies ranging from "analytical" to "intuitive." The authors of this commentary agree that the notion that System 1 and System 2 processes are somehow in competition and will necessarily lead to different conclusions is unnecessarily restrictive. On the other hand, they argue that there is substantial evidence in support of a dual processing model, and that most objections to dual processing theory can be easily accommodated by simply presuming that both processes operate in concert and that solving any task may rely to varying degrees on both processes.
Closed-loop, pilot/vehicle analysis of the approach and landing task
NASA Technical Reports Server (NTRS)
Anderson, M. R.; Schmidt, D. K.
1986-01-01
In the case of approach and landing, it is universally accepted that the pilot uses more than one vehicle response, or output, to close his control loops. Therefore, to model this task, a multi-loop analysis technique is required. The analysis problem has been in obtaining reasonable analytic estimates of the describing functions representing the pilot's loop compensation. Once these pilot describing functions are obtained, appropriate performance and workload metrics must then be developed for the landing task. The optimal control approach provides a powerful technique for obtaining the necessary describing functions, once the appropriate task objective is defined in terms of a quadratic objective function. An approach is presented through the use of a simple, reasonable objective function and model-based metrics to evaluate loop performance and pilot workload. The results of an analysis of the LAHOS (Landing and Approach of Higher Order Systems) study performed by R.E. Smith are also presented.
Distributed computing feasibility in a non-dedicated homogeneous distributed system
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Sun, Xian-He
1993-01-01
The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It is proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.
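A stylized version of such a response-time prediction is easy to write down. The sketch assumes, beyond anything stated in the abstract, that owner processes keep each workstation busy a fraction rho of the time with preemptive priority, so a parallel task receives only the residual 1 - rho of each node's cycles.

    def parallel_response_time(total_demand, n_nodes, owner_util):
        """Stylized prediction: per-node share slowed by owner interference."""
        per_node = total_demand / n_nodes        # ideal share of parallel work
        return per_node / (1.0 - owner_util)     # residual-capacity slowdown

    def task_ratio(total_demand, n_nodes, owner_mean_demand):
        """Parallel task demand per node relative to mean owner process demand."""
        return (total_demand / n_nodes) / owner_mean_demand

    for demand in (10.0, 100.0, 1000.0):         # seconds of sequential work
        r = task_ratio(demand, 16, 2.0)
        t = parallel_response_time(demand, 16, owner_util=0.3)
        print(f"demand {demand:6.0f} s: task ratio {r:6.1f}, response {t:6.1f} s")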
ERIC Educational Resources Information Center
Gan, Zhengdong
2012-01-01
This study, which is part of a large-scale study of using objective measures to validate assessment rating scales and assessment tasks in a high-profile school-based assessment initiative in Hong Kong, examined how grammatical complexity measures relate to task type and analytic evaluations of students' speaking proficiency in a classroom-based…
NASA Technical Reports Server (NTRS)
Gallardo, V. C.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.
1981-01-01
An analytical technique was developed to predict the behavior of a rotor system subjected to sudden unbalance. The technique is implemented in the Turbine Engine Transient Rotor Analysis (TETRA) computer program using the component element method. The analysis was particularly aimed toward blade-loss phenomena in gas turbine engines. A dual-rotor, casing, and pylon structure can be modeled by the computer program. Blade tip rubs, Coriolis forces, and mechanical clearances are included. The analytical system was verified by modeling and simulating actual test conditions for a rig test as well as a full-engine, blade-release demonstration.
A workflow learning model to improve geovisual analytics utility
Roth, Robert E; MacEachren, Alan M; McCabe, Craig A
2011-01-01
Introduction This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. Objectives The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use?) and utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important, if not more so, than iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. Methodology The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application. To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on the concept of scientific workflows. Second, we implemented an interface in the G-EX Portal Learn Module to demonstrate the workflow learning model. The workflow interface allows users to drag learning artifacts uploaded to the G-EX Portal onto a central whiteboard and then annotate the workflow using text and drawing tools. Once completed, users can visit the assembled workflow to get an idea of the kind, number, and scale of analysis steps, view individual learning artifacts associated with each node in the workflow, and ask questions about the overall workflow or individual learning artifacts through the associated forums. An example learning workflow in the domain of epidemiology is provided to demonstrate the effectiveness of the approach. Results/Conclusions In the context of geovisual analytics, GIScientists are not only responsible for developing software to facilitate visually-mediated reasoning about large and complex spatiotemporal information, but also for ensuring that this software works. The workflow learning model discussed in this paper and demonstrated in the G-EX Portal Learn Module is one approach to improving the utility of geovisual analytics software. While development of the G-EX Portal Learn Module is ongoing, we expect to release the G-EX Portal Learn Module by Summer 2009. PMID:21983545
A Comparative Analysis of Computer End-User Support in the Air Force and Civilian Organizations
1991-12-01
This explanation implies a further stratification of end users based on the specific tasks they perform, a new model of application combinations, and a ... its support efforts to meet the needs of its end-user clientele more closely.
2010-08-01
students conducting the data capture and data entry, an analytical method known as the Task Load Index (NASA TLX Version 2.0) was used. This method was ... published by the NASA Ames Research Center in December 2003. The entire report can be found at: http://humansystems.arc.nasa.gov/groups/TLX ... completion of each task in the survey process, surveyors were required to complete a NASA TLX form to report their assessment of the workload for
Heat storage capability of a rolling cylinder using Glauber's salt
NASA Technical Reports Server (NTRS)
Herrick, C. S.; Zarnoch, K. P.
1980-01-01
The rolling cylinder phase change heat storage concept was developed to the point where a prototype design was completed and a cost analysis prepared. A series of experimental and analytical tasks were defined to establish the thermal, mechanical, and materials behavior of rolling cylinder devices. These tasks include: analyses of internal and external heat transfer; performance and lifetime testing of the phase change materials; corrosion evaluation; development of a mathematical model; and design of a prototype and associated test equipment.
Interaction Junk: User Interaction-Based Evaluation of Visual Analytic Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander; North, Chris
2012-10-14
With the growing need for visualization to aid users in understanding large, complex datasets, the ability for users to interact with and explore these datasets is critical. As visual analytic systems have advanced to leverage powerful computational models and data analytics capabilities, the modes by which users engage and interact with the information are limited. Often, users are taxed with directly manipulating parameters of these models through traditional GUIs (e.g., using sliders to directly manipulate the value of a parameter). However, the purpose of user interaction in visual analytic systems is to enable visual data exploration – where users can focus on their task, as opposed to the tool or system. As a result, users can engage freely in data exploration and decision-making, for the purpose of gaining insight. In this position paper, we discuss how evaluating visual analytic systems can be approached through user interaction analysis, where the goal is to minimize the cognitive translation between the visual metaphor and the mode of interaction (i.e., reducing the “interaction junk”). We motivate this concept through a discussion of traditional GUIs used in visual analytics for direct manipulation of model parameters, and the importance of designing interactions that support visual data exploration.
Digital forensics: an analytical crime scene procedure model (ACSPM).
Bulbul, Halil Ibrahim; Yavuzcan, H Guclu; Ozel, Mesut
2013-12-10
In order to ensure that digital evidence is collected, preserved, examined, or transferred in a manner safeguarding the accuracy and reliability of the evidence, law enforcement and digital forensic units must establish and maintain an effective quality assurance system. The very first part of this system is standard operating procedures (SOPs) and/or models conforming to chain-of-custody requirements, which rely on the digital forensics "process-phase-procedure-task-subtask" sequence. An acceptable and thorough Digital Forensics (DF) process depends on sequential DF phases, each phase depends on sequential DF procedures, and each procedure in turn depends on tasks and subtasks. There are numerous DF process models that define DF phases in the literature, but no DF model has been identified that defines the phase-based sequential procedures for the crime scene. The analytical crime scene procedure model (ACSPM) that we suggest in this paper is supposed to fill this gap. The proposed analytical procedure model for digital investigations at a crime scene is developed and defined for crime scene practitioners, with the main focus on crime scene digital forensic procedures rather than on the whole digital investigation process and phases that end up in a court. When reviewing the relevant literature and consulting with law enforcement agencies, only device-based charts specific to a particular device and/or more general approaches to digital evidence management models from crime scene to court were found. After analyzing the needs of law enforcement organizations and realizing the absence of a crime scene digital investigation procedure model for crime scene activities, we decided to inspect the relevant literature in an analytical way. The outcome of this inspection is the model suggested here, which is supposed to provide guidance for thorough and secure implementation of digital forensic procedures at a crime scene. In digital forensic investigations each case is unique and needs special examination; it is not possible to cover every aspect of crime scene digital forensics, but the proposed procedure model is supposed to be a general guideline for practitioners. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Prediction task guided representation learning of medical codes in EHR.
Cui, Liwen; Xie, Xiaolei; Shen, Zuojun
2018-06-18
There have been rapidly growing applications using machine learning models for predictive analytics in Electronic Health Records (EHR) to improve the quality of hospital services and the efficiency of healthcare resource utilization. A fundamental and crucial step in developing such models is to convert medical codes in EHR to feature vectors. These medical codes are used to represent diagnoses or procedures. Their vector representations have a tremendous impact on the performance of machine learning models. Recently, some researchers have utilized representation learning methods from Natural Language Processing (NLP) to learn vector representations of medical codes. However, most previous approaches are unsupervised, i.e. the generation of medical code vectors is independent from prediction tasks. Thus, the obtained feature vectors may be inappropriate for a specific prediction task. Moreover, unsupervised methods often require a lot of samples to obtain reliable results, but most practical problems have very limited patient samples. In this paper, we develop a new method called Prediction Task Guided Health Record Aggregation (PTGHRA), which aggregates health records guided by prediction tasks, to construct training corpus for various representation learning models. Compared with unsupervised approaches, representation learning models integrated with PTGHRA yield a significant improvement in predictive capability of generated medical code vectors, especially for limited training samples. Copyright © 2018. Published by Elsevier Inc.
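The corpus-construction idea can be illustrated with a small co-occurrence embedding pipeline. Everything here is a loose sketch rather than PTGHRA itself: records are pooled by the prediction-task label, code co-occurrence is counted within each label-conditioned pool, and vectors come from a PPMI matrix followed by a truncated SVD. The records and codes are hypothetical.

    import numpy as np
    from itertools import combinations

    # Hypothetical EHR records: (set of medical codes, prediction-task label)
    records = [({"I10", "E11", "N18"}, 1), ({"I10", "E11"}, 1),
               ({"J45", "J30"}, 0), ({"J45", "J30", "I10"}, 0)]

    codes = sorted({c for rec, _ in records for c in rec})
    idx = {c: i for i, c in enumerate(codes)}

    # Task-guided aggregation: all records sharing a label form one pooled
    # document, so codes associated with the same outcome co-occur.
    cooc = np.zeros((len(codes), len(codes)))
    for label in {lab for _, lab in records}:
        agg = sorted(set().union(*[rec for rec, lab2 in records if lab2 == label]))
        for a, b in combinations(agg, 2):
            cooc[idx[a], idx[b]] += 1.0
            cooc[idx[b], idx[a]] += 1.0

    # PPMI transform, then truncated SVD, yields low-dimensional code vectors
    total = cooc.sum()
    row = cooc.sum(axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(cooc * total / (row @ row.T))
    ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)
    u, s, _ = np.linalg.svd(ppmi)
    vectors = u[:, :2] * np.sqrt(s[:2])
    for c in codes:
        print(c, np.round(vectors[idx[c]], 3))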
Effect(s) of Language Tasks on Severity of Disfluencies in Preschool Children with Stuttering.
Zamani, Peyman; Ravanbakhsh, Majid; Weisi, Farzad; Rashedi, Vahid; Naderi, Sara; Hosseinzadeh, Ayub; Rezaei, Mohammad
2017-04-01
Speech disfluency in children can increase or decrease depending on the type of linguistic task presented to them. In this study, the effect of sentence imitation and sentence modeling on the severity of speech disfluencies in preschool children with stuttering is investigated. In this cross-sectional, descriptive-analytical study, 58 children with stuttering (29 with mild stuttering and 29 with moderate stuttering) and 58 typical children aged between 4 and 6 years old participated. The severity of speech disfluencies was assessed by SSI-3 and TOCS before and after offering each task. In boys with mild stuttering, the mean stuttering severity scores in the two tasks of sentence imitation and sentence modeling were [Formula: see text] and [Formula: see text] respectively ([Formula: see text]). But in boys with moderate stuttering, the stuttering severity in the two tasks was [Formula: see text] and [Formula: see text] respectively ([Formula: see text]). In girls with mild stuttering, the stuttering severity in the two tasks of sentence imitation and sentence modeling was [Formula: see text] and [Formula: see text] respectively ([Formula: see text]). But in girls with moderate stuttering, the mean stuttering severity in the two tasks was [Formula: see text] and [Formula: see text] respectively ([Formula: see text]). In both genders of typical children, the score of speech disfluencies showed no significant difference between the two tasks ([Formula: see text]). In preschool children with mild stuttering and their non-stuttering peers, performing the tasks of sentence imitation and sentence modeling did not increase the severity of stuttering. But in preschool children with moderate stuttering, the task of sentence modeling increased the stuttering severity score.
Gut feelings as a third track in general practitioners' diagnostic reasoning.
Stolper, Erik; Van de Wiel, Margje; Van Royen, Paul; Van Bokhoven, Marloes; Van der Weijden, Trudy; Dinant, Geert Jan
2011-02-01
General practitioners (GPs) are often faced with complicated, vague problems in situations of uncertainty that they have to solve at short notice. In such situations, gut feelings seem to play a substantial role in their diagnostic process. Qualitative research distinguished a sense of alarm and a sense of reassurance. However, not every GP trusted their gut feelings, since a scientific explanation is lacking. This paper explains how gut feelings arise and function in GPs' diagnostic reasoning. The paper reviews literature from medical, psychological and neuroscientific perspectives. Gut feelings in general practice are based on the interaction between patient information and a GP's knowledge and experience. This is visualized in a knowledge-based model of GPs' diagnostic reasoning emphasizing that this complex task combines analytical and non-analytical cognitive processes. The model integrates the two well-known diagnostic reasoning tracks of medical decision-making and medical problem-solving, and adds gut feelings as a third track. Analytical and non-analytical diagnostic reasoning interact continuously, and GPs use elements of all three tracks, depending on the task and the situation. In this dual process theory, gut feelings emerge as a consequence of non-analytical processing of the available information and knowledge, either reassuring GPs or alerting them that something is wrong and action is required. The role of affect as a heuristic within the physician's knowledge network explains how gut feelings may help GPs to navigate in a mostly efficient way in the often complex and uncertain diagnostic situations of general practice. Emotion research and neuroscientific data support the unmistakable role of affect in the process of making decisions and explain the bodily sensation of gut feelings. The implications for health care practice and medical education are discussed.
Verification of Decision-Analytic Models for Health Economic Evaluations: An Overview.
Dasbach, Erik J; Elbasha, Elamin H
2017-07-01
Decision-analytic models for cost-effectiveness analysis are developed in a variety of software packages where the accuracy of the computer code is seldom verified. Although modeling guidelines recommend using state-of-the-art quality assurance and control methods for software engineering to verify models, the fields of pharmacoeconomics and health technology assessment (HTA) have yet to establish and adopt guidance on how to verify health and economic models. The objective of this paper is to introduce to our field the variety of methods the software engineering field uses to verify that software performs as expected. We identify how many of these methods can be incorporated in the development process of decision-analytic models in order to reduce errors and increase transparency. Given the breadth of methods used in software engineering, we recommend a more in-depth initiative to be undertaken (e.g., by an ISPOR-SMDM Task Force) to define the best practices for model verification in our field and to accelerate adoption. Establishing a general guidance for verifying models will benefit the pharmacoeconomics and HTA communities by increasing accuracy of computer programming, transparency, accessibility, sharing, understandability, and trust of models.
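In the spirit of the software-engineering methods the authors survey, the snippet below applies plain unit-test assertions to a toy three-state Markov cohort model; the states, transition probabilities, and tolerances are hypothetical. Checks like these (row-stochastic matrices, cohort conservation, monotone absorbing states, extreme-value behavior) are among the cheapest verification techniques a modeling team can adopt.

    import numpy as np

    def run_cohort(transition, start, cycles):
        """Propagate a cohort through a discrete-time Markov model."""
        trace = [np.asarray(start, dtype=float)]
        for _ in range(cycles):
            trace.append(trace[-1] @ transition)
        return np.array(trace)

    # Hypothetical 3-state model: well -> sick -> dead
    P = np.array([[0.90, 0.08, 0.02],
                  [0.00, 0.85, 0.15],
                  [0.00, 0.00, 1.00]])

    assert np.allclose(P.sum(axis=1), 1.0), "rows must be valid distributions"
    assert (P >= 0).all(), "probabilities must be non-negative"

    trace = run_cohort(P, [1000, 0, 0], cycles=50)
    assert np.allclose(trace.sum(axis=1), 1000), "cohort must be conserved"
    assert (np.diff(trace[:, 2]) >= -1e-9).all(), "deaths cannot decrease"

    # Extreme-value test: with mortality removed, nobody reaches the dead state
    P0 = np.array([[0.9, 0.1, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
    assert run_cohort(P0, [1000, 0, 0], cycles=50)[:, 2].max() == 0.0
    print("all verification checks passed")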
BiSet: Semantic Edge Bundling with Biclusters for Sensemaking.
Sun, Maoyuan; Mi, Peng; North, Chris; Ramakrishnan, Naren
2016-01-01
Identifying coordinated relationships is an important task in data analytics. For example, an intelligence analyst might want to discover three suspicious people who all visited the same four cities. Existing techniques that display individual relationships, such as between lists of entities, require repetitious manual selection and significant mental aggregation in cluttered visualizations to find coordinated relationships. In this paper, we present BiSet, a visual analytics technique to support interactive exploration of coordinated relationships. In BiSet, we model coordinated relationships as biclusters and algorithmically mine them from a dataset. Then, we visualize the biclusters in context as bundled edges between sets of related entities. Thus, bundles enable analysts to infer task-oriented semantic insights about potentially coordinated activities. We treat bundles as first-class objects and add a new layer, "in-between", to contain these bundle objects. On this basis, bundles serve to organize entities represented in lists and visually reveal their membership. Users can interact with edge bundles to organize related entities, and vice versa, for sensemaking purposes. With a usage scenario, we demonstrate how BiSet supports the exploration of coordinated relationships in text analytics.
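The coordinated-relationship structure that BiSet visualizes can be mined with a brute-force bicluster search over a binary relation. The people-by-cities data below are hypothetical, and the enumeration (every column subset of a minimum size, keeping row sets above a threshold) is a generic stand-in for whatever mining algorithm a BiSet-style system uses.

    from itertools import combinations

    # Hypothetical binary relation: which person visited which city
    visits = {
        "anna":  {"oslo", "cairo", "lima", "baku"},
        "boris": {"oslo", "cairo", "lima", "baku"},
        "chen":  {"oslo", "cairo", "lima", "baku", "kiev"},
        "dana":  {"oslo", "kiev"},
    }
    cities = sorted(set().union(*visits.values()))

    MIN_ROWS, MIN_COLS = 3, 4   # e.g., three people sharing four cities
    biclusters = []
    for k in range(MIN_COLS, len(cities) + 1):
        for cols in combinations(cities, k):
            rows = {p for p, cs in visits.items() if set(cols) <= cs}
            if len(rows) >= MIN_ROWS:
                biclusters.append((sorted(rows), cols))

    for rows, cols in biclusters:
        print(rows, "all visited", cols)   # each hit becomes one bundled edge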
NASA Astrophysics Data System (ADS)
Sapra, Karan; Gupta, Saurabh; Atchley, Scott; Anantharaj, Valentine; Miller, Ross; Vazhkudai, Sudharshan
2016-04-01
Efficient resource utilization is critical for improved end-to-end computing and workflow of scientific applications. Heterogeneous node architectures, such as the GPU-enabled Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), present us with further challenges. In many HPC applications on Titan, the accelerators are the primary compute engines while the CPUs orchestrate the offloading of work onto the accelerators and move the output back to main memory. In applications that do not exploit GPUs, on the other hand, CPU usage is dominant while the GPUs idle. We utilized the Heterogeneous Functional Partitioning (HFP) runtime framework, which can optimize usage of resources on a compute node to expedite an application's end-to-end workflow. This approach differs from existing techniques for in-situ analyses in that it provides a framework for on-the-fly, on-node analysis by dynamically exploiting under-utilized resources. We have implemented in the Community Earth System Model (CESM) a new concurrent diagnostic processing capability enabled by the HFP framework. Various single-variate statistics, such as means and distributions, are computed in-situ by launching HFP tasks on the GPU via the node-local HFP daemon. Since our current configuration of CESM does not use GPU resources heavily, we can move these tasks to the GPU using the HFP framework. Each rank running the atmospheric model in CESM pushes the variables of interest via HFP function calls to the HFP daemon. This node-local daemon is responsible for receiving the data from the main program and launching the designated analytics tasks on the GPU. We have implemented these analytics tasks in C and use OpenACC directives to enable GPU acceleration. This methodology is also advantageous when executing GPU-enabled configurations of CESM, where the CPUs would otherwise be idle during portions of the runtime. Our results demonstrate that it is more efficient to use the HFP framework to offload these tasks to GPUs than to perform them in the main application, and we observe increased resource utilization and overall productivity with this approach.
Apanovich, V V; Bezdenezhnykh, B N; Sams, M; Jääskeläinen, I P; Alexandrov, Yu I
2018-01-01
It has been suggested that Western cultures (USA, Western Europe) are mostly characterized by competitive forms of social interaction, whereas Eastern cultures (Japan, China, Russia) are mostly characterized by cooperative forms. It has also been stated that thinking in Eastern countries is predominantly holistic and in Western countries analytic. Based on this, we hypothesized that subjects with analytic vs. holistic thinking styles show differences in decision making under different types of social interaction conditions. We investigated behavioural and brain-activity differences between subjects with analytic and holistic thinking during a choice reaction time (ChRT) task, wherein the subjects either cooperated, competed (in pairs), or performed the task without interaction with other participants. Healthy Russian subjects (N=78) were divided into two groups based on having analytic or holistic thinking as determined with an established questionnaire. We measured reaction times as well as event-related brain potentials. Task performance differed significantly between subjects with analytic and holistic thinking across the interaction conditions. Both behavioral performance and physiological measures exhibited higher variance in holistic than in analytic subjects. Differences in P300 amplitude and latency suggest that decision making was easier for the holistic subjects in the cooperation condition, whereas for analytic subjects decision making based on these measures seemed to be easier in the competition condition. The P300 amplitude was higher in the individual condition than in the collective conditions. Overall, our results support the notion that the brains of analytic and holistic subjects work differently under different types of social interaction conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
Thermal and Fluid Modeling of the CRYogenic Orbital TEstbed (CRYOTE) Ground Test Article (GTA)
NASA Technical Reports Server (NTRS)
Piryk, David; Schallhorn, Paul; Walls, Laurie; Stopnitzky, Benny; Rhys, Noah; Wollen, Mark
2012-01-01
The purpose of this study was to anchor thermal and fluid system models to data acquired from a ground test article (GTA) for the CRYogenic Orbital TEstbed (CRYOTE). The analysis was broken into four primary tasks: model development, pre-test predictions, testing support at Marshall Space Flight Center (MSFC), and post-test correlations. Information from MSFC facilitated the task of refining and correlating the initial models. The primary goal of the modeling/testing/correlating efforts was to characterize heat loads throughout the ground test article. Significant factors impacting the heat loads included radiative environments, multi-layer insulation (MLI) performance, tank fill levels, tank pressures, and even contact conductance coefficients. This paper demonstrates how analytical thermal/fluid networks were established, and it includes supporting rationale for specific thermal responses seen during testing.
NASA Technical Reports Server (NTRS)
Tavana, Madjid
1995-01-01
The evaluation and prioritization of Engineering Support Requests (ESR's) is a particularly difficult task at the Kennedy Space Center (KSC) Shuttle Project Engineering Office. This difficulty is due to the complexities inherent in the evaluation process and the lack of structured information. The evaluation process must consider a multitude of relevant pieces of information concerning Safety, Supportability, O&M Cost Savings, Process Enhancement, Reliability, and Implementation. Various analytical and normative models developed in the past have helped decision makers at KSC utilize large volumes of information in the evaluation of ESR's. The purpose of this project is to build on the existing methodologies and develop a multiple criteria decision support system that captures the decision maker's beliefs through a series of sequential, rational, and analytical processes. The model utilizes the Analytic Hierarchy Process (AHP), subjective probabilities, the entropy concept, and the Maximize Agreement Heuristic (MAH) to enhance the decision maker's intuition in evaluating a set of ESR's.
Decision Making in Adults with ADHD
ERIC Educational Resources Information Center
Mäntylä, Timo; Still, Johanna; Gullberg, Stina; Del Missier, Fabio
2012-01-01
Objectives: This study examined decision-making competence in ADHD by using multiple decision tasks with varying demands on analytic versus affective processes. Methods: Adults with ADHD and healthy controls completed two tasks of analytic decision making, as measured by the Adult Decision-Making Competence (A-DMC) battery, and two affective…
ERIC Educational Resources Information Center
Crick, Ruth Deakin; Knight, Simon; Barr, Steven
2017-01-01
Central to the mission of most educational institutions is the task of preparing the next generation of citizens to contribute to society. Schools, colleges, and universities value a range of outcomes--e.g., problem solving, creativity, collaboration, citizenship, service to community--as well as academic outcomes in traditional subjects. Often…
Brain-Based Devices for Neuromorphic Computer Systems
2013-07-01
[Abstract garbled in extraction; recoverable fragments indicate work to build neural models, apply them to a recognition task, and demonstrate a working memory, and note that a new analytical method for spiking data was developed. Cited: Deco, G. (2012). Effective Visual Working Memory Capacity: An Emergent Effect from the Neural Dynamics in an Attractor Network. PLoS ONE 7, e42719. Contents fragments: 3.4 Spiking Neural Model Simulation of Working Memory; 3.5 A Novel Method for Analysis.]
Neural architecture underlying classification of face perception paradigms.
Laird, Angela R; Riedel, Michael C; Sutherland, Matthew T; Eickhoff, Simon B; Ray, Kimberly L; Uecker, Angela M; Fox, P Mickle; Turner, Jessica A; Fox, Peter T
2015-10-01
We present a novel strategy for deriving a classification system of functional neuroimaging paradigms that relies on hierarchical clustering of experiments archived in the BrainMap database. The goal of our proof-of-concept application was to examine the underlying neural architecture of the face perception literature from a meta-analytic perspective, as these studies include a wide range of tasks. Task-based results exhibiting similar activation patterns were grouped as similar, while tasks activating different brain networks were classified as functionally distinct. We identified four sub-classes of face tasks: (1) Visuospatial Attention and Visuomotor Coordination to Faces, (2) Perception and Recognition of Faces, (3) Social Processing and Episodic Recall of Faces, and (4) Face Naming and Lexical Retrieval. Interpretation of these sub-classes supports an extension of a well-known model of face perception to include a core system for visual analysis and extended systems for personal information, emotion, and salience processing. Overall, these results demonstrate that a large-scale data mining approach can inform the evolution of theoretical cognitive models by probing the range of behavioral manipulations across experimental tasks. Copyright © 2015 Elsevier Inc. All rights reserved.
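The clustering strategy described above lends itself to a compact sketch: treat each experiment as a binary activation vector over regions and cluster with an average-linkage dendrogram. Everything below (the region count, the Jaccard metric, the four-cluster cut) is an illustrative assumption, not the BrainMap pipeline itself.

```python
# A minimal sketch of hierarchical clustering of meta-analytic activation
# patterns; data and region count are hypothetical placeholders.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n_experiments, n_regions = 40, 20
activation = rng.integers(0, 2, size=(n_experiments, n_regions))  # 1 = region active

# Jaccard distance groups experiments with overlapping activation patterns.
dist = pdist(activation, metric="jaccard")
tree = linkage(dist, method="average")

# Cut the dendrogram into four sub-classes, mirroring the four face-task classes.
labels = fcluster(tree, t=4, criterion="maxclust")
print(labels)
```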
Distinguishing bias from sensitivity effects in multialternative detection tasks.
Sridharan, Devarajan; Steinmetz, Nicholas A; Moore, Tirin; Knudsen, Eric I
2014-08-21
Studies investigating the neural bases of cognitive phenomena increasingly employ multialternative detection tasks that seek to measure the ability to detect a target stimulus or changes in some target feature (e.g., orientation or direction of motion) that could occur at one of many locations. In such tasks, it is essential to distinguish the behavioral and neural correlates of enhanced perceptual sensitivity from those of increased bias for a particular location or choice (choice bias). However, making such a distinction is not possible with established approaches. We present a new signal detection model that decouples the behavioral effects of choice bias from those of perceptual sensitivity in multialternative (change) detection tasks. By formulating the perceptual decision in a multidimensional decision space, our model quantifies the respective contributions of bias and sensitivity to multialternative behavioral choices. With a combination of analytical and numerical approaches, we demonstrate an optimal, one-to-one mapping between model parameters and choice probabilities even for tasks involving arbitrarily large numbers of alternatives. We validated the model with published data from two ternary choice experiments: a target-detection experiment and a length-discrimination experiment. The results of this validation provided novel insights into perceptual processes (sensory noise and competitive interactions) that can accurately and parsimoniously account for observers' behavior in each task. The model will find important application in identifying and interpreting the effects of behavioral manipulations (e.g., cueing attention) or neural perturbations (e.g., stimulation or inactivation) in a variety of multialternative tasks of perception, attention, and decision-making. © 2014 ARVO.
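A toy Monte Carlo makes the central point concrete: in a multialternative task, locations with identical perceptual sensitivity can produce very different hit rates purely through location-specific criteria (choice bias). The decision rule and all parameters below are illustrative assumptions, not the authors' published model.

```python
# Simulate a 3-location change-detection task: evidence at each location is
# compared against a location-specific criterion. Sensitivity is equal across
# locations; only the criteria (bias) differ.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_locs = 20000, 3
d_prime = np.array([1.5, 1.5, 1.5])   # equal perceptual sensitivity
criteria = np.array([0.5, 1.0, 1.5])  # unequal choice bias across locations

target_loc = rng.integers(0, n_locs, n_trials)
evidence = rng.standard_normal((n_trials, n_locs))
evidence[np.arange(n_trials), target_loc] += d_prime[target_loc]

above = evidence - criteria  # decision variable relative to each criterion
choice = np.where(above.max(axis=1) > 0, above.argmax(axis=1), -1)  # -1 = "no change"

for loc in range(n_locs):
    hit = np.mean(choice[target_loc == loc] == loc)
    print(f"location {loc}: hit rate {hit:.2f}")  # same d', different hit rates
```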
A coordination theory for intelligent machines
NASA Technical Reports Server (NTRS)
Wang, Fei-Yue; Saridis, George N.
1990-01-01
A formal model for the coordination level of intelligent machines is established. The framework of the coordination level investigated consists of one dispatcher and a number of coordinators. The model, called a coordination structure, is used to describe analytically the information structure and information flow for the coordination activities in the coordination level. Specifically, the coordination structure offers a formalism to (1) describe the task translation of the dispatcher and coordinators; (2) represent the individual process within the dispatcher and coordinators; (3) specify the cooperation and connection among the dispatcher and coordinators; (4) perform the process analysis and evaluation; and (5) provide a control and communication mechanism for the real-time monitoring or simulation of the coordination process. A simple procedure for task scheduling in the coordination structure is presented. The task translation is achieved by a stochastic learning algorithm. The learning process is measured with entropy and its convergence is guaranteed. Finally, a case study of the coordination structure with three coordinators and one dispatcher for a simple intelligent manipulator system illustrates the proposed model, and simulation of the task processes performed on the model verifies the soundness of the theory.
Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment.
Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel
2016-08-30
Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of its core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data for each task were collected and analyzed in depth. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs.
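A minimal sketch of a two-phase regression in the spirit described above, assuming task progress is roughly piecewise linear (a setup phase followed by a steady phase): fit two linear segments, pick the breakpoint with the lowest error, and extrapolate the second segment to 100% progress. The breakpoint search and the data are simplifications, not the paper's exact TPR estimator.

```python
# Two-segment fit of (time, progress), extrapolating the second phase.
import numpy as np

def tpr_predict_finish(t, progress):
    """Fit two linear segments and extrapolate the second to progress = 1.0."""
    best = None
    for k in range(2, len(t) - 2):  # candidate breakpoints
        a1 = np.polyfit(t[:k], progress[:k], 1)
        a2 = np.polyfit(t[k:], progress[k:], 1)
        sse = (np.sum((np.polyval(a1, t[:k]) - progress[:k]) ** 2) +
               np.sum((np.polyval(a2, t[k:]) - progress[k:]) ** 2))
        if best is None or sse < best[0]:
            best = (sse, a2)
    slope, intercept = best[1]
    return (1.0 - intercept) / slope  # time at which progress reaches 100%

t = np.array([0, 5, 10, 15, 20, 25, 30, 35.0])
p = np.array([0.0, 0.04, 0.08, 0.12, 0.25, 0.38, 0.51, 0.64])  # setup, then steady phase
print(f"predicted finish: {tpr_predict_finish(t, p):.1f} s")
```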
Research on safety evaluation model for in-vehicle secondary task driving.
Jin, Lisheng; Xian, Huacai; Niu, Qingning; Bie, Jing
2015-08-01
This paper presents a new method for evaluating in-vehicle secondary task driving safety. Five in-vehicle distracter tasks were used: tuning the radio to a local station, selecting a certain song from a touch-screen telephone menu, talking with a laboratory assistant, answering a telephone via a Bluetooth headset, and operating the navigation system on an iPad4 tablet. Forty young drivers completed the driving experiment on a driving simulator. Measures of fixations, saccades, and blinks were collected and analyzed. The evaluation index system was built from the eye-movement measures that differed significantly between the baseline and secondary task driving conditions. Analytic Network Process (ANP) theory is applied to determine the importance weight of each evaluation index in a fuzzy environment. On the basis of these importance weights, the Fuzzy Comprehensive Evaluation (FCE) method is used to evaluate secondary task driving safety. Results show that driving with secondary tasks greatly distracts the driver's attention from the road, and that the evaluation model built in this study can estimate driving safety effectively under different driving conditions. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
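The FCE step can be sketched compactly once index weights are in hand (here simply assumed, standing in for the output of an ANP analysis). The indices, safety grades, and membership values below are invented for illustration.

```python
# Fuzzy comprehensive evaluation with assumed ANP-style weights.
import numpy as np

weights = np.array([0.4, 0.35, 0.25])        # fixation, saccade, blink indices
# Membership of each index in the grades (safe, moderate, unsafe):
R = np.array([[0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5],
              [0.6, 0.3, 0.1]])

B = weights @ R                 # weighted-average fuzzy operator
B /= B.sum()                    # normalize the evaluation vector
grades = ["safe", "moderate", "unsafe"]
print(dict(zip(grades, B.round(3))), "->", grades[int(B.argmax())])
```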
NASA Technical Reports Server (NTRS)
Eckel, J. S.; Crabtree, M. S.
1984-01-01
Analytical and subjective techniques that are sensitive to the information transmission and processing requirements of individual communications-related tasks are used to assess the workload imposed on the aircrew by air traffic control (ATC) communications requirements for civilian transport category aircraft. Communications-related tasks are defined to consist of the verbal exchanges between crews and controllers. Three workload estimating techniques are proposed. The first, an information theoretic analysis, is used to calculate bit values for perceptual, manual, and verbal demands in each communication task. The second, a paired-comparisons technique, obtains subjective estimates of the information processing and memory requirements for specific messages. By combining the results of the first two techniques, a hybrid analytical scale is created. The third, a subjective rank ordering of sequences of communications tasks, provides an overall scaling of communications workload. Recommendations for future research include an examination of communications-induced workload among the aircrew and the development of simulation scenarios.
Longitudinal changes in young children’s 0–100 to 0–1000 number-line error signatures
Reeve, Robert A.; Paul, Jacob M.; Butterworth, Brian
2015-01-01
We use a latent difference score (LDS) model to examine changes in young children’s number-line (NL) error signatures (errors marking numbers on a NL) over 18 months. A LDS model (1) overcomes some of the inference limitations of analytic models used in previous research, and in particular (2) provides a more reliable test of hypotheses about the meaning and significance of changes in NL error signatures over time and task. The NL error signatures of 217 6-year-olds (at the first test occasion) were assessed three times over 18 months, along with their math ability on two occasions. On the first occasion (T1) children completed a 0–100 NL task; on the second (T2) a 0–100 NL and a 0–1000 NL task; on the third (T3) a 0–1000 NL task. On the third and fourth occasions (T3 and T4), children completed mental calculation tasks. Although NL error signatures changed over time, they were predictable from other NL task error signatures, and predicted calculation accuracy at T3, as well as changes in calculation between T3 and T4. Multiple indirect effects (change parameters) showed that associations between initial NL error signatures (0–100 NL) and later mental calculation ability were mediated by error signatures on the 0–1000 NL task. The pattern of findings from the LDS model highlights the value of identifying direct and indirect effects in characterizing changing relationships in cognitive representations over task and time. Substantively, they support the claim that children’s NL error signatures generalize over task and time and thus can be used to predict math ability. PMID:26029152
An analytical framework to assist decision makers in the use of forest ecosystem model predictions
Larocque, Guy R.; Bhatti, Jagtar S.; Ascough, J.C.; Liu, J.; Luckai, N.; Mailly, D.; Archambault, L.; Gordon, Andrew M.
2011-01-01
The predictions from most forest ecosystem models originate from deterministic simulations. However, few evaluation exercises for model outputs are performed by either model developers or users. This issue has important consequences for decision makers using these models to develop natural resource management policies, as they cannot evaluate the extent to which predictions stemming from the simulation of alternative management scenarios may result in significant environmental or economic differences. Various numerical methods, such as sensitivity/uncertainty analyses, or bootstrap methods, may be used to evaluate models and the errors associated with their outputs. However, the application of each of these methods carries unique challenges which decision makers do not necessarily understand; guidance is required when interpreting the output generated from each model. This paper proposes a decision flow chart in the form of an analytical framework to help decision makers apply, in an orderly fashion, different steps involved in examining the model outputs. The analytical framework is discussed with regard to the definition of problems and objectives and includes the following topics: model selection, identification of alternatives, modelling tasks and selecting alternatives for developing policy or implementing management scenarios. Its application is illustrated using an on-going exercise in developing silvicultural guidelines for a forest management enterprise in Ontario, Canada.
Features and characterization needs of rubber composite structures
NASA Technical Reports Server (NTRS)
Tabaddor, Farhad
1989-01-01
Some of the major unique features of rubber composite structures are outlined. The features covered are primarily those related to the material properties, but the analytical features are also briefly discussed. It is essential to recognize these features at the planning stage of any long-range analytical, experimental, or application program. The development of a general and comprehensive program which fully accounts for all the important characteristics of tires, under all the relevant modes of operation, may present a prohibitively expensive and impractical task in the near future. There is therefore a need to develop application methodologies which can utilize the less general models, beyond their theoretical limitations and yet with reasonable reliability, by a proper mix of analytical, experimental, and testing activities.
Atrial Model Development and Prototype Simulations: CRADA Final Report on Tasks 3 and 4
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Hara, T.; Zhang, X.; Villongco, C.
2016-10-28
The goal of this CRADA was to develop essential tools needed to simulate human atrial electrophysiology in 3-dimensions using an anatomical image-based anatomy and a physiologically detailed human cellular model. The atria were modeled as anisotropic, representing the preferentially longitudinal electrical coupling between myocytes. Across the entire anatomy, cellular electrophysiology was heterogeneous, with left and right atrial myocytes defined differently. Left and right cell types for the “control” case of sinus rhythm (SR) were compared with the remodeled electrophysiology and calcium cycling characteristics of chronic atrial fibrillation (cAF). The effects of Isoproterenol (ISO), a beta-adrenergic agonist that represents the functional consequences of PKA phosphorylation of various ion channels and transporters, were also simulated in SR and cAF to represent atrial activity under physical or emotional stress. Results and findings from Tasks 3 and 4 are described. Tasks 3 and 4 are, respectively: input parameters prepared for a Cardioid simulation; and a report including recommendations for additional scenario development and post-processing analytic strategy.
Mihov, Konstantin M; Denzler, Markus; Förster, Jens
2010-04-01
In the last two decades research on the neurophysiological processes of creativity has found contradicting results. Whereas most research suggests right hemisphere dominance in creative thinking, left-hemisphere dominance has also been reported. The present research is a meta-analytic review of the literature to establish how creative thinking relates to relative hemispheric dominance. The analysis was performed on the basis of a non-parametric vote-counting approach and effect-size calculations of Cramer's phi suggest relative dominance of the right hemisphere during creative thinking. Moderator analyses revealed no difference in predominant right-hemispheric activation for verbal vs. figural tasks, holistic vs. analytical tasks, and context-dependent vs. context-independent tasks. Suggestions for further investigations with the meta-analytic and neuroscience methodologies to answer the questions of left hemispheric activation and further moderation of the effects are discussed. Copyright 2009 Elsevier Inc. All rights reserved.
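Both meta-analytic ingredients named above are easy to sketch: a sign-test-style vote count over study outcomes and Cramér's phi from a contingency table. The counts below are invented for illustration, not taken from the review.

```python
# Vote counting and Cramér's phi on hypothetical study counts.
from math import sqrt
from scipy.stats import binomtest, chi2_contingency

# Vote counting: studies reporting right- vs. left-hemisphere dominance.
right, left = 17, 6
print(binomtest(right, right + left, p=0.5).pvalue)  # non-parametric vote count

# Cramér's phi for a 2x2 contingency table (e.g., hemisphere x task type).
table = [[30, 12], [14, 25]]
chi2, _, _, _ = chi2_contingency(table, correction=False)
n = sum(sum(row) for row in table)
phi = sqrt(chi2 / n)  # for a 2x2 table, Cramér's V reduces to phi
print(round(phi, 3))
```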
Fault diagnosis in orbital refueling operations
NASA Technical Reports Server (NTRS)
Boy, Guy A.
1988-01-01
Usually, operation manuals are provided to help astronauts during space operations. These manuals include normal and malfunction procedures. Transferring operation manual knowledge into a computerized form is not a trivial task. This knowledge is generally written by designers or operation engineers and is often quite different from the user logic; the latter is usually a compiled version of the former. Experiments are in progress to assess the user logic. HORSES (Human - Orbital Refueling System - Expert System) is an attempt to include both of these logics in the same tool. It is designed to assist astronauts during monitoring and diagnosis tasks. Basically, HORSES includes a situation recognition level coupled to an analytical diagnoser, and a meta-level working on both of the previous levels. HORSES is a good tool for task modeling and is also more broadly useful for knowledge design. The presentation is represented by abstract and overhead visuals only.
Kennedy, Kristen M.; Rodrigue, Karen M.; Lindenberger, Ulman; Raz, Naftali
2010-01-01
The effects of advanced age and cognitive resources on the course of skill acquisition are unclear, and discrepancies among studies may reflect limitations of data analytic approaches. We applied a multilevel negative exponential model to skill acquisition data from 80 trials (four 20-trial blocks) of a pursuit rotor task administered to healthy adults (19–80 years old). The analyses conducted at the single-trial level indicated that the negative exponential function described performance well. Learning parameters correlated with measures of task-relevant cognitive resources on all blocks except the last and with age on all blocks after the second. Thus, age differences in motor skill acquisition may evolve in 2 phases: In the first, age differences are collinear with individual differences in task-relevant cognitive resources; in the second, age differences orthogonal to these resources emerge. PMID:20047985
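The negative exponential learning curve at the heart of the analysis is straightforward to fit at the single-subject level; the multilevel version in the study adds random effects across subjects. A sketch with synthetic pursuit-rotor data, assuming y(t) = asymptote + gain * exp(-rate * t):

```python
# Fit a negative exponential skill-acquisition curve to synthetic trial data.
import numpy as np
from scipy.optimize import curve_fit

def neg_exp(trial, asymptote, gain, rate):
    return asymptote + gain * np.exp(-rate * trial)

trials = np.arange(1, 81)  # 80 pursuit-rotor trials
true = neg_exp(trials, asymptote=18.0, gain=-12.0, rate=0.06)
y = true + np.random.default_rng(2).normal(0, 0.8, trials.size)  # time on target (s)

params, _ = curve_fit(neg_exp, trials, y, p0=(15.0, -10.0, 0.1))
print("asymptote, gain, rate:", np.round(params, 3))
```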
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gang, G; Stayman, J; Ouadah, S
2015-06-15
Purpose: This work introduces a task-driven imaging framework that utilizes a patient-specific anatomical model, a mathematical definition of the imaging task, and a model of the imaging system to prospectively design acquisition and reconstruction techniques that maximize task-based imaging performance. Utility of the framework is demonstrated in the joint optimization of tube current modulation and view-dependent reconstruction kernel in filtered-backprojection reconstruction, and in non-circular orbit design in model-based reconstruction. Methods: The system model is based on a cascaded systems analysis of cone-beam CT capable of predicting the spatially varying noise and resolution characteristics as a function of the anatomical model and a wide range of imaging parameters. Detectability index for a non-prewhitening observer model is used as the objective function in a task-driven optimization. The combination of tube current and reconstruction kernel modulation profiles was identified through an alternating optimization algorithm in which the tube current was updated analytically, followed by a gradient-based optimization of the reconstruction kernel. The non-circular orbit is first parameterized as a linear combination of basis functions and the coefficients are then optimized using an evolutionary algorithm. The task-driven strategy was compared with conventional acquisitions without modulation, using automatic exposure control, and in a circular orbit. Results: The task-driven strategy outperformed conventional techniques in all tasks investigated, improving the detectability of a spherical lesion detection task by an average of 50% in the interior of a pelvis phantom. The non-circular orbit design successfully mitigated photon starvation effects arising from a dense embolization coil in a head phantom, improving the conspicuity of an intracranial hemorrhage proximal to the coil. Conclusion: The task-driven imaging framework leverages knowledge of the imaging task within a patient-specific anatomical model to optimize image acquisition and reconstruction techniques, thereby improving imaging performance beyond that achievable with conventional approaches. 2R01-CA-112163; R01-EB-017226; U01-EB-018758; Siemens Healthcare (Forcheim, Germany)
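The objective named in the abstract, a non-prewhitening (NPW) observer detectability index, has a standard discrete Fourier form, d'^2 = [sum W^2 T^2]^2 / sum (W^2 T^2 * NPS), where W is the task function, T the system MTF, and NPS the noise-power spectrum. The sketch below computes it on a toy frequency grid; the W, T, and NPS models are simple assumptions, far cruder than the paper's cascaded systems analysis.

```python
# Toy computation of an NPW detectability index on a 2-D frequency grid.
import numpy as np

f = np.fft.fftfreq(256, d=0.5)            # spatial-frequency axis (arbitrary units)
fx, fy = np.meshgrid(f, f)
rho = np.hypot(fx, fy)

W = np.exp(-(rho / 0.3) ** 2)             # task function: low-frequency detection task
T = np.sinc(rho * 0.8)                    # toy system MTF
NPS = 0.5 * (0.1 + rho)                   # toy ramp-like noise-power spectrum

num = np.sum(W**2 * T**2) ** 2
den = np.sum(W**2 * T**2 * NPS)
print(f"NPW detectability index (arbitrary units): {np.sqrt(num / den):.1f}")
```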
Ottaway, Josh; Farrell, Jeremy A; Kalivas, John H
2013-02-05
An essential part of calibration is establishing the analyte calibration reference samples. These samples must characterize the sample matrix and measurement conditions (chemical, physical, instrumental, and environmental) of any sample to be predicted. Calibration usually requires measuring spectra for numerous reference samples in addition to determining the corresponding analyte reference values. Both tasks are typically time-consuming and costly. This paper reports on a method named pure component Tikhonov regularization (PCTR) that does not require laboratory-prepared or laboratory-determined reference values. Instead, an analyte pure-component spectrum is used in conjunction with nonanalyte spectra for calibration. Nonanalyte spectra can be from different sources including pure-component interference samples, blanks, and constant-analyte samples. The approach is also applicable to calibration maintenance when the analyte pure-component spectrum is measured in one set of conditions and nonanalyte spectra are measured in new conditions. The PCTR method balances the trade-offs between calibration model shrinkage and the degree of orthogonality to the nonanalyte content (model direction) in order to obtain accurate predictions. Using visible and near-infrared (NIR) spectral data sets, the PCTR results are comparable to those obtained using ridge regression (RR) with reference calibration sets. The flexibility of PCTR also allows including reference samples if such samples are available.
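One way to read the PCTR idea is as a stacked least-squares problem: the regression vector should respond with 1 to the analyte pure-component spectrum while being shrunk (Tikhonov) and pushed toward orthogonality with the nonanalyte spectra. The sketch below implements that reading with synthetic spectra; the weighting scheme is an assumption, not the authors' exact formulation.

```python
# Stacked least squares: s.T b ~= 1, sqrt(lam)*I b ~= 0, sqrt(eta)*N b ~= 0.
import numpy as np

rng = np.random.default_rng(3)
n_wavelengths = 100
s = np.exp(-0.5 * ((np.arange(n_wavelengths) - 40) / 6.0) ** 2)  # analyte band
N = rng.normal(0, 0.1, (5, n_wavelengths)) + \
    np.exp(-0.5 * ((np.arange(n_wavelengths) - 70) / 8.0) ** 2)  # interferents/blanks

lam, eta = 0.1, 10.0  # shrinkage vs. orthogonality tuning weights (assumed)
A = np.vstack([s[None, :], np.sqrt(lam) * np.eye(n_wavelengths), np.sqrt(eta) * N])
y = np.concatenate([[1.0], np.zeros(n_wavelengths + N.shape[0])])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

print("response to analyte:", float(s @ b))        # ~1 by construction
print("response to interferents:", np.round(N @ b, 4))
```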
Machine learning of network metrics in ATLAS Distributed Data Management
NASA Astrophysics Data System (ADS)
Lassnig, Mario; Toler, Wesley; Vamosi, Ralf; Bogado, Joaquin; ATLAS Collaboration
2017-10-01
The increasing volume of physics data poses a critical challenge to the ATLAS experiment. In anticipation of high luminosity physics, automation of everyday data management tasks has become necessary. Previously many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated systems using models trained by machine learning algorithms. In this contribution we show results from one of our ongoing automation efforts that focuses on network metrics. First, we describe our machine learning framework built atop the ATLAS Analytics Platform. This framework can automatically extract and aggregate data, train models with various machine learning algorithms, and eventually score the resulting models and parameters. Second, we use these models to forecast metrics relevant for network-aware job scheduling and data brokering. We show the characteristics of the data and evaluate the forecasting accuracy of our models.
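The forecasting step reduces, in outline, to supervised regression on historical transfer records. The sketch below assumes such records have already been extracted from the analytics platform; the features, target, and model choice are illustrative, not the ATLAS implementation.

```python
# Regression forecast of a network metric from hypothetical transfer records.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 5000
X = np.column_stack([
    rng.integers(0, 24, n),   # hour of day
    rng.integers(0, 50, n),   # source-destination link id
    rng.uniform(0, 1, n),     # current queue occupancy
])
y = 50 + 30 * X[:, 2] + 5 * np.sin(X[:, 0] / 24 * 2 * np.pi) + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("MAE (MB/s):", round(mean_absolute_error(y_te, model.predict(X_te)), 2))
```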
Intellectual Performance Under Stress
1976-09-01
[Abstract garbled in extraction; recoverable fragments discuss the "filter down" impact of funded skills research of the 1950s and early 1960s, the pursuit of analytic, explanatory models of skilled performance, a tracking task performed alone and in combination with one or both of two other tasks, and tracking performance described by the Crossover Model of McRuer and Krendel (1959).]
ERIC Educational Resources Information Center
Oliver, Rhonda; Grote, Ellen; Rochecouste, Judith; Exell, Michael
2013-01-01
While needs analyses underpin the design of second language analytic syllabi, the methodologies undertaken are rarely examined. This paper explores the value of multiple data sources and collection methods for developing a needs analysis model to enable vocational education and training teachers to address the needs of Australian Aboriginal…
FAST COGNITIVE AND TASK ORIENTED, ITERATIVE DATA DISPLAY (FACTOID)
2017-06-01
[Abstract garbled in extraction; recoverable fragments describe assumptions guiding the development of modeling and descriptive metrics for application evaluation, an analytic workflow that first provides descriptive statistics about applications across performance metrics, and the use of distributions because the goal of evaluation is accurate description, not inference (e.g., prediction).]
ERIC Educational Resources Information Center
Andrade, Alejandro; Delandshere, Ginette; Danish, Joshua A.
2016-01-01
One of the challenges many learning scientists face is the laborious task of coding large amounts of video data and consistently identifying social actions, which is time consuming and difficult to accomplish in a systematic and consistent manner. It is easier to catalog observable behaviours (e.g., body motions or gaze) without explicitly…
Understanding eye movements in face recognition using hidden Markov models.
Chuk, Tim; Chan, Antoni B; Hsiao, Janet H
2014-09-16
We use a hidden Markov model (HMM) based approach to analyze eye movement data in face recognition. HMMs are statistical models that are specialized in handling time-series data. We conducted a face recognition task with Asian participants, and modeled each participant's eye movement pattern with an HMM, which summarized the participant's scan paths in face recognition with both regions of interest and the transition probabilities among them. By clustering these HMMs, we showed that participants' eye movements could be categorized into holistic or analytic patterns, demonstrating significant individual differences even within the same culture. Participants with the analytic pattern had longer response times, but did not differ significantly in recognition accuracy from those with the holistic pattern. We also found that correct and wrong recognitions were associated with distinctive eye movement patterns; the difference between the two patterns lies in the transitions rather than the locations of the fixations alone. © 2014 ARVO.
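The modeling step can be sketched with the hmmlearn library: fixation coordinates are treated as emissions from hidden ROI states, and the learned transition matrix summarizes an individual's scan-path strategy. The data below are synthetic two-ROI fixations, not the study's stimuli or parameters.

```python
# Fit a Gaussian HMM to synthetic fixation sequences.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(5)
# Synthetic fixations drawn around two face regions (eyes, mouth).
eyes = rng.normal([0.0, 1.0], 0.1, (60, 2))
mouth = rng.normal([0.0, -1.0], 0.1, (40, 2))
fixations = np.vstack([eyes, mouth])
lengths = [20, 20, 20, 20, 20]  # five 20-fixation trials

hmm = GaussianHMM(n_components=2, covariance_type="full", random_state=0)
hmm.fit(fixations, lengths)
print("ROI means:\n", hmm.means_.round(2))
print("transition matrix:\n", hmm.transmat_.round(2))
```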
NASA Technical Reports Server (NTRS)
Levison, William H.
1988-01-01
This study explored the application of a closed-loop pilot/simulator model to the analysis of some simulator fidelity issues. The model was applied to two data bases: (1) a NASA ground-based simulation of an air-to-air tracking task in which nonvisual cueing devices were explored, and (2) a ground-based and in-flight study performed by the Calspan Corporation to explore the effects of simulator delay on attitude tracking performance. The model predicted the major performance trends obtained in both studies. A combined analytical and experimental procedure for exploring simulator fidelity issues is outlined.
Comparison of simulator fidelity model predictions with in-simulator evaluation data
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Mckissick, B. T.; Ashworth, B. R.
1983-01-01
A full factorial in simulator experiment of a single axis, multiloop, compensatory pitch tracking task is described. The experiment was conducted to provide data to validate extensions to an analytic, closed loop model of a real time digital simulation facility. The results of the experiment encompassing various simulation fidelity factors, such as visual delay, digital integration algorithms, computer iteration rates, control loading bandwidths and proprioceptive cues, and g-seat kinesthetic cues, are compared with predictions obtained from the analytic model incorporating an optimal control model of the human pilot. The in-simulator results demonstrate more sensitivity to the g-seat and to the control loader conditions than were predicted by the model. However, the model predictions are generally upheld, although the predicted magnitudes of the states and of the error terms are sometimes off considerably. Of particular concern is the large sensitivity difference for one control loader condition, as well as the model/in-simulator mismatch in the magnitude of the plant states when the other states match.
Diesel-fired self-pumping water heater
NASA Astrophysics Data System (ADS)
Gertsmann, Joseph
1994-07-01
The object of this project was to study the feasibility of pumping and heating water by sustained oscillatory vaporization and condensation in a fired heat exchanger. Portable, liquid-fueled field water heaters would facilitate heating water for sanitation, personal hygiene, food service, laundry, equipment maintenance, and decontamination, presently available only from larger, less portable, motorized pumping units. The technical tasks consisted of: development of an analytical model, operation of proof-of-principle prototypes, and determination of the thermal and mechanical relationships to evaluate operating range and control characteristics. Four successive pump models were analyzed and tested. The final analytical model gave reasonable agreement with the experimental results, indicating that the actual pumping effect was an order of magnitude lower than originally anticipated. It was concluded that a thermally-activated self-pumping water heater based on the proposed principle is not feasible.
Pathways to Identity: Aiding Law Enforcement in Identification Tasks With Visual Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruce, Joseph R.; Scholtz, Jean; Hodges, Duncan
The nature of identity has changed dramatically in recent years and has grown in complexity. Identities are defined in multiple domains: biological and psychological elements strongly contribute, but biographical and cyber elements are also necessary to complete the picture. Law enforcement is beginning to adjust to these changes, recognizing identity's importance in criminal justice. The SuperIdentity project seeks to aid law enforcement officials in their identification tasks through research of techniques for discovering identity traits, generation of statistical models of identity, and analysis of identity traits through visualization. We present use cases compiled through user interviews in multiple fields, including law enforcement, as well as the modeling and visualization tools designed to aid in those use cases.
Decision-making under risk conditions is susceptible to interference by a secondary executive task.
Starcke, Katrin; Pawlikowski, Mirko; Wolf, Oliver T; Altstötter-Gleich, Christine; Brand, Matthias
2011-05-01
Recent research suggests two ways of making decisions: an intuitive and an analytical one. The current study examines whether a secondary executive task interferes with advantageous decision-making in the Game of Dice Task (GDT), a decision-making task with explicit and stable rules that taps executive functioning. One group of participants performed the original GDT solely, two groups performed either the GDT and a 1-back or a 2-back working memory task as a secondary task simultaneously. Results show that the group which performed the GDT and the secondary task with high executive load (2-back) decided less advantageously than the group which did not perform a secondary executive task. These findings give further evidence for the view that decision-making under risky conditions taps into the rational-analytical system which acts in a serial and not parallel way as performance on the GDT is disturbed by a parallel task that also requires executive resources.
Human Factors in Streaming Data Analysis: Challenges and Opportunities for Information Visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dasgupta, Aritra; Arendt, Dustin L.; Franklin, Lyndsey
State-of-the-art visual analytics models and frameworks mostly assume a static snapshot of the data, while in many cases it is a stream with constant updates and changes. Exploration of streaming data poses unique challenges, as machine-level computations and abstractions need to be synchronized with the visual representation of the data and the temporally evolving human insights. In the visual analytics literature, we lack a thorough characterization of streaming data and analysis of the challenges associated with task abstraction, visualization design, and adaptation of the role of the human-in-the-loop for exploration of data streams. We aim to fill this gap by conducting a survey of the state-of-the-art in visual analytics of streaming data, systematically describing the contributions and shortcomings of current techniques and analyzing the research gaps that need to be addressed in the future. Our contributions are: i) problem characterization for identifying challenges that are unique to streaming data analysis tasks, ii) a survey and analysis of the state-of-the-art in streaming data visualization research with a focus on the visualization design space for dynamic data and the role of the human-in-the-loop, and iii) reflections on the design trade-offs for streaming visual analytics techniques and their practical applicability in real-world application scenarios.
Kang, Youn-Ah; Stasko, J
2012-12-01
While the formal evaluation of systems in visual analytics is still relatively uncommon, particularly rare are case studies of prolonged system use by domain analysts working with their own data. Conducting case studies can be challenging, but it can be a particularly effective way to examine whether visual analytics systems are truly helping expert users to accomplish their goals. We studied the use of a visual analytics system for sensemaking tasks on documents by six analysts from a variety of domains. We describe their application of the system along with the benefits, issues, and problems that we uncovered. Findings from the studies identify features that visual analytics systems should emphasize as well as missing capabilities that should be addressed. These findings inform design implications for future systems.
Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew
2015-01-01
Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing and data intensive, in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed by leveraging cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. And a service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists. PMID:25742012
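The MapReduce portion of the framework follows the classic pattern: map each observation to a key, shuffle by key, and reduce to an aggregate. A toy in-process version with hypothetical grid-cell temperature records shows the data flow that would run on Hadoop/HBase at scale.

```python
# Toy map/shuffle/reduce pass computing per-cell mean temperatures.
from collections import defaultdict

records = [("cell-17", 21.4), ("cell-17", 22.0), ("cell-08", 19.1),
           ("cell-08", 18.7), ("cell-17", 21.8)]  # (grid cell, temperature)

def mapper(record):
    cell, value = record
    yield cell, (value, 1)

def reducer(key, values):
    total = sum(v for v, _ in values)
    count = sum(c for _, c in values)
    return key, total / count

shuffled = defaultdict(list)
for rec in records:
    for key, val in mapper(rec):
        shuffled[key].append(val)

print(dict(reducer(k, vs) for k, vs in shuffled.items()))
```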
Climate Analytics as a Service. Chapter 11
NASA Technical Reports Server (NTRS)
Schnase, John L.
2016-01-01
Exascale computing, big data, and cloud computing are driving the evolution of large-scale information systems toward a model of data-proximal analysis. In response, we are developing a concept of climate analytics as a service (CAaaS) that represents a convergence of data analytics and archive management. With this approach, high-performance compute-storage implemented as an analytic system is part of a dynamic archive comprising both static and computationally realized objects. It is a system whose capabilities are framed as behaviors over a static data collection, but where queries cause results to be created, not found and retrieved. Those results can be the product of a complex analysis, but, importantly, they also can be tailored responses to the simplest of requests. NASA's MERRA Analytic Service and associated Climate Data Services API provide a real-world example of climate analytics delivered as a service in this way. Our experiences reveal several advantages to this approach, not the least of which is orders-of-magnitude time reduction in the data assembly task common to many scientific workflows.
What makes us think? A three-stage dual-process model of analytic engagement.
Pennycook, Gordon; Fugelsang, Jonathan A; Koehler, Derek J
2015-08-01
The distinction between intuitive and analytic thinking is common in psychology. However, while often being quite clear on the characteristics of the two processes ('Type 1' processes are fast, autonomous, intuitive, etc. and 'Type 2' processes are slow, deliberative, analytic, etc.), dual-process theorists have been heavily criticized for being unclear on the factors that determine when an individual will think analytically or rely on their intuition. We address this issue by introducing a three-stage model that elucidates the bottom-up factors that cause individuals to engage Type 2 processing. According to the model, multiple Type 1 processes may be cued by a stimulus (Stage 1), leading to the potential for conflict detection (Stage 2). If successful, conflict detection leads to Type 2 processing (Stage 3), which may take the form of rationalization (i.e., the Type 1 output is verified post hoc) or decoupling (i.e., the Type 1 output is falsified). We tested key aspects of the model using a novel base-rate task where stereotypes and base-rate probabilities cued the same (non-conflict problems) or different (conflict problems) responses about group membership. Our results support two key predictions derived from the model: (1) conflict detection and decoupling are dissociable sources of Type 2 processing and (2) conflict detection sometimes fails. We argue that considering the potential stages of reasoning allows us to distinguish early (conflict detection) and late (decoupling) sources of analytic thought. Errors may occur at both stages and, as a consequence, bias arises from both conflict monitoring and decoupling failures. Copyright © 2015 Elsevier Inc. All rights reserved.
Task-Analytic Design of Graphic Presentations
1990-05-18
[Abstract garbled in extraction; recoverable fragments state that an important premise of Larkin and Simon's work is that, when comparing alternative presentations, it is fruitful to characterize graphic-based problem solving using the same information-processing models used to understand problem solving with other representations [Newell and Simon, 1972], and that Chapter 2 reviews other work related to designing graphic presentations.]
SPIRE Data-Base Management System
NASA Technical Reports Server (NTRS)
Fuechsel, C. F.
1984-01-01
The Spacelab Payload Integration and Rocket Experiment (SPIRE) data-base management system (DBMS) is based on the relational model of data bases. The data bases are typically used for engineering and mission analysis tasks and, unlike most commercially available systems, allow data items and data structures to be stored in forms suitable for direct analytical computation. The SPIRE DBMS is designed to support data requests from interactive users as well as applications programs.
Innovation and Cultural Change Task Group Report
2006-05-01
[Report text garbled in extraction; recoverable fragments mention authoritative information sources and common analytic methods; a "Chief Administrative Officer and Shared Services" problem/solution pairing in which core functions are not well integrated and the proposed solution is a chief administrative officer plus shared services based on market principles rather than consolidation; and a Defense Business Board note that DoD is considering migrating toward a shared-services model for support functions.]
Spence, Jeffrey S; Brier, Matthew R; Hart, John; Ferree, Thomas C
2013-03-01
Linear statistical models are used very effectively to assess task-related differences in EEG power spectral analyses. Mixed models, in particular, accommodate more than one variance component in a multisubject study, where many trials of each condition of interest are measured on each subject. Generally, intra- and intersubject variances are both important to determine correct standard errors for inference on functions of model parameters, but it is often assumed that intersubject variance is the most important consideration in a group study. In this article, we show that, under common assumptions, estimates of some functions of model parameters, including estimates of task-related differences, are properly tested relative to the intrasubject variance component only. A substantial gain in statistical power can arise from the proper separation of variance components when there is more than one source of variability. We first develop this result analytically, then show how it benefits a multiway factoring of spectral, spatial, and temporal components from EEG data acquired in a group of healthy subjects performing a well-studied response inhibition task. Copyright © 2011 Wiley Periodicals, Inc.
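The variance-component argument can be illustrated with a mixed model in statsmodels: log power is modeled with a fixed task effect and a subject random intercept, so the task contrast is tested against intrasubject variability. The data below are simulated; the go/nogo labels and effect size are assumptions for illustration.

```python
# Mixed model with subject random intercepts on simulated EEG power data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
subjects, trials = 20, 30
rows = []
for s in range(subjects):
    subj_offset = rng.normal(0, 1.0)             # intersubject variability
    for task in ("go", "nogo"):
        effect = 0.4 if task == "nogo" else 0.0  # task-related power difference
        for _ in range(trials):
            rows.append((s, task, subj_offset + effect + rng.normal(0, 0.5)))
df = pd.DataFrame(rows, columns=["subject", "task", "log_power"])

model = smf.mixedlm("log_power ~ task", df, groups=df["subject"]).fit()
print(model.summary())  # task effect tested against intrasubject variance
```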
Dasgupta, Aritra; Lee, Joon-Yong; Wilson, Ryan; Lafrance, Robert A; Cramer, Nick; Cook, Kristin; Payne, Samuel
2017-01-01
Combining interactive visualization with automated analytical methods like statistics and data mining facilitates data-driven discovery. These visual analytic methods are beginning to be instantiated within mixed-initiative systems, where humans and machines collaboratively influence evidence-gathering and decision-making. An open research question, however, is whether domain experts analyzing their data can completely trust the outputs and operations on the machine side. Visualization potentially leads to a transparent analysis process, but do domain experts always trust what they see? To address these questions, we present results from the design and evaluation of a mixed-initiative, visual analytics system for biologists, focusing on the relationship between the familiarity of an analysis medium and domain experts' trust. We propose a trust-augmented design of the visual analytics system that explicitly takes into account domain-specific tasks, conventions, and preferences. For evaluating the system, we present the results of a controlled user study with 34 biologists in which we compare the variation of the level of trust across conventional and visual analytic mediums and explore the influence of familiarity and task complexity on trust. We find that despite being unfamiliar with a visual analytic medium, scientists seem to have an average level of trust comparable with that in a conventional analysis medium. In fact, for complex sense-making tasks, we find that the visual analytic system is able to inspire greater trust than other mediums. We summarize the implications of our findings with directions for future research on the trustworthiness of visual analytic systems.
Cardaci, Maurizio; Misuraca, Raffaella
2005-08-01
This paper raises some methodological problems in the dual-process explanation provided by Wada and Nittono for their 2004 results using the Wason selection task. We maintain that Nittono's rethinking approach is weak and should be refined to better capture the evidence for analytic processes.
Mechelke, Matthias; Herlet, Jonathan; Benz, J Philipp; Schwarz, Wolfgang H; Zverlov, Vladimir V; Liebl, Wolfgang; Kornberger, Petra
2017-12-01
The rising importance of accurately detecting oligosaccharides in biomass hydrolyzates, or as ingredients in food such as beverages and infant milk products, demands tools to sensitively analyze the broad range of available oligosaccharides. Over the last decades, HPAEC-PAD has developed into one of the major technologies for this task and represents a popular alternative to state-of-the-art LC-MS oligosaccharide analysis. This work presents the first comprehensive study giving an overview of the separation of 38 analytes as well as enzymatic hydrolyzates of six different polysaccharides, focusing on oligosaccharides. The high sensitivity of the PAD comes at the cost of its stability, due to recession of the gold electrode. Through an in-depth analysis of the sensitivity drop over time for 35 analytes, including xylo- (XOS), arabinoxylo- (AXOS), laminari- (LOS), manno- (MOS), glucomanno- (GMOS), and cellooligosaccharides (COS), we developed an analyte-specific one-phase decay model for this effect. Using this model resulted in significantly improved data normalization when using an internal standard. Our results thereby allow a quantification approach that takes the inevitable and analyte-specific PAD response drop into account. Graphical abstract HPAEC-PAD analysis of oligosaccharides and determination of PAD response drop leading to improved data normalization.
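A one-phase decay of PAD response, y(t) = plateau + (y0 - plateau) * exp(-k * t), is simple to fit and to invert for normalization. The sketch below fits synthetic response data and rescales a later measurement to its time-zero equivalent; the numbers and the correction helper are illustrative assumptions, not the paper's calibration.

```python
# Fit a one-phase decay of detector response and correct a raw peak area.
import numpy as np
from scipy.optimize import curve_fit

def one_phase_decay(t, y0, plateau, k):
    return plateau + (y0 - plateau) * np.exp(-k * t)

t_hours = np.array([0, 12, 24, 48, 72, 96.0])
response = np.array([100.0, 88.0, 79.5, 68.0, 61.5, 58.0])  # relative peak area

(y0, plateau, k), _ = curve_fit(one_phase_decay, t_hours, response,
                                p0=(100.0, 55.0, 0.03))

def corrected_area(raw_area, t):
    """Rescale a raw peak area to its time-zero equivalent for this analyte."""
    return raw_area * y0 / one_phase_decay(t, y0, plateau, k)

print("k per hour:", round(k, 4), "| corrected:", round(corrected_area(70.0, 60), 1))
```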
Technical, analytical and computer support
NASA Technical Reports Server (NTRS)
1972-01-01
The development of a rigorous mathematical model for the design and performance analysis of cylindrical silicon-germanium thermoelectric generators is reported; the model consists of two parts, a steady-state (static) part and a transient (dynamic) part. The material study task involves the definition and implementation of a study that aims to experimentally characterize the long-term behavior of the thermoelectric properties of silicon-germanium alloys as a function of temperature. Analytical and experimental efforts are aimed at determining the sublimation characteristics of silicon-germanium alloys and studying the effects of sublimation on RTG performance. Studies are also performed on a variety of specific topics in thermoelectric energy conversion.
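For orientation, the steady-state performance of generators of this kind is conventionally summarized by the material figure of merit and the maximum conversion efficiency; the standard textbook relations (stated here for illustration, not quoted from the report) are

$$ Z = \frac{S^2 \sigma}{\kappa}, \qquad \eta_{\max} = \frac{T_h - T_c}{T_h}\cdot\frac{\sqrt{1 + Z\bar{T}} - 1}{\sqrt{1 + Z\bar{T}} + T_c/T_h}, \qquad \bar{T} = \tfrac{1}{2}(T_h + T_c), $$

where $S$ is the Seebeck coefficient, $\sigma$ the electrical conductivity, $\kappa$ the thermal conductivity, and $T_h$, $T_c$ the hot- and cold-junction temperatures.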
Testing and Analysis of Sensor Ports
NASA Technical Reports Server (NTRS)
Zhang, M.; Frendi, A.; Thompson, W.; Casiano, M. J.
2016-01-01
This Technical Publication summarizes the work focused on the testing and analysis of sensor ports. The tasks under this contract were divided into three areas: (1) Development of an Analytical Model, (2) Conducting a Set of Experiments, and (3) Obtaining Computational Solutions. Results from the experiment using both short and long sensor ports were obtained using harmonic, random, and frequency sweep plane acoustic waves. An amplification factor of the pressure signal between the port inlet and the back of the port is obtained and compared to models. Comparisons of model and experimental results showed very good agreement.
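For a rough sense of the amplification being modeled: a blind sensor port behaves approximately as a closed-end tube, so pressure amplification at the back of the port is expected near the quarter-wave resonances. A standard estimate (an illustrative assumption, not the publication's model) is

$$ f_n = \frac{(2n-1)\,c}{4\,(L + \Delta L)}, \qquad n = 1, 2, \ldots, $$

where $c$ is the speed of sound, $L$ the port length, and $\Delta L$ an end correction of order the port radius.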
Oral Motor Abilities Are Task Dependent: A Factor Analytic Approach to Performance Rate.
Staiger, Anja; Schölderle, Theresa; Brendel, Bettina; Bötzel, Kai; Ziegler, Wolfram
2017-01-01
Measures of performance rates in speech-like or volitional nonspeech oral motor tasks are frequently used to draw inferences about articulation rate abnormalities in patients with neurologic movement disorders. The study objective was to investigate the structural relationship between rate measures of speech and of oral motor behaviors different from speech. A total of 130 patients with neurologic movement disorders and 130 healthy subjects participated in the study. Rate data was collected for oral reading (speech), rapid syllable repetition (speech-like), and rapid single articulator movements (nonspeech). The authors used factor analysis to determine whether the different rate variables reflect the same or distinct constructs. The behavioral data were most appropriately captured by a measurement model in which the different task types loaded onto separate latent variables. The data on oral motor performance rates show that speech tasks and oral motor tasks such as rapid syllable repetition or repetitive single articulator movements measure separate traits.
Novel Virtual User Models of Mild Cognitive Impairment for Simulating Dementia
Segkouli, Sofia; Tzovaras, Dimitrios; Tsakiris, Thanos; Tsolaki, Magda; Karagiannidis, Charalampos
2015-01-01
Virtual user modeling research has attempted to address critical issues of human-computer interaction (HCI), such as usability and utility, through a large number of analytic, usability-oriented approaches, such as cognitive models, in order to provide users with experiences that fit their specific needs. However, there is demand for more specific modules, embodied in cognitive architectures, that can detect abnormal cognitive decline across new synthetic task environments. Accessibility evaluation of graphical user interfaces (GUIs) also requires considerable effort to enhance the accessibility of ICT products for older adults. The main aim of this study is to develop and test virtual user models (VUM) that simulate mild cognitive impairment (MCI) through novel specific modules, embodied in cognitive models and defined by estimations of cognitive parameters. Well-established MCI detection tests assessed users' cognition, evaluated their ability to perform multiple tasks, and monitored the performance of infotainment-related tasks to provide more accurate simulation results on existing conceptual frameworks and enhanced predictive validity in interface design, supported by increased task complexity to capture a more detailed profile of users' capabilities and limitations. The final outcome is a more robust cognitive prediction model, accurately fitted to human data, to be used for more reliable interface evaluation through simulation on the basis of virtual models of MCI users. PMID:26339282
Visualisation and Analytic Strategies for Anticipating the Folding of Nets
ERIC Educational Resources Information Center
Wright, Vince
2016-01-01
Visual and analytic strategies are features of students' schemes for spatial tasks. The strategies used by six students to anticipate the folding of nets were investigated. Evidence suggested that visual and analytic strategies were strongly connected in competent performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skrinak, V.M.
The Eastern Devonian Gas Shales Technology Review is a technology transfer vehicle designed to keep industry and research organizations aware of major happenings in the shales. Four issues were published, and the majority of the readership was found to be operators. Under the other major task in this project, areal and analytic analyses of the basin resulted in reducing the study area by 30% while defining a rectangular coordinate system for the basin. Shale-well cost and economic models were developed and validated, and a simplified flow model was prepared.
An NCI-FDA Interagency Oncology Task Force (IOTF) Molecular Diagnostics Workshop was held on October 30, 2008 in Cambridge, MA, to discuss requirements for analytical validation of protein-based multiplex technologies in the context of their intended use. This workshop, developed through NCI's Clinical Proteomic Technologies for Cancer initiative and the FDA, focused on technology-specific analytical validation processes to be addressed prior to use in clinical settings. To make this workshop unique, a case study approach was used to discuss issues related to
Dickinson, Dwight; Ramsey, Mary E; Gold, James M
2007-05-01
Context: In focusing on potentially localizable cognitive impairments, the schizophrenia meta-analytic literature has overlooked the largest single impairment: on digit symbol coding tasks. Objective: To compare the magnitude of the schizophrenia impairment on coding tasks with impairments on other traditional neuropsychological instruments. Data sources: MEDLINE and PsycINFO electronic databases and reference lists from identified articles. Study selection: English-language studies from 1990 to present, comparing performance of patients with schizophrenia and healthy controls on coding tasks and cognitive measures representing at least 2 other cognitive domains. Data extraction: Of 182 studies identified, 40 met all criteria for inclusion in the meta-analysis. Means, standard deviations, and sample sizes were extracted for digit symbol coding and 36 other cognitive variables. In addition, we recorded potential clinical moderator variables, including chronicity/severity, medication status, age, and education, and potential study design moderators, including coding task variant, matching, and study publication date. Data synthesis: Main analyses synthesized data from 37 studies comprising 1961 patients with schizophrenia and 1444 comparison subjects. Combination of mean effect sizes across studies by means of a random effects model yielded a weighted mean effect for digit symbol coding of g = -1.57 (95% confidence interval, -1.66 to -1.48). This effect compared with a grand mean effect of g = -0.98 and was significantly larger than effects for widely used measures of episodic memory, executive functioning, and working memory. Moderator variable analyses indicated that clinical and study design differences between studies had little effect on the coding task effect. Comparison with previous meta-analyses suggested that current results were representative of the broader literature. Subsidiary analysis of data from relatives of patients with schizophrenia also suggested prominent coding task impairments in this group. Conclusions: The 5-minute digit symbol coding task, reliable and easy to administer, taps an information processing inefficiency that is a central feature of the cognitive deficit in schizophrenia and deserves systematic investigation.
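The random-effects combination referred to here is conventionally the inverse-variance weighted mean with an added between-study variance component (e.g., the DerSimonian-Laird form, assumed here for illustration):

$$ \bar{g} = \frac{\sum_i w_i\, g_i}{\sum_i w_i}, \qquad w_i = \frac{1}{v_i + \hat{\tau}^2}, $$

where $g_i$ is the effect size from study $i$, $v_i$ its within-study variance, and $\hat{\tau}^2$ the estimated between-study variance.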
Jet Noise Modeling for Suppressed and Unsuppressed Aircraft in Simulated Flight
NASA Technical Reports Server (NTRS)
Stone, James R.; Krejsa, Eugene A.; Clark, Bruce J.; Berton, Jeffrey J.
2009-01-01
This document describes the development of further extensions and improvements to the jet noise model developed by Modern Technologies Corporation (MTC) for the National Aeronautics and Space Administration (NASA). The noise component extraction and correlation approach, first used successfully by MTC in developing a noise prediction model for two-dimensional mixer ejector (2DME) nozzles under the High Speed Research (HSR) Program, has been applied to dual-stream nozzles, then extended and improved in earlier tasks under this contract. Under Task 6, the coannular jet noise model was formulated and calibrated with limited scale model data, mainly at high bypass ratio, including a limited-range prediction of the effects of mixing-enhancement nozzle-exit chevrons on jet noise. Under Task 9 this model was extended to a wider range of conditions, particularly those appropriate for a Supersonic Business Jet, with an improvement in simulated flight effects modeling and a generalization of the suppressor model. In the present task, further comparisons are made over a still wider range of conditions from more test facilities. The model is also further generalized to cover single-stream nozzles of otherwise similar configuration. The evolution of this prediction/analysis/correlation approach has thus been, in a sense, backward, from the complex to the simple; but from this approach a very robust capability is emerging. Also from these studies, some observations emerge relative to theoretical considerations. The purpose of this task is to develop an analytical, semi-empirical jet noise prediction method applicable to takeoff, sideline, and approach noise of subsonic and supersonic cruise aircraft over a wide size range. The product of this task is an even more consistent and robust model for the Footprint/Radius (FOOTPR) code than the Task 9 model. The model is validated for a wider range of cases and statistically quantified for the various reference facilities. The possible role of facility effects will thus be documented. Although the comparisons that can be accomplished within the limited resources of this task are not comprehensive, they provide a broad enough sampling to enable NASA to make an informed decision on how much further effort should be expended on such comparisons. The improved, finalized model is incorporated into the FOOTPR code. MTC has also supported the adaptation of this code for incorporation in NASA's Aircraft Noise Prediction Program (ANOPP).
Social cognition in schizophrenia: cognitive and affective factors.
Ziv, Ido; Leiser, David; Levine, Joseph
2011-01-01
Social cognition refers to how people conceive, perceive, and draw inferences about mental and emotional states of others in the social world. Previous studies suggest that the concept of social cognition involves several abilities, including those related to affect and cognition. The present study analyses the deficits of individuals with schizophrenia in two areas of social cognition: Theory of Mind (ToM) and emotion recognition and processing. Examining the impairment of these abilities in patients with schizophrenia has the potential to elucidate the neurophysiological regions involved in social cognition and may also have the potential to aid rehabilitation. Two experiments were conducted. Both included the same five tasks: first- and second-level false-belief ToM tasks, emotion inferencing, understanding of irony, and matrix reasoning (a WAIS-R subtest). The matrix reasoning task was administered to evaluate and control for the association of the other tasks with analytic reasoning skills. Experiment 1 involved factor analysis of the task performance of 75 healthy participants. Experiment 2 compared 30 patients with schizophrenia to an equal number of matched controls. Results. (1) The five tasks were clearly divided into two factors corresponding to the two areas of social cognition, ToM and emotion recognition and processing. (2) Schizophrenics' performance was impaired on all tasks, particularly on those loading heavily on the analytic component (matrix reasoning and second-order ToM). (3) Matrix reasoning, second-level ToM (ToM2), and irony were found to distinguish patients from controls, even when all other tasks that revealed significant impairment in the patients' performance were taken into account. The two areas of social cognition examined are related to distinct factors. The mechanism for answering ToM questions (especially ToM2) depends on analytic reasoning capabilities, but the difficulties they present to individuals with schizophrenia are due to other components as well. The impairment in social cognition in schizophrenia stems from deficiencies in several mechanisms, including the ability to think analytically and to process emotion information and cues.
Optimal cooperative control synthesis of active displays
NASA Technical Reports Server (NTRS)
Garg, S.; Schmidt, D. K.
1985-01-01
A technique is developed that is intended to provide a systematic approach to synthesizing display augmentation for optimal manual control in complex, closed-loop tasks. A cooperative control synthesis technique, previously developed to design pilot-optimal control augmentation for the plant, is extended to incorporate the simultaneous design of performance-enhancing displays. The technique utilizes an optimal control model of the man in the loop. It is applied to the design of a quickening control law for a display and a simple K/s^2 plant, and then to an F-15 type aircraft in a multi-channel task. Utilizing the closed-loop modeling and analysis procedures, the results from the display design algorithm are evaluated and an analytical validation is performed. Experimental validation is recommended for future efforts.
Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop.
Legg, Philip A; Chung, David H S; Parry, Matthew L; Bown, Rhodri; Jones, Mark W; Griffiths, Iwan W; Chen, Min
2013-12-01
Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical, since videos may not be semantically annotated sufficiently, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidate search results to support rapid interaction for active learning while minimizing the need to watch videos, and visualizing aggregated information about the search results. We demonstrate the system for searching spatiotemporal attributes in sports video to identify key instances of team and player performance.
Heavy vehicle driver workload assessment. Task 3, task analysis data collection
DOT National Transportation Integrated Search
This technical report consists of a collection of task analytic data to support heavy vehicle driver workload assessment and protocol development. Data were collected from professional drivers to provide insights into the following issues: the meanin...
Relative motion of orbiting particles under the influence of perturbing forces. Volume 1: Summary
NASA Technical Reports Server (NTRS)
Eades, J. B., Jr.
1974-01-01
The relative motion of orbiting vehicles under the influence of various perturbing forces has been studied to determine what influence these inputs, and others, can have. The analytical tasks are described in general terms; the force types considered are outlined, modelled, and simulated; and the capabilities of the computer programs which have evolved in support of this work are noted.
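The classical linearized model for such relative-motion studies is the Hill/Clohessy-Wiltshire equations; whether the report uses exactly this form is not stated, but it is the standard starting point. For a circular reference orbit with mean motion $n$ (with $x$ radial, $y$ along-track, $z$ cross-track, and $a_x, a_y, a_z$ the perturbing accelerations):

$$ \ddot{x} - 2n\dot{y} - 3n^2 x = a_x, \qquad \ddot{y} + 2n\dot{x} = a_y, \qquad \ddot{z} + n^2 z = a_z. $$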
ERIC Educational Resources Information Center
Gumpel, Thomas P.; Nativ-Ari-Am, Hagit
2001-01-01
Two multiple baseline designs were used to evaluate a two-stage model for training four young adults with visual and cognitive impairments to grocery shop. A task-analytical flow chart of the behavioral skills involved in grocery shopping was used to increase completed skill steps and the number of correct items purchased. (Contains references.)…
Design and Analysis of Winglets for Military Aircraft. Phase 2
1977-05-01
This report describes work to determine the effect of the AFFDL/Boeing winglets on the KC-135A's aerodynamic performance and longitudinal and lateral-directional stability, performed under project Aerodynamic Synthesis and Flight Research, task 143101, Unified Flight Mechanics Technology, work unit 14310125, Design and Analysis of Winglets for Military Aircraft. (Recoverable table-of-contents heading: Low-Speed Aerodynamic Analysis of AFFDL/Boeing Winglet on the KC-135A: Description of Analytic Model.)
VAST Challenge 2016: Streaming Visual Analytics
2016-10-25
... understand rapidly evolving situations. To support such tasks, visual analytics solutions must move well beyond systems that simply provide real-time ... received. Mini-Challenge 1 (the Design Challenge) focused on systems to support security and operational analytics at the Euybia ... The goal of Mini-Challenge 1 was to solicit novel approaches for streaming visual analytics that push the boundaries of what constitutes a visual analytics system, and to ...
Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment
Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel
2016-01-01
Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and it has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., the Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of its core components, MapReduce facilitates allocating, processing, and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data for each task were analyzed, and a detailed analysis report was produced. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs. PMID:27589753
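The abstract does not spell out the two-phase regression itself; the sketch below only illustrates the general idea (fit two linear segments to observed task progress and extrapolate the second segment to completion). The breakpoint search, variable names, and progress-fraction representation are all assumptions, not the paper's TPR formulation.

```python
import numpy as np

def predict_finish_time(t, progress):
    """Two-segment ('two-phase') linear regression sketch: pick the
    breakpoint minimizing total squared error, then extrapolate the
    second segment to progress = 1.0."""
    best_err, best_fit = np.inf, None
    for i in range(2, len(t) - 1):          # candidate breakpoints
        p1 = np.polyfit(t[:i], progress[:i], 1)
        p2 = np.polyfit(t[i:], progress[i:], 1)
        err = (np.sum((np.polyval(p1, t[:i]) - progress[:i]) ** 2) +
               np.sum((np.polyval(p2, t[i:]) - progress[i:]) ** 2))
        if err < best_err:
            best_err, best_fit = err, p2
    slope, intercept = best_fit
    return (1.0 - intercept) / slope        # time at which progress hits 1.0

# Hypothetical run-time samples: (seconds elapsed, fraction complete).
t = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0])
prog = np.array([0.05, 0.10, 0.16, 0.22, 0.35, 0.48, 0.61, 0.74])
print(f"estimated finish time: {predict_finish_time(t, prog):.1f} s")
```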
ERIC Educational Resources Information Center
Vlacholia, Maria; Vosniadou, Stella; Roussos, Petros; Salta, Katerina; Kazi, Smaragda; Sigalas, Michael; Tzougraki, Chryssa
2017-01-01
We present two studies that investigated the adoption of visual/spatial and analytic strategies by individuals at different levels of expertise in the area of organic chemistry, using the Visual Analytic Chemistry Task (VACT). The VACT allows the direct detection of analytic strategy use without drawing inferences about underlying mental…
Heavy vehicle driver workload assessment. Task 1, task analysis data and protocols review
DOT National Transportation Integrated Search
This report contains a review of available task analytic data and protocols pertinent to heavy vehicle operation and determination of the availability and relevance of such data to heavy vehicle driver workload assessment. Additionally, a preliminary...
The revelation effect: A meta-analytic test of hypotheses.
Aßfalg, André; Bernstein, Daniel M; Hockley, William
2017-12-01
Judgments can depend on the activity directly preceding them. An example is the revelation effect whereby participants are more likely to claim that a stimulus is familiar after a preceding task, such as solving an anagram, than without a preceding task. We test conflicting predictions of four revelation-effect hypotheses in a meta-analysis of 26 years of revelation-effect research. The hypotheses' predictions refer to three subject areas: (1) the basis of judgments that are subject to the revelation effect (recollection vs. familiarity vs. fluency), (2) the degree of similarity between the task and test item, and (3) the difficulty of the preceding task. We use a hierarchical multivariate meta-analysis to account for dependent effect sizes and variance in experimental procedures. We test the revelation-effect hypotheses with a model selection procedure, where each model corresponds to a prediction of a revelation-effect hypothesis. We further quantify the amount of evidence for one model compared to another with Bayes factors. The results of this analysis suggest that none of the extant revelation-effect hypotheses can fully account for the data. The general vagueness of revelation-effect hypotheses and the scarcity of data were the major limiting factors in our analyses, emphasizing the need for formalized theories and further research into the puzzling revelation effect.
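For reference, the "amount of evidence for one model compared to another" is quantified by the Bayes factor, the ratio of the two models' marginal likelihoods given the data $D$:

$$ \mathrm{BF}_{12} = \frac{p(D \mid M_1)}{p(D \mid M_2)}. $$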
NASA Astrophysics Data System (ADS)
Pendar, Hodjat; Platini, Thierry; Kulkarni, Rahul V.
2013-04-01
Stochasticity in gene expression gives rise to fluctuations in protein levels across a population of genetically identical cells. Such fluctuations can lead to phenotypic variation in clonal populations; hence, there is considerable interest in quantifying noise in gene expression using stochastic models. However, obtaining exact analytical results for protein distributions has been an intractable task for all but the simplest models. Here, we invoke the partitioning property of Poisson processes to develop a mapping that significantly simplifies the analysis of stochastic models of gene expression. The mapping leads to exact protein distributions using results for mRNA distributions in models with promoter-based regulation. Using this approach, we derive exact analytical results for steady-state and time-dependent distributions for the basic two-stage model of gene expression. Furthermore, we show how the mapping leads to exact protein distributions for extensions of the basic model that include the effects of posttranscriptional and posttranslational regulation. The approach developed in this work is widely applicable and can contribute to a quantitative understanding of stochasticity in gene expression and its regulation.
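For reference, the basic two-stage model (transcription at rate $v_0$, translation at rate $v_1$ per mRNA, first-order decay of mRNA and protein at rates $d_0$ and $d_1$) has a well-known steady-state protein distribution in the commonly cited limit of fast mRNA turnover ($d_0 \gg d_1$): a negative binomial. This standard result is consistent with, though not quoted from, the abstract:

$$ P(n) = \frac{\Gamma(n+a)}{\Gamma(n+1)\,\Gamma(a)} \left(\frac{b}{1+b}\right)^{\!n} \left(\frac{1}{1+b}\right)^{\!a}, \qquad a = \frac{v_0}{d_1}, \quad b = \frac{v_1}{d_0}, $$

with $b$ interpreted as the mean translational burst size.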
An investigation of tritium transfer in reactor loops
NASA Astrophysics Data System (ADS)
Ilyasova, O. H.; Mosunova, N. A.
2017-09-01
The work is devoted to the important task of numerically simulating and analyzing tritium behaviour in reactor loops. The simulation was carried out with the HYDRA-IBRAE/LM code, which is being developed at the Nuclear Safety Institute of the Russian Academy of Sciences. The code is intended for modeling liquid metal flow (sodium, lead, and lead-bismuth) on the basis of a non-homogeneous, non-equilibrium two-fluid model. To simulate tritium transfer, a special module has been developed for the code. The module includes models describing the main phenomena of tritium behaviour in reactor loops: transfer, permeation, leakage, etc. Because of the shortage of experimental data, many analytical tests and comparative calculations were considered, some of which are presented in this work. The comparison of the estimated results with experimental and analytical data demonstrates not only qualitative but also good quantitative agreement. It can thus be confirmed that the HYDRA-IBRAE/LM code allows modeling of tritium transfer in reactor loops.
Instinctive analytics for coalition operations (Conference Presentation)
NASA Astrophysics Data System (ADS)
de Mel, Geeth R.; La Porta, Thomas; Pham, Tien; Pearson, Gavin
2017-05-01
The success of future military coalition operations—be they combat or humanitarian—will increasingly depend on a system's ability to share data and processing services (e.g., aggregation, summarization, fusion) and to automatically compose services in support of complex tasks at the network edge. We call such an infrastructure instinctive—i.e., an infrastructure that reacts instinctively to address the analytics task at hand. However, developing such an infrastructure is made complex in the coalition environment by its dynamism, both in terms of user requirements and service availability. To address the above challenge, in this paper we highlight our research vision and sketch some initial solutions to the problem domain. Specifically, we propose means to (1) automatically infer formal task requirements from mission specifications; (2) discover data, services, and their features automatically to satisfy the identified requirements; (3) create and augment shared domain models automatically; (4) efficiently offload services to the network edge and across coalition boundaries, adhering to their computational properties and costs; and (5) optimally allocate and adjust services while respecting the constraints of the operating environment and service fit. We envision that the research will result in a framework that enables self-description, discovery, and assembly capabilities for both data and services in support of coalition mission goals.
Piloted Evaluation of a UH-60 Mixer Equivalent Turbulence Simulation Model
NASA Technical Reports Server (NTRS)
Lusardi, Jeff A.; Blanken, Chris L.; Tischler, Mark B.
2002-01-01
A simulation study of a recently developed hover/low speed Mixer Equivalent Turbulence Simulation (METS) model for the UH-60 Black Hawk helicopter was conducted in the NASA Ames Research Center Vertical Motion Simulator (VMS). The experiment was a continuation of previous work to develop a simple, but validated, turbulence model for hovering rotorcraft. To validate the METS model, two experienced test pilots replicated precision hover tasks that had been conducted in an instrumented UH-60 helicopter in turbulence. Objective simulation data were collected for comparison with flight test data, and subjective data were collected that included handling qualities ratings and pilot comments for increasing levels of turbulence. Analyses of the simulation results show good analytic agreement between the METS model and flight test data, with favorable pilot perception of the simulated turbulence. Precision hover tasks were also repeated using the more complex rotating-frame SORBET (Simulation Of Rotor Blade Element Turbulence) model to generate turbulence. Comparisons of the empirically derived METS model with the theoretical SORBET model show good agreement providing validation of the more complex blade element method of simulating turbulence.
One Giant Leap for Categorizers: One Small Step for Categorization Theory
Smith, J. David; Ell, Shawn W.
2015-01-01
We explore humans’ rule-based category learning using analytic approaches that highlight their psychological transitions during learning. These approaches confirm that humans show qualitatively sudden psychological transitions during rule learning. These transitions contribute to the theoretical literature contrasting single vs. multiple category-learning systems, because they seem to reveal a distinctive learning process of explicit rule discovery. A complete psychology of categorization must describe this learning process, too. Yet extensive formal-modeling analyses confirm that a wide range of current (gradient-descent) models cannot reproduce these transitions, including influential rule-based models (e.g., COVIS) and exemplar models (e.g., ALCOVE). It is an important theoretical conclusion that existing models cannot explain humans’ rule-based category learning. The problem these models have is the incremental algorithm by which learning is simulated. Humans descend no gradient in rule-based tasks. Very different formal-modeling systems will be required to explain humans’ psychology in these tasks. An important next step will be to build a new generation of models that can do so. PMID:26332587
NASA Astrophysics Data System (ADS)
Tong, Rong
As a primary digital library portal for astrophysics researchers, the SAO/NASA ADS (Astrophysics Data System) 2.0 interface features several visualization tools such as Author Network and Metrics. This research study involved 20 long-term ADS users who participated in a usability and eye-tracking research session. Participants first completed a cognitive test and then performed five tasks in ADS 2.0, exploring its multiple visualization tools. Results show that over half of the participants were Imagers and half of the participants were Analytic. Cognitive styles were found to have significant impacts on several efficiency-based measures. Analytic-oriented participants were observed to spend less time on web pages and apps and to make fewer web page changes in performing common tasks than participants with lower Analytic scores, and AI (Analytic-Imagery) participants completed their five tasks faster than non-AI participants. Meanwhile, self-identified Imagery participants were found to be more efficient in their task completion across multiple measures, including total time on task, number of mouse clicks, and number of query revisions made. Imagery scores were negatively associated with the frequency of confusion and the observed counts of being surprised. Compared to those who did not claim to be a visual person, self-identified Imagery participants were observed to show significantly less frustration and hesitation during their task performance. Both demographic variables and past user experiences were found to correlate with task performance; query revision also correlated with multiple time-based measurements. Considered as an indicator of efficiency, query revisions were found to correlate negatively with the rate of completion with ease, and positively with several time-based efficiency measures, the rate of completion with some difficulty, and the frequency of frustration. These results provide rich insights into the cognitive styles of ADS' core users; the impact of such styles and demographic attributes on task performance, affective and cognitive experiences, and interaction behaviors while using the visualization components of ADS 2.0; and can subsequently contribute to the design of bibliographic retrieval systems for scientists.
Denovan, Andrew; Dagnall, Neil; Drinkwater, Kenneth; Parker, Andrew; Clough, Peter
2017-01-01
The present study assessed the degree to which probabilistic reasoning performance and thinking style influenced perception of risk and self-reported levels of terrorism-related behavior change. A sample of 263 respondents, recruited via convenience sampling, completed a series of measures comprising probabilistic reasoning tasks (perception of randomness, base rate, probability, and conjunction fallacy), the Reality Testing subscale of the Inventory of Personality Organization (IPO-RT), the Domain-Specific Risk-Taking Scale, and a terrorism-related behavior change scale. Structural equation modeling examined three progressive models. Firstly, the Independence Model assumed that probabilistic reasoning, perception of risk and reality testing independently predicted terrorism-related behavior change. Secondly, the Mediation Model supposed that probabilistic reasoning and reality testing correlated, and indirectly predicted terrorism-related behavior change through perception of risk. Lastly, the Dual-Influence Model proposed that probabilistic reasoning indirectly predicted terrorism-related behavior change via perception of risk, independent of reality testing. Results indicated that performance on probabilistic reasoning tasks most strongly predicted perception of risk, and preference for an intuitive thinking style (measured by the IPO-RT) best explained terrorism-related behavior change. The combination of perception of risk with probabilistic reasoning ability in the Dual-Influence Model enhanced the predictive power of the analytical-rational route, with conjunction fallacy having a significant indirect effect on terrorism-related behavior change via perception of risk. The Dual-Influence Model possessed superior fit and reported similar predictive relations between intuitive-experiential and analytical-rational routes and terrorism-related behavior change. The discussion critically examines these findings in relation to dual-processing frameworks. This includes considering the limitations of current operationalisations and recommendations for future research that align outcomes and subsequent work more closely to specific dual-process models. PMID:29062288
Gozzi, Marta; Cherubini, Paolo; Papagno, Costanza; Bricolo, Emanuela
2011-05-01
Previous studies found mixed results concerning the role of working memory (WM) in the gambling task (GT). Here, we aimed at reconciling inconsistencies by showing that the standard version of the task can be solved using intuitive strategies operating automatically, while more complex versions require analytic strategies drawing on executive functions. In Study 1, where good performance on the GT could be achieved using intuitive strategies, participants performed well both with and without a concurrent WM load. In Study 2, where analytical strategies were required to solve a more complex version of the GT, participants without WM load performed well, while participants with WM load performed poorly. In Study 3, where the complexity of the GT was further increased, participants in both conditions performed poorly. In addition to the standard performance measure, we used participants' subjective expected utility, showing that it differs from the standard measure in some important aspects.
Problem-based learning on quantitative analytical chemistry course
NASA Astrophysics Data System (ADS)
Fitri, Noor
2017-12-01
This research applies the problem-based learning method to quantitative analytical chemistry, in the course called "Analytical Chemistry II", especially as related to essential oil analysis. The learning outcomes of this course include understanding of the lectures, skill in applying the course materials, and the ability to identify, formulate, and solve chemical analysis problems. The role of study groups is quite important in improving students' learning ability and in completing independent and group tasks. Thus, students are not only aware of the basic concepts of Analytical Chemistry II, but are also able to understand and apply the analytical concepts they have studied to solve given analytical chemistry problems, and have the attitude and ability to work together to solve these problems. Based on the learning outcomes, it can be concluded that the problem-based learning method in the Analytical Chemistry II course improves students' knowledge, skills, abilities, and attitudes. Students are skilled not only at solving problems in analytical chemistry, especially essential oil analysis in accordance with the local distinctiveness of the Chemistry Department, Universitas Islam Indonesia, but also at working with computer programs and understanding material and problems in English.
International Space Station ECLSS Technical Task Agreement Summary Report
NASA Technical Reports Server (NTRS)
Ray, C. D. (Compiler); Salyer, B. H. (Compiler)
1999-01-01
This Technical Memorandum provides a summary of current work accomplished under Technical Task Agreement (TTA) by the Marshall Space Flight Center (MSFC) regarding the International Space Station (ISS) Environmental Control and Life Support System (ECLSS). Current activities include ECLSS component design and development, computer model development, subsystem/integrated system testing, life testing, and general test support provided to the ISS program. Under ECLSS design, MSFC was responsible for the six major ECLSS functions, specifications and standards, and component design and development, and was the architectural control agent for the ISS ECLSS. MSFC was also responsible for ECLSS analytical model development. In-house subsystem- and system-level analysis and testing were conducted in support of the design process, including testing of air revitalization, water reclamation and management hardware, and certain nonregenerative systems. The activities described herein were approved in task agreements between MSFC, the NASA Headquarters Space Station Program Management Office, and its prime contractor for the ISS, Boeing. These MSFC activities are in line with the design, development, testing, and flight of ECLSS equipment planned by Boeing. MSFC's unique capabilities for performing integrated systems testing and analyses, and its ability to perform some tasks cheaper and faster to support ISS program needs, are the basis for the TTA activities.
NASA Technical Reports Server (NTRS)
Miles, R. F., Jr.
1986-01-01
A research and development (R&D) project often involves a number of decisions that must be made concerning which subset of systems or tasks are to be undertaken to achieve the goal of the R&D project. To help in this decision making, SIMRAND (SIMulation of Research ANd Development Projects) is a methodology for the selection of the optimal subset of systems or tasks to be undertaken on an R&D project. Using alternative networks, the SIMRAND methodology models the alternative subsets of systems or tasks under consideration. Each path through an alternative network represents one way of satisfying the project goals. Equations are developed that relate the system or task variables to the measure of preference. Uncertainty is incorporated by treating the variables of the equations probabilistically as random variables, with cumulative distribution functions assessed by technical experts. Analytical techniques of probability theory are used to reduce the complexity of the alternative networks. Cardinal utility functions over the measure of preference are assessed for the decision makers. A run of the SIMRAND I computer program combines, in a Monte Carlo simulation model, the network structure, the equations, the cumulative distribution functions, and the utility functions.
Reevaluating the two-representation model of numerical magnitude processing.
Jiang, Ting; Zhang, Wenfeng; Wen, Wen; Zhu, Haiting; Du, Han; Zhu, Xiangru; Gao, Xuefei; Zhang, Hongchuan; Dong, Qi; Chen, Chuansheng
2016-01-01
One debate in mathematical cognition centers on the single-representation model versus the two-representation model. Using an improved number Stroop paradigm (i.e., systematically manipulating physical size distance), in the present study we tested the predictions of the two models for number magnitude processing. The results supported the single-representation model and, more importantly, explained how a design problem (failure to manipulate physical size distance) and an analytical problem (failure to consider the interaction between congruity and task-irrelevant numerical distance) might have contributed to the evidence used to support the two-representation model. This study, therefore, can help settle the debate between the single-representation and two-representation models.
National facilities study. Volume 3: Mission and requirements model report
NASA Technical Reports Server (NTRS)
1994-01-01
The National Facility Study (NFS) was initiated in 1992 by Daniel S. Goldin, Administrator of NASA as an initiative to develop a comprehensive and integrated long-term plan for future facilities. The resulting, multi-agency NFS consisted of three Task Groups: Aeronautics, Space Operations, and Space Research and Development (R&D) Task Groups. A fourth group, the Engineering and Cost Analysis Task Group, was subsequently added to provide cross-cutting functions, such as assuring consistency in developing an inventory of space facilities. Space facilities decisions require an assessment of current and future needs. Therefore, the two task groups dealing with space developed a consistent model of future space mission programs, operations and R&D. The model is a middle ground baseline constructed for NFS analytical purposes with excursions to cover potential space program strategies. The model includes three major sectors: DOD, civilian government, and commercial space. The model spans the next 30 years because of the long lead times associated with facilities development and usage. This document, Volume 3 of the final NFS report, is organized along the following lines: Executive Summary -- provides a summary view of the 30-year mission forecast and requirements baseline, an overview of excursions from that baseline that were studied, and organization of the report; Introduction -- provides discussions of the methodology used in this analysis; Baseline Model -- provides the mission and requirements model baseline developed for Space Operations and Space R&D analyses; Excursions from the baseline -- reviews the details of variations or 'excursions' that were developed to test the future program projections captured in the baseline; and a Glossary of Acronyms.
NASA Astrophysics Data System (ADS)
Zhou, Weimin; Anastasio, Mark A.
2018-03-01
It has been advocated that task-based measures of image quality (IQ) should be employed to evaluate and optimize imaging systems. Task-based measures of IQ quantify the performance of an observer on a medically relevant task. The Bayesian Ideal Observer (IO), which employs complete statistical information of the object and noise, achieves the upper limit of the performance for a binary signal classification task. However, computing the IO performance is generally analytically intractable and can be computationally burdensome when Markov-chain Monte Carlo (MCMC) techniques are employed. In this paper, supervised learning with convolutional neural networks (CNNs) is employed to approximate the IO test statistics for a signal-known-exactly and background-known-exactly (SKE/BKE) binary detection task. The receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) are compared to those produced by the analytically computed IO. The advantages of the proposed supervised learning approach for approximating the IO are demonstrated.
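As a sketch of the general approach (a small CNN trained as a binary classifier on signal-present and signal-absent images, with its scalar output used as the test statistic for ROC analysis), the following is illustrative only: the architecture, training setup, and simulated SKE/BKE data are assumptions, not the paper's.

```python
import torch
import torch.nn as nn

# Tiny CNN whose scalar output serves as the test statistic for a
# signal-known-exactly / background-known-exactly detection task.
class TestStatisticCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

# Hypothetical SKE/BKE data: a known square signal in Gaussian noise.
n, size = 512, 32
signal = torch.zeros(size, size)
signal[12:20, 12:20] = 0.5
labels = torch.randint(0, 2, (n,)).float()
images = torch.randn(n, 1, size, size) + labels.view(-1, 1, 1, 1) * signal

model = TestStatisticCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # sigmoid of output approximates the posterior
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()

# AUC from the learned test statistic (Wilcoxon-Mann-Whitney estimate).
with torch.no_grad():
    stat = model(images)
auc = (stat[labels == 1].unsqueeze(1) > stat[labels == 0]).float().mean()
print(f"empirical AUC: {auc:.3f}")
```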
Foreign body impact event damage formation in composite structures
NASA Technical Reports Server (NTRS)
Bucinell, Ronald B.
1994-01-01
This report discusses a methodology that can be used to assess the effect of foreign body impacts on composite structural integrity. The described effort focuses on modeling the effect of a central impact on a 5 3/4 inch filament wound test article. The discussion will commence with details of the material modeling that was used to establish the input properties for the analytical model. This discussion is followed by an overview of the impact assessment methodology. The progress on this effort to date is reviewed along with a discussion of tasks that have yet to be completed.
Understanding Education Involving Geovisual Analytics
ERIC Educational Resources Information Center
Stenliden, Linnea
2013-01-01
Handling the vast amounts of data and information available in contemporary society is a challenge. Geovisual Analytics provides technology designed to increase the effectiveness of information interpretation and analytical task solving. To date, little attention has been paid to the role such tools can play in education and to the extent to which…
NASA Technical Reports Server (NTRS)
Bogan, Sam
2001-01-01
The first year included a study of the non-visible damage of composite overwrapped pressure vessels (COPVs) with B. Poe of the Materials Branch of NASA Langley. Early determinations showed a clear reduction in non-visible damage for thin COPVs when partially pressurized rather than unpressurized. Literature searches on thicker-wall COPVs revealed surface damage, but it was clearly visible. Analysis of current analytic modeling indicated that current COPV models lacked sufficient thickness corrections to predict impact damage. After a comprehensive study of available published data and numerous numerical studies based on observed data from Langley, the analytic framework for modeling the behavior was found lacking, and both Poe and Bogan suggested that any short-term (3-year) result for JOVE would be overly ambitious and that emphasis should be placed on transverse shear moduli studies. Transverse shear moduli determination is relevant to the study of fatigue, fracture, and aging effects in composite structures. Based on the techniques developed by Daniel & Tsai, Bogan and Gates determined to verify the results for K3B and 8320. A detailed analytic and experimental plan was established and carried out that included variations in layup, width, thickness, and length, as well as loading-rate variations to determine rate effects and relaxation moduli. The additional axial loads during the torsion testing were studied, as was the placement of gages along the composite specimen. Of the proposed tasks, all of tasks 1 and 2 were completed, with presentations given at Langley, SEM conferences, and ASME/AIAA conferences. Sensitivity issues with the technique, associated with the use of servohydraulic test systems for applying the torsional load to the composite specimen, limited the torsion range for predictable and repeatable transverse shear properties. Bogan and Gates decided to divide the research efforts, with Gates continuing the experimental testing at Langley and Bogan modeling the apparent non-linear behavior at the low torques and angles apparent from the tests.
ERIC Educational Resources Information Center
Tyner, Bryan C.; Fienup, Daniel M.
2016-01-01
Task analyses are ubiquitous to applied behavior analysis interventions, yet little is known about the factors that make them effective. Numerous task analyses have been published in behavior analytic journals for constructing single-subject design graphs; however, learner outcomes using these task analyses may fall short of what could be…
Structural Acoustic Prediction and Interior Noise Control Technology
NASA Technical Reports Server (NTRS)
Mathur, G. P.; Chin, C. L.; Simpson, M. A.; Lee, J. T.; Palumbo, Daniel L. (Technical Monitor)
2001-01-01
This report documents the results of Task 14, "Structural Acoustic Prediction and Interior Noise Control Technology". The task was to evaluate the performance of tuned foam elements (termed Smart Foam) both analytically and experimentally. Results taken from a three-dimensional finite element model of an active, tuned foam element are presented. Measurements of sound absorption and sound transmission loss were taken using the model. These results agree well with published data. Experimental performance data were taken in Boeing's Interior Noise Test Facility where 12 smart foam elements were applied to a 757 sidewall. Several configurations were tested. Noise reductions of 5-10 dB were achieved over the 200-800 Hz bandwidth of the controller. Accelerometers mounted on the panel provided a good reference for the controller. Configurations with far-field error microphones outperformed near-field cases.
More than meets the eye: context effects in word identification.
Masson, M E; Borowsky, R
1998-11-01
The influence of semantic context on word identification was examined using masked target displays. Related prime words enhanced a signal detection measure of sensitivity in making lexical decisions and in determining whether a probe word matched the target word. When line drawings were used as primes, a similar benefit was obtained with the probe task. Although these results suggest that contextual information affects perceptual encoding, this conclusion is questioned on the grounds that sensitivity in these tasks may be determined by independent contributions of perceptual and contextual information. The plausibility of this view is supported by a simulation of the experiments using a connectionist model in which perceptual and semantic information make independent contributions to word identification. The model also predicts results with two other analytic methods that have been used to argue for priming effects on perceptual encoding.
The mere exposure effect and recognition depend on the way you look!
Willems, Sylvie; Dedonder, Jonathan; Van der Linden, Martial
2010-01-01
In line with Whittlesea and Price (2001), we investigated whether the memory effect measured with an implicit memory paradigm (mere exposure effect) and an explicit recognition task depended on perceptual processing strategies, regardless of whether the task required intentional retrieval. We found that manipulation intended to prompt functional implicit-explicit dissociation no longer had a differential effect when we induced similar perceptual strategies in both tasks. Indeed, the results showed that prompting a nonanalytic strategy ensured performance above chance on both tasks. Conversely, inducing an analytic strategy drastically decreased both explicit and implicit performance. Furthermore, we noted that the nonanalytic strategy involved less extensive gaze scanning than the analytic strategy and that memory effects under this processing strategy were largely independent of gaze movement.
Interest-Driven Model for Human Dynamics
NASA Astrophysics Data System (ADS)
Shang, Ming-Sheng; Chen, Guan-Xiong; Dai, Shuang-Xing; Wang, Bing-Hong; Zhou, Tao
2010-04-01
Empirical observations indicate that the interevent time distribution of human actions exhibits heavy-tailed features. The queuing model based on task priorities is to some extent successful in explaining the origin of such heavy tails; however, it cannot explain all the temporal statistics of human behavior, especially for daily entertainments. We propose an interest-driven model, which can reproduce the power-law distribution of interevent times. The exponent can be obtained analytically and is in good accordance with the simulations. This model explains the observed relationship between activities and power-law exponents, as reported recently for web-based behavior and instant message communications.
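The abstract does not specify the interest-update rule, so the following toy simulation only illustrates the general mechanism (interest that changes multiplicatively with activity can generate heavy-tailed interevent times). The damping/recovery rule and all parameter values are assumptions for this sketch, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative satiation/recovery rule (NOT the authors' exact update):
# acting damps interest multiplicatively; idle steps let it recover.
r, r_min, r_max = 0.5, 1e-4, 1.0
damp, recover = 0.8, 1.02

intervals, last_event = [], 0
for t in range(1, 500_000):
    if rng.random() < r:                 # act with probability = interest
        intervals.append(t - last_event)
        last_event = t
        r = max(r * damp, r_min)         # satiation after acting
    else:
        r = min(r * recover, r_max)      # recovery while idle

intervals = np.asarray(intervals)
# Log-binned histogram to inspect the tail of the interevent times.
bins = np.unique(np.logspace(0, np.log10(intervals.max()), 25).astype(int))
density, _ = np.histogram(intervals, bins=bins, density=True)
print(list(zip(bins[:-1], density)))
```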
Prediction of aircraft handling qualities using analytical models of the human pilot
NASA Technical Reports Server (NTRS)
Hess, R. A.
1982-01-01
The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot-induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.
Analytical Chemistry in Russia.
Zolotov, Yuri
2016-09-06
Research in Russian analytical chemistry (AC) is carried out on a significant scale, and the analytical service solves practical tasks in geological survey, environmental protection, medicine, industry, agriculture, etc. The education system trains highly skilled professionals in AC. The development, and especially the manufacturing, of analytical instruments should be improved; in spite of this, there are several good domestic instruments, and others satisfy some requirements. Russian AC has rather good historical roots.
Applications of flight control system methods to an advanced combat rotorcraft
NASA Technical Reports Server (NTRS)
Tischler, Mark B.; Fletcher, Jay W.; Morris, Patrick M.; Tucker, George T.
1989-01-01
Advanced flight control system design, analysis, and testing methodologies developed at the Ames Research Center are applied in an analytical and flight test evaluation of the Advanced Digital Optical Control System (ADOCS) demonstrator. The primary objectives are to describe the knowledge gained about the implications of digital flight control system design for rotorcraft, and to illustrate the analysis of the resulting handling-qualities in the context of the proposed new handling-qualities specification for rotorcraft. Topics covered in depth are digital flight control design and analysis methods, flight testing techniques, ADOCS handling-qualities evaluation results, and correlation of flight test results with analytical models and the proposed handling-qualities specification. The evaluation of the ADOCS demonstrator indicates desirable response characteristics based on equivalent damping and frequency, but undesirably large effective time delays (exceeding 240 msec in all axes). Piloted handling-qualities are found to be desirable or adequate for all low, medium, and high pilot gain tasks; but handling-qualities are inadequate for ultra-high gain tasks such as slope and running landings.
Updating the Finite Element Model of the Aerostructures Test Wing Using Ground Vibration Test Data
NASA Technical Reports Server (NTRS)
Lung, Shun-Fat; Pak, Chan-Gi
2009-01-01
Improved and/or accelerated decision making is a crucial step during flutter certification processes. Unfortunately, most finite element structural dynamics models have uncertainties associated with model validity. Tuning the finite element model using measured data to minimize the model uncertainties is a challenging task in the area of structural dynamics. The model tuning process requires not only satisfactory correlations between analytical and experimental results, but also the retention of the mass and stiffness properties of the structures. Minimizing the difference between analytical and experimental results is a type of optimization problem. By utilizing a multidisciplinary design, analysis, and optimization (MDAO) tool to optimize the objective function and constraints, the mass properties, natural frequencies, and mode shapes can be matched to the target data while retaining mass matrix orthogonality. This approach has been applied to minimize the model uncertainties for the structural dynamics model of the aerostructures test wing (ATW), which was designed and tested at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California). This study has shown that natural frequencies and corresponding mode shapes from the updated finite element model are in excellent agreement with the corresponding measured data.
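As a toy illustration of this tuning loop (a hypothetical two-degree-of-freedom model, not the ATW model or the MDAO tool itself), frequency matching can be posed as a bound-constrained optimization:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize

M = np.diag([1.0, 1.0])                    # mass matrix held fixed

def frequencies(k):
    """Natural frequencies [Hz] of a 2-DOF spring-mass chain."""
    k1, k2 = k
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])
    w2 = eigh(K, M, eigvals_only=True)     # generalized eigenproblem
    return np.sqrt(w2) / (2 * np.pi)

f_target = np.array([1.0, 2.6])            # stand-in for GVT measurements

def objective(k):
    # Sum of squared relative frequency errors.
    return np.sum(((frequencies(k) - f_target) / f_target) ** 2)

res = minimize(objective, x0=[50.0, 50.0], bounds=[(1.0, 1e4)] * 2)
print("tuned stiffnesses:", res.x)
print("tuned frequencies:", frequencies(res.x))
```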
From goal motivation to goal progress: the mediating role of coping in the Self-Concordance Model.
Gaudreau, Patrick; Carraro, Natasha; Miranda, Dave
2012-01-01
The present studies examined the mediating role of self-regulatory mechanisms in the relationship between goal motivation and goal progress in the Self-Concordance Model. First, a systematic review, using meta-analytical path analysis, supported the mediating role of effort and action planning in the positive association between autonomous goal motivation and goal progress. Second, results from two additional empirical studies, using structural equation modeling, lent credence to the mediating role of coping in the relationship between goal motivation and goal progress of university students. Autonomous goal motivation was positively associated with task-oriented coping, which predicted greater goal progress during midterm exams (Study 1, N=702) and at the end of the semester in a different sample (Study 2, N=167). Controlled goal motivation was associated with greater disengagement-oriented coping (Study 1 and Study 2) and lesser use of task-oriented coping (Study 2), which reduced goal progress. These results held up after controlling for perceived stress (Study 2). Our findings highlight the importance of coping in the "inception-to-attainment" goal process because autonomous goal motivation indirectly rather than directly predicts goal progress of university students through their usage of task-oriented coping.
Holistic versus Analytic Evaluation of EFL Writing: A Case Study
ERIC Educational Resources Information Center
Ghalib, Thikra K.; Al-Hattami, Abdulghani A.
2015-01-01
This paper investigates the performance of holistic and analytic scoring rubrics in the context of EFL writing. Specifically, the paper compares EFL students' scores on a writing task using holistic and analytic scoring rubrics. The data for the study was collected from 30 participants attending an English undergraduate program in a Yemeni…
Analytical reasoning task reveals limits of social learning in networks
Rahwan, Iyad; Krasnoshtan, Dmytro; Shariff, Azim; Bonnefon, Jean-François
2014-01-01
Social learning—by observing and copying others—is a highly successful cultural mechanism for adaptation, outperforming individual information acquisition and experience. Here, we investigate social learning in the context of the uniquely human capacity for reflective, analytical reasoning. A hallmark of the human mind is its ability to engage analytical reasoning, and suppress false associative intuitions. Through a set of laboratory-based network experiments, we find that social learning fails to propagate this cognitive strategy. When people make false intuitive conclusions and are exposed to the analytic output of their peers, they recognize and adopt this correct output. But they fail to engage analytical reasoning in similar subsequent tasks. Thus, humans exhibit an ‘unreflective copying bias’, which limits their social learning to the output, rather than the process, of their peers’ reasoning—even when doing so requires minimal effort and no technical skill. In contrast to much recent work on observation-based social learning, which emphasizes the propagation of successful behaviour through copying, our findings identify a limit on the power of social networks in situations that require analytical reasoning. PMID:24501275
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franklin, Lyndsey; Pirrung, Megan A.; Blaha, Leslie M.
Cyber network analysts follow complex processes in their investigations of potential threats to their network. Much research is dedicated to providing automated tool support in the effort to make their tasks more efficient, accurate, and timely. This tool support comes in a variety of implementations, from machine learning algorithms that monitor streams of data to visual analytic environments for exploring rich and noisy data sets. Cyber analysts, however, often speak of a need for tools which help them merge the data they already have and help them establish appropriate baselines against which to compare potential anomalies. Furthermore, existing threat models that cyber analysts regularly use to structure their investigation are not often leveraged in support tools. We report on our work with cyber analysts to understand their analytic process and how one such model, the MITRE ATT&CK Matrix [32], is used to structure their analytic thinking. We present our efforts to map specific data needed by analysts into the threat model to inform our eventual visualization designs. We examine the data mapping for gaps where the threat model is under-supported by either data or tools. We discuss these gaps as potential design spaces for future research efforts. We also discuss the design of a prototype tool that combines machine-learning and visualization components to support cyber analysts working with this threat model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gittens, Alex; Devarakonda, Aditya; Racah, Evan
We explore the trade-offs of performing linear algebra using Apache Spark, compared to traditional C and MPI implementations on HPC platforms. Spark is designed for data analytics on cluster computing platforms with access to local disks and is optimized for data-parallel tasks. We examine three widely used and important matrix factorizations: NMF (for physical plausibility), PCA (for its ubiquity) and CX (for data interpretability). We apply these methods to 1.6TB particle physics, 2.2TB and 16TB climate modeling, and 1.1TB bioimaging data. The data matrices are tall and skinny, which enables the algorithms to map conveniently into Spark's data-parallel model. We perform scaling experiments on up to 1600 Cray XC40 nodes, describe the sources of slowdowns, and provide tuning guidance to obtain high performance.
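For intuition about one of these factorizations, below is a small local NumPy stand-in for CX (the paper's implementations are distributed over Spark and MPI; the matrix and parameters here are synthetic): columns are kept according to leverage scores computed from the top-k right singular vectors, and the rest of the matrix is expressed in terms of them.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((10_000, 50))      # tall-and-skinny matrix
k, c = 5, 10                               # rank parameter, columns kept

_, _, Vt = np.linalg.svd(A, full_matrices=False)
leverage = (Vt[:k] ** 2).sum(axis=0) / k   # column leverage scores
cols = np.argsort(leverage)[::-1][:c]      # keep the c most influential

C = A[:, cols]
X = np.linalg.pinv(C) @ A                  # A is approximated by C @ X
err = np.linalg.norm(A - C @ X) / np.linalg.norm(A)
print("selected columns:", sorted(cols.tolist()))
print(f"relative Frobenius error: {err:.3f}")
```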
Kwak, Youngbin; Payne, John W; Cohen, Andrew L; Huettel, Scott A
2015-01-01
Adolescence is often viewed as a time of irrational, risky decision-making - despite adolescents' competence in other cognitive domains. In this study, we examined the strategies used by adolescents (N=30) and young adults (N=47) to resolve complex, multi-outcome economic gambles. Compared to adults, adolescents were more likely to make conservative, loss-minimizing choices consistent with economic models. Eye-tracking data showed that prior to decisions, adolescents acquired more information in a more thorough manner; that is, they engaged in a more analytic processing strategy indicative of trade-offs between decision variables. In contrast, young adults' decisions were more consistent with heuristics that simplified the decision problem, at the expense of analytic precision. Collectively, these results demonstrate a counter-intuitive developmental transition in economic decision making: adolescents' decisions are more consistent with rational-choice models, while young adults more readily engage task-appropriate heuristics.
Hybrid neural network and fuzzy logic approaches for rendezvous and capture in space
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Castellano, Timothy
1991-01-01
The nonlinear behavior of many practical systems and the unavailability of quantitative data regarding the input-output relations make the analytical modeling of these systems very difficult. On the other hand, approximate reasoning-based controllers, which do not require analytical models, have demonstrated a number of successful applications, such as the subway system in the city of Sendai. These applications have mainly concentrated on emulating the performance of a skilled human operator in the form of linguistic rules. However, the process of learning and tuning the control rules to achieve the desired performance remains a difficult task. Fuzzy logic control is based on fuzzy set theory. A fuzzy set is an extension of a crisp set: crisp sets allow only full membership or no membership at all, whereas fuzzy sets allow partial membership. In other words, an element may partially belong to a set.
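A tiny sketch of the graded-membership idea described above (the linguistic term and breakpoints are invented for the example):

```python
def warm_membership(temp_c: float) -> float:
    """Triangular fuzzy membership of 'warm', peaking at 25 degC."""
    if temp_c <= 15 or temp_c >= 35:
        return 0.0
    if temp_c <= 25:
        return (temp_c - 15) / 10        # rising edge
    return (35 - temp_c) / 10            # falling edge

# A crisp set would return only 0 or 1; fuzzy membership is graded.
for t in (10, 20, 25, 30, 40):
    print(f"{t} degC -> warm to degree {warm_membership(t):.2f}")
```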
Of mental models, assumptions and heuristics: The case of acids and acid strength
NASA Astrophysics Data System (ADS)
McClary, Lakeisha Michelle
This study explored what cognitive resources (i.e., units of knowledge necessary to learn) first-semester organic chemistry students used to make decisions about acid strength and how those resources guided the prediction, explanation and justification of trends in acid strength. We were specifically interested in identifying and characterizing the mental models, assumptions and heuristics that students relied upon to make their decisions, in most cases under time constraints. The views about acids and acid strength were investigated for twenty undergraduate students. Data sources for this study included written responses and individual interviews. The data were analyzed using a qualitative methodology to answer five research questions. Data analysis regarding these research questions was based on existing theoretical frameworks: problem representation (Chi, Feltovich & Glaser, 1981), mental models (Johnson-Laird, 1983), intuitive assumptions (Talanquer, 2006), and heuristics (Evans, 2008). These frameworks were combined to develop the framework from which our data were analyzed. Results indicated that first-semester organic chemistry students' use of cognitive resources was complex and dependent on their understanding of the behavior of acids. Expressed mental models were generated using prior knowledge and assumptions about acids and acid strength; these models were then employed to make decisions. Explicit and implicit features of the compounds in each task mediated participants' attention, which triggered the use of a very limited number of heuristics, or shortcut reasoning strategies. Many students, however, were able to apply more effortful analytic reasoning, though correct trends were predicted infrequently. Most students continued to use their mental models, assumptions and heuristics to explain a given trend in acid strength and to justify their predicted trends, but the tasks influenced a few students to shift from one model to another. An emergent finding from this project was that the problem representation greatly influenced students' ability to make correct predictions of acid strength.
Predictive classification of self-paced upper-limb analytical movements with EEG.
Ibáñez, Jaime; Serrano, J I; del Castillo, M D; Minguez, J; Pons, J L
2015-11-01
The extent to which electroencephalographic activity allows the characterization of movements with the upper limb is an open question. This paper describes the design and validation of a classifier of upper-limb analytical movements based on electroencephalographic activity extracted from intervals preceding self-initiated movement tasks. Features selected for the classification are subject specific and associated with the movement tasks. Further tests are performed to reject the hypothesis that information other than the task-related cortical activity is being used by the classifiers. Six healthy subjects were measured performing self-initiated upper-limb analytical movements. A Bayesian classifier was used to classify among seven different kinds of movements. Features considered covered the alpha and beta bands. A genetic algorithm was used to optimally select a subset of features for the classification. An average accuracy of 62.9 ± 7.5% was reached, which was above the baseline level observed with the proposed methodology (30.2 ± 4.3%). The study shows how electroencephalography carries information about the type of analytical movement performed with the upper limb and how it can be decoded before the movement begins. In neurorehabilitation environments, this information could be used for monitoring and assisting purposes.
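A simplified stand-in for this pipeline (synthetic band-power features; the paper's subject-specific, genetic-algorithm-selected features and exact classifier details are not reproduced) might look like:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_per_class, n_classes, n_features = 40, 7, 12

# Each movement class gets a different mean alpha/beta band-power pattern.
means = rng.standard_normal((n_classes, n_features))
X = np.vstack([means[c] + 0.8 * rng.standard_normal((n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

scores = cross_val_score(GaussianNB(), X, y, cv=5)
print(f"cross-validated accuracy {scores.mean():.2f} "
      f"vs chance {1 / n_classes:.2f}")
```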
Development of the biology card sorting task to measure conceptual expertise in biology.
Smith, Julia I; Combs, Elijah D; Nagami, Paul H; Alto, Valerie M; Goh, Henry G; Gourdet, Muryam A A; Hough, Christina M; Nickell, Ashley E; Peer, Adrian G; Coley, John D; Tanner, Kimberly D
2013-01-01
There are widespread aspirations to focus undergraduate biology education on teaching students to think conceptually like biologists; however, there is a dearth of assessment tools designed to measure progress from novice to expert biological conceptual thinking. We present the development of a novel assessment tool, the Biology Card Sorting Task, designed to probe how individuals organize their conceptual knowledge of biology. While modeled on tasks from cognitive psychology, this task is unique in its design to test two hypothesized conceptual frameworks for the organization of biological knowledge: 1) a surface feature organization focused on organism type and 2) a deep feature organization focused on fundamental biological concepts. In this initial investigation of the Biology Card Sorting Task, each of six analytical measures showed statistically significant differences when used to compare the card sorting results of putative biological experts (biology faculty) and novices (non-biology major undergraduates). Consistently, biology faculty appeared to sort based on hypothesized deep features, while non-biology majors appeared to sort based on either surface features or nonhypothesized organizational frameworks. Results suggest that this novel task is robust in distinguishing populations of biology experts and biology novices and may be an adaptable tool for tracking emerging biology conceptual expertise.
The Global War on Terrorism: Analytical Support, Tools and Metrics of Assessment. MORS Workshop
2005-08-11
… is the matter of intelligence; as COL(P) Keller pointed out, we need to spend less time in the intelligence cycle on managing information and … models, decision aids: "named things" * Methodologies: potentially useful things * Resources: databases, people, books? * Meta-data on tools * Develop a … experience. Only one member (Mr. Garry Greco) had served on the Joint Intelligence Task Force for Counter Terrorism. Although Gary heavily participated …
Closed-form solution of the Ogden-Hill's compressible hyperelastic model for ramp loading
NASA Astrophysics Data System (ADS)
Berezvai, Szabolcs; Kossa, Attila
2017-05-01
This article deals with the visco-hyperelastic modelling approach for compressible polymer foam materials. Polymer foams can exhibit large elastic strains and displacements in case of volumetric compression. In addition, they often show significant rate-dependent properties. This material behaviour can be accurately modelled using the visco-hyperelastic approach, in which the large-strain viscoelastic description is combined with a rate-independent hyperelastic material model. For polymer foams, the most widely used compressible hyperelastic material model, the so-called Ogden-Hill model, was applied, as implemented in the commercial finite element (FE) software Abaqus. The visco-hyperelastic model is defined in hereditary integral form; therefore, obtaining a closed-form solution for the stress is not a trivial task. However, the parameter-fitting procedure can be much faster and more accurate if a closed-form solution exists. In this contribution, exact stress solutions are derived for uniaxial, biaxial and volumetric compression loading cases using a ramp-loading history. The analytical stress solutions are compared with the stress results from FE analysis in Abaqus. In order to highlight the benefits of the analytical closed-form solution during the parameter-fitting process, experimental work was carried out on a particular open-cell memory foam material. The results of the material identification process show a significant accuracy improvement in the fitting procedure when the derived analytical solutions are applied, compared to the so-called separated approach used in engineering practice.
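To see why a closed form pays off, note that without one, every stress evaluation during parameter fitting requires a numerical hereditary-integral convolution. The sketch below does exactly that for a one-dimensional linear viscoelastic analogue under ramp loading; it is not the Ogden-Hill model itself, and all parameters are invented:

```python
import numpy as np

g_inf, g1, tau1 = 0.6, 0.4, 0.5             # one-term Prony series
E0, rate, t_ramp = 10.0, 0.2, 1.0           # stiffness, strain rate, ramp end

t = np.linspace(0.0, 3.0, 601)
eps_dot = np.where(t <= t_ramp, rate, 0.0)  # ramp then hold

def relax(s):
    """Normalized relaxation function g(s)."""
    return g_inf + g1 * np.exp(-s / tau1)

# sigma(t) = E0 * integral_0^t g(t - s) deps/ds ds, via the trapezoid rule.
sigma = np.array([np.trapz(relax(ti - t[: i + 1]) * eps_dot[: i + 1],
                           t[: i + 1])
                  for i, ti in enumerate(t)]) * E0
print(f"stress at end of ramp: {sigma[t <= t_ramp][-1]:.3f}")
```

Each evaluation costs a full convolution over the history; an exact ramp-loading solution collapses this to a formula, which is why fitting becomes faster and more accurate.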
An analytical approach for predicting pilot induced oscillations
NASA Technical Reports Server (NTRS)
Hess, R. A.
1981-01-01
The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.
A pilot modeling technique for handling-qualities research
NASA Technical Reports Server (NTRS)
Hess, R. A.
1980-01-01
A brief survey of the more dominant analysis techniques used in closed-loop handling-qualities research is presented. These techniques are shown to rely on so-called classical and modern analytical models of the human pilot which have their foundation in the analysis and design principles of feedback control. The optimal control model of the human pilot is discussed in some detail and a novel approach to the a priori selection of pertinent model parameters is discussed. Frequency domain and tracking performance data from 10 pilot-in-the-loop simulation experiments involving 3 different tasks are used to demonstrate the parameter selection technique. Finally, the utility of this modeling approach in handling-qualities research is discussed.
Psotta, Rudolf; Abdollahipour, Reza
2017-12-01
The Movement Assessment Battery for Children-2nd Edition (MABC-2) is a test of motor development, widely used in clinical and research settings. To address which motor abilities are actually captured by the motor tasks in the two age versions of the MABC-2, the AB2 for 7- to 10-year-olds and the AB3 for 11- to 16-year-olds, we examined the factorial validity of AB2 and AB3. We conducted confirmatory factor analysis (SPSS AMOS 22.0) on data from the test's standardization samples of children aged 7-10 (n = 483) and 11-16 (n = 674) in order to find the best-fitting models. The covariance matrices of AB2 and AB3 fit a three-factor model that included tasks of manual dexterity, aiming and catching, and balance. However, the factor-analytic models fitting AB2 and AB3 did not involve the dynamic balance tasks of hopping with the better leg and hopping with the other leg, and the drawing trail showed very low factor validity. In sum, both AB2 and AB3 of the MABC-2 test are able to discriminate between the three specific motor abilities; but due to questionable psychometric quality, the drawing trail and hopping tasks should be modified to improve the construct validity of both age versions of the MABC-2.
Advanced Video Activity Analytics (AVAA): Human Factors Evaluation
2015-05-01
… video, and 3) creating and saving annotations (Fig. 11). (The logging program was updated after the pilot to also capture search clicks.) Playing and … visual search task and the auditory task together and thus automatically focused on the visual task. Alternatively, the operator may have intentionally … affect performance on the primary task; however, in the current test there was no apparent effect on the operator's performance in the visual search task.
Does overgeneral autobiographical memory result from poor memory for task instructions?
Yanes, Paula K; Roberts, John E; Carlos, Erica L
2008-10-01
Considerable previous research has shown that retrieval of overgeneral autobiographical memories (OGM) is elevated among individuals suffering from various emotional disorders and those with a history of trauma. Although previous theories suggest that OGM serves the function of regulating acute negative affect, it is also possible that OGM results from difficulties in keeping the instruction set for the Autobiographical Memory Test (AMT) in working memory, or what has been coined "secondary goal neglect" (Dalgleish, 2004). The present study tested whether OGM is associated with poor memory for the task's instruction set, and whether an instruction set reminder would improve memory specificity over repeated trials. Multilevel modelling data-analytic techniques demonstrated a significant relationship between poor recall of instruction set and probability of retrieving OGMs. Providing an instruction set reminder for the AMT relative to a control task's instruction set improved memory specificity immediately afterward.
Ayal, Shahar; Rusou, Zohar; Zakay, Dan; Hochman, Guy
2015-01-01
A framework is presented to better characterize the role of individual differences in information processing style and their interplay with contextual factors in determining decision making quality. In Experiment 1, we show that individual differences in information processing style are flexible and can be modified by situational factors. Specifically, a situational manipulation that induced an analytical mode of thought improved decision quality. In Experiment 2, we show that this improvement in decision quality is highly contingent on the compatibility between the dominant thinking mode and the nature of the task. That is, encouraging an intuitive mode of thought led to better performance on an intuitive task but hampered performance on an analytical task. The reverse pattern was obtained when an analytical mode of thought was encouraged. We discuss the implications of these results for the assessment of decision making competence, and suggest practical directions to help individuals better adjust their information processing style to the situation at hand and make optimal decisions. PMID:26284011
Final report for 105-N Basin sediment disposition task, phase 2 -- samples BOMPC8 and BOMPC9
DOE Office of Scientific and Technical Information (OSTI.GOV)
Esch, R.A.
1998-02-05
This document is the final report deliverable for Phase 2 analytical work for the 105-N Basin Sediment Disposition Task. On December 23, 1997, ten samples were received at the 222-S Laboratory as follows: two (2) bottles of potable water, six (6) samples for process control testing, and two (2) samples for characterization. Analyses were performed in accordance with the Letter of Instruction for Phase 2 Analytical Work for the 105-N Basin Sediment Disposition Task (Logan and Kessner, 1997) (Attachment 7) and the 105-N Basin Sediment Disposition Phase-Two Sampling and Analysis Plan (SAP) (Smith, 1997). The analytical results are included in Table 1. This document provides the values of X/Qs for the onsite and offsite receptors, taking into account the building wake and the atmospheric stability effects. X/Q values for the potential fire accident were also calculated. In addition, unit doses were calculated for the mixtures of isotopes.
Sanabria, Federico; Killeen, Peter R
2008-01-01
Background: The inability to inhibit reinforced responses is a defining feature of ADHD associated with impulsivity. The Spontaneously Hypertensive Rat (SHR) has been extolled as an animal model of ADHD, but there is no clear experimental evidence of inhibition deficits in SHR. Attempts to demonstrate these deficits may have suffered from methodological and analytical limitations. Methods: We provide a rationale for using two complementary response-withholding tasks to doubly dissociate impulsivity from motivational and motor processes. In the lever-holding task (LHT), continual lever depression was required for a minimum interval. Under a differential reinforcement of low rates schedule (DRL), a minimum interval was required between lever presses. Both tasks were studied using SHR and two normotensive control strains, Wistar-Kyoto (WKY) and Long Evans (LE), over an overlapping range of intervals (1-5 s for LHT and 5-60 s for DRL). Lever-holding and DRL performance was characterized as the output of a mixture of two processes, timing and iterative random responding; we call this account of response inhibition the Temporal Regulation (TR) model. In the context of TR, impulsivity was defined as a bias toward premature termination of the timed intervals. Results: The TR model provided an accurate description of LHT and DRL performance. On the basis of TR parameter estimates, SHRs were more impulsive than LE rats across tasks and target times. WKY rats produced substantially shorter timed responses in the lever-holding task than in DRL, suggesting a motivational or motor deficit. The precision of timing by SHR, as measured by the variance of their timed intervals, was excellent, flouting expectations from ADHD research. Conclusion: This research validates the TR model of response inhibition and supports SHR as an animal model of ADHD-related impulsivity. It indicates, however, that SHR's impulse-control deficit is not caused by imprecise timing. The use of ad hoc impulsivity metrics and of WKY as a control strain for SHR impulsivity are called into question. PMID:18261220
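A simplified instantiation of the TR idea (the exact model and fitting details in the paper differ; the data here are simulated) fits interresponse times as a mixture of a timed Gaussian component and an exponential random-responding component by maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, expon

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(5.0, 1.0, 700),    # timed responses
                       rng.exponential(2.0, 300)])    # random responding

def nll(params):
    """Negative log-likelihood of the two-component mixture."""
    q, theta, sigma, lam = params
    pdf = (q * norm.pdf(data, theta, sigma)
           + (1 - q) * expon.pdf(data, scale=lam))
    return -np.log(pdf + 1e-300).sum()

res = minimize(nll, x0=[0.5, 4.0, 1.5, 1.5],
               bounds=[(0.01, 0.99), (0.1, 20), (0.1, 10), (0.1, 10)])
q, theta, sigma, lam = res.x
# A shortened timed mean (theta) relative to the target interval would be
# read as an impulsive bias in this framework.
print(f"timing share q = {q:.2f}, timed mean theta = {theta:.2f} s")
```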
ClimateSpark: An in-memory distributed computing framework for big climate data analytics
NASA Astrophysics Data System (ADS)
Hu, Fei; Yang, Chaowei; Schnase, John L.; Duffy, Daniel Q.; Xu, Mengchao; Bowen, Michael K.; Lee, Tsengdar; Song, Weiwei
2018-06-01
The unprecedented growth of climate data creates new opportunities for climate studies, and yet big climate data pose a grand challenge to climatologists to efficiently manage and analyze big data. The complexity of climate data content and analytical algorithms increases the difficulty of implementing algorithms on high performance computing systems. This paper proposes an in-memory, distributed computing framework, ClimateSpark, to facilitate complex big data analytics and time-consuming computational tasks. A chunked data structure improves parallel I/O efficiency, while a spatiotemporal index is built over the chunks to avoid unnecessary data reading and preprocessing. An integrated, multi-dimensional, array-based data model (ClimateRDD) and ETL operations are developed to address big climate data variety by integrating the processing components of the climate data lifecycle. ClimateSpark utilizes Spark SQL and Apache Zeppelin to develop a web portal to facilitate the interaction among climatologists, climate data, analytic operations and computing resources (e.g., using SQL query and Scala/Python notebooks). Experimental results show that ClimateSpark conducts different spatiotemporal data queries/analytics with high efficiency and data locality. ClimateSpark is easily adaptable to other big multi-dimensional, array-based datasets in various geoscience domains.
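The chunk-index idea can be sketched in a few lines (the layout, bounds, and names are invented for illustration): lightweight chunk metadata are consulted first, so only chunks intersecting the query are read and preprocessed.

```python
from dataclasses import dataclass

@dataclass
class ChunkMeta:
    chunk_id: str
    lat: tuple    # (min, max) degrees
    lon: tuple    # (min, max) degrees
    time: tuple   # (start, end) as integer timestamps

index = [
    ChunkMeta("c0", (0, 30), (0, 45), (0, 100)),
    ChunkMeta("c1", (30, 60), (0, 45), (0, 100)),
    ChunkMeta("c2", (0, 30), (45, 90), (100, 200)),
]

def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def chunks_for_query(lat, lon, time):
    """Return only the chunk ids whose bounds intersect the query box."""
    return [c.chunk_id for c in index
            if overlaps(c.lat, lat) and overlaps(c.lon, lon)
            and overlaps(c.time, time)]

print(chunks_for_query(lat=(10, 20), lon=(40, 50), time=(50, 120)))
```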
Analytic and heuristic processes in the detection and resolution of conflict.
Ferreira, Mário B; Mata, André; Donkin, Christopher; Sherman, Steven J; Ihmels, Max
2016-10-01
Previous research with the ratio-bias task found larger response latencies for conflict trials, where the heuristic- and analytic-based responses are assumed to be in opposition (e.g., choosing between 1/10 and 9/100 ratios of success), than for no-conflict trials, where both processes converge on the same response (e.g., choosing between 1/10 and 11/100). This pattern is consistent with parallel dual-process models, which assume that there is effective, rather than lax, monitoring of the output of heuristic processing. It is, however, unclear why conflict resolution sometimes fails. Ratio-biased choices may increase because of a decline in analytical reasoning (leaving heuristic-based responses unopposed) or because of a rise in heuristic processing (making it more difficult for analytic processes to override the heuristic preferences). Using the process-dissociation procedure, we found that instructions to respond logically and response speed affected analytic (controlled) processing (C), leaving heuristic processing (H) unchanged, whereas the intuitive preference for large numerators (as assessed by responses to equal-ratio trials) affected H but not C. These findings create new challenges for the debate between dual-process and single-process accounts, which are discussed.
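One common process-dissociation mapping for this kind of design (shown here with made-up response proportions, and hedged as one standard formulation rather than the authors' exact equations) treats a heuristic-consistent choice on no-conflict trials as arising from analysis or heuristics, C + (1 - C)H, and on conflict trials as requiring analysis to fail, (1 - C)H:

```python
def pdp_estimates(p_noconflict: float, p_conflict: float):
    """Return (C, H) from heuristic-consistent choice proportions.

    p_noconflict: P(heuristic-consistent choice | no-conflict trials)
    p_conflict:   P(heuristic-consistent choice | conflict trials)
    """
    C = p_noconflict - p_conflict          # controlled/analytic component
    H = p_conflict / (1 - C) if C < 1 else float("nan")
    return C, H

C, H = pdp_estimates(p_noconflict=0.95, p_conflict=0.35)
print(f"C = {C:.2f}, H = {H:.2f}")
```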
Streamflow variability and optimal capacity of run-of-river hydropower plants
NASA Astrophysics Data System (ADS)
Basso, S.; Botter, G.
2012-10-01
The identification of the capacity of a run-of-river plant which allows for the optimal utilization of the available water resources is a challenging task, mainly because of the inherent temporal variability of river flows. This paper proposes an analytical framework to describe the energy production and the economic profitability of small run-of-river power plants on the basis of the underlying streamflow regime. We provide analytical expressions for the capacity which maximizes the produced energy as a function of the underlying flow duration curve and minimum environmental flow requirements downstream of the plant intake. Similar analytical expressions are derived for the capacity which maximizes the economic return deriving from construction and operation of a new plant. The analytical approach is applied to a minihydro plant recently proposed in a small Alpine catchment in northeastern Italy, evidencing the potential of the method as a flexible and simple design tool for practical application. The analytical model provides useful insight on the major hydrologic and economic controls (e.g., streamflow variability, energy price, costs) on the optimal plant capacity and helps in identifying policy strategies to reduce the current gap between the economic and energy optimizations of run-of-river plants.
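A schematic numerical counterpart of this analysis (synthetic flow duration curve and invented price/cost figures, not the paper's expressions): scan candidate capacities and pick the one maximizing profit after honoring a minimum environmental flow.

```python
import numpy as np

F = np.linspace(0, 1, 200)               # exceedance probability
Q = 20.0 * np.exp(-3.0 * F)              # synthetic flow duration curve [m3/s]
mef = 1.0                                # minimum environmental flow [m3/s]

def energy(c):
    """Energy index: integral over durations of the flow the turbine uses."""
    return np.trapz(np.clip(Q - mef, 0.0, c), F)

def profit(c, price=1.0, unit_cost=0.3):
    """Schematic revenue minus a cost growing linearly with capacity."""
    return price * energy(c) - unit_cost * c

capacities = np.linspace(0.5, 20.0, 200)
best = capacities[np.argmax([profit(c) for c in capacities])]
print(f"profit-optimal capacity ~ {best:.1f} m3/s")
```

Because energy never decreases with capacity while costs do grow, the economic optimum sits at an interior capacity, which is the gap between energy and economic optimization the abstract refers to.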
Automated Trait Scores for "GRE"® Writing Tasks. Research Report. ETS RR-15-15
ERIC Educational Resources Information Center
Attali, Yigal; Sinharay, Sandip
2015-01-01
The "e-rater"® automated essay scoring system is used operationally in the scoring of the argument and issue tasks that form the Analytical Writing measure of the "GRE"® General Test. For each of these tasks, this study explored the value added of reporting 4 trait scores for each of these 2 tasks over the total e-rater score.…
Equation-free multiscale computation: algorithms and applications.
Kevrekidis, Ioannis G; Samaey, Giovanni
2009-01-01
In traditional physicochemical modeling, one derives evolution equations at the (macroscopic, coarse) scale of interest; these are used to perform a variety of tasks (simulation, bifurcation analysis, optimization) using an arsenal of analytical and numerical techniques. For many complex systems, however, although one observes evolution at a macroscopic scale of interest, accurate models are only given at a more detailed (fine-scale, microscopic) level of description (e.g., lattice Boltzmann, kinetic Monte Carlo, molecular dynamics). Here, we review a framework for computer-aided multiscale analysis, which enables macroscopic computational tasks (over extended spatiotemporal scales) using only appropriately initialized microscopic simulation on short time and length scales. The methodology bypasses the derivation of macroscopic evolution equations when these equations conceptually exist but are not available in closed form; hence the term equation-free. We selectively discuss basic algorithms and underlying principles and illustrate the approach through representative applications. We also discuss potential difficulties and outline areas for future research.
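A bare-bones sketch of coarse projective integration, one of the basic equation-free algorithms (here a simple ODE stands in for the microscopic simulator, and the step sizes are illustrative): run a short fine-scale burst, estimate the coarse time derivative from its last points, then take a large extrapolation step without ever writing down the macroscopic equation.

```python
import numpy as np

DT_FINE, N_BURST, DT_COARSE = 1e-3, 10, 0.05

def fine_step(u, dt=DT_FINE):
    """Stand-in microscopic simulator: one small explicit Euler step."""
    return u + dt * (-u + np.sin(u))

u, t = 2.0, 0.0
for _ in range(20):
    burst = [u]
    for _ in range(N_BURST):                    # short fine-scale burst
        burst.append(fine_step(burst[-1]))
    dudt = (burst[-1] - burst[-2]) / DT_FINE    # coarse derivative estimate
    u = burst[-1] + DT_COARSE * dudt            # large projective step
    t += N_BURST * DT_FINE + DT_COARSE
print(f"u({t:.2f}) ~= {u:.4f}")
```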
Information Tailoring Enhancements for Large Scale Social Data
2016-03-15
Work performed within this reporting period included the following tasks. Implemented Temporal Analysis Algorithms for Advanced Analytics in Scraawl: we implemented our backend web service design for the … temporal analysis, and we created a prototype GUI web service for the Scraawl analytics dashboard. Upgraded the Scraawl computational framework to increase …
MIT CSAIL and Lincoln Laboratory Task Force Report
2016-08-01
Projects have been very diverse, spanning several areas of CSAIL concentration, including robotics, big data analytics, wireless communications, computing architectures and … machine learning systems and algorithms, such as recommender systems, and "Big Data" analytics. Advanced computing architectures broadly refer to …
Facilitating an L2 Book Club: A Conversation-Analytic Study of Task Management
ERIC Educational Resources Information Center
Ro, Eunseok
2018-01-01
This study employs conversation analysis to examine a facilitator's interactional practices in the post-expansion phase of students' presentations in the context of a book club for second language learning. The analysis shows how the facilitator establishes intersubjectivity with regard to the ongoing task and manages students' task performance.…
PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deelman, Ewa; Carothers, Christopher; Mandal, Anirban
Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.
Use of evidence in a categorization task: analytic and holistic processing modes.
Greco, Alberto; Moretti, Stefania
2017-11-01
Category learning performance can be influenced by many contextual factors, but the effects of these factors are not the same for all learners. The present study suggests that these differences can be due to the different ways evidence is used, according to two basic modalities of processing information, analytically or holistically. In order to test the impact of the information provided, an inductive rule-based task was designed, in which feature salience and comparison informativeness between examples of two categories were manipulated during the learning phases, by introducing and progressively reducing some perceptual biases. To gather data on processing modalities, we devised the Active Feature Composition task, a production task that does not require classifying new items but rather reproducing them by combining features. At the end, an explicit rating task was performed, which entailed assessing the accuracy of a set of possible categorization rules. A combined analysis of the data collected with these two different tests enabled profiling participants in regard to the kind of processing modality, the structure of representations and the quality of categorial judgments. Results showed that although the information provided was the same for all participants, those who adopted analytic processing better exploited evidence and performed more accurately, whereas with holistic processing categorization was perfectly possible but inaccurate. Finally, the cognitive implications of the proposed procedure, with regard to the processes and representations involved, are discussed.
NASA Technical Reports Server (NTRS)
Oberg, C. L.
1974-01-01
The combustion stability characteristics of engines applicable to the Space Shuttle Orbit Maneuvering System, and the adequacy of acoustic cavities as a means of assuring stability in these engines, were investigated. The study comprised full-scale stability rating tests, bench-scale acoustic model tests, and analysis. Two series of stability rating tests were made. Acoustic model tests were made to determine the resonance characteristics and effects of acoustic cavities. Analytical studies were done to aid the design of the cavity configurations to be tested and also to aid the evaluation of the effectiveness of acoustic cavities from available test results.
Flat-plate solar array project. Volume 8: Project analysis and integration
NASA Technical Reports Server (NTRS)
Mcguire, P.; Henry, P.
1986-01-01
Project Analysis and Integration (PA&I) performed planning and integration activities to support management of the various Flat-Plate Solar Array (FSA) Project R&D activities. Technical and economic goals were established by PA&I for each R&D task within the project to coordinate the thrust toward the National Photovoltaic Program goals. A sophisticated computer modeling capability was developed to assess technical progress toward meeting the economic goals. These models included a manufacturing facility simulation, a photovoltaic power station simulation, and a decision aid model incorporating uncertainty. This family of analysis tools was used to track the progress of the technology and to explore the effects of alternative technical paths. Numerous studies conducted by PA&I signaled the achievement of milestones or formed the foundation of major FSA project and national program decisions. The most important PA&I activities during the project history are summarized. The PA&I planning function and its relation to project direction are discussed, and important analytical models developed by PA&I for its analysis and assessment activities are reviewed.
A Task Analytic Process to Define Future Concepts in Aviation
NASA Technical Reports Server (NTRS)
Gore, Brian Francis; Wolter, Cynthia A.
2014-01-01
A necessary step when developing next generation systems is to understand the tasks that operators will perform. One NextGen concept under evaluation, termed Single Pilot Operations (SPO), is designed to improve the efficiency of airline operations. One SPO concept includes a Pilot on Board (PoB), a Ground Station Operator (GSO), and automation. A number of procedural changes are likely to result when such changes in roles and responsibilities are undertaken. Automation is expected to relieve the PoB and GSO of some tasks (e.g., radio frequency changes, loading expected arrival information). A major difference in the SPO environment is the shift to communication-cued crosschecks (verbal/automated) rather than the movement-cued crosschecks that occur in a shared cockpit. The current article highlights a task analytic process covering the roles and responsibilities of a PoB, an approach-phase GSO, and automation.
Assessment of Galileo modal test results for mathematical model verification
NASA Technical Reports Server (NTRS)
Trubert, M.
1984-01-01
The modal test program for the Galileo Spacecraft was completed at the Jet Propulsion Laboratory in the summer of 1983. The multiple sine dwell method was used for the baseline test. The Galileo Spacecraft is a rather complex 2433 kg structure made of a central core on which seven major appendages representing 30 percent of the total mass are attached, resulting in a structure with high modal density. The test revealed a strong nonlinearity in several major modes. This nonlinearity, discovered in the course of the test, necessitated running additional tests at unusually high response levels of up to about 21 g. The high levels of response were required to obtain a model verification valid at the level of loads for which the spacecraft was designed. Because of the high modal density and the nonlinearity, correlation between the dynamic mathematical model and the test results becomes a difficult task. Significant changes in the pre-test analytical model are necessary to establish confidence in the upgraded analytical model used for the final load verification. This verification, using a test-verified model, is required by NASA to fly the Galileo Spacecraft on the Shuttle/Centaur launch vehicle in 1986.
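The abstract does not name a specific correlation metric, but the standard tool for this kind of model-versus-test comparison is the modal assurance criterion (MAC); a minimal sketch with hypothetical mode shapes:

```python
import numpy as np

def mac(phi_a: np.ndarray, phi_t: np.ndarray) -> float:
    """MAC between an analytical and a test mode shape (1 = identical)."""
    num = np.abs(phi_a @ phi_t) ** 2
    return float(num / ((phi_a @ phi_a) * (phi_t @ phi_t)))

phi_analysis = np.array([1.0, 0.8, 0.3, -0.2])    # analytical mode shape
phi_test = np.array([0.95, 0.85, 0.25, -0.15])    # hypothetical test shape
print(f"MAC = {mac(phi_analysis, phi_test):.3f}")
```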
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Ang; Song, Shuaiwen; Brugel, Eric
To continue tracking Moore's Law, modern parallel machines have become increasingly complex. Effectively tuning application performance for these machines has therefore become a daunting task. Moreover, identifying performance bottlenecks at the application and architecture levels, as well as evaluating various optimization strategies, is extremely difficult given the entanglement of numerous correlated factors. To tackle these challenges, we present a visual analytical model named "X". It is intuitive and sufficiently flexible to track all the typical features of a parallel machine.
Analytical Chemistry Laboratory
NASA Technical Reports Server (NTRS)
Anderson, Mark
2013-01-01
The Analytical Chemistry and Material Development Group maintains a capability in chemical analysis, materials R&D, failure analysis, and contamination control. The uniquely qualified staff and facility support the needs of flight projects, science instrument development, and various technical tasks, as well as Cal Tech.
Inhalation exposure to jet fuel (JP8) among U.S. Air Force personnel.
Smith, Kristen W; Proctor, Susan P; Ozonoff, Al; McClean, Michael D
2010-10-01
As jet fuel is a common occupational exposure among military and civilian populations, this study was conducted to characterize jet fuel (JP8) exposure among active duty U.S. Air Force personnel. Personnel (n = 24) were divided a priori into high, moderate, and low exposure groups. Questionnaires and personal air samples (breathing zone) were collected from each worker over 3 consecutive days (72 worker-days) and analyzed for total hydrocarbons (THC), benzene, toluene, ethylbenzene, xylenes, and naphthalene. Air samples were collected from inside the fuel tank and analyzed for the same analytes. Linear mixed-effects models were used to evaluate the exposure data. Our results show that the correlation of THC (a measure of overall JP8 inhalation exposure) with all other analytes was moderate to strong in the a priori high and moderate exposure groups combined. Inhalation exposure to all analytes varied significantly by self-reported JP8 exposure (THC levels higher among workers reporting JP8 exposure), a priori exposure group (THC levels in high group > moderate group > low group), and more specific job task groupings (THC levels among workers in fuel systems hangar group > refueling maintenance group > fuel systems office group > fuel handling group > clinic group), with task groupings explaining the most between-worker variability. Among highly exposed workers, statistically significant job task-related predictors of inhalation exposure to THC indicated that increased time in the hangar, working close to the fuel tank (inside > less than 25 ft > greater than 25 ft), primary job (entrant > attendant/runner/fireguard > outside hangar), and performing various tasks near the fuel tank, such as searching for a leak, resulted in higher JP8 exposure. This study shows that while a priori exposure groups were useful in distinguishing JP8 exposure levels, job task-based categories should be considered in epidemiologic study designs to improve exposure classification. Finally, the strong correlation of THC with naphthalene suggests that naphthalene may be an appropriate surrogate of JP8 exposure. [Supplementary materials are available for this article. Go to the publisher's online edition of the Journal of Occupational and Environmental Hygiene for the following free supplemental resource: a pdf file containing a table detailing concentrations of JP8 components.].
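A sketch of this modeling approach on synthetic data (the variable names and effect sizes are invented, and statsmodels' MixedLM stands in for whatever software the authors used): log THC with task group as a fixed effect and worker as a random intercept, mirroring the three repeated sampling days per worker.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
groups = ["hangar", "refueling", "office", "handling", "clinic"]
base = {"hangar": 3.0, "refueling": 2.2, "office": 1.5,
        "handling": 1.0, "clinic": 0.2}

rows = []
for worker in range(24):
    g = groups[worker % len(groups)]
    worker_effect = rng.normal(0, 0.3)     # between-worker variability
    for day in range(3):                   # 3 sampling days per worker
        rows.append({"worker": worker, "task_group": g,
                     "log_thc": base[g] + worker_effect + rng.normal(0, 0.4)})
df = pd.DataFrame(rows)

# Linear mixed-effects model: fixed task-group effect, random worker intercept.
model = smf.mixedlm("log_thc ~ C(task_group)", df, groups=df["worker"])
print(model.fit().summary())
```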
NASA Astrophysics Data System (ADS)
Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris
2015-04-01
Many geoscience applications can benefit from testing many combinations of input parameters for geochemical simulation models. It is, however, a challenge to screen the input and output data from the model to identify the significant relationships between input parameters and output variables. To address this problem, we propose a Visual Analytics approach that has been developed in an ongoing collaboration between computer science and geoscience researchers. Our Visual Analytics approach uses visualization methods of hierarchical horizontal axes, multi-factor stacked bar charts and interactive semi-automated filtering for input and output data, together with automatic sensitivity analysis. This guides the users towards significant relationships. We implement our approach as an interactive data exploration tool. It is designed with flexibility in mind, so that a diverse set of tasks such as inverse modeling, sensitivity analysis and model parameter refinement can be supported. Here we demonstrate the capabilities of our approach with two examples from gas storage applications. In the first example, our Visual Analytics approach enabled the analyst to observe how the element concentrations change around previously established baselines in response to thousands of different combinations of mineral phases. This supported combinatorial inverse modeling for interpreting observations about the chemical composition of the formation fluids at the Ketzin pilot site for CO2 storage. The results indicate that, within the experimental error range, the formation fluid cannot be considered at local thermodynamic equilibrium with the mineral assemblage of the reservoir rock. This is a valuable insight from the predictive geochemical modeling for the Ketzin site. In the second example, our approach supports sensitivity analysis for a reaction involving the reductive dissolution of pyrite with formation of pyrrhotite in the presence of gaseous hydrogen. We determine that this reaction is thermodynamically favorable under a broad range of conditions, including low temperatures and the absence of microbial catalysts. Our approach has potential for use in other applications that involve exploration of relationships in geochemical simulation model data.
Comparison of closed loop model with flight test results
NASA Technical Reports Server (NTRS)
George, F. L.
1981-01-01
An analytic technique capable of predicting the landing characteristics of proposed aircraft configurations in the early stages of design was developed. In this analysis, a linear pilot-aircraft closed loop model was evaluated using experimental data generated with the NT-33 variable stability in-flight simulator. The pilot dynamics are modeled as inner and outer servo loop closures around aircraft pitch attitude and altitude rate-of-change, respectively. The landing flare maneuver is of particular interest, as recent experience with military and other highly augmented vehicles shows this task to be relatively demanding and potentially a critical design point. A unique feature of the pilot model is the incorporation of an internal model of the pilot's desired flight path for the flare maneuver.
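The nested loop structure described above can be caricatured with a toy simulation: an outer loop on altitude rate commands pitch attitude, and an inner loop tracks that command. All gains and response lags below are invented for illustration and are not the NT-33 model or the identified pilot dynamics.

```python
# Toy inner/outer loop closure: outer loop converts altitude-rate error to a
# pitch command; inner loop tracks pitch with a first-order lag. Euler steps.
import numpy as np

dt, T = 0.01, 20.0
k_outer, k_inner = 0.05, 2.0       # hypothetical loop gains
tau_theta, v = 0.5, 70.0           # pitch lag (s), airspeed (m/s)

theta, hdot = 0.0, -3.0            # pitch attitude (rad), sink rate (m/s)
hdot_cmd = -0.5                    # desired gentle sink rate for the flare
for _ in np.arange(0.0, T, dt):
    theta_cmd = k_outer * (hdot_cmd - hdot)                     # outer loop closure
    theta += dt * k_inner * (theta_cmd - theta) / tau_theta     # inner loop closure
    hdot += dt * (v * theta - hdot) / 2.0                       # crude flight-path response
print(f"final sink rate: {hdot:.2f} m/s")
```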
Defining Higher-Order Turbulent Moment Closures with an Artificial Neural Network and Random Forest
NASA Astrophysics Data System (ADS)
McGibbon, J.; Bretherton, C. S.
2017-12-01
Unresolved turbulent advection and clouds must be parameterized in atmospheric models. Modern higher-order closure schemes depend on analytic moment closure assumptions that diagnose higher-order moments in terms of lower-order ones. These are then tested against Large-Eddy Simulation (LES) higher-order moment relations. However, these relations may not be neatly analytic in nature. Rather than rely on an analytic higher-order moment closure, can we use machine learning on LES data itself to define a higher-order moment closure? We assess the ability of a deep artificial neural network (NN) and random forest (RF) to perform this task using a set of observationally-based LES runs from the MAGIC field campaign. By training on a subset of 12 simulations and testing on the remaining simulations, we avoid over-fitting the training data. Performance of the NN and RF will be assessed and compared to the Analytic Double Gaussian 1 (ADG1) closure assumed by Cloudy Layers Unified By Binormals (CLUBB), a higher-order turbulence closure currently used in the Community Atmosphere Model (CAM). We will show that the RF outperforms the NN and the ADG1 closure for the MAGIC cases within this diagnostic framework. Progress and challenges in using a diagnostic machine learning closure within a prognostic cloud and turbulence parameterization will also be discussed.
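The train/test protocol described above maps onto a few lines of scikit-learn. The arrays below are placeholders rather than MAGIC LES moments, and the real closure problem involves physically chosen input and output moments.

```python
# Fit a random forest and a neural network to diagnose a higher-order moment
# from lower-order ones, holding out whole simulations to avoid over-fitting.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
sim_id = rng.integers(0, 20, size=20000)               # which LES run each sample came from
X = rng.normal(size=(20000, 5))                        # lower-order moments (placeholder)
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=20000)   # higher-order moment (placeholder)

train = sim_id < 12                                    # train on 12 simulations, test on the rest
rf = RandomForestRegressor(n_estimators=200).fit(X[train], y[train])
nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X[train], y[train])
print("RF R^2:", rf.score(X[~train], y[~train]))
print("NN R^2:", nn.score(X[~train], y[~train]))
```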
A Unified Analysis of Structured Sonar-terrain Data using Bayesian Functional Mixed Models.
Zhu, Hongxiao; Caspers, Philip; Morris, Jeffrey S; Wu, Xiaowei; Müller, Rolf
2018-01-01
Sonar emits pulses of sound and uses the reflected echoes to gain information about target objects. It offers a low cost, complementary sensing modality for small robotic platforms. While existing analytical approaches often assume independence across echoes, real sonar data can have more complicated structures due to device setup or experimental design. In this paper, we consider sonar echo data collected from multiple terrain substrates with a dual-channel sonar head. Our goals are to identify the differential sonar responses to terrains and study the effectiveness of this dual-channel design in discriminating targets. We describe a unified analytical framework that achieves these goals rigorously, simultaneously, and automatically. The analysis was done by treating the echo envelope signals as functional responses and the terrain/channel information as covariates in a functional regression setting. We adopt functional mixed models that facilitate the estimation of terrain and channel effects while capturing the complex hierarchical structure in data. This unified analytical framework incorporates both Gaussian models and robust models. We fit the models using a full Bayesian approach, which enables us to perform multiple inferential tasks under the same modeling framework, including selecting models, estimating the effects of interest, identifying significant local regions, discriminating terrain types, and describing the discriminatory power of local regions. Our analysis of the sonar-terrain data identifies time regions that reflect differential sonar responses to terrains. The discriminant analysis suggests that a multi- or dual-channel design achieves target identification performance comparable with or better than a single-channel design.
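As a rough, frequentist caricature of the functional regression idea (not the authors' Bayesian functional mixed model, with its random effects and robust likelihoods), one can expand each echo envelope in a fixed basis and regress the coefficients on terrain and channel covariates; all data below are synthetic.

```python
# Functional-response regression sketch: echoes -> basis coefficients ->
# linear model on terrain/channel covariates -> effect curves in time domain.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(2)
n_echoes, n_time, n_basis = 300, 200, 12
t = np.linspace(-1.0, 1.0, n_time)
B = legendre.legvander(t, n_basis - 1)                 # (n_time, n_basis) basis matrix

terrain = rng.integers(0, 3, size=n_echoes)            # 3 terrain substrates
channel = rng.integers(0, 2, size=n_echoes)            # dual-channel sonar head
Y = rng.normal(size=(n_echoes, n_time))                # echo envelopes (placeholder)

C = Y @ np.linalg.pinv(B).T                            # least-squares basis coefficients per echo
X = np.column_stack([np.ones(n_echoes), terrain == 1, terrain == 2, channel]).astype(float)
beta, *_ = np.linalg.lstsq(X, C, rcond=None)           # covariate effects on each coefficient
effect_curves = B @ beta.T                             # map effects back to the time domain
print(effect_curves.shape)                             # (n_time, 4): intercept + 3 effects
```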
Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP.
Shim, Yoonsik; Philippides, Andrew; Staras, Kevin; Husbands, Phil
2016-10-01
We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture.
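At its most abstract, the gated ensemble amounts to a weighted mixture of expert posteriors. The numpy caricature below shows only that combination step, with random stand-ins for the expert outputs and for the gating weights that ITDP would learn; it is not the spiking implementation.

```python
# Mixture-of-experts combination: gate weights blend per-expert posteriors.
import numpy as np

rng = np.random.default_rng(3)
n_experts, n_classes = 5, 10
expert_posteriors = rng.dirichlet(np.ones(n_classes), size=n_experts)  # one posterior per expert
gate = rng.random(n_experts)                     # gating strengths (learned by ITDP in the paper)
gate = np.exp(gate) / np.exp(gate).sum()         # normalise to a soft gating distribution

combined = gate @ expert_posteriors              # weighted mixture of expert posteriors
print("ensemble prediction:", combined.argmax())
```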
Efficient Online Optimized Quantum Control for Adiabatic Quantum Computation
NASA Astrophysics Data System (ADS)
Quiroz, Gregory
Adiabatic quantum computation (AQC) relies on controlled adiabatic evolution to implement a quantum algorithm. While control evolution can take many forms, properly designed time-optimal control has been shown to be particularly advantageous for AQC. Grover's search algorithm is one such example where analytically-derived time-optimal control leads to improved scaling of the minimum energy gap between the ground state and first excited state and thus, the well-known quadratic quantum speedup. Analytical extensions beyond Grover's search algorithm present a daunting task that requires potentially intractable calculations of energy gaps and a significant degree of model certainty. Here, an in situ quantum control protocol is developed for AQC. The approach is shown to yield controls that approach the analytically-derived time-optimal controls for Grover's search algorithm. In addition, the protocol's convergence rate as a function of iteration number is shown to be essentially independent of system size. Thus, the approach is potentially scalable to many-qubit systems.
Analytical Tools to Improve Optimization Procedures for Lateral Flow Assays
Hsieh, Helen V.; Dantzler, Jeffrey L.; Weigl, Bernhard H.
2017-01-01
Immunochromatographic or lateral flow assays (LFAs) are inexpensive, easy to use, point-of-care medical diagnostic tests that are found in arenas ranging from a doctor’s office in Manhattan to a rural medical clinic in low resource settings. The simplicity in the LFA itself belies the complex task of optimization required to make the test sensitive, rapid and easy to use. Currently, the manufacturers develop LFAs by empirical optimization of material components (e.g., analytical membranes, conjugate pads and sample pads), biological reagents (e.g., antibodies, blocking reagents and buffers) and the design of delivery geometry. In this paper, we will review conventional optimization and then focus on the latter and outline analytical tools, such as dynamic light scattering and optical biosensors, as well as methods, such as microfluidic flow design and mechanistic models. We are applying these tools to find non-obvious optima of lateral flow assays for improved sensitivity, specificity and manufacturing robustness. PMID:28555034
The effect of analytic and experiential modes of thought on moral judgment.
Kvaran, Trevor; Nichols, Shaun; Sanfey, Alan
2013-01-01
According to dual-process theories, moral judgments are the result of two competing processes: a fast, automatic, affect-driven process and a slow, deliberative, reason-based process. Accordingly, these models make clear and testable predictions about the influence of each system. Although a small number of studies have attempted to examine each process independently in the context of moral judgment, no study has yet tried to experimentally manipulate both processes within a single study. In this chapter, a well-established "mode-of-thought" priming technique was used to place participants in either an experiential/emotional or analytic mode while completing a task in which participants provide judgments about a series of moral dilemmas. We predicted that individuals primed analytically would make more utilitarian responses than control participants, while emotional priming would lead to less utilitarian responses. Support was found for both of these predictions. Implications of these findings for dual-process theories of moral judgment will be discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
Cost and Schedule Analytical Techniques Development
NASA Technical Reports Server (NTRS)
1998-01-01
This Final Report summarizes the activities performed by Science Applications International Corporation (SAIC) under contract NAS 8-40431 "Cost and Schedule Analytical Techniques Development Contract" (CSATD) during Option Year 3 (December 1, 1997 through November 30, 1998). This Final Report is in compliance with Paragraph 5 of Section F of the contract. This CSATD contract provides technical products and deliverables in the form of parametric models, databases, methodologies, studies, and analyses to the NASA Marshall Space Flight Center's (MSFC) Engineering Cost Office (PP03) and the Program Plans and Requirements Office (PP02) and other user organizations. Detailed Monthly Reports were submitted to MSFC in accordance with the contract's Statement of Work, Section IV "Reporting and Documentation". These reports spelled out each month's specific work performed, deliverables submitted, major meetings conducted, and other pertinent information. Therefore, this Final Report will summarize these activities at a higher level. During this contract Option Year, SAIC expended 25,745 hours in the performance of tasks called out in the Statement of Work. This represents approximately 14 full-time EPs. Included are the Huntsville-based team, plus SAIC specialists in San Diego, Ames Research Center, Tampa, and Colorado Springs performing specific tasks for which they are uniquely qualified.
Development of the Biology Card Sorting Task to Measure Conceptual Expertise in Biology
Smith, Julia I.; Combs, Elijah D.; Nagami, Paul H.; Alto, Valerie M.; Goh, Henry G.; Gourdet, Muryam A. A.; Hough, Christina M.; Nickell, Ashley E.; Peer, Adrian G.; Coley, John D.; Tanner, Kimberly D.
2013-01-01
There are widespread aspirations to focus undergraduate biology education on teaching students to think conceptually like biologists; however, there is a dearth of assessment tools designed to measure progress from novice to expert biological conceptual thinking. We present the development of a novel assessment tool, the Biology Card Sorting Task, designed to probe how individuals organize their conceptual knowledge of biology. While modeled on tasks from cognitive psychology, this task is unique in its design to test two hypothesized conceptual frameworks for the organization of biological knowledge: 1) a surface feature organization focused on organism type and 2) a deep feature organization focused on fundamental biological concepts. In this initial investigation of the Biology Card Sorting Task, each of six analytical measures showed statistically significant differences when used to compare the card sorting results of putative biological experts (biology faculty) and novices (non–biology major undergraduates). Consistently, biology faculty appeared to sort based on hypothesized deep features, while non–biology majors appeared to sort based on either surface features or nonhypothesized organizational frameworks. Results suggest that this novel task is robust in distinguishing populations of biology experts and biology novices and may be an adaptable tool for tracking emerging biology conceptual expertise. PMID:24297290
Kepinska, Olga; Pereda, Ernesto; Caspers, Johanneke; Schiller, Niels O
2017-12-01
The goal of the present study was to investigate the initial phases of novel grammar learning on a neural level, concentrating on mechanisms responsible for individual variability between learners. Two groups of participants, one with high and one with average language analytical abilities, performed an Artificial Grammar Learning (AGL) task consisting of learning and test phases. During the task, EEG signals from 32 cap-mounted electrodes were recorded and epochs corresponding to the learning phases were analysed. We investigated spectral power modulations over time, and functional connectivity patterns by means of a bivariate, frequency-specific index of phase synchronization termed Phase Locking Value (PLV). Behavioural data showed learning effects in both groups, with a steeper learning curve and higher ultimate attainment for the highly skilled learners. Moreover, we established that cortical connectivity patterns and profiles of spectral power modulations over time differentiated L2 learners with various levels of language analytical abilities. Over the course of the task, the learning process seemed to be driven by whole-brain functional connectivity between neuronal assemblies achieved by means of communication in the beta band frequency. On a shorter time-scale, increasing proficiency on the AGL task appeared to be supported by stronger local synchronisation within the right hemisphere regions. Finally, we observed that the highly skilled learners might have exerted less mental effort, or reduced attention for the task at hand once the learning was achieved, as evidenced by the higher alpha band power. Copyright © 2017 Elsevier Inc. All rights reserved.
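The Phase Locking Value itself is a standard quantity; a minimal computation for two signals is sketched below (band-pass filtering, epoching and electrode pairing are omitted, and the signals are synthetic).

```python
# Phase Locking Value between two signals via the analytic signal (Hilbert
# transform): PLV = |mean over time of exp(i * phase difference)|.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(4)
t = np.linspace(0.0, 2.0, 512)
x = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.normal(size=t.size)       # beta-band-like signal
y = np.sin(2 * np.pi * 20 * t + 0.3) + 0.5 * rng.normal(size=t.size)

phase_x = np.angle(hilbert(x))
phase_y = np.angle(hilbert(y))
plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))              # 0 = none, 1 = perfect locking
print(f"PLV = {plv:.3f}")
```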
Haun, Jolie N; Nazi, Kim M; Chavez, Margeaux; Lind, Jason D; Antinori, Nicole; Gosline, Robert M; Martin, Tracey L
2015-02-27
The Department of Veterans Affairs (VA) has developed health information technologies (HIT) and resources to improve veteran access to health care programs and services, and to support a patient-centered approach to health care delivery. To improve VA HIT access and meaningful use by veterans, it is necessary to understand their preferences for interacting with various HIT resources to accomplish health management related tasks and to exchange information. The objective of this paper was to describe a novel protocol for: (1) developing a HIT Digital Health Matrix Model; (2) conducting an Analytic Hierarchy Process called pairwise comparison to understand how and why veterans want to use electronic health resources to complete tasks related to health management; and (3) developing visual modeling simulations that depict veterans' preferences for using VA HIT to manage their health conditions and exchange health information. The study uses participatory research methods to understand how veterans prefer to use VA HIT to accomplish health management tasks within a given context, and how they would like to interact with HIT interfaces (eg, look, feel, and function) in the future. This study includes two rounds of veteran focus groups with self-administered surveys and visual modeling simulation techniques. This study will also convene an expert panel to assist in the development of a VA HIT Digital Health Matrix Model, so that both expert panel members and veteran participants can complete an Analytic Hierarchy Process, pairwise comparisons to evaluate and rank the applicability of electronic health resources for a series of health management tasks. This protocol describes the iterative, participatory, and patient-centered process for: (1) developing a VA HIT Digital Health Matrix Model that outlines current VA patient-facing platforms available to veterans, describing their features and relevant contexts for use; and (2) developing visual model simulations based on direct veteran feedback that depict patient preferences for enhancing the synchronization, integration, and standardization of VA patient-facing platforms. Focus group topics include current uses, preferences, facilitators, and barriers to using electronic health resources; recommendations for synchronizing, integrating, and standardizing VA HIT; and preferences on data sharing and delegation within the VA system. This work highlights the practical, technological, and personal factors that facilitate and inhibit use of current VA HIT, and informs an integrated system redesign. The Digital Health Matrix Model and visual modeling simulations use knowledge of veteran preferences and experiences to directly inform enhancements to VA HIT and provide a more holistic and integrated user experience. These efforts are designed to support the adoption and sustained use of VA HIT to support patient self-management and clinical care coordination in ways that are directly aligned with veteran preferences.
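Numerically, the Analytic Hierarchy Process step mentioned in the protocol reduces to extracting a priority vector from a pairwise comparison matrix and checking its consistency. The sketch below uses a made-up 3x3 matrix rather than veteran or expert-panel judgments.

```python
# AHP priority weights from a pairwise comparison matrix: principal
# eigenvector plus Saaty's consistency ratio.
import numpy as np

# A[i, j] = how strongly resource i is preferred to resource j (1-9 scale)
A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                   # priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)           # consistency index
cr = ci / 0.58                                 # Saaty's random index for n = 3
print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```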
Analytical results for a stochastic model of gene expression with arbitrary partitioning of proteins
NASA Astrophysics Data System (ADS)
Tschirhart, Hugo; Platini, Thierry
2018-05-01
In biophysics, the search for analytical solutions of stochastic models of cellular processes is often a challenging task. In recent work on models of gene expression, it was shown that a mapping based on partitioning of Poisson arrivals (PPA-mapping) can lead to exact solutions for previously unsolved problems. While the approach can be used in general when the model involves Poisson processes corresponding to creation or degradation, current applications of the method and new results derived using it have been limited to date. In this paper, we present the exact solution of a variation of the two-stage model of gene expression (with time dependent transition rates) describing the arbitrary partitioning of proteins. The methodology proposed makes full use of the PPA-mapping by transforming the original problem into a new process describing the evolution of three biological switches. Based on a succession of transformations, the method leads to a hierarchy of reduced models. We give an integral expression of the time dependent generating function as well as explicit results for the mean, variance, and correlation function. Finally, we discuss how results for time dependent parameters can be extended to the three-stage model and used to make inferences about models with parameter fluctuations induced by hidden stochastic variables.
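For orientation, the constant-rate two-stage model that the paper generalizes can be simulated directly with the Gillespie algorithm. The sketch below omits the time-dependent rates and the protein partitioning treated analytically in the paper, and the rate values are arbitrary.

```python
# Gillespie simulation of the two-stage gene expression model:
# mRNA birth/death and protein birth/death with constant rates.
import numpy as np

rng = np.random.default_rng(5)
k_m, g_m, k_p, g_p = 2.0, 1.0, 10.0, 0.1   # transcription, mRNA decay, translation, protein decay
m, p, t, t_end = 0, 0, 0.0, 200.0

while t < t_end:
    rates = np.array([k_m, g_m * m, k_p * m, g_p * p])
    total = rates.sum()
    t += rng.exponential(1.0 / total)          # time to next reaction
    r = rng.choice(4, p=rates / total)         # which reaction fires
    if r == 0:   m += 1                        # transcription
    elif r == 1: m -= 1                        # mRNA degradation
    elif r == 2: p += 1                        # translation
    else:        p -= 1                        # protein degradation

print("final mRNA:", m, "protein:", p)
```

Averaging many such trajectories gives moments that can be checked against the analytical results; for constant rates the stationary means are k_m/g_m for mRNA and k_m k_p/(g_m g_p) for protein.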
Dynamic contraction behaviour of pneumatic artificial muscle
NASA Astrophysics Data System (ADS)
Doumit, Marc D.; Pardoel, Scott
2017-07-01
The development of a dynamic model for the Pneumatic Artificial Muscle (PAM) is an imperative undertaking for understanding and analyzing the behaviour of the PAM as a function of time. This paper proposes a Newtonian-based dynamic PAM model that includes the modeling of the muscle geometry, force, inertia, fluid dynamics, static and dynamic friction, heat transfer and valve flow, while ignoring the effect of bladder elasticity. This modeling contribution allows the designer to predict, analyze and optimize PAM performance prior to its development, thus advancing successful implementations of PAM-based powered exoskeletons and medical systems. To date, most muscle dynamic properties are determined experimentally; furthermore, no analytical models that can accurately predict the muscle's dynamic behaviour are found in the literature. Most developed analytical models adequately predict the muscle force in static cases but neglect the behaviour of the system in the transient response. This could be attributed to the highly challenging task of deriving such a dynamic model, given the number of system elements that need to be identified and the system's highly non-linear properties. The proposed dynamic model is successfully simulated in MATLAB and validated against experimental measurements of pressure, contraction distance and muscle temperature conducted with an in-house-built prototype PAM.
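A heavily simplified lumped version of such a model (a mass acted on by a crude pressure-to-force law, passive stiffness and viscous friction, with all parameter values invented) can be integrated as follows; the paper's Newtonian model with geometry, valve flow, heat transfer and dry friction is far more detailed.

```python
# Toy PAM contraction dynamics under a ramped supply pressure.
import numpy as np
from scipy.integrate import solve_ivp

m_load, c, k = 5.0, 40.0, 8000.0          # load mass (kg), damping, stiffness (invented)
area_eff = 1.5e-3                          # hypothetical effective area (m^2)

def pam(t, state):
    x, v = state                           # contraction (m) and its rate
    pressure = 3e5 * min(t / 0.5, 1.0)     # ramped supply pressure (Pa)
    f_muscle = area_eff * pressure         # crude pressure-to-force law
    a = (f_muscle - c * v - k * x - m_load * 9.81) / m_load
    return [v, a]

sol = solve_ivp(pam, (0.0, 2.0), [0.0, 0.0], max_step=1e-3)
print(f"final contraction: {sol.y[0, -1] * 1000:.1f} mm")
```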
Firing the Executive: When an Analytic Approach to Problem Solving Helps and Hurts
ERIC Educational Resources Information Center
Aiello, Daniel A.; Jarosz, Andrew F.; Cushen, Patrick J.; Wiley, Jennifer
2012-01-01
There is a general assumption that a more controlled or more focused attentional state is beneficial for most cognitive tasks. However, there has been a growing realization that creative problem solving tasks, such as the Remote Associates Task (RAT), may benefit from a less controlled solution approach. To test this hypothesis, in a 2x2 design,…
NASA Astrophysics Data System (ADS)
Dimakogianni, M.; Simserides, C.; Triberis, G. P.
2013-07-01
We introduce a theoretical model to scrutinize the conductivity of small polarons in 1D disordered systems, focusing on two factors that will be shown to be crucial: the density of states and the spatial extent of the electronic wave function. The investigation is performed for any temperature up to 300 K and under an electric field of arbitrary strength up to the polaron dissociation limit. To accomplish this task, we combine analytical work with numerical calculations.
Development of Analytical Systems for Evaluation of US Reconstitution and Recovery Programs.
1980-09-01
Keywords: Program Evaluation; Economic Models; US Economy. This study identifies economic models and ... planning tasks are more complex and difficult than those faced by planners in the post ... era. Also, because of those same factors and that the 1980s ... comparative analysis outlined in the second study, while also concerned with the accomplishment of societal objectives, is somewhat different. The approach ...
Pulsed Lidar Performance/Technical Maturity Assessment
NASA Technical Reports Server (NTRS)
Gimmestad, Gary G.; West, Leanne L.; Wood, Jack W.; Frehlich, Rod
2004-01-01
This report describes the results of investigations performed by the Georgia Tech Research Institute (GTRI) and the National Center for Atmospheric Research (NCAR) under a task entitled 'Pulsed Lidar Performance/Technical Maturity Assessment' funded by the Crew Systems Branch of the Airborne Systems Competency at the NASA Langley Research Center. The investigations included two tasks, 1.1(a) and 1.1(b). The tasks discussed in this report are in support of the NASA Virtual Airspace Modeling and Simulation (VAMS) program and are designed to evaluate a pulsed lidar that will be required for active wake vortex avoidance solutions. The Coherent Technologies, Inc. (CTI) WindTracer LIDAR is an eye-safe, 2-micron, coherent, pulsed Doppler lidar with wake tracking capability. The actual performance of the WindTracer system was to be quantified. In addition, the sensor performance has been assessed and modeled, and the models have been included in simulation efforts. The WindTracer LIDAR was purchased by the Federal Aviation Administration (FAA) for use in near-term field data collection efforts as part of a joint NASA/FAA wake vortex research program. In the joint research program, a minimum common wake and weather data collection platform will be defined. NASA Langley will use the field data to support wake model development and operational concept investigation in support of the VAMS project, where the ultimate goal is to improve airport capacity and safety. Task 1.1(a) was performed by NCAR in Boulder, Colorado, to analyze the lidar system and determine its performance and capabilities based on results from simulated lidar data with analytic wake vortex models provided by NASA; these results were then compared to the vendor's claims for the operational specifications of the lidar. Task 1.1(a) is described in Section 3, including the vortex model, lidar parameters and simulations, and results for both detection and tracking of wake vortices generated by Boeing 737s and 747s. Task 1.1(b) was performed by GTRI in Atlanta, Georgia and is described in Section 4. Task 1.1(b) includes a description of the St. Louis Airport (STL) field test being conducted by the Volpe National Transportation Systems Center, and it also addresses the development of a test plan to validate simulation studies conducted as part of Task 1.1(a). Section 4.2 provides a description of the Volpe STL field tests, and Section 4.3 describes three possible ways to validate the WindTracer lidar simulations performed in Task 1.1(a).
Sugden, Nicole A; Marquis, Alexandra R
2017-11-01
Infants show facility for discriminating between individual faces within hours of birth. Over the first year of life, infants' face discrimination shows continued improvement with familiar face types, such as own-race faces, but not with unfamiliar face types, like other-race faces. The goal of this meta-analytic review is to provide an effect size for infants' face discrimination ability overall, with own-race faces, and with other-race faces within the first year of life, how this differs with age, and how it is influenced by task methodology. Inclusion criteria were (a) infant participants aged 0 to 12 months, (b) completing a human own- or other-race face discrimination task, (c) with discrimination being determined by infant looking. Our analysis included 30 works (165 samples, 1,926 participants participated in 2,623 tasks). The effect size for infants' face discrimination was small, 6.53% greater than chance (i.e., equal looking to the novel and familiar). There was a significant difference in discrimination by race, overall (own-race, 8.18%; other-race, 3.18%) and between ages (own-race: 0- to 4.5-month-olds, 7.32%; 5- to 7.5-month-olds, 9.17%; and 8- to 12-month-olds, 7.68%; other-race: 0- to 4.5-month-olds, 6.12%; 5- to 7.5-month-olds, 3.70%; and 8- to 12-month-olds, 2.79%). Multilevel linear (mixed-effects) models were used to predict face discrimination; infants' capacity to discriminate faces is sensitive to face characteristics including race, gender, and emotion as well as the methods used, including task timing, coding method, and visual angle. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Controlled English to facilitate human/machine analytical processing
NASA Astrophysics Data System (ADS)
Braines, Dave; Mott, David; Laws, Simon; de Mel, Geeth; Pham, Tien
2013-06-01
Controlled English is a human-readable information representation format that is implemented using a restricted subset of the English language, but which is unambiguous and directly accessible by simple machine processes. We have been researching the capabilities of CE in a number of contexts, and exploring the degree to which a flexible and more human-friendly information representation format could aid the intelligence analyst in a multi-agent collaborative operational environment; especially in cases where the agents are a mixture of other human users and machine processes aimed at assisting the human users. CE itself is built upon a formal logic basis, but allows users to easily specify models for a domain of interest in a human-friendly language. In our research we have been developing an experimental component known as the "CE Store" in which CE information can be quickly and flexibly processed and shared between human and machine agents. The CE Store environment contains a number of specialized machine agents for common processing tasks and also supports execution of logical inference rules that can be defined in the same CE language. This paper outlines the basic architecture of this approach, discusses some of the example machine agents that have been developed, and provides some typical examples of the CE language and the way in which it has been used to support complex analytical tasks on synthetic data sources. We highlight the fusion of human and machine processing supported through the use of the CE language and CE Store environment, and show this environment with examples of highly dynamic extensions to the model(s) and integration between different user-defined models in a collaborative setting.
NASA Astrophysics Data System (ADS)
Abedi Gheshlaghi, Hassan; Feizizadeh, Bakhtiar
2017-09-01
Landslides in mountainous areas cause major damage to residential areas, roads, and farmlands. Hence, one of the basic measures to reduce the possible damage is identifying landslide-prone areas through landslide mapping with different models and methods. The purpose of this study is to evaluate the efficacy of a combination of two models, the analytical network process (ANP) and fuzzy logic, for landslide risk mapping in the Azarshahr Chay basin in northwest Iran. After field investigations and a review of the research literature, factors affecting the occurrence of landslides (slope, slope aspect, altitude, lithology, land use, vegetation density, rainfall, distance to faults, distance to roads, and distance to rivers), along with a map of the distribution of past landslides, were prepared in a GIS environment. Then, fuzzy logic was used to weight the sub-criteria and the ANP was applied to weight the criteria. Next, they were integrated using GIS spatial analysis methods and the landslide risk map was produced. Evaluation of the results using receiver operating characteristic curves shows that the hybrid model, with an area under the curve of 0.815, has good accuracy. According to the prepared map, a total of 23.22% of the area, amounting to 105.38 km2, is in the high and very high-risk classes. The results of this research are of great importance for regional planning tasks, and the landslide prediction map can be used for spatial planning and for the mitigation of future hazards in the study area.
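The combination step can be illustrated on synthetic rasters: criteria are fuzzified to [0, 1] membership grades and blended with criterion weights, which in the paper come from the ANP. Membership functions and weights below are invented.

```python
# Fuzzy weighted overlay on synthetic criterion rasters.
import numpy as np

rng = np.random.default_rng(6)
slope = rng.uniform(0, 60, size=(100, 100))          # degrees
dist_fault = rng.uniform(0, 5000, size=(100, 100))   # metres

# Linear fuzzy memberships: steeper slopes and shorter fault distances
# are taken as more susceptible.
mu_slope = np.clip(slope / 45.0, 0.0, 1.0)
mu_fault = np.clip(1.0 - dist_fault / 3000.0, 0.0, 1.0)

w_slope, w_fault = 0.6, 0.4                          # stand-ins for ANP criterion weights
susceptibility = w_slope * mu_slope + w_fault * mu_fault
print("fraction of cells with susceptibility > 0.8:", np.mean(susceptibility > 0.8).round(3))
```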
Multiplicative Multitask Feature Learning
Wang, Xin; Bi, Jinbo; Yu, Shipeng; Sun, Jiangwen; Song, Minghu
2016-01-01
We investigate a general framework of multiplicative multitask feature learning which decomposes each individual task's model parameters into a multiplication of two components. One of the components is used across all tasks and the other component is task-specific. Several previous methods can be proved to be special cases of our framework. We study the theoretical properties of this framework when different regularization conditions are applied to the two decomposed components. We prove that this framework is mathematically equivalent to the widely used multitask feature learning methods that are based on a joint regularization of all model parameters, but with a more general form of regularizers. Further, an analytical formula is derived for the across-task component as related to the task-specific component for all these regularizers, leading to a better understanding of the shrinkage effects of different regularizers. Study of this framework motivates new multitask learning algorithms. We propose two new learning formulations by varying the parameters in the proposed framework. An efficient blockwise coordinate descent algorithm is developed suitable for solving the entire family of formulations with rigorous convergence analysis. Simulation studies have identified the statistical properties of data that would be in favor of the new formulations. Extensive empirical studies on various classification and regression benchmark data sets have revealed the relative advantages of the two new formulations by comparing with the state of the art, which provides instructive insights into the feature learning problem with multiple tasks. PMID:28428735
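A toy version of the multiplicative decomposition is sketched below, with each task's weights w_t = c * v_t and plain gradient steps standing in for the paper's regularizers and blockwise coordinate descent; all data are synthetic.

```python
# Alternating updates for w_t = c * v_t: ridge on the task-specific parts,
# L1 on the non-negative shared component (which switches features off).
import numpy as np

rng = np.random.default_rng(7)
n_tasks, n_samples, d = 4, 100, 10
Xs = [rng.normal(size=(n_samples, d)) for _ in range(n_tasks)]
w_true = rng.normal(size=d) * (rng.random(d) > 0.5)          # shared sparsity pattern
ys = [X @ (w_true + 0.1 * rng.normal(size=d)) for X in Xs]

c = np.ones(d)
V = rng.normal(scale=0.1, size=(n_tasks, d))
lr, lam = 0.05, 0.1
for _ in range(2000):
    for t in range(n_tasks):
        r = Xs[t] @ (c * V[t]) - ys[t]
        V[t] -= lr * (Xs[t].T @ r * c / n_samples + lam * V[t])
    g = sum(Xs[t].T @ (Xs[t] @ (c * V[t]) - ys[t]) * V[t] / n_samples for t in range(n_tasks))
    c = np.maximum(0.0, c - lr * (g + lam * np.sign(c)))

print("features switched off by the shared component:", int((c < 1e-3).sum()))
```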
In conflict with ourselves? An investigation of heuristic and analytic processes in decision making.
Bonner, Carissa; Newell, Ben R
2010-03-01
Many theorists propose two types of processing: heuristic and analytic. In conflict tasks, in which these processing types lead to opposing responses, giving the analytic response may require both detection and resolution of the conflict. The ratio bias task, in which people tend to treat larger numbered ratios (e.g., 20/100) as indicating a higher likelihood of winning than do equivalent smaller numbered ratios (e.g., 2/10), is considered to induce such a conflict. Experiment 1 showed response time differences associated with conflict detection, resolution, and the amount of conflict induced. The conflict detection and resolution effects were replicated in Experiment 2 and were not affected by decreasing the influence of the heuristic response or decreasing the capacity to make the analytic response. The results are consistent with dual-process accounts, but a single-process account in which quantitative, rather than qualitative, differences in processing are assumed fares equally well in explaining the data.
Soto, Axel J; Zerva, Chrysoula; Batista-Navarro, Riza; Ananiadou, Sophia
2018-04-15
Pathway models are valuable resources that help us understand the various mechanisms underpinning complex biological processes. Their curation is typically carried out through manual inspection of published scientific literature to find information relevant to a model, which is a laborious and knowledge-intensive task. Furthermore, models curated manually cannot be easily updated and maintained with new evidence extracted from the literature without automated support. We have developed LitPathExplorer, a visual text analytics tool that integrates advanced text mining, semi-supervised learning and interactive visualization, to facilitate the exploration and analysis of pathway models using statements (i.e. events) extracted automatically from the literature and organized according to levels of confidence. LitPathExplorer supports pathway modellers and curators alike by: (i) extracting events from the literature that corroborate existing models with evidence; (ii) discovering new events which can update models; and (iii) providing a confidence value for each event that is automatically computed based on linguistic features and article metadata. Our evaluation of event extraction showed a precision of 89% and a recall of 71%. Evaluation of our confidence measure, when used for ranking sampled events, showed an average precision ranging between 61 and 73%, which can be improved to 95% when the user is involved in the semi-supervised learning process. Qualitative evaluation using pair analytics based on the feedback of three domain experts confirmed the utility of our tool within the context of pathway model exploration. LitPathExplorer is available at http://nactem.ac.uk/LitPathExplorer_BI/. sophia.ananiadou@manchester.ac.uk. Supplementary data are available at Bioinformatics online.
Mirel, Barbara; Eichinger, Felix; Keller, Benjamin J; Kretzler, Matthias
2011-03-21
Bioinformatics visualization tools are often not robust enough to support biomedical specialists' complex exploratory analyses. Tools need to accommodate the workflows that scientists actually perform for specific translational research questions. To understand and model one of these workflows, we conducted a case-based, cognitive task analysis of a biomedical specialist's exploratory workflow for the question: What functional interactions among gene products of high throughput expression data suggest previously unknown mechanisms of a disease? From our cognitive task analysis, four complementary representations of the targeted workflow were developed. They include: usage scenarios, flow diagrams, a cognitive task taxonomy, and a mapping between cognitive tasks and user-centered visualization requirements. The representations capture the flows of cognitive tasks that led a biomedical specialist to inferences critical to hypothesizing. We created representations at levels of detail that could strategically guide visualization development, and we confirmed this by making a trial prototype based on user requirements for a small portion of the workflow. Our results imply that visualizations should make available to scientific users "bundles of features" consonant with the compositional cognitive tasks purposefully enacted at specific points in the workflow. We also highlight certain aspects of visualizations that: (a) need more built-in flexibility; (b) are critical for negotiating meaning; and (c) are necessary for essential metacognitive support.
DEVELOPMENT OF AN IMPROVED SIMULATOR FOR CHEMICAL AND MICROBIAL IOR METHODS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gary A. Pope; Kamy Sepehrnoori; Mojdeh Delshad
2001-10-01
This is the final report of a three-year research project on further development of a chemical and microbial improved oil recovery reservoir simulator. The objective of this research was to extend the capability of an existing simulator (UTCHEM) to improved oil recovery methods which use surfactants, polymers, gels, alkaline chemicals, microorganisms and foam as well as various combinations of these in both conventional and naturally fractured oil reservoirs. The first task was the addition of a dual-porosity model for chemical IOR in naturally fractured oil reservoirs. They formulated and implemented a multiphase, multicomponent dual porosity model for enhanced oil recovery from naturally fractured reservoirs. The multiphase dual porosity model was tested against analytical solutions, coreflood data, and commercial simulators. The second task was the addition of a foam model. They implemented a semi-empirical surfactant/foam model in UTCHEM and validated the foam model by comparison with published laboratory data. The third task addressed several numerical and coding enhancements that will greatly improve its versatility and performance. Major enhancements were made in UTCHEM output files and memory management. A graphical user interface to set up the simulation input and to process the output data on a Windows PC was developed. New solvers for solving the pressure equation and geochemical system of equations were implemented and tested. A corner point grid geometry option for gridding complex reservoirs was implemented and tested. Enhancements of physical property models for both chemical and microbial IOR simulations were included in the final task of this proposal. Additional options for calculating the physical properties such as relative permeability and capillary pressure were added. A microbiological population model was developed and incorporated into UTCHEM. They have applied the model to microbial enhanced oil recovery (MEOR) processes by including the capability of permeability reduction due to biomass growth and retention. The formation of bio-products such as surfactant and polymer has also been incorporated.
The effect of methylphenidate and rearing environment on behavioral inhibition in adult male rats.
Hill, Jade C; Covarrubias, Pablo; Terry, Joel; Sanabria, Federico
2012-01-01
The ability to withhold reinforced responses (behavioral inhibition) is impaired in various psychiatric conditions including Attention Deficit Hyperactivity Disorder (ADHD). Methodological and analytical limitations have constrained our understanding of the effects of pharmacological and environmental factors on behavioral inhibition. The aim of this study was to determine the effects of acute methylphenidate (MPH) administration and rearing conditions (isolated vs. pair-housed) on behavioral inhibition in adult rats. Inhibitory capacity was evaluated using two response-withholding tasks, differential reinforcement of low rates (DRL) and fixed minimum interval (FMI) schedules of reinforcement. Both tasks made sugar pellets contingent on intervals longer than 6 s between consecutive responses. Inferences on inhibitory and timing capacities were drawn from the distribution of withholding times (interresponse times, or IRTs). MPH increased the number of intervals produced in both tasks. Estimates of behavioral inhibition increased with MPH dose in FMI and with social isolation in DRL. Nonetheless, burst responding in DRL and the divergence of DRL data relative to past studies, among other limitations, undermined the reliability of DRL data as the basis for inferences on behavioral inhibition. Inhibitory capacity was more precisely estimated from FMI than from DRL performance. Based on FMI data, MPH, but not a socially enriched environment, appears to improve inhibitory capacity. The highest dose of MPH tested, 8 mg/kg, did not reduce inhibitory capacity but reduced the responsiveness to waiting contingencies. These results support the use of the FMI schedule, complemented with appropriate analytic techniques, for the assessment of behavioral inhibition in animal models.
NASA Technical Reports Server (NTRS)
Nathan, Terrence R.; Yarger, Douglas N.
1989-01-01
The research comprises the following tasks: use of simple analytical and numerical models of a coupled troposphere-stratosphere system to examine the effects of radiation and ozone on planetary wave dynamics and the tropospheric circulation; use of satellite data obtained from the Nimbus 7 Limb Infrared Monitor of the Stratosphere (LIMS) instrument and Solar Backscattered Ultraviolet (SBUV) experiment, in conjunction with National Meteorological Center (NMC) data, to determine the planetary wave vertical structures, dominant wave spectra, ozone spectra, and time variations in diabatic heating rate; and synthesis of the modeling and observational results to provide a better understanding of the effects that stratospheric processes have on tropospheric dynamics.
NASA Technical Reports Server (NTRS)
Noll, Thomas E.; Perry, Boyd, III; Tiffany, Sherwood H.; Cole, Stanley R.; Buttrill, Carey S.; Adams, William M., Jr.; Houck, Jacob A.; Srinathkumar, S.; Mukhopadhyay, Vivek; Pototzky, Anthony S.
1989-01-01
The status of the joint NASA/Rockwell Active Flexible Wing Wind-Tunnel Test Program is described. The objectives are to develop and validate the analysis, design, and test methodologies required to apply multifunction active control technology for improving aircraft performance and stability. Major tasks include designing digital multi-input/multi-output flutter-suppression and rolling-maneuver-load alleviation concepts for a flexible full-span wind-tunnel model, obtaining an experimental data base for the basic model and each control concept and providing comparisons between experimental and analytical results to validate the methodologies. The opportunity is provided to improve real-time simulation techniques and to gain practical experience with digital control law implementation procedures.
One-dimensional model and solutions for creeping gas flows in the approximation of uniform pressure
NASA Astrophysics Data System (ADS)
Vedernikov, A.; Balapanov, D.
2016-11-01
A model, along with analytical and numerical solutions, is presented to describe a wide variety of one-dimensional slow flows of compressible heat-conductive fluids. The model is based on the approximation of uniform pressure, valid for flows in which the sound propagation time is much shorter than the duration of any meaningful density variation in the system. The energy balance is described by the heat equation, which is solved independently. This approach enables the explicit solution for the fluid velocity to be obtained. Interfacial and volumetric heat and mass sources as well as boundary motion are considered as possible sources of density variation in the fluid. A set of particular tasks is analyzed for different motion sources in planar, axial, and central symmetries in the quasistationary limit of heat conduction (i.e., for large Fourier number). The analytical solutions are in excellent agreement with corresponding numerical solutions of the whole system of the Navier-Stokes equations. This work deals with an ideal gas, but the approach is also valid for other equations of state.
Geng, Zongyu; Yang, Feng; Chen, Xi; Wu, Nianqiang
2016-01-01
It remains a challenge to accurately calibrate a sensor subject to environmental drift. The calibration task for such a sensor is to quantify the relationship between the sensor's response and its exposure condition, which is specified by not only the analyte concentration but also the environmental factors such as temperature and humidity. This work developed a Gaussian Process (GP)-based procedure for the efficient calibration of sensors in drifting environments. Adopted as the calibration model, GP is not only able to capture the possibly nonlinear relationship between the sensor responses and the various exposure-condition factors, but also able to provide valid statistical inference for uncertainty quantification of the target estimates (e.g., the estimated analyte concentration of an unknown environment). Built on GP's inference ability, an experimental design method was developed to achieve efficient sampling of calibration data in a batch sequential manner. The resulting calibration procedure, which integrates the GP-based modeling and experimental design, was applied on a simulated chemiresistor sensor to demonstrate its effectiveness and its efficiency over the traditional method. PMID:26924894
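The forward-modeling half of this procedure is straightforward to sketch with scikit-learn: learn the sensor response as a function of analyte concentration and drifting environmental factors, with predictive uncertainty. The data-generating function, kernel choices and numbers below are illustrative, and the paper's batch-sequential design step is omitted.

```python
# GP calibration sketch: response = f(concentration, temperature, humidity),
# with an anisotropic RBF kernel plus a white-noise term.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(8)
n = 120
conc = rng.uniform(0.0, 10.0, n)
temp = rng.uniform(15.0, 35.0, n)
humid = rng.uniform(20.0, 80.0, n)
response = 2.0 * conc + 0.05 * (temp - 25.0) ** 2 + 0.01 * humid + 0.1 * rng.normal(size=n)

X = np.column_stack([conc, temp, humid])
gp = GaussianProcessRegressor(RBF(length_scale=[2.0, 5.0, 20.0]) + WhiteKernel(),
                              normalize_y=True)
gp.fit(X, response)

mean, std = gp.predict(np.array([[5.0, 30.0, 50.0]]), return_std=True)
print(f"predicted response: {mean[0]:.2f} +/- {2 * std[0]:.2f}")
```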
Visual Word Recognition Across the Adult Lifespan
Cohen-Shikora, Emily R.; Balota, David A.
2016-01-01
The current study examines visual word recognition in a large sample (N = 148) across the adult lifespan and across a large set of stimuli (N = 1187) in three different lexical processing tasks (pronunciation, lexical decision, and animacy judgments). Although the focus of the present study is on the influence of word frequency, a diverse set of other variables are examined as the system ages and acquires more experience with language. Computational models and conceptual theories of visual word recognition and aging make differing predictions for age-related changes in the system. However, these have been difficult to assess because prior studies have produced inconsistent results, possibly due to sample differences, analytic procedures, and/or task-specific processes. The current study confronts these potential differences by using three different tasks, treating age and word variables as continuous, and exploring the influence of individual differences such as vocabulary, vision, and working memory. The primary finding is remarkable stability in the influence of a diverse set of variables on visual word recognition across the adult age spectrum. This pattern is discussed in reference to previous inconsistent findings in the literature and implications for current models of visual word recognition. PMID:27336629
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwang Y. Lee; Stuart S. Yin; Andre Boheman
2004-12-26
The objective of the proposed work is to develop an intelligent distributed fiber optical sensor system for real-time monitoring of high temperature in a boiler furnace in power plants. Of particular interest is the estimation of spatial and temporal distributions of high temperatures within a boiler furnace, which will be essential in assessing and controlling the mechanisms that form and remove pollutants at the source, such as NOx. The basic approach in developing the proposed sensor system is threefold: (1) development of a high temperature distributed fiber optical sensor capable of measuring temperatures greater than 2000 °C with a spatial resolution of less than 1 cm; (2) development of distributed parameter system (DPS) models to map the three-dimensional (3D) temperature distribution for the furnace; and (3) development of an intelligent monitoring system for real-time monitoring of the 3D boiler temperature distribution. Under Task 1, improvements were made to the performance of in-fiber gratings fabricated in single crystal sapphire fibers, the grating performance of single crystal sapphire fiber with new fabrication methods was tested, and the fabricated grating was applied to high temperature sensing. Under Task 2, models obtained from 3-D modeling of the Demonstration Boiler were used to study relationships between temperature and NOx, as the multi-dimensionality of such systems is most comparable with real-life boiler systems. Studies show that in boiler systems with no swirl, the distributed temperature sensor may provide information sufficient to predict trends of NOx at the boiler exit. Under Task 3, we investigate a mathematical approach to extrapolation of the temperature distribution within a power plant boiler facility, using a combination of a modified neural network architecture and semigroup theory. The 3D temperature data are furnished by the Penn State Energy Institute using FLUENT. Given a set of empirical data with no analytic expression, we first develop an analytic description and then extend that model along a single axis.
What every teacher needs to know about clinical reasoning.
Eva, Kevin W
2005-01-01
One of the core tasks assigned to clinical teachers is to enable students to sort through a cluster of features presented by a patient and accurately assign a diagnostic label, with the development of an appropriate treatment strategy being the end goal. Over the last 30 years there has been considerable debate within the health sciences education literature regarding the model that best describes how expert clinicians generate diagnostic decisions. The purpose of this essay is to provide a review of the research literature on clinical reasoning for frontline clinical teachers. The strengths and weaknesses of different approaches to clinical reasoning will be examined using one of the core divides between various models (that of analytic (i.e. conscious/controlled) versus non-analytic (i.e. unconscious/automatic) reasoning strategies) as an orienting framework. Recent work suggests that clinical teachers should stress the importance of both forms of reasoning, thereby enabling students to marshal reasoning processes in a flexible and context-specific manner. Specific implications are drawn from this overview for clinical teachers.
NASA Astrophysics Data System (ADS)
den Hollander, Richard J. M.; Bouma, Henri; van Rest, Jeroen H. C.; ten Hove, Johan-Martijn; ter Haar, Frank B.; Burghouts, Gertjan J.
2017-10-01
Video analytics is essential for managing large quantities of raw data that are produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene and to changes in the optical chain, so a VSS with analytics needs careful configuration and prompt maintenance to avoid false alarms. However, there is a trend from static VSS consisting of fixed CCTV cameras towards more dynamic VSS deployments over public/private multi-organization networks, consisting of a wider variety of visual sensors, including pan-tilt-zoom (PTZ) cameras, body-worn cameras and cameras on moving platforms. This trend will lead to more dynamic scenes and more frequent changes in the optical chain, creating structural problems for analytics. If these problems are not adequately addressed, analytics will not be able to continue to meet end users' developing needs. In this paper, we present a three-part solution for managing the performance of complex analytics deployments. The first part is a register containing meta data describing relevant properties of the optical chain, such as intrinsic and extrinsic calibration, and parameters of the scene, such as lighting conditions or measures of scene complexity (e.g. number of people). A second part frequently assesses these parameters in the deployed VSS, stores changes in the register, and signals relevant changes in the setup to the VSS administrator. A third part uses the information in the register to dynamically configure analytics tasks based on VSS operator input. In order to support the feasibility of this solution, we give an overview of related state-of-the-art technologies for autocalibration (self-calibration), scene recognition and lighting estimation in relation to person detection. The presented solution allows for rapid and robust deployment of Video Content Analysis (VCA) tasks in large scale ad-hoc networks.
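A minimal sketch of the first part of this solution, the optical-chain register, is given below; the field names, threshold, and notification mechanism are illustrative assumptions rather than the paper's design.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical sketch of the "register" described above: metadata about the
# optical chain and scene for each camera, plus simple change signalling.
# Field names and thresholds are illustrative assumptions, not the paper's.

@dataclass
class OpticalChainRecord:
    camera_id: str
    intrinsics: dict          # e.g. focal length, distortion coefficients
    extrinsics: dict          # e.g. position, orientation
    lighting_lux: float       # estimated scene illumination
    scene_complexity: float   # e.g. average person count
    updated: datetime = field(default_factory=datetime.utcnow)

class Register:
    def __init__(self, complexity_threshold=0.25):
        self.records = {}
        self.complexity_threshold = complexity_threshold

    def update(self, rec: OpticalChainRecord):
        """Store the new record and signal relevant changes to the admin."""
        old = self.records.get(rec.camera_id)
        self.records[rec.camera_id] = rec
        if old is not None:
            rel_change = (abs(rec.scene_complexity - old.scene_complexity)
                          / max(old.scene_complexity, 1e-9))
            if rel_change > self.complexity_threshold:
                self.notify_admin(rec.camera_id, rel_change)

    def notify_admin(self, camera_id, rel_change):
        print(f"Camera {camera_id}: scene complexity changed by {rel_change:.0%}; "
              f"VCA tasks may need reconfiguration.")
```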
NASA Astrophysics Data System (ADS)
Kryjevskaia, Mila; Stetzer, MacKenzie R.; Grosz, Nathaniel
2014-12-01
We have applied the heuristic-analytic theory of reasoning to interpret inconsistencies in student reasoning approaches to physics problems. This study was motivated by an emerging body of evidence that suggests that student conceptual and reasoning competence demonstrated on one task often fails to be exhibited on another. Indeed, even after instruction specifically designed to address student conceptual and reasoning difficulties identified by rigorous research, many undergraduate physics students fail to build reasoning chains from fundamental principles even though they possess the required knowledge and skills to do so. Instead, they often rely on a variety of intuitive reasoning strategies. In this study, we developed and employed a methodology that allowed for the disentanglement of student conceptual understanding and reasoning approaches through the use of sequences of related questions. We have shown that the heuristic-analytic theory of reasoning can be used to account for, in a mechanistic fashion, the observed inconsistencies in student responses. In particular, we found that students tended to apply their correct ideas in a selective manner that supported a specific and likely anticipated conclusion while neglecting to employ the same ideas to refute an erroneous intuitive conclusion. The observed reasoning patterns were consistent with the heuristic-analytic theory, according to which reasoners develop a "first-impression" mental model and then construct an argument in support of the answer suggested by this model. We discuss implications for instruction and argue that efforts to improve student metacognition, which serves to regulate the interaction between intuitive and analytical reasoning, are likely to lead to improved student reasoning.
NASA Astrophysics Data System (ADS)
Asami, Koji
2010-12-01
There are a few concerns in dielectric modeling of biological cells by the finite-element method (FEM) to simulate their dielectric spectra. Cells possess thin plasma membranes and membrane-bound intracellular organelles, requiring extra fine meshes and a considerable computational burden in the simulation. To solve these problems, the “thin-layer” approximation (TLA) and the “effective medium” approximation (EMA) were adopted. TLA deals with the membrane as an interface with the specific membrane impedance, and therefore it is not necessary to divide the membrane region into mesh elements. EMA regards the composite cytoplasm as an effective homogeneous phase whose dielectric properties are calculated separately. It was proved that TLA and EMA were both useful for greatly reducing computational tasks while their results accurately coincided with analytical solutions.
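As an illustration of the EMA idea, the sketch below computes an effective complex permittivity for cytoplasm containing inclusions using the standard Maxwell-Garnett mixture formula; the paper's exact EMA formulation is not reproduced, and all parameter values are assumptions.

```python
import numpy as np

# Illustrative sketch of the "effective medium" idea: replace a suspension of
# inclusions (e.g. organelles) in cytoplasm by one homogeneous phase. The
# Maxwell-Garnett formula used here is a standard EMA; the paper's exact
# formulation may differ, and all parameter values are assumed.

def complex_permittivity(eps_r, sigma, freq):
    """Complex relative permittivity eps* = eps_r - j*sigma/(omega*eps0)."""
    eps0 = 8.854e-12
    omega = 2 * np.pi * freq
    return eps_r - 1j * sigma / (omega * eps0)

def maxwell_garnett(eps_m, eps_p, f):
    """Effective permittivity of inclusions (eps_p, volume fraction f) in a matrix (eps_m)."""
    num = eps_p + 2 * eps_m + 2 * f * (eps_p - eps_m)
    den = eps_p + 2 * eps_m - f * (eps_p - eps_m)
    return eps_m * num / den

freqs = np.logspace(4, 9, 200)                       # 10 kHz to 1 GHz
eps_cyto = complex_permittivity(60, 0.3, freqs)      # cytoplasm (assumed values)
eps_org = complex_permittivity(50, 0.1, freqs)       # organelle interior (assumed)
eps_eff = maxwell_garnett(eps_cyto, eps_org, f=0.2)  # 20% inclusion volume fraction
```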
NASA Astrophysics Data System (ADS)
Xu, Jingyan; Fuld, Matthew K.; Fung, George S. K.; Tsui, Benjamin M. W.
2015-04-01
Iterative reconstruction (IR) methods for x-ray CT are a promising approach to improving image quality or reducing radiation dose to patients. The goal of this work was to use task-based image quality measures and the channelized Hotelling observer (CHO) to evaluate both analytic and IR methods for clinical x-ray CT applications. We performed realistic computer simulations at five radiation dose levels, from a clinical reference low dose D0 to 25% D0. A fixed size and contrast lesion was inserted at different locations into the liver of the XCAT phantom to simulate a weak signal. The simulated data were reconstructed on a commercial CT scanner (SOMATOM Definition Flash; Siemens, Forchheim, Germany) using the vendor-provided analytic (WFBP) and IR (SAFIRE) methods. The reconstructed images were analyzed by CHOs with both rotationally symmetric (RS) and rotationally oriented (RO) channels, and with different numbers of lesion locations (5, 10, and 20) in a signal known exactly (SKE), background known exactly but variable (BKEV) detection task. The area under the receiver operating characteristic curve (AUC) was used as a summary measure to compare the IR and analytic methods; the AUC was also used as the equal performance criterion to derive the potential dose reduction factor of IR. In general, there was good agreement in the relative AUC values of different reconstruction methods using CHOs with RS and RO channels, although the CHO with RO channels achieved higher AUCs than with RS channels. The improvement of IR over analytic methods depends on the dose level. The reference dose level D0 was based on a clinical low dose protocol, lower than the standard dose due to the use of IR methods. At 75% D0, the performance improvement was statistically significant (p < 0.05). The potential dose reduction factor also depended on the detection task. For the SKE/BKEV task involving 10 lesion locations, a dose reduction of at least 25% from D0 was achieved.
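The following sketch shows the core CHO computation on synthetic data: channelize the images, form the Hotelling template from the average intra-class scatter, and estimate the AUC from the test statistics. The channel templates and lesion profile here are random placeholders, not the RS/RO channels or XCAT lesions used in the study.

```python
import numpy as np

# Minimal sketch of a channelized Hotelling observer (CHO) for an SKE
# detection task on synthetic data. U holds placeholder channel templates;
# in the study these would be rotationally symmetric (RS) or rotationally
# oriented (RO) channels, and the images would be reconstructed CT patches.

rng = np.random.default_rng(0)
n_pix, n_ch, n_img = 64 * 64, 10, 200
U = rng.standard_normal((n_pix, n_ch))             # placeholder channel templates

signal = 0.1 * rng.standard_normal(n_pix)          # placeholder lesion profile
absent = rng.standard_normal((n_img, n_pix))       # signal-absent images
present = rng.standard_normal((n_img, n_pix)) + signal

v_a, v_p = absent @ U, present @ U                 # channelized data (n_img, n_ch)
S = 0.5 * (np.cov(v_a.T) + np.cov(v_p.T))          # average intra-class scatter
w = np.linalg.solve(S, v_p.mean(0) - v_a.mean(0))  # Hotelling template

t_a, t_p = v_a @ w, v_p @ w                        # observer test statistics
auc = (t_p[:, None] > t_a[None, :]).mean()         # empirical (Mann-Whitney) AUC
print(f"CHO AUC = {auc:.3f}")
```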
Modelling gait transition in two-legged animals
NASA Astrophysics Data System (ADS)
Pinto, Carla M. A.; Santos, Alexandra P.
2011-12-01
The study of locomotor patterns has been a major research goal in the last decades. Understanding how intralimb and interlimb coordination works out so well in animals' locomotion is a hard and challenging task. Many models have been proposed for animals' rhythms. These models have also been applied to the control of rhythmic movements of adaptive legged robots, namely biped, quadruped and other designs. In this paper we study gait transition in a central pattern generator (CPG) model for bipeds, the 4-cells model. This model was proposed by Golubitsky, Stewart, Buono and Collins and studied further by Pinto and Golubitsky. We briefly review the work done by Pinto and Golubitsky. We compute numerically gait transitions in the 4-cells CPG model for bipeds. We use Morris-Lecar equations and Wilson-Cowan equations as the internal dynamics for each cell. We also consider two types of coupling between the cells: diffusive and synaptic. We obtain secondary gaits by bifurcation of primary gaits, by varying the coupling strengths. Nevertheless, some bifurcating branches could not be obtained, emphasizing the fact that although those bifurcations exist analytically, finding them is a hard task and requires variation of other parameters of the equations. We note that the type of coupling did not influence the results.
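A minimal sketch of such a CPG is given below, assuming standard Morris-Lecar internal dynamics and diffusive coupling on a simple unidirectional ring; the exact 4-cell architecture of Golubitsky et al. is not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of a 4-cell CPG with Morris-Lecar internal dynamics and diffusive
# coupling. Parameters are standard Morris-Lecar values; the ring coupling
# below is an illustrative assumption, not the paper's 4-cell network.

C, gL, VL = 20.0, 2.0, -60.0
gCa, VCa, gK, VK = 4.0, 120.0, 8.0, -84.0
V1, V2, V3, V4, phi, I = -1.2, 18.0, 12.0, 17.4, 0.067, 80.0
k = 0.5  # diffusive coupling strength

def morris_lecar_cpg(t, y):
    V, w = y[:4], y[4:]
    m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))
    w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))
    tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))
    coupling = k * (np.roll(V, 1) - V)          # each cell driven by its ring neighbour
    dV = (I - gL * (V - VL) - gCa * m_inf * (V - VCa)
          - gK * w * (V - VK)) / C + coupling
    dw = phi * (w_inf - w) / tau_w
    return np.concatenate([dV, dw])

y0 = np.concatenate([np.array([-40.0, -20.0, 0.0, 20.0]), np.zeros(4)])
sol = solve_ivp(morris_lecar_cpg, (0, 500), y0, max_step=0.5)
# Phase differences between the four voltage traces sol.y[:4] identify the gait.
```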
Cultivating Institutional Capacities for Learning Analytics
ERIC Educational Resources Information Center
Lonn, Steven; McKay, Timothy A.; Teasley, Stephanie D.
2017-01-01
This chapter details the process the University of Michigan developed to build institutional capacity for learning analytics. A symposium series, faculty task force, fellows program, research grants, and other initiatives are discussed, with lessons learned for future efforts and how other institutions might adapt such efforts to spur cultural…
Using Learning Analytics to Support Engagement in Collaborative Writing
ERIC Educational Resources Information Center
Liu, Ming; Pardo, Abelardo; Liu, Li
2017-01-01
Online collaborative writing tools provide an efficient way to complete a writing task. However, existing tools only focus on technological affordances and ignore the importance of social affordances in a collaborative learning environment. This article describes a learning analytic system that analyzes writing behaviors, and creates…
NASA Astrophysics Data System (ADS)
Broman, Karolina; Bernholt, Sascha; Parchmann, Ilka
2015-05-01
Background: Context-based learning approaches are used to enhance students' interest in, and knowledge about, science. According to different empirical studies, students' interest is improved by applying these more non-conventional approaches, while effects on learning outcomes are less coherent. Hence, further insights are needed into the structure of context-based problems in comparison to traditional problems, and into students' problem-solving strategies. Therefore, a suitable framework is necessary, both for the analysis of tasks and strategies. Purpose: The aim of this paper is to explore traditional and context-based tasks as well as students' responses to exemplary tasks to identify a suitable framework for future design and analyses of context-based problems. The paper discusses different established frameworks and applies the Higher-Order Cognitive Skills/Lower-Order Cognitive Skills (HOCS/LOCS) taxonomy and the Model of Hierarchical Complexity in Chemistry (MHC-C) to analyse traditional tasks and students' responses. Sample: Upper secondary students (n=236) at the Natural Science Programme, i.e. possible future scientists, are investigated to explore learning outcomes when they solve chemistry tasks, both more conventional as well as context-based chemistry problems. Design and methods: A typical chemistry examination test has been analysed, first the test items in themselves (n=36), and thereafter 236 students' responses to one representative context-based problem. Content analysis using HOCS/LOCS and MHC-C frameworks has been applied to analyse both quantitative and qualitative data, allowing us to describe different problem-solving strategies. Results: The empirical results show that both frameworks are suitable to identify students' strategies, mainly focusing on recall of memorized facts when solving chemistry test items. Almost all test items were also assessing lower order thinking. The combination of frameworks with the chemistry syllabus has been found successful to analyse both the test items as well as students' responses in a systematic way. The framework can therefore be applied in the design of new tasks, the analysis and assessment of students' responses, and as a tool for teachers to scaffold students in their problem-solving process. Conclusions: This paper gives implications for practice and for future research to both develop new context-based problems in a structured way, as well as providing analytical tools for investigating students' higher order thinking in their responses to these tasks.
A Common Foundation of Information and Analytical Capability for AFSPC Decision Making
2005-06-23
[Abstract not available; the record contains only briefing-chart fragments naming AFSPC planning and analysis elements: strategic master plans, task force CONOPS, military utility analysis (MUA) and task weights, engagement and architecture analysis, ASIIS optimization, ACEIT cost analysis, POM and S&T planning, and the Joint Capabilities Integration and Development System.]
Research on the use of space resources
NASA Technical Reports Server (NTRS)
Carroll, W. F. (Editor)
1983-01-01
The second year of a multiyear research program on the processing and use of extraterrestrial resources is covered. The research tasks included: (1) silicate processing, (2) magma electrolysis, (3) vapor phase reduction, and (4) metals separation. Concomitant studies included: (1) energy systems, (2) transportation systems, (3) utilization analysis, and (4) resource exploration missions. Emphasis in fiscal year 1982 was placed on the magma electrolysis and vapor phase reduction processes (both analytical and experimental) for separation of oxygen and metals from lunar regolith. The early experimental work on magma electrolysis resulted in gram quantities of iron (mixed metals) and the identification of significant anode, cathode, and container problems. In the vapor phase reduction tasks a detailed analysis of various process concepts led to the selection of two specific processes designated as "Vapor Separation" and "Selective Ionization." Experimental work was deferred to fiscal year 1983. In the Silicate Processing task a thermophysical model of the casting process was developed and used to study the effect of variations in material properties on the cooling behavior of lunar basalt.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, Kristin A.; Scholtz, Jean; Whiting, Mark A.
The VAST Challenge has been a popular venue for academic and industry participants for over ten years. Many participants comment that the majority of their time in preparing VAST Challenge entries is spent discovering elements in their software environments that need to be redesigned in order to solve the given task. Fortunately, there is no need to wait until the VAST Challenge is announced to test out software systems. The Visual Analytics Benchmark Repository contains all past VAST Challenge tasks, data, solutions and submissions. This paper details the various types of evaluations that may be conducted using the Repository information, describing how developers can do informal evaluations of various aspects of their visual analytics environments using VAST Challenge information. Aspects that can be evaluated include the appropriateness of the software for various tasks, the various data types and formats that can be accommodated, the effectiveness and efficiency of the process supported by the software, and the intuitiveness of the visualizations and interactions. Researchers can compare their visualizations and interactions to those submitted to determine novelty. In addition, the paper provides pointers to various guidelines that software teams can use to evaluate the usability of their software. While these evaluations are not a replacement for formal evaluation methods, this information can be extremely useful during the development of visual analytics environments.
Improving accuracy and power with transfer learning using a meta-analytic database.
Schwartz, Yannick; Varoquaux, Gaël; Pallier, Christophe; Pinel, Philippe; Poline, Jean-Baptiste; Thirion, Bertrand
2012-01-01
Typical cohorts in brain imaging studies are not large enough for systematic testing of all the information contained in the images. To build testable working hypotheses, investigators thus rely on analysis of previous work, sometimes formalized in a so-called meta-analysis. In brain imaging, this approach underlies the specification of regions of interest (ROIs) that are usually selected on the basis of the coordinates of previously detected effects. In this paper, we propose to use a database of images, rather than coordinates, and frame the problem as transfer learning: learning a discriminant model on a reference task to apply it to a different but related new task. To facilitate statistical analysis of small cohorts, we use a sparse discriminant model that selects predictive voxels on the reference task and thus provides a principled procedure to define ROIs. The benefits of our approach are twofold. First it uses the reference database for prediction, i.e., to provide potential biomarkers in a clinical setting. Second it increases statistical power on the new task. We demonstrate on a set of 18 pairs of functional MRI experimental conditions that our approach gives good prediction. In addition, on a specific transfer situation involving different scanners at different locations, we show that voxel selection based on transfer learning leads to higher detection power on small cohorts.
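A minimal sketch of this transfer scheme, using an L1-penalized logistic regression as the sparse discriminant model (the paper's estimator may differ) and synthetic arrays in place of fMRI activation maps:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal sketch of the transfer scheme: a sparse (L1-penalized) discriminant
# model fit on a large reference dataset selects predictive voxels, which then
# define the ROI used on a small new cohort. Arrays are synthetic stand-ins
# for activation maps (subjects x voxels).

rng = np.random.default_rng(1)
X_ref, y_ref = rng.standard_normal((200, 5000)), rng.integers(0, 2, 200)
X_new, y_new = rng.standard_normal((15, 5000)), rng.integers(0, 2, 15)

sparse_clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
sparse_clf.fit(X_ref, y_ref)
roi = np.flatnonzero(sparse_clf.coef_[0])          # voxels kept by the L1 penalty
print(f"{roi.size} voxels retained as a data-driven ROI")

# On the new cohort, analysis is restricted to the learned ROI, which boosts
# statistical power relative to whole-brain testing:
clf_new = LogisticRegression(max_iter=1000).fit(X_new[:, roi], y_new)
```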
Handling Qualities of Large Flexible Aircraft. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Poopaka, S.
1980-01-01
The effects on handling qualities of elastic-mode interaction with the rigid body dynamics of a large flexible aircraft are studied by a mathematical computer simulation. An analytical method to predict the pilot ratings when there is severe mode interaction is developed. This is done by extending the optimal control model of the human pilot response to include a mode decomposition mechanism in the model. The handling qualities are determined for a longitudinal tracking task using a large flexible aircraft with parametric variations in the undamped natural frequencies of the two lowest-frequency symmetric elastic modes, made to induce varying amounts of mode interaction.
Validation of a finite element method framework for cardiac mechanics applications
NASA Astrophysics Data System (ADS)
Danan, David; Le Rolle, Virginie; Hubert, Arnaud; Galli, Elena; Bernard, Anne; Donal, Erwan; Hernández, Alfredo I.
2017-11-01
Modeling cardiac mechanics is a particularly challenging task, mainly because of the poor understanding of the underlying physiology, the lack of observability and the complexity of the mechanical properties of myocardial tissues. The choice of cardiac mechanics solvers, especially, implies several difficulties, notably due to the potential instability arising from the nonlinearities inherent to the large deformation framework. Furthermore, the verification of the obtained simulations is a difficult task because there are no analytic solutions for these kinds of problems. Hence, the objective of this work is to provide a quantitative verification of a cardiac mechanics implementation based on two published benchmark problems. The first problem consists in deforming a bar, whereas the second concerns the inflation of a truncated ellipsoid-shaped ventricle, both in the steady-state case. Simulations were obtained using the finite element software GETFEM++. Results were compared to the consensus solution published by 11 groups, and the proposed solutions were indistinguishable. The validation of the proposed mechanical model implementation is an important step toward the proposition of a global model of cardiac electro-mechanical activity.
Space shuttle flying qualities and criteria assessment
NASA Technical Reports Server (NTRS)
Myers, T. T.; Johnston, D. E.; Mcruer, Duane T.
1987-01-01
Work accomplished under a series of study tasks for the Flying Qualities and Flight Control Systems Design Criteria Experiment (OFQ) of the Shuttle Orbiter Experiments Program (OEX) is summarized. The tasks involved review of the applicability of existing flying quality and flight control system specifications and criteria for the Shuttle; identification of potentially crucial flying quality deficiencies; dynamic modeling of the Shuttle Orbiter pilot/vehicle system in the terminal flight phases; devising a nonintrusive experimental program for extraction and identification of vehicle dynamics, pilot control strategy, and approach and landing performance metrics; and preparation of an OEX approach to produce a data archive and optimize use of the data to develop flying qualities criteria for future space shuttle craft in general. The work comprised analytic modeling of the Orbiter's unconventional closed-loop dynamics in landing; modeling of pilot control strategies; verification of vehicle dynamics and pilot control strategy from flight data; review of various existing or proposed aircraft flying quality parameters and criteria in comparison with the unique dynamic characteristics and control aspects of the Shuttle in landing; and finally a summary of conclusions and recommendations for developing flying quality criteria and design guides for future Shuttle craft.
Aspirating Seal Development: Analytical Modeling and Seal Test Rig
NASA Technical Reports Server (NTRS)
Bagepalli, Bharat
1996-01-01
This effort is to develop large diameter (22 - 36 inch) Aspirating Seals for application in aircraft engines. Stein Seal Co. will be fabricating the 36-inch seal(s) for testing. GE's task is to establish a thorough understanding of the operation of Aspirating Seals through analytical modeling and full-scale testing. The primary objectives of this project are to develop analytical models of the aspirating seal system; to upgrade, using GE's funds, GE's 50-inch seal test rig for testing the Aspirating Seal (back-to-back with a corresponding brush seal); to test the aspirating seal(s) for seal closure, tracking and maneuver transients (tilt) at operating pressures and temperatures; and to validate the analytical model. The objective of the analytical model development is to evaluate the transient and steady-state dynamic performance characteristics of the seal designed by Stein. The transient dynamic model uses a multi-body system approach: the stator, seal face and the rotor are treated as individual bodies with relative degrees of freedom. Initially, the thirty-six springs are represented as a single spring trying to keep the aspirating face open. Stops (contact elements) are provided between the stator and the seal (to compensate the preload in the fully-open position) and between the rotor face and seal face (to detect rub). The secondary seal is considered as part of the stator. The film's load, damping and stiffness characteristics as functions of pressure and clearance are evaluated using a separate NASA code, GFACE. Initially, a laminar flow theory is used. Special two-dimensional interpolation routines are written to establish exact film load and damping values at each integration time step. Additionally, other user-routines are written to read in actual pressure, rpm, stator-growth and rotor-growth data and, later, to transfer these as appropriate loads/motions in the system-dynamic model. The transient dynamic model evaluates the various motions, clearances and forces as the seals are subjected to different aircraft maneuvers: windmilling restart; start-ground idle; ground idle-takeoff; takeoff-burst chop, etc. Results of this model show that the seal closes appropriately and does not ram into the rotor for all of the conditions analyzed. The rig upgrade design for testing Aspirating Seals has been completed. Long lead-time items (forgings, etc.) have been ordered.
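The 2-D interpolation step described above might look like the following sketch, with placeholder tables standing in for GFACE output and assumed units:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Illustrative sketch of the 2-D interpolation step: film load tabulated
# (e.g. by the NASA GFACE code) on a grid of pressure and clearance, then
# queried at each integration time step of the transient dynamic model.
# Table values and units below are placeholders, not GFACE output.

pressures = np.linspace(50, 500, 10)                 # psi (assumed units)
clearances = np.linspace(0.5, 5.0, 20)               # mils (assumed units)
load_table = np.outer(pressures, 1.0 / clearances)   # placeholder film-load data

film_load = RegularGridInterpolator((pressures, clearances), load_table)

# At one integration time step of the transient dynamic model:
state = np.array([220.0, 1.7])   # current (pressure, clearance)
F_film = film_load(state)        # interpolated film load fed back as a force
```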
Random Forests for Evaluating Pedagogy and Informing Personalized Learning
ERIC Educational Resources Information Center
Spoon, Kelly; Beemer, Joshua; Whitmer, John C.; Fan, Juanjuan; Frazee, James P.; Stronach, Jeanne; Bohonak, Andrew J.; Levine, Richard A.
2016-01-01
Random forests are presented as an analytics foundation for educational data mining tasks. The focus is on course- and program-level analytics including evaluating pedagogical approaches and interventions and identifying and characterizing at-risk students. As part of this development, the concept of individualized treatment effects (ITE) is…
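A minimal sketch of this kind of course-level analytics follows; the early-term features and the at-risk definition are illustrative assumptions, not taken from the chapter, and the data are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Sketch of a random forest flagging at-risk students from assumed early-term
# features (LMS logins, early assignment scores, prior GPA). Data are synthetic.

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 3))                # logins, early scores, prior GPA
y = (X[:, 1] + 0.5 * X[:, 2] + rng.standard_normal(500) < -0.5).astype(int)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
risk = forest.predict_proba(X)[:, 1]             # probability of being at risk
print(dict(zip(["logins", "early_scores", "prior_gpa"],
               forest.feature_importances_.round(2))))
```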
Finding Waldo: Learning about Users from their Interactions.
Brown, Eli T; Ottley, Alvitta; Zhao, Helen; Quan Lin; Souvenir, Richard; Endert, Alex; Chang, Remco
2014-12-01
Visual analytics is inherently a collaboration between human and computer. However, in current visual analytics systems, the computer has limited means of knowing about its users and their analysis processes. While existing research has shown that a user's interactions with a system reflect a large amount of the user's reasoning process, there has been limited advancement in developing automated, real-time techniques that mine interactions to learn about the user. In this paper, we demonstrate that we can accurately predict a user's task performance and infer some user personality traits by using machine learning techniques to analyze interaction data. Specifically, we conduct an experiment in which participants perform a visual search task, and apply well-known machine learning algorithms to three encodings of the users' interaction data. We achieve, depending on algorithm and encoding, between 62% and 83% accuracy at predicting whether each user will be fast or slow at completing the task. Beyond predicting performance, we demonstrate that using the same techniques, we can infer aspects of the user's personality factors, including locus of control, extraversion, and neuroticism. Further analyses show that strong results can be attained with limited observation time: in one case 95% of the final accuracy is gained after a quarter of the average task completion time. Overall, our findings show that interactions can provide information to the computer about its human collaborator, and establish a foundation for realizing mixed-initiative visual analytics systems.
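A minimal sketch of the prediction pipeline is shown below, with an assumed three-feature encoding of each user's interaction log; the paper evaluates several encodings and algorithms, and the data here are synthetic.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Sketch of the approach: encode each user's interaction log as a feature
# vector and train a standard classifier to predict fast vs. slow task
# completion. The three features assumed here (click count, mean pause,
# mouse-path length) are illustrative, not the paper's encodings.

rng = np.random.default_rng(3)
X = rng.standard_normal((40, 3))              # one row of features per user
y = rng.integers(0, 2, 40)                    # 1 = fast completer, 0 = slow

scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")  # paper reports 62-83% on real data
```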
Multipurpose Crew Restraints for Long Duration Space Flights
NASA Technical Reports Server (NTRS)
Whitmore, Mihriban; Baggerman, Susan; Ortiz, M. R.; Hua, L.; Sinnott, P.; Webb, L.
2004-01-01
With permanent human presence onboard the International Space Station (ISS), a crew will be living and working in microgravity, interfacing with their physical environment. Without optimum restraints and mobility aids (R&MA' s), the crewmembers may be handicapped for perfonning some of the on-orbit tasks. In addition to weightlessness, the confined nature of a spacecraft environment results in ergonomic challenges such as limited visibility and access to the activity area and may cause prolonged periods of unnatural postures. Thus, determining the right set of human factors requirements and providing an ergonomically designed environment are crucial to astronauts' well-being and productivity. The purpose of this project is to develop requirements and guidelines, and conceptual designs, for an ergonomically designed multi-purpose crew restraint. In order to achieve this goal, the project would involve development of functional and human factors requirements, design concept prototype development, analytical and computer modeling evaluations of concepts, two sets of micro gravity evaluations and preparation of an implementation plan. It is anticipated that developing functional and design requirements for a multi-purpose restraint would facilitate development of ergonomically designed restraints to accommodate the off-nominal but repetitive tasks, and minimize the performance degradation due to lack of optimum setup for onboard task performance. In addition, development of an ergonomically designed restraint concept prototype would allow verification and validation of the requirements defined. To date, we have identified "unique" tasks and areas of need, determine characteristics of "ideal" restraints, and solicit ideas for restraint and mobility aid concepts. Focus group meetings with representatives from training, safety, crew, human factors, engineering, payload developers, and analog environment representatives were key to assist in the development of a restraint concept based on previous flight experiences, the needs of future tasks, and crewmembers' preferences. Also, a catalog with existing IVA/EVA restraint and mobility aids has been developed. Other efforts included the ISS crew debrief data on restraints, compilation of data from MIR, Skylab and ISS on restraints, and investigating possibility of an in-flight evaluation of current restraint systems. Preliminary restraint concepts were developed and presented to long duration crewmembers and focus groups for feedback. Currently, a selection criterion is being refined for prioritizing the candidate concepts. Next steps include analytical and computer modeling evaluations of the selected candidate concepts, prototype development, and microgravity evaluations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholtz, Jean; Burtner, Edwin R.; Cook, Kristin A.
This course will introduce the field of Visual Analytics to HCI researchers and practitioners highlighting the contributions they can make to this field. Topics will include a definition of visual analytics along with examples of current systems, types of tasks and end users, issues in defining user requirements, design of visualizations and interactions, guidelines and heuristics, the current state of user-centered evaluations, and metrics for evaluation. We encourage designers, HCI researchers, and HCI practitioners to attend to learn how their skills can contribute to advancing the state of the art of visual analytics
Bolton, Matthew L.; Bass, Ellen J.; Siminiceanu, Radu I.
2012-01-01
Breakdowns in complex systems often occur as a result of system elements interacting in unanticipated ways. In systems with human operators, human-automation interaction associated with both normative and erroneous human behavior can contribute to such failures. Model-driven design and analysis techniques provide engineers with formal methods tools and techniques capable of evaluating how human behavior can contribute to system failures. This paper presents a novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models. The generated erroneous behavior is capable of replicating Hollnagel’s zero-order phenotypes of erroneous action for omissions, jumps, repetitions, and intrusions. Multiple phenotypical acts can occur in sequence, thus allowing for the generation of higher order phenotypes. The task behavior model pattern capable of generating erroneous behavior can be integrated into a formal system model so that system safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. We present benchmarks related to the size of the statespace and verification time of models to show how the erroneous human behavior generation process scales. We demonstrate the method with a case study: the operation of a radiation therapy machine. A potential problem resulting from a generated erroneous human action is discovered. A design intervention is presented which prevents this problem from occurring. We discuss how our method could be used to evaluate larger applications and recommend future paths of development. PMID:23105914
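A simplified sketch of the phenotype-generation step on a flat action sequence follows; real task analytic models are hierarchical and condition-guarded, and the action names here are hypothetical.

```python
# Simplified sketch of generating Hollnagel's zero-order phenotypes of
# erroneous action from a flat normative action sequence. Real task analytic
# models are hierarchical and condition-guarded; action names are hypothetical.

def zero_order_variants(actions, alphabet):
    """Yield (phenotype, sequence) pairs for single-deviation variants."""
    n = len(actions)
    for i in range(n):
        yield "omission", actions[:i] + actions[i + 1:]
        yield "repetition", actions[:i + 1] + actions[i:]
    for i in range(n):
        for j in range(n):
            if abs(i - j) > 1:                           # skip identity/adjacent cases
                yield "jump", actions[:i] + actions[j:]  # resume at position j
    for i in range(n + 1):
        for a in alphabet:
            yield "intrusion", actions[:i] + [a] + actions[i:]

normative = ["select_mode", "enter_dose", "confirm", "fire_beam"]
for phenotype, seq in zero_order_variants(normative, alphabet=["open_door"]):
    print(f"{phenotype:10s} {seq}")   # each variant can be model-checked
```

Composing several such single deviations in sequence yields the higher-order phenotypes mentioned above.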
Heterogeneous fractionation profiles of meta-analytic coactivation networks.
Laird, Angela R; Riedel, Michael C; Okoe, Mershack; Jianu, Radu; Ray, Kimberly L; Eickhoff, Simon B; Smith, Stephen M; Fox, Peter T; Sutherland, Matthew T
2017-04-01
Computational cognitive neuroimaging approaches can be leveraged to characterize the hierarchical organization of distributed, functionally specialized networks in the human brain. To this end, we performed large-scale mining across the BrainMap database of coordinate-based activation locations from over 10,000 task-based experiments. Meta-analytic coactivation networks were identified by jointly applying independent component analysis (ICA) and meta-analytic connectivity modeling (MACM) across a wide range of model orders (i.e., d=20-300). We then iteratively computed pairwise correlation coefficients for consecutive model orders to compare spatial network topologies, ultimately yielding fractionation profiles delineating how "parent" functional brain systems decompose into constituent "child" sub-networks. Fractionation profiles differed dramatically across canonical networks: some exhibited complex and extensive fractionation into a large number of sub-networks across the full range of model orders, whereas others exhibited little to no decomposition as model order increased. Hierarchical clustering was applied to evaluate this heterogeneity, yielding three distinct groups of network fractionation profiles: high, moderate, and low fractionation. BrainMap-based functional decoding of resultant coactivation networks revealed a multi-domain association regardless of fractionation complexity. Rather than emphasize a cognitive-motor-perceptual gradient, these outcomes suggest the importance of inter-lobar connectivity in functional brain organization. We conclude that high fractionation networks are complex and comprised of many constituent sub-networks reflecting long-range, inter-lobar connectivity, particularly in fronto-parietal regions. In contrast, low fractionation networks may reflect persistent and stable networks that are more internally coherent and exhibit reduced inter-lobar communication. Copyright © 2017 Elsevier Inc. All rights reserved.
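A toy sketch of the fractionation analysis is given below, substituting a random matrix for the BrainMap MACM data and two model orders for the full d = 20-300 range.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy sketch of the fractionation analysis: decompose the same data at two
# consecutive model orders and correlate spatial maps across orders to see
# how a "parent" component splits into "child" sub-networks. A random matrix
# stands in for the BrainMap MACM data; the 0.3 threshold is an assumption.

rng = np.random.default_rng(4)
data = rng.standard_normal((300, 2000))            # experiments x voxels

maps = {d: FastICA(n_components=d, random_state=0, max_iter=1000)
        .fit(data).components_ for d in (10, 20)}  # (d, voxels) spatial maps

corr = np.corrcoef(np.vstack([maps[10], maps[20]]))[:10, 10:]
children = (np.abs(corr) > 0.3).sum(axis=1)        # matched children per parent
print(children)   # high counts suggest extensive fractionation
```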
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Thurman, S. W.
1992-01-01
An approximate six-parameter analytic model for Earth-based differenced range measurements is presented and is used to derive a representative analytic approximation for differenced Doppler measurements. The analytical models are tasked to investigate the ability of these data types to estimate spacecraft geocentric angular motion, Deep Space Network station oscillator (clock/frequency) offsets, and signal-path calibration errors over a period of a few days, in the presence of systematic station location and transmission media calibration errors. Quantitative results indicate that a few differenced Doppler plus ranging passes yield angular position estimates with a precision on the order of 0.1 to 0.4 microrad, and angular rate precision on the order of 10 to 25 x 10^-12 rad/sec, assuming no a priori information on the coordinate parameters. Sensitivity analyses suggest that troposphere zenith delay calibration error is the dominant systematic error source in most of the tracking scenarios investigated; as expected, the differenced Doppler data were found to be much more sensitive to troposphere calibration errors than differenced range. By comparison, results computed using wideband and narrowband delta-VLBI under similar circumstances yielded angular precisions of 0.07 to 0.4 microrad, and angular rate precisions of 0.5 to 1.0 x 10^-12 rad/sec.
75 FR 71131 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-22
... impacts. To complete this task with scientific rigor, it will be necessary to collect high quality survey... instruments, methodologies, procedures, and analytical techniques for this task. Moreover, they have been pilot tested in 11 States. The tools and techniques were submitted for review, and were approved, by...
ERIC Educational Resources Information Center
Williams, George E.; Cuvo, Anthony J.
1986-01-01
Six severely handicapped clients were taught to perform upkeep responses on their air conditioner-heating unit, electric range, refrigerator, and electrical appliances. Results showed acquisition, long-term maintenance, and generalization of upkeep skills to a nontraining apartment. General task analyses were recommended for assessment and…
Mental Rotation and Diagrammatic Reasoning in Science
ERIC Educational Resources Information Center
Stieff, M.
2007-01-01
This article presents 3 studies that examine how students and experts employ mental rotation and a learned heuristic to solve chemistry tasks that involve spatial information. Results from Study 1 indicate that despite instruction in analytical strategies, students choose to employ mental rotation on canonical assessment tasks. In Study 2, experts…
Enhanced Fan Noise Modeling for Turbofan Engines
NASA Technical Reports Server (NTRS)
Krejsa, Eugene A.; Stone, James R.
2014-01-01
This report describes work by consultants to Diversitech Inc. for the NASA Glenn Research Center (GRC) to revise the fan noise prediction procedure based on fan noise data obtained in the 9- by 15-Foot Low-Speed Wind Tunnel at GRC. The purpose of this task is to begin development of an enhanced, analytical, more physics-based fan noise prediction method applicable to commercial turbofan propulsion systems. The method is to be suitable for programming into a computational model for eventual incorporation into NASA's current aircraft system noise prediction computer codes. The scope of this task is in alignment with the mission of the Propulsion 21 research effort conducted by the coalition of NASA, state government, industry, and academia to develop aeropropulsion technologies. A model for fan noise prediction was developed based on measured noise levels for the R4 rotor with several outlet guide vane variations and three fan exhaust areas. The model predicts the complete fan noise spectrum, including broadband noise, tones, and for supersonic tip speeds, combination tones. Both spectra and directivity are predicted. Good agreement with data was achieved for all fan geometries. Comparisons with data from a second fan, the ADP fan, also showed good agreement.
Yang, Su; Shi, Shixiong; Hu, Xiaobing; Wang, Minjie
2015-01-01
Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility of reducing the traditional high-order predictors to a first-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms traditional methods, whose inputs are confined to the data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, with no additional information regarding network topology needed, the approach has good scalability and is applicable to large-scale networks.
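A minimal sketch of this idea, using Lasso regression as the sparsity-inducing estimator (a stand-in for the paper's sparse representation) on synthetic flows:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Sketch of the sparse-representation idea: predict flow at a target sensor
# from lagged flows at all sensors; the L1 penalty zeroes out sensors that do
# not matter, revealing the spatial context automatically. Flows are synthetic
# placeholders, and the alpha value is an assumption.

rng = np.random.default_rng(5)
T, n_sensors, lag = 2000, 50, 1                  # first-order model, per finding (1)
flows = rng.standard_normal((T, n_sensors))

X = flows[:-lag]                                 # flows at time t - lag, all sensors
y = flows[lag:, 7]                               # target sensor (index 7, arbitrary)

model = Lasso(alpha=0.05).fit(X, y)
context = np.flatnonzero(model.coef_)            # sensors forming the spatial context
print(f"{context.size} of {n_sensors} sensors selected")
```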
Wada, Kazushige; Nittono, Hiroshi
2004-06-01
The reasoning process in the Wason selection task was examined by measuring card inspection times in the letter-number and drinking-age problems. Twenty-four students were asked to solve the problems presented on a computer screen. Only the card touched with a mouse pointer was visible, and the total exposure time of each card was measured. Participants were allowed to cancel their previous selections at any time. Although rethinking was encouraged, the cards once selected were rarely cancelled (10% of the total selections). Moreover, most of the cancelled cards were reselected (89% of the total cancellations). Consistent with previous findings, inspection times were longer for selected cards than for nonselected cards. These results suggest that card selections are determined largely by initial heuristic processes and rarely reversed by subsequent analytic processes. The present study gives further support to the heuristic-analytic dual process theory.
A Learning Framework for Winner-Take-All Networks with Stochastic Synapses.
Mostafa, Hesham; Cauwenberghs, Gert
2018-06-01
Many recent generative models make use of neural networks to transform the probability distribution of a simple low-dimensional noise process into the complex distribution of the data. This raises the question of whether biological networks operate along similar principles to implement a probabilistic model of the environment through transformations of intrinsic noise processes. The intrinsic neural and synaptic noise processes in biological networks, however, are quite different from the noise processes used in current abstract generative networks. This, together with the discrete nature of spikes and local circuit interactions among the neurons, raises several difficulties when using recent generative modeling frameworks to train biologically motivated models. In this letter, we show that a biologically motivated model based on multilayer winner-take-all circuits and stochastic synapses admits an approximate analytical description. This allows us to use the proposed networks in a variational learning setting where stochastic backpropagation is used to optimize a lower bound on the data log likelihood, thereby learning a generative model of the data. We illustrate the generality of the proposed networks and learning technique by using them in a structured output prediction task and a semisupervised learning task. Our results extend the domain of application of modern stochastic network architectures to networks where synaptic transmission failure is the principal noise mechanism.
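A minimal sketch of a single winner-take-all layer with stochastic synapses is shown below; the stacked network and the variational training on a lower bound of the data log likelihood are omitted, and all sizes and probabilities are illustrative.

```python
import numpy as np

# Sketch of one winner-take-all (WTA) layer with stochastic synapses: each
# synapse transmits with probability p, and only the neuron with the largest
# summed input fires. Training of the stacked network is omitted.

rng = np.random.default_rng(6)

def stochastic_wta(x, W, p=0.5):
    """Forward pass: Bernoulli synaptic failures, then hard winner-take-all."""
    mask = rng.random(W.shape) < p               # which synapses transmit
    activation = (W * mask) @ x
    out = np.zeros(W.shape[0])
    out[np.argmax(activation)] = 1.0             # one-hot winner
    return out

x = rng.random(20)                               # presynaptic activity
W = rng.standard_normal((8, 20))                 # 8 competing neurons
samples = np.stack([stochastic_wta(x, W) for _ in range(1000)])
print(samples.mean(0))                           # empirical winner distribution
```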
Data Services and Transnational Access for European Geosciences Multi-Scale Laboratories
NASA Astrophysics Data System (ADS)
Funiciello, Francesca; Rosenau, Matthias; Sagnotti, Leonardo; Scarlato, Piergiorgio; Tesei, Telemaco; Trippanera, Daniele; Spires, Chris; Drury, Martyn; Kan-Parker, Mirjam; Lange, Otto; Willingshofer, Ernst
2016-04-01
The EC policy for research in the new millennium supports the development of European-scale research infrastructures. In this perspective, the existing research infrastructures are going to be integrated with the objective to increase their accessibility and to enhance the usability of their multidisciplinary data. Building up integrated Earth Sciences infrastructures in Europe is the mission of the Implementation Phase (IP) of the European Plate Observing System (EPOS) project (2015-2019). The integration of European multiscale laboratories - analytical, experimental petrology and volcanology, magnetic and analogue laboratories - plays a key role in this context and represents a specific task of EPOS IP. Within EPOS IP work package 16 (WP16), European geosciences multiscale laboratories are to be linked, merging local infrastructures into a coherent and collaborative network. In particular, EPOS IP WP16 task 4 "Data services" aims to standardize data and data products, already existing and newly produced by the participating laboratories, and make them available through a new digital platform. The following data and repositories have been selected for the purpose: 1) analytical and properties data a) on volcanic ash from explosive eruptions, of interest to the aviation industry, meteorological and government institutes, b) on magmas in the context of eruption and lava flow hazard evaluation, and c) on rock systems of key importance in mineral exploration and mining operations; 2) experimental data describing: a) rock and fault properties of importance for modelling and forecasting natural and induced subsidence, seismicity and associated hazards, b) rock and fault properties relevant for modelling the containment capacity of rock systems for CO2, energy sources and wastes, c) crustal and upper mantle rheology as needed for modelling sedimentary basin formation and crustal stress distributions, d) the composition, porosity, permeability, and frackability of reservoir rocks of interest in relation to unconventional resources and geothermal energy; 3) a repository of analogue models of tectonic processes, from the plate to the reservoir scale, relevant to the understanding of Earth dynamics, geo-hazards and geo-energy; 4) paleomagnetic data, which are crucial a) for understanding the evolution of sedimentary basins and associated resources, and b) for charting geo-hazard frequency. EPOS IP WP16 task 5 aims to create mechanisms and procedures for easy trans-national access to multiscale laboratory facilities. Moreover, the same task will coordinate all the activities in a pilot phase to test, validate and consolidate the aforementioned services and to provide a proof of concept for what will be offered beyond the completion of EPOS IP.
Human, Lauren J; Thorson, Katherine R; Woolley, Joshua D; Mendes, Wendy Berry
2017-04-01
Intranasal administration of the hypothalamic neuropeptide oxytocin (OT) has, in some studies, been associated with positive effects on social perception and cognition. Similarly, positive emotion inductions can improve a range of perceptual and performance-based behaviors. In this exploratory study, we examined how OT administration and positive emotion inductions interact in their associations with social and analytical performance. Participants (N=124) were randomly assigned to receive an intranasal spray of OT (40 IU) or placebo and then viewed one of three videos designed to engender one of the following emotion states: social warmth, pride, or an affectively neutral state. Following the emotion induction, participants completed social perception and analytical tasks. There were no significant main effects of OT condition on social perception tasks, failing to replicate prior research, or on analytical performance. Further, OT condition and positive emotion inductions did not interact with each other in their associations with social perception performance. However, OT condition and positive emotion manipulations did significantly interact in their associations with analytical performance. Specifically, combining positive emotion inductions with OT administration was associated with worse analytical performance, with the pride induction no longer benefiting performance and the warmth induction resulting in worse performance. In sum, we found little evidence for main or interactive effects of OT on social perception but preliminary evidence that OT administration may impair analytical performance when paired with positive emotion inductions. Copyright © 2017 Elsevier Inc. All rights reserved.
The European Gender Equality Index: Conceptual and Analytical Issues
ERIC Educational Resources Information Center
Bericat, Eduardo
2012-01-01
This article presents a composite indicator designed to measure and compare existing structural gender equality in the countries of the European Union. The construction of an index is always a complex task which requires making a great many important conceptual, analytical and empirical decisions. This complexity explains the wide variety of…
Challenges of Using Learning Analytics Techniques to Support Mobile Learning
ERIC Educational Resources Information Center
Arrigo, Marco; Fulantelli, Giovanni; Taibi, Davide
2015-01-01
Evaluation of Mobile Learning remains an open research issue, especially as regards the activities that take place outside the classroom. In this context, Learning Analytics can provide answers, and offer the appropriate tools to enhance Mobile Learning experiences. In this poster we introduce a task-interaction framework, using learning analytics…
ERIC Educational Resources Information Center
Murphy, P. Karen; Firetto, Carla M.; Wei, Liwei; Li, Mengyi; Croninger, Rachel M. V.
2016-01-01
Many American students struggle to perform even basic comprehension of text, such as locating information, determining the main idea, or supporting details of a story. Even more students are inadequately prepared to complete more complex tasks, such as critically or analytically interpreting information in text or making reasoned decisions from…
ERIC Educational Resources Information Center
Feng, Z. Vivian; Buchman, Joseph T.
2012-01-01
The potential of replacing petroleum fuels with renewable biofuels has drawn significant public interest. Many states have imposed biodiesel mandates or incentives to use commercial biodiesel blends. We present an inquiry-driven experiment where students are given the tasks to gather samples, develop analytical methods using various instrumental…
Analytic Networks in Music Task Definition.
ERIC Educational Resources Information Center
Piper, Richard M.
For a student to acquire the conceptual systems of a discipline, the designer must reflect that structure or analytic network in his curriculum. The four networks identified for music and used in the development of the Southwest Regional Laboratory (SWRL) Music Program are the variable-value, the whole-part, the process-stage, and the class-member…
1980-11-01
[The record contains only extraction residue from the report's tables of workload-measurement methods (occlusion; single- and multiple-measure primary task; math modeling; physiological measures). Recoverable quoted text includes: "... modeling methodology; and (4) validation of the analytic/predictive methodology in a system design, development, and test effort."]
Alfredsson, Jayne; Plichart, Patrick; Zary, Nabil
2012-01-01
Research on computer-supported scoring of assessments in health care education has mainly focused on automated scoring. Little attention has been given to how informatics can support the currently predominant human-based grading approach. This paper reports steps taken to develop a model for a computer-supported scoring process that focuses on optimizing a task that was previously undertaken without computer support. The model was also implemented in the open source assessment platform TAO in order to study its benefits. The ability to score test takers anonymously, analytics on grader reliability, and a more time-efficient process are examples of the observed benefits. Computer-supported scoring will increase the quality of the assessment results.
Semi-analytical model of cross-borehole flow experiments for fractured medium characterization
NASA Astrophysics Data System (ADS)
Roubinet, D.; Irving, J.; Day-Lewis, F. D.
2014-12-01
The study of fractured rocks is extremely important in a wide variety of research fields where the fractures and faults can represent either rapid access to some resource of interest or potential pathways for the migration of contaminants in the subsurface. Identification of their presence and determination of their properties are critical and challenging tasks that have led to numerous fracture characterization methods. Among these methods, cross-borehole flowmeter analysis aims to evaluate fracture connections and hydraulic properties from vertical-flow-velocity measurements conducted in one or more observation boreholes under forced hydraulic conditions. Previous studies have demonstrated that analysis of these data can provide important information on fracture connectivity, transmissivity, and storativity. Estimating these properties requires the development of analytical and/or numerical modeling tools that are well adapted to the complexity of the problem. Quantitative analysis of cross-borehole flowmeter experiments, in particular, requires modeling formulations that: (i) can be adapted to a variety of fracture and experimental configurations; (ii) can take into account interactions between the boreholes because their radii of influence may overlap; and (iii) can be readily cast into an inversion framework that allows for not only the estimation of fracture hydraulic properties, but also an assessment of estimation error. To this end, we present a new semi-analytical formulation for cross-borehole flow in fractured media that links transient vertical-flow velocities measured in one or a series of observation wells during hydraulic forcing to the transmissivity and storativity of the fractures intersected by these wells. Our model addresses the above needs and provides a flexible and computationally efficient semi-analytical framework having strong potential for future adaptation to more complex configurations. The proposed modeling approach is demonstrated in the context of sensitivity analysis for a relatively simple two-fracture synthetic problem, as well as in the context of field-data analysis for fracture connectivity and estimation of corresponding hydraulic properties.
NASA Astrophysics Data System (ADS)
Zhang, Shou-ping; Xin, Xiao-kang
2017-07-01
Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency response, and intelligent optimization methods can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as the optimization search tool and an analytic solution of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: it can accurately determine pollutant amounts and positions for both single and multiple pollution sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with analytic results for single-source amount and position identification, with relative errors no greater than 5%. For cases with multiple point sources and multiple unknowns, some error remains in the computed results because many possible combinations of pollution sources exist. However, with previous experience used to narrow the search scope, the relative errors of the identification results are less than 5%, which shows that the established source identification model can be used to direct emergency responses.
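The objective-function construction described above can be sketched in a few lines. The following Python example is a minimal illustration, not the article's implementation: it assumes an instantaneous point release in a one-dimensional river (advection, dispersion, first-order decay) and uses SciPy's differential_evolution as a stand-in for the basic genetic algorithm; all rates, geometry, and observation times are invented for the example.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Analytic solution of the 1-D unsteady water quality equation for an
# instantaneous point release of mass M at position x0 at time 0:
#   C(x,t) = M / (A*sqrt(4*pi*D*t)) * exp(-(x - x0 - u*t)^2 / (4*D*t)) * exp(-k*t)
def concentration(x, t, M, x0, u=0.5, D=50.0, k=1e-5, A=50.0):
    return (M / (A * np.sqrt(4 * np.pi * D * t))
            * np.exp(-(x - x0 - u * t) ** 2 / (4 * D * t))
            * np.exp(-k * t))

# Hypothetical observations at a monitoring station downstream, generated
# from a "true" source (M = 1000 kg released at x0 = 200 m).
x_obs, t_obs = 2000.0, np.array([1800.0, 3600.0, 5400.0, 7200.0])
c_obs = concentration(x_obs, t_obs, M=1000.0, x0=200.0)

# Objective function: squared misfit between modelled and observed values.
def misfit(params):
    M, x0 = params
    return np.sum((concentration(x_obs, t_obs, M, x0) - c_obs) ** 2)

# differential_evolution stands in for the paper's basic genetic algorithm.
result = differential_evolution(misfit, bounds=[(1.0, 5000.0), (0.0, 1000.0)],
                                seed=0)
print(result.x)  # should recover approximately (1000, 200)
```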
Task analysis exemplified: the process of resolving unfinished business.
Greenberg, L S; Foerster, F S
1996-06-01
The steps of a task-analytic research program designed to identify the in-session performances involved in resolving lingering bad feelings toward a significant other are described. A rational-empirical methodology of repeatedly cycling between rational conjecture and empirical observations is demonstrated as a method of developing an intervention manual and the components of client processes of resolution. A refined model of the change process developed by these procedures is validated by comparing 11 successful and 11 unsuccessful performances. Four performance components-intense expression of feeling, expression of need, shift in representation of other, and self-validation or understanding of the other-were found to discriminate between resolution and nonresolution performances. These components were measured on 4 process measures: the Structural Analysis of Social Behavior, the Experiencing Scale, the Client's Emotional Arousal Scale, and a need scale.
NASA Technical Reports Server (NTRS)
Moua, Cheng M.; Cox, Timothy H.; McWherter, Shaun C.
2008-01-01
The Quiet Spike(TradeMark) F-15B flight research program investigated supersonic shock reduction using a 24-ft telescoping nose boom on an F-15B airplane. The program goal was to collect flight data for model validation up to 1.8 Mach. In the area of stability and controls, the primary concerns were to assess the potential destabilizing effect of the oversized nose boom on the stability, controllability, and handling qualities of the airplane and to ensure adequate stability margins across the entire research flight envelope. This paper reports on the stability and control analytical methods, flight envelope clearance approach, and flight test results of the F-15B telescoping nose boom configuration. Also discussed are brief pilot commentary on typical piloting tasks and refueling tasks.
Algorithmic mechanisms for reliable crowdsourcing computation under collusion.
Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A; Pareja, Daniel
2015-01-01
We consider a computing system where a master processor assigns a task for execution to worker processors that may collude. We model the workers' decision of whether to comply (compute the task) or not (return a bogus result to save the computation cost) as a game among workers. That is, we assume that workers are rational in a game-theoretic sense. We identify analytically the parameter conditions for a unique Nash Equilibrium where the master obtains the correct result. We also evaluate experimentally mixed equilibria aiming to attain better reliability-profit trade-offs. For a wide range of parameter values that may be used in practice, our simulations show that, in fact, both master and workers are better off using a pure equilibrium where no worker cheats, even under collusion, and even for colluding behaviors that involve deviating from the game.
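The compliance game lends itself to a compact numerical check. The sketch below is a simplified, collusion-free reading of such a master-worker game, with hypothetical payoff names (payment WP, computation cost WC, fine WF, verification probability p); it derives the minimum verification probability that makes "no worker cheats" a pure Nash equilibrium, which is the flavor of condition the paper identifies analytically.

```python
# Minimal sketch of the master-worker compliance game. Parameter names are
# hypothetical: the master verifies an answer with probability p; a worker
# that complies pays computation cost WC and receives payment WP; a worker
# caught cheating pays fine WF.
def comply_utility(WP, WC):
    return WP - WC                      # paid, minus the cost of computing

def cheat_utility(WP, WF, p):
    return (1 - p) * WP - p * WF        # paid if unverified, fined if caught

def min_verification_prob(WP, WC, WF):
    # Comply is a best response (all-comply pure Nash equilibrium) when
    # WP - WC >= (1 - p)*WP - p*WF, i.e. p >= WC / (WP + WF).
    return WC / (WP + WF)

WP, WC, WF = 10.0, 2.0, 50.0
p_star = min_verification_prob(WP, WC, WF)
print(f"verify with p >= {p_star:.3f} for 'no worker cheats' to be an equilibrium")
assert comply_utility(WP, WC) >= cheat_utility(WP, WF, p_star)
```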
Preparation of Morpheus Vehicle for Vacuum Environment Testing
NASA Technical Reports Server (NTRS)
Sandoval, Armando
2016-01-01
The main objective for this summer 2016 tour was to prepare the Morpheus vehicle for its upcoming test inside Plum Brook's vacuum chamber at NASA John H. Glenn Research Center. My contributions to this project were mostly analytical in nature: providing numerical models to validate test data, generating computer-aided analyses for the structural support of the vehicle's engine, and designing a vacuum can to protect the high-speed camera used during testing. Furthermore, I was also tasked with designing a toroidal spray bar system for a tank.
NASA Astrophysics Data System (ADS)
Novikov, Dmitrii K.; Diligenskii, Dmitrii S.
2018-01-01
The article considers the operation of a squeeze film damper with elastic rings. This type of damper is widely used in gas turbine engine supports. Nevertheless, modern analytical solutions have a number of limitations. The article examines the behavior of simple hydrodynamic damping systems and analyzes the applicability of fluid-solid interaction simulation for defining the properties of a hydrodynamic damper with elastic rings (an "Allison ring"). Recommendations are given on the fluid-structure interaction analysis of the hydrodynamic damper with elastic rings.
Collaborative Web-Enabled GeoAnalytics Applied to OECD Regional Data
NASA Astrophysics Data System (ADS)
Jern, Mikael
Recent advances in web-enabled graphics technologies have the potential to make a dramatic impact on developing collaborative geovisual analytics (GeoAnalytics). In this paper, tools are introduced that help establish progress initiatives at international and sub-national levels aimed at measuring and collaborating, through statistical indicators, on economic, social and environmental developments, and at engaging both statisticians and the public in such activities. Given the global dimension of such a task, the "dream" of building a repository of progress indicators, where experts and public users can use GeoAnalytics collaborative tools to compare situations for two or more countries, regions or local communities, could be accomplished. While the benefits of GeoAnalytics tools are many, it remains a challenge to adapt these dynamic visual tools to the Internet: for example, dynamic web-enabled animation that enables statisticians to explore temporal, spatial and multivariate demographic data from multiple perspectives, discover interesting relationships, share their incremental discoveries with colleagues and finally communicate selected relevant knowledge to the public. These discoveries often emerge through the diverse backgrounds and experiences of expert domains and are precious in a creative analytic reasoning process. In this context, we introduce a demonstrator, "OECD eXplorer", a customized tool for interactively analyzing and collaborating on gained insights and discoveries, based on a novel story mechanism that captures, re-uses and shares task-related explorative events.
User-Centered Evaluation of Visual Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholtz, Jean C.
Visual analytics systems are becoming very popular. More domains now use interactive visualizations to analyze the ever-increasing amount and heterogeneity of data. More novel visualizations are being developed for more tasks and users. We need to ensure that these systems can be evaluated to determine that they are both useful and usable. A user-centered evaluation methodology for visual analytics needs to be developed for these systems. While many of the typical human-computer interaction (HCI) evaluation methodologies can be applied as is, others will need modification. Additionally, new functionality in visual analytics systems needs new evaluation methodologies. There is a difference between usability evaluations and user-centered evaluations. Usability looks at the efficiency, effectiveness, and satisfaction of users carrying out tasks with software applications. User-centered evaluation looks more specifically at the utility the software provides to its users. This is reflected in the evaluations done and in the metrics used. In the visual analytics domain this is very challenging, as users are most likely experts in a particular domain, the tasks they do are often not well defined, the software they use needs to support large amounts of different kinds of data, and tasks often last for months. These difficulties are discussed further in the section on user-centered evaluation. Our goal is to provide a discussion of user-centered evaluation practices for visual analytics, including existing practices that can be carried out and new methodologies and metrics that need to be developed and agreed upon by the visual analytics community. The material provided here should be of use for both researchers and practitioners in the field of visual analytics. Researchers and practitioners in HCI who are interested in visual analytics will also find this information useful, along with a discussion of the changes that need to be made to current HCI practices to make them more suitable to visual analytics. A history of analysis techniques and problems is provided, as well as an introduction to user-centered evaluation and various evaluation techniques, for readers from different disciplines. Understanding these techniques is imperative if we wish to support analysis in the visual analytics software we develop. Currently, the evaluations that are conducted and published for visual analytics software are very informal and consist mainly of comments from users or potential users. Our goal is to help researchers in visual analytics conduct more formal user-centered evaluations. While these are time-consuming and expensive to carry out, the outcomes of these studies will have a defining impact on the field of visual analytics and help point the direction for future features and visualizations. While many researchers view user-centered evaluation as a less-than-exciting area of work, the opposite is true. First, the goal of user-centered evaluation is to help visual analytics software developers, researchers, and designers improve their solutions and discover creative ways to better accommodate their users. Working with the users is extremely rewarding as well. While we use the term "users" in almost all situations, there is a wide variety of users who all need to be accommodated. Moreover, the domains that use visual analytics are varied and expanding; just understanding the complexities of a number of these domains is exciting. Researchers are trying out different visualizations and interactions as well, and of course the size and variety of data are expanding rapidly. User-centered evaluation in this context is rapidly changing. There are no standard processes and metrics, and thus those of us working on user-centered evaluation must be creative in our work with both the users and the researchers and developers.
Development of a Subjective Evaluation Tool for Assessing Marksmanship Training Effectiveness
2013-01-28
[Extraction residue mixing acknowledgments with the abstract. Recoverable text: a subjective evaluation tool was developed as an alternative to lengthy and resource-demanding training effectiveness evaluations; a task analytic approach was used to break down the marksmanship domain, as presented in the USMC Rifle Marksmanship Manual, into sub-tasks that were converted into training-task items.]
Alvarez-Casado, Enrique; Hernandez-Soto, Aquiles; Tello, Sandoval; Gual, Rosa
2012-01-01
Occupational musculoskeletal disorders of the upper limbs, and their impact and prevalence in the work force, are the subject of many investigations in almost all production fields. However, the exposure of urban gardeners to this kind of risk factor has not been well studied so far. The plant varieties used in the parks, the tools gardeners use, and the actions necessary for park maintenance all have an impact on the biomechanical overload of the upper limbs. Additionally, analyzing exposure to biomechanical overload of the upper limbs in gardening work is a complex task, mainly because the activity is highly variable and follows an annual cycle. For this reason an analytical model for risk exposure evaluation is necessary. In this research, the work activity of 29 gardeners in 3 urban parks of Barcelona was analyzed. Each park has a specific action plan related to the quantity and typology of vegetal species, their classification, and the season of the year. On-site observation and video recording sessions were conducted. The video recordings covered workers without any prior musculoskeletal disorder and with a minimum work experience of 5 years. In addition, saturation time, defined as the ratio of repetitive working hours to hours of effective work, was analyzed. Using the tasks recorded on video, the biomechanical overload of the upper limbs was analyzed with the OCRA Checklist method. A methodological procedure to analyze risk exposure over an annual working cycle is proposed. The results provide information that can help in the assignment of tasks and the training of staff, as well as in recommendations for urban landscape design, all with the goal of decreasing the risk of developing work-related musculoskeletal disorders.
Jones, Graham R D; Albarede, Stephanie; Kesseler, Dagmar; MacKenzie, Finlay; Mammen, Joy; Pedersen, Morten; Stavelin, Anne; Thelen, Marc; Thomas, Annette; Twomey, Patrick J; Ventura, Emma; Panteghini, Mauro
2017-06-27
External Quality Assurance (EQA) is vital to ensure acceptable analytical quality in medical laboratories. A key component of an EQA scheme is an analytical performance specification (APS) for each measurand that a laboratory can use to assess the extent of deviation of the obtained results from the target value. A consensus conference held in Milan in 2014 has proposed three models to set APS and these can be applied to setting APS for EQA. A goal arising from this conference is the harmonisation of EQA APS between different schemes to deliver consistent quality messages to laboratories irrespective of location and the choice of EQA provider. At this time there are wide differences in the APS used in different EQA schemes for the same measurands. Contributing factors to this variation are that the APS in different schemes are established using different criteria, applied to different types of data (e.g. single data points, multiple data points), used for different goals (e.g. improvement of analytical quality; licensing), and with the aim of eliciting different responses from participants. This paper provides recommendations from the European Federation of Laboratory Medicine (EFLM) Task and Finish Group on Performance Specifications for External Quality Assurance Schemes (TFG-APSEQA) and on clear terminology for EQA APS. The recommended terminology covers six elements required to understand APS: 1) a statement on the EQA material matrix and its commutability; 2) the method used to assign the target value; 3) the data set to which APS are applied; 4) the applicable analytical property being assessed (i.e. total error, bias, imprecision, uncertainty); 5) the rationale for the selection of the APS; and 6) the type of the Milan model(s) used to set the APS. The terminology is required for EQA participants and other interested parties to understand the meaning of meeting or not meeting APS.
The Task-Based Approach in Language Teaching
ERIC Educational Resources Information Center
Sánchez, Aquilino
2004-01-01
The Task-Based Approach (TBA) has gained popularity in the field of language teaching since the last decade of the 20th century, and significant scholars have joined the discussion and increased the number of analytical studies on the issue. Nevertheless, experimental research is poor, and the tendency of some of the scholars is nowadays shifting…
ERIC Educational Resources Information Center
Herrmann, Daniel; Felfe, Jörg
2013-01-01
Transformational leadership is supposed to enhance employees' creativity. However, results of meta-analytic research on the relationships between transformational leadership and creativity fell short of expectations. In addition, the coefficients showed a huge variability. In this study, it was argued that relevant task and employee…
Investigating Student Choices in Performing Higher-Level Comprehension Tasks Using TED
ERIC Educational Resources Information Center
Bianchi, Francesca; Marenzi, Ivana
2016-01-01
The current paper describes a first experiment in the use of TED talks and open tagging exercises to train higher-level comprehension skills, and in the automatic logging of students' actions to investigate their choices while performing analytical tasks. The experiment took advantage of an interactive learning platform--LearnWeb--that…
ERIC Educational Resources Information Center
Ramful, Ajay; Ho, Siew Yin; Lowrie, Tom
2015-01-01
This inquiry presents two fine-grained case studies of students demonstrating different levels of cognitive functioning in relation to bilateral symmetry and reflection. The two students were asked to solve four sets of tasks and articulate their reasoning in task-based interviews. The first participant, Brittany, focused essentially on three…
Light aircraft crash safety program
NASA Technical Reports Server (NTRS)
Thomson, R. G.; Hayduk, R. J.
1974-01-01
NASA is embarked upon research and development tasks aimed at providing the general aviation industry with a reliable crashworthy airframe design technology. The goals of the NASA program are: reliable analytical techniques for predicting the nonlinear behavior of structures; significant design improvements of airframes; and simulated full-scale crash test data. The analytical tools will include both simplified procedures for estimating energy absorption characteristics and more complex computer programs for analysis of general airframe structures under crash loading conditions. The analytical techniques being developed both in-house and under contract are described, and a comparison of some analytical predictions with experimental results is shown.
Finding Waldo: Learning about Users from their Interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Eli T.; Ottley, Alvitta; Zhao, Helen
Visual analytics is inherently a collaboration between human and computer. However, in current visual analytics systems, the computer has limited means of knowing about its users and their analysis processes. While existing research has shown that a user's interactions with a system reflect a large amount of the user's reasoning process, there has been limited advancement in developing automated, real-time techniques that mine interactions to learn about the user. In this paper, we demonstrate that we can accurately predict a user's task performance and infer some user personality traits by using machine learning techniques to analyze interaction data. Specifically, we conduct an experiment in which participants perform a visual search task, and we apply well-known machine learning algorithms to three encodings of the users' interaction data. We achieve, depending on algorithm and encoding, between 62% and 96% accuracy at predicting whether each user will be fast or slow at completing the task. Beyond predicting performance, we demonstrate that using the same techniques we can infer aspects of the user's personality, including locus of control, extraversion, and neuroticism. Further analyses show that strong results can be attained with limited observation time; in some cases, 82% of the final accuracy is gained after a quarter of the average task completion time. Overall, our findings show that interactions can provide information to the computer about its human collaborator, and establish a foundation for realizing mixed-initiative visual analytics systems.
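The prediction pipeline can be illustrated with a toy version of the same design. The sketch below uses synthetic stand-in features and scikit-learn's random forest with cross-validation; the actual study used three specific interaction-data encodings and several algorithms, none of which are reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for an interaction-log encoding: one row per participant,
# e.g. counts of clicks, pans, zooms, and mean pause length. These features
# are illustrative, not the encodings used in the study.
n_users = 40
X = rng.normal(size=(n_users, 4))
# Label: whether the user completed the visual search task fast (1) or
# slow (0); correlated with the first feature so the example has signal.
y = (X[:, 0] + 0.5 * rng.normal(size=n_users) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```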
Mirel, Barbara
2009-02-13
Current usability studies of bioinformatics tools suggest that tools for exploratory analysis support some tasks related to finding relationships of interest, but not the deep causal insights necessary for formulating plausible and credible hypotheses. To better understand design requirements for gaining these causal insights in systems biology analyses, a longitudinal field study of 15 biomedical researchers was conducted. Researchers interacted with the same protein-protein interaction tools to discover possible disease mechanisms for further experimentation. Findings reveal patterns in scientists' exploratory and explanatory analysis and show that the tools positively supported a number of well-structured query and analysis tasks. But for several of the scientists' more complex, higher-order ways of knowing and reasoning, the tools did not offer adequate support. Results show that, for a better fit with scientists' cognition during exploratory analysis, systems biology tools need to better match scientists' processes for validating, for making the transition from classification to model-based reasoning, and for engaging in causal mental modelling. As the next great frontier in bioinformatics usability, tool designs for exploratory systems biology analysis need to move beyond the successes already achieved in supporting formulaic query and analysis tasks and reduce current mismatches with several of scientists' higher-order analytical practices. The implications of the results for tool design are discussed.
Priority queues with bursty arrivals of incoming tasks
NASA Astrophysics Data System (ADS)
Masuda, N.; Kim, J. S.; Kahng, B.
2009-03-01
Recently increased accessibility of large-scale digital records enables one to monitor human activities such as the interevent time distributions between two consecutive visits to a web portal by a single user, two consecutive emails sent out by a user, two consecutive library loans made by a single individual, etc. Interestingly, those distributions exhibit a universal behavior, D(τ) ~ τ^(-δ), where τ is the interevent time and δ ≃ 1 or 3/2. The universal behaviors have been modeled via the waiting-time distribution of a task in a queue operating based on priority; the waiting time follows a power-law distribution P_w(τ) ~ τ^(-α) with either α = 1 or 3/2 depending on the details of the queuing dynamics. In these models, the number of incoming tasks in a unit time interval has been assumed to follow a Poisson-type distribution. For an email system, however, the measured number of emails delivered to a mailbox in a unit time follows a power-law distribution with general exponent γ. For this case, we obtain analytically the exponent α, which is not necessarily 1 or 3/2 and takes nonuniversal values depending on γ. We develop a generating function formalism to obtain the exponent α, which is distinct from the continuous-time approximation used in previous studies.
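A small simulation makes the setup concrete. The following sketch, with invented parameters, feeds a priority queue with heavy-tailed bursts of tasks and records waiting times, whose empirical tail can then be compared against the analytical exponent α(γ) derived in the paper.

```python
import heapq
import numpy as np

rng = np.random.default_rng(1)
gamma = 3.0            # arrival-burst power-law exponent (illustrative)
steps = 100_000

queue, waits = [], []
for t in range(steps):
    # Bursty arrivals: the number of incoming tasks per step is heavy-tailed
    # (floor of a Pareto variate; shape gamma - 1 gives a tail exponent gamma).
    for _ in range(int(rng.pareto(gamma - 1))):
        heapq.heappush(queue, (-rng.random(), t))  # random priority, max first
    if queue:                                      # serve one task per step
        _, arrival = heapq.heappop(queue)
        waits.append(t - arrival)

# Tail of the empirical waiting-time distribution P_w(tau); its slope on a
# log-log plot approximates the exponent alpha to compare with theory.
hist, edges = np.histogram(waits, bins=np.logspace(0, 4, 25))
print(hist)
```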
BioStar models of clinical and genomic data for biomedical data warehouse design
Wang, Liangjiang; Ramanathan, Murali
2008-01-01
Biomedical research is now generating large amounts of data, ranging from clinical test results to microarray gene expression profiles. The scale and complexity of these datasets give rise to substantial challenges in data management and analysis. It is highly desirable that data warehousing and online analytical processing technologies can be applied to biomedical data integration and mining. The major difficulty probably lies in the task of capturing and modelling diverse biological objects and their complex relationships. This paper describes multidimensional data modelling for biomedical data warehouse design. Since the conventional models such as star schema appear to be insufficient for modelling clinical and genomic data, we develop a new model called BioStar schema. The new model can capture the rich semantics of biomedical data and provide greater extensibility for the fast evolution of biological research methodologies. PMID:18048122
NASA Astrophysics Data System (ADS)
Perelló, Josep; Masoliver, Jaume; Kasprzak, Andrzej; Kutner, Ryszard
2008-09-01
Social, technological, and economic time series are divided by events which are usually assumed to be random, albeit with some hierarchical structure. It is well known that the interevent statistics observed in these contexts differ from the Poissonian profile by being long-tailed, with resting and active periods interwoven. Understanding the mechanisms that generate consistent statistics has therefore become a central issue. The approach we present is taken from the continuous-time random-walk formalism and represents an analytical alternative to the models of nontrivial priority that have recently been proposed. Our analysis also goes one step further by looking at the multifractal structure of the interevent times of human decisions. We here analyze the intertransaction time intervals of several financial markets. We observe that the empirical data exhibit a subtle multifractal behavior. Our model explains this structure by taking the pausing-time density in the form of a superstatistics where the integral kernel quantifies the heterogeneous nature of the executed tasks. A stretched exponential kernel provides a multifractal profile valid for a certain limited range. A suggested heuristic analytical profile is capable of covering a broader region.
NASA Technical Reports Server (NTRS)
Nagaraja, K. S.; Kraft, R. H.
1999-01-01
The HSCT Flight Controls Group has developed longitudinal control laws, utilizing PTC aeroelastic flexible models, to minimize aeroservoelastic interaction effects for a number of flight conditions. The control law design process resulted in a higher-order controller and utilized a large number of sensors distributed along the body to minimize the flexibility effects. Processes were developed to implement these higher-order control laws for performing the dynamic gust loads and flutter analyses. The processes and their validation were documented in Reference 2 for a selected flight condition. The analytical results for additional flight conditions are presented in this document for further validation.
A Boltzmann machine for the organization of intelligent machines
NASA Technical Reports Server (NTRS)
Moed, Michael C.; Saridis, George N.
1989-01-01
In the present technological society, there is a major need to build machines that can execute intelligent tasks in uncertain environments with minimum interaction with a human operator. Although some designers have built smart robots utilizing heuristic ideas, there is no systematic approach to designing such machines in an engineering manner. Recently, cross-disciplinary research from the fields of computers, systems, AI, and information theory has served to set the foundations of the emerging area of the design of intelligent machines. Since 1977 Saridis has been developing an approach, defined as Hierarchical Intelligent Control, designed to organize, coordinate, and execute anthropomorphic tasks by a machine with minimum interaction with a human operator. This approach utilizes analytical (probabilistic) models to describe and control the various functions of the intelligent machine, structured by the intuitively defined principle of Increasing Precision with Decreasing Intelligence (IPDI) (Saridis 1979). This principle, even though it resembles the managerial structure of organizational systems (Levis 1988), has been derived on an analytic basis by Saridis (1988). The purpose here is to derive analytically a Boltzmann machine suitable for optimal connection of nodes in a neural net (Fahlman, Hinton, Sejnowski 1985). This machine will then serve to search for the optimal design of the organization level of an intelligent machine. To accomplish this, some mathematical theory of intelligent machines is first outlined. Then definitions are given of the variables associated with the principle, such as machine intelligence, machine knowledge, and precision (Saridis, Valavanis 1988). A procedure to establish the Boltzmann machine on an analytic basis is then presented and illustrated by an example in designing the organization level of an intelligent machine. A new search technique, the Modified Genetic Algorithm, is presented and proved to converge to the minimum of a cost function. Finally, simulations show the effectiveness of a variety of search techniques for the intelligent machine.
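The search step can be illustrated with a generic Boltzmann-style (Metropolis-annealing) minimization over binary configurations. The sketch below is not Saridis's formulation or the Modified Genetic Algorithm; the cost function and annealing schedule are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative cost over a binary vector of candidate node connections:
# symmetric pairwise interaction costs plus a penalty on total activity.
n = 20
J = rng.normal(size=(n, n))
J = (J + J.T) / 2

def cost(s):
    return s @ J @ s + 4.0 * (s.sum() - n / 2) ** 2

# Boltzmann-style search: flip one node at a time, accept with the Metropolis
# probability exp(-delta/T), and anneal the temperature toward zero.
s = rng.integers(0, 2, size=n).astype(float)
T = 5.0
for sweep in range(2000):
    i = rng.integers(n)
    s_new = s.copy()
    s_new[i] = 1 - s_new[i]
    delta = cost(s_new) - cost(s)
    if delta < 0 or rng.random() < np.exp(-delta / T):
        s = s_new
    T *= 0.999                         # annealing schedule (assumed)

print("final configuration:", s.astype(int), "cost:", round(cost(s), 2))
```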
NASA Astrophysics Data System (ADS)
Hamann, H.; Jimenez Marianno, F.; Klein, L.; Albrecht, C.; Freitag, M.; Hinds, N.; Lu, S.
2015-12-01
A major challenge in leveraging big geospatial data sets is the ability to quickly integrate multiple data sources into physical and statistical models and run these models in real time. A geospatial data platform called Physical Analytics Information Repository and Services (PAIRS) has been developed on top of an open source hardware and software stack to manage terabytes of data. A new data interpolation and regridding scheme is implemented in which any geospatial data layer can be associated with a set of global grids whose resolution doubles for consecutive layers. Each pixel on the PAIRS grid has an index that is a combination of location and time stamp. The indexing allows quick access to data sets that are part of the global data layers, retrieving only the data of interest. PAIRS takes advantage of a parallel processing framework (Hadoop) in a cloud environment to ingest, curate, and analyze the data sets while being very robust and stable. The data are stored in a distributed NoSQL database (HBase) across multiple servers; data upload and retrieval are parallelized, with the original analytics task broken up into smaller areas/volumes, analyzed independently, and then reassembled for the original geographical area. The differentiating aspect of PAIRS is the ability to accelerate model development across large geographical regions and spatial resolutions ranging from 0.1 m up to hundreds of kilometers. System performance is benchmarked on real-time automated data ingestion and retrieval of MODIS and Landsat data layers. The data layers are curated for sensor error, verified for correctness, and analyzed statistically to detect local anomalies. Multi-layer queries enable PAIRS to filter different data layers based on specific conditions (e.g., analyzing the flooding risk of a property based on topography, the soil's ability to hold water, and forecasted precipitation) or to retrieve information about locations that share similar weather and vegetation patterns during extreme weather events such as heat waves.
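The nested-grid indexing idea can be sketched compactly. The following Python example is an assumption-laden illustration, not the PAIRS implementation: it maps a latitude/longitude to a cell on a global grid whose resolution doubles per level and combines that cell with a timestamp into a composite key.

```python
from datetime import datetime, timezone

def grid_index(lat, lon, level):
    """Row/column of a pixel on a nested global grid whose resolution
    doubles at each consecutive level (level 0 = 2 x 1 cells)."""
    n_cols = 2 ** (level + 1)          # longitude cells at this level
    n_rows = 2 ** level                # latitude cells at this level
    col = min(int((lon + 180.0) / 360.0 * n_cols), n_cols - 1)
    row = min(int((lat + 90.0) / 180.0 * n_rows), n_rows - 1)
    return row, col

def pixel_key(lat, lon, level, when):
    """Composite key: spatial index plus timestamp, so a single scan can
    retrieve only the cells and times of interest."""
    row, col = grid_index(lat, lon, level)
    return (level, row, col, int(when.timestamp()))

key = pixel_key(41.1, -73.8, level=12,
                when=datetime(2015, 7, 1, tzinfo=timezone.utc))
print(key)
```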
Planning, evaluation and analytical studies in planetary quarantine and spacecraft sterilization
NASA Technical Reports Server (NTRS)
1972-01-01
The technical and analytical support used to aid in developing requirements for planetary quarantine are presented. The investigation was divided into 8 work tasks which are presented in tabular form. Data include methods of sterilization, safety margins for quarantine, revision of contamination logs for Mars and Venus, and estimates of encapsulated and 'free' microbial burden.
ERIC Educational Resources Information Center
Kelly, Stephanie; Rice, Christopher; Wyatt, Bryce; Ducking, Johnny; Denton, Zachary
2015-01-01
There is global concern regarding the increased prevalence of math anxiety among college students, which is credited for a decrease in analytical degree completion rates and lower self-confidence among students in their ability to complete analytical tasks in the real world. The present study identified that, as expected, displays of instructional…
The overall goal of this task is to help reduce the uncertainties in the assessment of environmental health and human exposure by better characterizing hazardous wastes through cost-effective analytical methods. Research projects are directed towards the applied development and ...
ERIC Educational Resources Information Center
Newcomer, Kathryn; Brass, Clinton T.
2016-01-01
The "performance movement" has been a subject of enthusiasm and frustration for evaluators. Performance measurement, data analytics, and program evaluation have been treated as different tasks, and those addressing them speak their own languages in their own circles. We suggest that situating performance measurement and data analytics…
Heave-pitch-roll analysis and testing of air cushion landing systems
NASA Technical Reports Server (NTRS)
Boghani, A. B.; Captain, K. M.; Wormley, D. N.
1978-01-01
The analytical tools (analysis and computer simulation) needed to explain and predict the dynamic operation of air cushion landing systems (ACLS) are described. The following tasks were performed: development of improved analytical models for the fan and the trunk; formulation of a heave-pitch-roll analysis for the complete ACLS; development of a general purpose computer simulation to evaluate the landing and taxi performance of an ACLS-equipped aircraft; and verification and refinement of the analysis by comparison with test data obtained through lab testing of a prototype cushion. Demonstrations of the simulation capabilities through typical landing and taxi simulations of an ACLS aircraft are given. Initial results show that fan dynamics have a major effect on system performance. Comparison with lab test data (zero forward speed) indicates that the analysis can predict most of the key static and dynamic parameters (pressure, deflection, acceleration, etc.) within a margin of 10 to 25 percent.
Weeks, Margaret R; Li, Jianghong; Lounsbury, David; Green, Helena Danielle; Abbott, Maryann; Berman, Marcie; Rohena, Lucy; Gonzalez, Rosely; Lang, Shawn; Mosher, Heather
2017-12-01
Achieving community-level goals to eliminate the HIV epidemic requires coordinated efforts through community consortia with a common purpose to examine and critique their own HIV testing and treatment (T&T) care system and build effective tools to guide their efforts to improve it. Participatory system dynamics (SD) modeling offers conceptual, methodological, and analytical tools to engage diverse stakeholders in systems conceptualization and visual mapping of dynamics that undermine community-level health outcomes and identify those that can be leveraged for systems improvement. We recruited and engaged a 25-member multi-stakeholder Task Force, whose members provide or utilize HIV-related services, to participate in SD modeling to examine and address problems of their local HIV T&T service system. Findings from the iterative model building sessions indicated Task Force members' increasingly complex understanding of the local HIV care system and demonstrated their improved capacity to visualize and critique multiple models of the HIV T&T service system and identify areas of potential leverage. Findings also showed members' enhanced communication and consensus in seeking deeper systems understanding and options for solutions. We discuss implications of using these visual SD models for subsequent simulation modeling of the T&T system and for other community applications to improve system effectiveness.
Golightly, Andrew; Wilkinson, Darren J.
2011-01-01
Computational systems biology is concerned with the development of detailed mechanistic models of biological processes. Such models are often stochastic and analytically intractable, containing uncertain parameters that must be estimated from time course data. In this article, we consider the task of inferring the parameters of a stochastic kinetic model defined as a Markov (jump) process. Inference for the parameters of complex nonlinear multivariate stochastic process models is a challenging problem, but we find here that algorithms based on particle Markov chain Monte Carlo turn out to be a very effective computationally intensive approach to the problem. Approximations to the inferential model based on stochastic differential equations (SDEs) are considered, as well as improvements to the inference scheme that exploit the SDE structure. We apply the methodology to a Lotka–Volterra system and a prokaryotic auto-regulatory network. PMID:23226583
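The forward model at the heart of such inference can be shown in miniature. The sketch below implements an exact Gillespie simulation of the stochastic Lotka-Volterra network, the kind of simulator a particle MCMC scheme repeatedly calls to propagate particles; the rate constants are illustrative, not those used in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_lv(x0, y0, c, t_max):
    """Exact stochastic simulation of the Lotka-Volterra jump process:
    prey birth (c[0]), predation (c[1]), predator death (c[2])."""
    t, x, y = 0.0, x0, y0
    times, states = [t], [(x, y)]
    while t < t_max:
        rates = np.array([c[0] * x, c[1] * x * y, c[2] * y])
        total = rates.sum()
        if total == 0:                     # both species extinct
            break
        t += rng.exponential(1.0 / total)  # time to the next reaction
        r = rng.choice(3, p=rates / total) # which reaction fires
        if r == 0:
            x += 1
        elif r == 1:
            x -= 1; y += 1
        else:
            y -= 1
        times.append(t)
        states.append((x, y))
    return np.array(times), np.array(states)

# Rate constants are invented for illustration.
times, states = gillespie_lv(x0=100, y0=100, c=(0.5, 0.0025, 0.3), t_max=30.0)
print(states[-1])
```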
Developing Healthcare Data Analytics APPs with Open Data Science Tools.
Hao, Bibo; Sun, Wen; Yu, Yiqin; Xie, Guotong
2017-01-01
Recent advances in big data analytics provide more flexible, efficient, and open tools for researchers to gain insight from healthcare data. Many of these tools require researchers to develop programs in languages such as Python or R, a skill set not grasped by many researchers in the healthcare data analytics area. To make data science more approachable, we explored existing tools and developed a practice that can help data scientists convert existing analytics pipelines to user-friendly analytics APPs with rich interactions and real-time analysis features. With this practice, data scientists can develop customized analytics pipelines as APPs in Jupyter Notebook and disseminate them to other researchers easily, and researchers can benefit from the shared notebooks to perform analysis tasks or reproduce research results much more easily.
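A minimal version of the notebook-to-APP practice might look as follows, assuming ipywidgets is available in the notebook environment; the data frame and column names are synthetic, and this illustrates the general pattern rather than the authors' toolchain.

```python
# Minimal sketch: expose one step of an analytics pipeline as an interactive
# control inside Jupyter Notebook (requires ipywidgets; data are synthetic).
import numpy as np
import pandas as pd
from ipywidgets import interact

df = pd.DataFrame({
    "age": np.random.default_rng(0).integers(20, 90, size=500),
    "stay_days": np.random.default_rng(1).exponential(5.0, size=500),
})

@interact(min_age=(20, 90, 5))
def summarize(min_age=50):
    """Re-runs in real time as the slider moves."""
    cohort = df[df["age"] >= min_age]
    return cohort["stay_days"].describe()
```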
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, Kris A.; Scholtz, Jean; Whiting, Mark A.
The VAST Challenge has been a popular venue for academic and industry participants for over ten years. Many participants comment that the majority of their time in preparing VAST Challenge entries is spent discovering elements in their software environments that need to be redesigned in order to solve the given task. Fortunately, there is no need to wait until the VAST Challenge is announced to test out software systems. The Visual Analytics Benchmark Repository contains all past VAST Challenge tasks, data, solutions and submissions. This paper details the various types of evaluations that may be conducted using the Repository information. We describe how developers can do informal evaluations of various aspects of their visual analytics environments using VAST Challenge information. Aspects that can be evaluated include the appropriateness of the software for various tasks, the various data types and formats that can be accommodated, the effectiveness and efficiency of the process supported by the software, and the intuitiveness of the visualizations and interactions. Researchers can compare their visualizations and interactions to those submitted to determine novelty. In addition, the paper provides pointers to various guidelines that software teams can use to evaluate the usability of their software. While these evaluations are not a replacement for formal evaluation methods, this information can be extremely useful during the development of visual analytics environments.
Modeling strategic use of human computer interfaces with novel hidden Markov models
Mariano, Laura J.; Poore, Joshua C.; Krum, David M.; Schwartz, Jana L.; Coskren, William D.; Jones, Eric M.
2015-01-01
Immersive software tools are virtual environments designed to give their users an augmented view of real-world data and ways of manipulating that data. As virtual environments, every action users make while interacting with these tools can be carefully logged, as can the state of the software and the information it presents to the user, giving these actions context. This data provides a high-resolution lens through which dynamic cognitive and behavioral processes can be viewed. In this report, we describe new methods for the analysis and interpretation of such data, utilizing a novel implementation of the Beta Process Hidden Markov Model (BP-HMM) for analysis of software activity logs. We further report the results of a preliminary study designed to establish the validity of our modeling approach. A group of 20 participants were asked to play a simple computer game, instrumented to log every interaction with the interface. Participants had no previous experience with the game's functionality or rules, so the activity logs collected during their naïve interactions capture patterns of exploratory behavior and skill acquisition as they attempted to learn the rules of the game. Pre- and post-task questionnaires probed for self-reported styles of problem solving, as well as task engagement, difficulty, and workload. We jointly modeled the activity log sequences collected from all participants using the BP-HMM approach, identifying a global library of activity patterns representative of the collective behavior of all the participants. Analyses show systematic relationships between both pre- and post-task questionnaires, self-reported approaches to analytic problem solving, and metrics extracted from the BP-HMM decomposition. Overall, we find that this novel approach to decomposing unstructured behavioral data within software environments provides a sensible means for understanding how users learn to integrate software functionality for strategic task pursuit. PMID:26191026
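For readers unfamiliar with the underlying machinery, the sketch below shows the classical discrete-HMM forward pass used to score an action sequence under a latent-state model. It is the standard building block only, not the Beta Process HMM of the paper, and all probabilities are invented.

```python
import numpy as np

# Standard discrete HMM forward algorithm: log-likelihood of a sequence of
# logged interface actions under a latent-state model. All matrices below
# are illustrative (2 latent states, 3 observable action types).
A = np.array([[0.9, 0.1],       # latent-state transitions (e.g. explore/exploit)
              [0.2, 0.8]])
B = np.array([[0.6, 0.3, 0.1],  # emission probabilities over the 3 actions
              [0.1, 0.3, 0.6]])
pi = np.array([0.5, 0.5])       # initial state distribution

def forward_loglik(obs):
    alpha = pi * B[:, obs[0]]
    log_scale = 0.0
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        alpha /= s                  # rescale to avoid numerical underflow
        log_scale += np.log(s)
    return log_scale + np.log(alpha.sum())

print(forward_loglik([0, 0, 1, 2, 2, 2, 1, 0]))
```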
The manual control of vehicles undergoing slow transitions in dynamic characteristics
NASA Technical Reports Server (NTRS)
Moriarty, T. E.
1974-01-01
The manual control of a vehicle with slowly time-varying dynamics was studied to develop the analytic and computer techniques necessary for the study of time-varying systems. The human operator is considered as he controls a time-varying plant in which the changes are neither abrupt nor so slow that the time variations are unimportant. An experiment in which pilots controlled the longitudinal mode of a simulated time-varying aircraft is described. The vehicle changed from a pure double integrator to a damped second-order system, either instantaneously or smoothly over time intervals of 30, 75, or 120 seconds. The regulator task consisted of trying to null the error term resulting from injected random disturbances with bandwidths of 0.8, 1.4, and 2.0 radians per second. Each of the twelve experimental conditions was replicated ten times. It is shown that the pilot's performance in the time-varying task is essentially equivalent to his performance in stationary tasks which correspond to various points in the transition. A rudimentary model for the pilot-vehicle-regulator is presented.
Age related differences in the strategies used by middle aged adults to solve a block design task.
Rozencwajg, P; Cherfi, M; Ferrandez, A M; Lautrey, J; Lemoine, C; Loarer, E
2005-01-01
In the present study, it was proposed to investigate the effects of aging on the strategies used to solve a block design task and to establish whether these strategies may be associated with differential patterns of ability. Two groups of subjects, 30 young adults (aged 20-35 years) and 30 middle-aged adults (aged 45-60 years) were set a computer version of the Kohs task and a battery of tests. An age-related decrease in fluid intelligence (Gf) and visual-spatial ability (Gv) was observed, along with the fact that most of the older subjects used a global strategy rather than a synthetic one. On the other hand, while continuing to use strategies of the analytic type, the older subjects looked more frequently at the model and scored high on crystallized intelligence (Gc). These findings are discussed from two different points of view: the theory of hierarchical stimuli and the hypothesis that metacognitive ability, which is thought to rely on Gc, may increase with age, and thus compensate for the loss of Gf and Gv.
Computing the Social Brain Connectome Across Systems and States.
Alcalá-López, Daniel; Smallwood, Jonathan; Jefferies, Elizabeth; Van Overwalle, Frank; Vogeley, Kai; Mars, Rogier B; Turetsky, Bruce I; Laird, Angela R; Fox, Peter T; Eickhoff, Simon B; Bzdok, Danilo
2017-05-18
Social skills probably emerge from the interaction between different neural processing levels. However, social neuroscience is fragmented into highly specialized, rarely cross-referenced topics. The present study attempts a systematic reconciliation by deriving a social brain definition from neural activity meta-analyses on social-cognitive capacities. The social brain was characterized by meta-analytic connectivity modeling evaluating coactivation in task-focused brain states and physiological fluctuations evaluating correlations in task-free brain states. Network clustering proposed a functional segregation into (1) lower sensory, (2) limbic, (3) intermediate, and (4) high associative neural circuits that together mediate various social phenomena. Functional profiling suggested that no brain region or network is exclusively devoted to social processes. Finally, nodes of the putative mirror-neuron system were coherently cross-connected during tasks and more tightly coupled to embodied simulation systems rather than abstract emulation systems. These first steps may help reintegrate the specialized research agendas in the social and affective sciences.
Custers, Eugène J F M
2013-08-01
Recently, human reasoning, problem solving, and decision making have been viewed as products of two separate systems: "System 1," the unconscious, intuitive, or nonanalytic system, and "System 2," the conscious, analytic, or reflective system. This view has penetrated the medical education literature, yet the idea of two independent dichotomous cognitive systems is not entirely without problems. This article outlines the difficulties of this "two-system view" and presents an alternative, developed by K. R. Hammond and colleagues, called cognitive continuum theory (CCT). CCT is characterized by three key assumptions. First, human reasoning, problem solving, and decision making can be arranged on a cognitive continuum, with pure intuition at one end, pure analysis at the other, and a large middle ground called "quasirationality." Second, the nature and requirements of the cognitive task, as perceived by the person performing the task, determine to a large extent whether a task will be approached more intuitively or more analytically. Third, for optimal task performance, this approach needs to match the cognitive properties and requirements of the task. Finally, the author makes the case that CCT is better able than the two-system view to describe medical problem solving and clinical reasoning, and that it provides clear clues for how to organize training in clinical reasoning.
A Complex Systems Investigation of Group Work Dynamics in L2 Interactive Tasks
ERIC Educational Resources Information Center
Poupore, Glen
2018-01-01
Working with Korean university-level learners of English, this study provides a detailed analytical comparison of 2 task work groups that were video-recorded, with 1 group scoring very high and the other relatively low based on the results of a Group Work Dynamic (GWD) measuring instrument. Adopting a complexity theory (CT) perspective and…
Does Time-on-Task Estimation Matter? Implications for the Validity of Learning Analytics Findings
ERIC Educational Resources Information Center
Kovanovic, Vitomir; Gaševic, Dragan; Dawson, Shane; Joksimovic, Srecko; Baker, Ryan S.; Hatala, Marek
2015-01-01
With widespread adoption of Learning Management Systems (LMS) and other learning technology, large amounts of data--commonly known as trace data--are readily accessible to researchers. Trace data has been extensively used to calculate time that students spend on different learning activities--typically referred to as time-on-task. These measures…
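Why the estimation strategy matters can be seen in a few lines of pandas: the same trace yields very different time-on-task totals depending on how long inter-event gaps are handled. The events, threshold, and capping rule below are all illustrative.

```python
import pandas as pd

# Trace data: one row per logged LMS event for a single student (synthetic).
events = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2015-03-01 10:00", "2015-03-01 10:04", "2015-03-01 10:06",
        "2015-03-01 13:30", "2015-03-01 13:33",  # long gap = likely a break
    ]),
})

gaps = events["timestamp"].diff().dt.total_seconds().dropna()

naive = gaps.sum() / 60                       # counts the 3.4 h gap as "on task"
capped = gaps.clip(upper=30 * 60).sum() / 60  # cap each gap at a 30-min threshold
print(f"naive: {naive:.0f} min, capped: {capped:.0f} min")
```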
ERIC Educational Resources Information Center
Geisler, Cheryl
2018-01-01
Coding, the analytic task of assigning codes to nonnumeric data, is foundational to writing research. A rich discussion of methodological pluralism has established the foundational importance of systematicity in the task of coding, but less attention has been paid to the equally important commitment to language complexity. Addressing the interplay…
An Imperfect Dopaminergic Error Signal Can Drive Temporal-Difference Learning
Potjans, Wiebke; Diesmann, Markus; Morrison, Abigail
2011-01-01
An open problem in the field of computational neuroscience is how to link synaptic plasticity to system-level learning. A promising framework in this context is temporal-difference (TD) learning. Experimental evidence that supports the hypothesis that the mammalian brain performs temporal-difference learning includes the resemblance of the phasic activity of the midbrain dopaminergic neurons to the TD error and the discovery that cortico-striatal synaptic plasticity is modulated by dopamine. However, as the phasic dopaminergic signal does not reproduce all the properties of the theoretical TD error, it is unclear whether it is capable of driving behavior adaptation in complex tasks. Here, we present a spiking temporal-difference learning model based on the actor-critic architecture. The model dynamically generates a dopaminergic signal with realistic firing rates and exploits this signal to modulate the plasticity of synapses as a third factor. The predictions of our proposed plasticity dynamics are in good agreement with experimental results with respect to dopamine, pre- and post-synaptic activity. An analytical mapping from the parameters of our proposed plasticity dynamics to those of the classical discrete-time TD algorithm reveals that the biological constraints of the dopaminergic signal entail a modified TD algorithm with self-adapting learning parameters and an adapting offset. We show that the neuronal network is able to learn a task with sparse positive rewards as fast as the corresponding classical discrete-time TD algorithm. However, the performance of the neuronal network is impaired with respect to the traditional algorithm on a task with both positive and negative rewards and breaks down entirely on a task with purely negative rewards. Our model demonstrates that the asymmetry of a realistic dopaminergic signal enables TD learning when learning is driven by positive rewards but not when driven by negative rewards. PMID:21589888
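The effect of an asymmetric error signal can be reproduced in a toy tabular setting. The sketch below is not the authors' spiking actor-critic model: it is a plain TD(0) learner on an invented five-state chain, with negative errors compressed toward a floor to mimic the limited dynamic range of dopamine dips.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tabular TD(0) on a 5-state chain with a reward at the right end. The TD
# error is made asymmetric (negative errors clipped at a floor) to mimic
# phasic dopamine; all constants are illustrative.
n_states, alpha, gamma_d, floor = 5, 0.1, 0.9, -0.2
V = np.zeros(n_states)

for episode in range(500):
    s = 0
    while s < n_states - 1:
        s_next = s + 1 if rng.random() < 0.9 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        delta = r + gamma_d * V[s_next] - V[s]
        delta = max(delta, floor)      # asymmetric, dopamine-like error signal
        V[s] += alpha * delta
        s = s_next

print(np.round(V, 2))                  # values increase toward the reward
```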
Recognition and source memory as multivariate decision processes.
Banks, W P
2000-07-01
Recognition memory, source memory, and exclusion performance are three important domains of study in memory, each with its own findings, its specific theoretical developments, and its separate research literature. It is proposed here that results from all three domains can be treated with a single analytic model. This article shows how to generate a comprehensive memory representation based on multidimensional signal detection theory and how to make predictions for each of these paradigms using decision axes drawn through the space. The detection model is simpler than the comparable multinomial model, it is more easily generalizable, and it does not make threshold assumptions. An experiment using the same memory set for all three tasks demonstrates the analysis and tests the model. The results show that some seemingly complex relations between the paradigms derive from an underlying simplicity of structure.
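The geometry of the model can be sketched directly. The example below, with invented means and criteria, places old items from two sources and new items in a two-dimensional strength space and reads recognition off a summed-strength decision axis and source memory off the orthogonal difference axis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each item is a point in a 2-D memory-strength space (strength w.r.t.
# source A, strength w.r.t. source B); all parameters are illustrative.
n = 10_000
old_A = rng.multivariate_normal([1.5, 0.5], np.eye(2), n)
old_B = rng.multivariate_normal([0.5, 1.5], np.eye(2), n)
new = rng.multivariate_normal([0.0, 0.0], np.eye(2), n)

# Recognition: decision axis along the summed strength, with one criterion.
crit = 1.0
hits = ((old_A.sum(1) > crit).mean() + (old_B.sum(1) > crit).mean()) / 2
false_alarms = (new.sum(1) > crit).mean()

# Source memory: orthogonal decision axis along the strength difference.
source_correct = ((old_A[:, 0] - old_A[:, 1] > 0).mean()
                  + (old_B[:, 1] - old_B[:, 0] > 0).mean()) / 2

print(f"recognition: hits={hits:.2f}, false alarms={false_alarms:.2f}; "
      f"source accuracy={source_correct:.2f}")
```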
Models and techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Meyer, J. F.
1982-01-01
Models, measures, and techniques for evaluating the effectiveness of aircraft computing systems were developed. By "effectiveness" in this context we mean the extent to which the user, i.e., a commercial air carrier, may expect to benefit from the computational tasks accomplished by a computing system in the environment of an advanced commercial aircraft. Thus, the concept of effectiveness involves aspects of system performance, reliability, and worth (value, benefit) which are appropriately integrated in the process of evaluating system effectiveness. Specifically, the primary objectives are: the development of system models that provide a basis for the formulation and evaluation of aircraft computer system effectiveness, the formulation of quantitative measures of system effectiveness, and the development of analytic and simulation techniques for evaluating the effectiveness of a proposed or existing aircraft computer.
B-dot algorithm steady-state motion performance
NASA Astrophysics Data System (ADS)
Ovchinnikov, M. Yu.; Roldugin, D. S.; Tkachev, S. S.; Penkov, V. I.
2018-05-01
Satellite attitude motion subject to the well-known B-dot magnetic control is considered. Unlike the majority of studies, the present work focuses on a slowly rotating spacecraft. The attitude and angular velocity acquired after detumbling the satellite are determined. This task is performed using two relatively simple geomagnetic field models. First, the satellite is considered in the simplified dipole model: an asymptotically stable rotation around the axis of maximum moment of inertia is found, together with the direction of this axis in inertial space and the rotation rate. This result is then refined using the direct dipole geomagnetic field model: the simple stable rotation transforms into a periodic motion, and the rotation rate is also refined. Numerical analysis with the gravitational torque and the inclined dipole model verifies the analytical results.
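The control law itself is compact. Below is a minimal sketch of the standard B-dot detumbling rule, in which the commanded magnetic dipole opposes the measured rate of change of the geomagnetic field in body axes; the gain and field values are illustrative assumptions.

```python
import numpy as np

# Classic B-dot detumbling law: m = -k * dB/dt, torque T = m x B.
# Gain, sample interval, and field readings are illustrative.
k = 1e4  # control gain (assumed)

def bdot_torque(B_now, B_prev, dt):
    """Control torque from two consecutive magnetometer readings (Tesla)."""
    B_dot = (B_now - B_prev) / dt      # finite-difference estimate of dB/dt
    m = -k * B_dot                     # commanded magnetic dipole moment
    return np.cross(m, B_now)          # torque exerted by the geomagnetic field

# Example: a slowly varying field sampled at 1 Hz
T = bdot_torque(np.array([2.0e-5, 0.0, 1.0e-5]),
                np.array([1.9e-5, 1.0e-6, 1.0e-5]), dt=1.0)
```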
NASA Technical Reports Server (NTRS)
Strelkov, S. A.; Sushkevich, T. A.
1983-01-01
Spatial frequency characteristics (SFC) and scattering functions were studied in two cases: a uniform horizontal layer with an absolutely black bottom, and an isolated layer. The mathematical model for these examples describes the horizontal heterogeneities of a light field, with regard to radiation polarization, in a three-dimensional planar atmosphere delimited by a heterogeneous surface with diffuse reflection. The perturbation method was used to obtain vector transfer equations corresponding to the linear and nonlinear systems of polarized radiation transfer. The SFC of the nonlinear system satisfy a parametric set of one-dimensional boundary value problems for the vector transfer equation and are expressed through the SFC of the linear approximation. As a consequence of the developed theory, formulas were obtained for the analytical calculation of albedo in the problem of polarized radiation propagation in a planetary atmosphere with a uniform Lambert bottom.
Analytical study of electrical disconnect system for use on manned and unmanned missions
NASA Technical Reports Server (NTRS)
Rosener, A. A.; Lenda, J. A.; Trummer, R. O.
1976-01-01
The objective of this contract is to establish an optimum electrical disconnect system design(s) for use on manned and unmanned missions. The purpose of the disconnect system is to electrically mate and demate the spacecraft to subsystem module interfaces to accomplish orbital operations. The results of Task 1 and Task 2 of the effort are presented. Task 1 involves the definition of the functional, operational, and environmental requirements for the connector system to support the leading prototype candidate concepts. Task 2 involves the documentation review and survey of available existing connector designs.
Rethinking Visual Analytics for Streaming Data Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crouser, R. Jordan; Franklin, Lyndsey; Cook, Kris
In the age of data science, the use of interactive information visualization techniques has become increasingly ubiquitous. From online scientific journals to the New York Times graphics desk, the utility of interactive visualization for both storytelling and analysis has become ever more apparent. As these techniques have become more readily accessible, the appeal of combining interactive visualization with computational analysis continues to grow. Arising out of a need for scalable, human-driven analysis, the primary objective of visual analytics systems is to capitalize on the complementary strengths of human and machine analysis, using interactive visualization as a medium for communication between the two. These systems leverage developments from the fields of information visualization, computer graphics, machine learning, and human-computer interaction to support insight generation in areas where purely computational analyses fall short. Over the past decade, visual analytics systems have generated remarkable advances in many historically challenging analytical contexts. These include areas such as modeling political systems [Crouser et al. 2012], detecting financial fraud [Chang et al. 2008], and cybersecurity [Harrison et al. 2012]. In each of these contexts, domain expertise and human intuition are a necessary component of the analysis. This intuition is essential to building trust in the analytical products, as well as supporting the translation of evidence into actionable insight. In addition, each of these examples also highlights the need for scalable analysis. In each case, it is infeasible for a human analyst to manually assess the raw information unaided, and the communication overhead to divide the task between a large number of analysts makes simple parallelism intractable. Regardless of the domain, visual analytics tools strive to optimize the allocation of human analytical resources, and to streamline the sensemaking process on data that is massive, complex, incomplete, and uncertain in scenarios requiring human judgment.
Ellinas, Christos; Allan, Neil; Durugbo, Christopher; Johansson, Anders
2015-01-01
Current societal requirements necessitate the effective delivery of complex projects that can do more while using less. Yet, recent large-scale project failures suggest that our ability to successfully deliver them is still in its infancy. Such failures can arise through various failure mechanisms; this work focuses on one such mechanism. Specifically, it examines the likelihood of a project sustaining a large-scale catastrophe, triggered by a single task failure and delivered via a cascading process. To do so, an analytical model was developed and tested on an empirical dataset by means of numerical simulation. This paper makes three main contributions. First, it provides a methodology to identify the tasks most capable of impacting a project. In doing so, it is noted that a significant number of tasks induce no cascades, while a handful are capable of triggering surprisingly large ones. Second, it illustrates that crude task characteristics cannot aid in identifying them, highlighting the complexity of the underlying process and the utility of this approach. Third, it draws parallels with systems encountered within the natural sciences by noting the emergence of self-organised criticality, commonly found within natural systems. These findings strengthen the need to account for the structural intricacies of a project's underlying task precedence structure, as they can provide the conditions upon which large-scale catastrophes materialise.
A reliability analysis tool for SpaceWire network
NASA Astrophysics Data System (ADS)
Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou
2017-04-01
SpaceWire is a standard for on-board satellite networks and a basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power, and fault protection. High reliability is a vital issue for spacecraft; therefore, it is very important to analyze and improve the reliability performance of the SpaceWire network. This paper deals with the problem of reliability modeling and analysis of SpaceWire networks. According to the function division of a distributed network, a task-based reliability analysis method is proposed: the reliability analysis of every task leads to a system reliability matrix, and the reliability of the network system is deduced by integrating all the reliability indexes in the matrix. Using this method, we developed a reliability analysis tool for SpaceWire networks based on VC, which also implements the computation schemes for the reliability matrix and multi-path-task reliability. Using this tool, we analyzed several cases on typical architectures, and the analytic results indicate that a redundant architecture has better reliability performance than a basic one. In practice, the dual redundancy scheme has been adopted for some key units to improve the reliability index of the system or task. This reliability analysis tool will have a directive influence on both task division and topology selection in the SpaceWire network system design phase.
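As a rough illustration of why the redundant architecture scores higher, here is a minimal series/parallel reliability calculation of the kind such a tool automates; the unit reliabilities are invented for the example.

```python
# Series/parallel reliability bookkeeping: a task succeeds if every unit on
# its path works (series); a dual-redundant unit works if at least one
# replica works (parallel). All numbers below are illustrative.

def series(reliabilities):
    out = 1.0
    for r in reliabilities:
        out *= r
    return out

def parallel(reliabilities):
    fail = 1.0
    for r in reliabilities:
        fail *= (1.0 - r)
    return 1.0 - fail

basic = series([0.99, 0.95, 0.98])                        # single-string path
redundant = series([0.99, parallel([0.95, 0.95]), 0.98])  # key unit duplicated
# redundant > basic, matching the finding that redundancy improves the index
```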
SIMRAND I- SIMULATION OF RESEARCH AND DEVELOPMENT PROJECTS
NASA Technical Reports Server (NTRS)
Miles, R. F.
1994-01-01
The Simulation of Research and Development Projects program (SIMRAND) aids in the optimal allocation of R&D resources needed to achieve project goals. SIMRAND models the system subsets or project tasks as various network paths to a final goal. Each path is described in terms of task variables such as cost per hour, cost per unit, availability of resources, etc. Uncertainty is incorporated by treating task variables as probabilistic random variables. SIMRAND calculates the measure of preference for each alternative network. The networks yielding the highest utility function (or certainty equivalence) are then ranked as the optimal network paths. SIMRAND has been used in several economic potential studies at NASA's Jet Propulsion Laboratory involving solar dish power systems and photovoltaic array construction. However, any project having tasks which can be reduced to equations and related by measures of preference can be modeled. SIMRAND analysis consists of three phases: reduction, simulation, and evaluation. In the reduction phase, analytical techniques from probability theory and simulation techniques are used to reduce the complexity of the alternative networks. In the simulation phase, a Monte Carlo simulation is used to derive statistics on the variables of interest for each alternative network path. In the evaluation phase, the simulation statistics are compared and the networks are ranked in preference by a selected decision rule. The user must supply project subsystems in terms of equations based on variables (for example, parallel and series assembly line tasks in terms of number of items, cost factors, time limits, etc). The associated cumulative distribution functions and utility functions for each variable must also be provided (allowable upper and lower limits, group decision factors, etc). SIMRAND is written in Microsoft FORTRAN 77 for batch execution and has been implemented on an IBM PC series computer operating under DOS.
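A minimal sketch of the simulation phase described above: sample uncertain task variables for each alternative network, score each trial with a utility function, and rank the alternatives. The task distributions and decision rule are illustrative assumptions, not SIMRAND's actual inputs.

```python
import random

# Monte Carlo ranking of alternative networks by expected utility.
# Each network is a list of tasks with uncertain cost, here modeled as
# (mean, spread) of a Gaussian; both the distributions and the utility
# (decision rule) are invented for the example.
random.seed(1)

networks = {
    "path_A": [(10.0, 2.0), (5.0, 1.0)],
    "path_B": [(8.0, 4.0), (6.0, 3.0)],
}

def utility(total_cost):
    return -total_cost  # simple decision rule: lower total cost preferred

def simulate(tasks, n_trials=10_000):
    scores = []
    for _ in range(n_trials):
        cost = sum(random.gauss(mu, sigma) for mu, sigma in tasks)
        scores.append(utility(cost))
    return sum(scores) / len(scores)   # estimated expected utility

ranking = sorted(networks, key=lambda k: simulate(networks[k]), reverse=True)
```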
ERIC Educational Resources Information Center
Polito, Vincent A., Jr.
2010-01-01
The objective of this research was to explore the possibilities of identifying knowledge style factors that could be used as central elements of a professional business analyst's (PBA) performance attributes at work for those decision makers that use advanced analytical technologies on decision making tasks. Indicators of knowledge style were…
ERIC Educational Resources Information Center
Alderson, R. Matt; Rapport, Mark D.; Kofler, Michael J.
2007-01-01
Deficient behavioral inhibition (BI) processes are considered a core feature of attention deficit/hyperactivity disorder (ADHD). This meta-analytic review is the first to examine the potential influence of a wide range of subject and task variable moderator effects on BI processes--assessed by the stop-signal paradigm--in children with ADHD…
The Effects of Incentives on Workplace Performance: A Meta-Analytic Review of Research Studies
ERIC Educational Resources Information Center
Condly, Steven J.; Clark, Richard E.; Stolovitch, Harold D.
2003-01-01
A meta-analytic review of all adequately designed field and laboratory research on the use of incentives to motivate performance is reported. Of approximately 600 studies, 45 qualified. The overall average effect of all incentive programs in all work settings and on all work tasks was a 22% gain in performance. Team-directed incentives had a…
ERIC Educational Resources Information Center
Kryjevskaia, Mila; Stetzer, MacKenzie R.; Grosz, Nathaniel
2014-01-01
We have applied the heuristic-analytic theory of reasoning to interpret inconsistencies in student reasoning approaches to physics problems. This study was motivated by an emerging body of evidence that suggests that student conceptual and reasoning competence demonstrated on one task often fails to be exhibited on another. Indeed, even after…
Sciacovelli, Laura; Panteghini, Mauro; Lippi, Giuseppe; Sumarac, Zorica; Cadamuro, Janne; Galoro, César Alex De Olivera; Pino Castro, Isabel Garcia Del; Shcolnik, Wilson; Plebani, Mario
2017-08-28
Improving the quality of laboratory testing requires a deep understanding of the many vulnerable steps involved in the total examination process (TEP), along with the identification of a hierarchy of risks and challenges that need to be addressed. From this perspective, the Working Group "Laboratory Errors and Patient Safety" (WG-LEPS) of the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) is focusing its activity on the implementation of an efficient tool for obtaining meaningful information on the risk of errors developing throughout the TEP, and for establishing reliable information about error frequencies and their distribution. More recently, the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) has created the Task and Finish Group "Performance specifications for the extra-analytical phases" (TFG-PSEP) for defining performance specifications for extra-analytical phases. Both the IFCC and EFLM groups are working to provide laboratories with a system to evaluate their performance and recognize the critical aspects where improvement actions are needed. A Consensus Conference was organized in Padova, Italy, in 2016 in order to bring together all the experts and interested parties to achieve a consensus for effective harmonization of quality indicators (QIs). A general agreement was achieved, and the main outcomes have been the release of a new version of the model of quality indicators (MQI), the approval of a criterion for establishing performance specifications, and the definition of the type of information that should be provided within the report to the clinical laboratories participating in the QIs project.
Rapport, Mark D; Alderson, R Matt; Kofler, Michael J; Sarver, Dustin E; Bolden, Jennifer; Sims, Valerie
2008-08-01
The current study investigated contradictory findings from recent experimental and meta-analytic studies concerning working memory deficits in ADHD. Working memory refers to the cognitive ability to temporarily store and mentally manipulate limited amounts of information for use in guiding behavior. Phonological (verbal) and visuospatial (nonverbal) working memory were assessed across four memory load conditions in 23 boys (12 ADHD, 11 typically developing) using tasks based on Baddeley's (Working memory, thought, and action, Oxford University Press, New York, 2007) working memory model. The model posits separate phonological and visuospatial storage and rehearsal components that are controlled by a single attentional controller (CE: central executive). A latent variable approach was used to partial task performance related to three variables of interest: phonological buffer/rehearsal loop, visuospatial buffer/rehearsal loop, and the CE attentional controller. ADHD-related working memory deficits were apparent across all three cognitive systems--with the largest magnitude of deficits apparent in the CE--even after controlling for reading speed, nonverbal visual encoding, age, IQ, and SES.
Integrating laboratory robots with analytical instruments--must it really be so difficult?
Kramer, G W
1990-09-01
Creating a reliable system from discrete laboratory instruments is often a task fraught with difficulties. While many modern analytical instruments are marvels of detection and data handling, attempts to create automated analytical systems incorporating such instruments are often frustrated by their human-oriented control structures and their egocentricity. The laboratory robot, while fully susceptible to these problems, extends such compatibility issues to the physical dimensions involving sample interchange, manipulation, and event timing. The workcell concept was conceived to describe the procedure and equipment necessary to carry out a single task during sample preparation. This notion can be extended to organize all operations in an automated system. Each workcell, no matter how complex its local repertoire of functions, must be minimally capable of accepting information (commands, data), returning information on demand (status, results), and being started, stopped, and reset by a higher level device. Even the system controller should have a mode where it can be directed by instructions from a higher level.
Analytic and heuristic processing influences on adolescent reasoning and decision-making.
Klaczynski, P A
2001-01-01
The normative/descriptive gap is the discrepancy between actual reasoning and traditional standards for reasoning. The relationship between age and the normative/descriptive gap was examined by presenting adolescents with a battery of reasoning and decision-making tasks. Middle adolescents (N = 76) performed closer to normative ideals than early adolescents (N = 66), although the normative/descriptive gap was large for both groups. Correlational analyses revealed that (1) normative responses correlated positively with each other, (2) nonnormative responses were positively interrelated, and (3) normative and nonnormative responses were largely independent. Factor analyses suggested that performance was based on two processing systems. The "analytic" system operates on "decontextualized" task representations and underlies conscious, computational reasoning. The "heuristic" system operates on "contextualized," content-laden representations and produces "cognitively cheap" responses that sometimes conflict with traditional norms. Analytic processing was more clearly linked to age and to intelligence than heuristic processing. Implications for cognitive development, the competence/performance issue, and rationality are discussed.
Robust reinforcement learning.
Morimoto, Jun; Doya, Kenji
2005-02-01
This letter proposes a new reinforcement learning (RL) paradigm that explicitly takes into account input disturbance as well as modeling errors. The use of environmental models in RL is quite popular for both offline learning using simulations and for online action planning. However, the difference between the model and the real environment can lead to unpredictable, and often unwanted, results. Based on the theory of H(infinity) control, we consider a differential game in which a "disturbing" agent tries to make the worst possible disturbance while a "control" agent tries to make the best control input. The problem is formulated as finding a min-max solution of a value function that takes into account the amount of the reward and the norm of the disturbance. We derive online learning algorithms for estimating the value function and for calculating the worst disturbance and the best control in reference to the value function. We tested the paradigm, which we call robust reinforcement learning (RRL), on the control task of an inverted pendulum. In the linear domain, the policy and the value function learned by online algorithms coincided with those derived analytically by the linear H(infinity) control theory. For a fully nonlinear swing-up task, RRL achieved robust performance with changes in the pendulum weight and friction, while a standard reinforcement learning algorithm could not deal with these changes. We also applied RRL to the cart-pole swing-up task, and a robust swing-up policy was acquired.
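To make the min-max formulation concrete, the toy sketch below runs a value-iteration-style TD backup for a discrete game in which a controller maximizes and a penalized disturbance minimizes. It is only a caricature of the idea; the letter develops RRL in continuous time with H(infinity) machinery, and all states, actions, and parameters here are invented.

```python
import numpy as np

# Toy min-max value backup: controller u maximizes, disturbance w minimizes
# reward minus a disturbance penalty (eta * |w|^2), on a small chain.
n = 5
V = np.zeros(n)
alpha, gamma, eta = 0.1, 0.95, 0.5   # eta penalizes large disturbances
rng = np.random.default_rng(0)

for _ in range(5000):
    s = int(rng.integers(n))
    best = -np.inf
    for u in (-1, 1):                         # controller: best action
        worst = np.inf
        for w in (-1, 0, 1):                  # disturbance: worst perturbation
            s_next = int(np.clip(s + u + w, 0, n - 1))
            r = 1.0 if s_next == n - 1 else 0.0
            worst = min(worst, r - eta * w * w + gamma * V[s_next])
        best = max(best, worst)
    V[s] += alpha * (best - V[s])             # asynchronous min-max TD backup
```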
García-Pérez, Miguel A.; Alcalá-Quintana, Rocío
2017-01-01
Psychophysical data from dual-presentation tasks are often collected with the two-alternative forced-choice (2AFC) response format, asking observers to guess when uncertain. For an analytical description of performance, psychometric functions are then fitted to data aggregated across the two orders/positions in which stimuli were presented. Yet, order effects make aggregated data uninterpretable, and the bias with which observers guess when uncertain precludes separating sensory from decisional components of performance. A ternary response format in which observers are also allowed to report indecision should fix these problems, but a comparative analysis with the 2AFC format has never been conducted. In addition, fitting ternary data separated by presentation order poses serious challenges. To address these issues, we extended the indecision model of psychophysical performance to accommodate the ternary, 2AFC, and same–different response formats in detection and discrimination tasks. Relevant issues for parameter estimation are also discussed along with simulation results that document the superiority of the ternary format. These advantages are demonstrated by fitting the indecision model to published detection and discrimination data collected with the ternary, 2AFC, or same–different formats, which had been analyzed differently in the sources. These examples also show that 2AFC data are unsuitable for testing certain types of hypotheses. MATLAB and R routines written for our purposes are available as Supplementary Material, which should help spread the use of the ternary format for dependable collection and interpretation of psychophysical data. PMID:28747893
Logic-Based Retrieval: Technology for Content-Oriented and Analytical Querying of Patent Data
NASA Astrophysics Data System (ADS)
Klampanos, Iraklis Angelos; Wu, Hengzhi; Roelleke, Thomas; Azzam, Hany
Patent searching is a complex retrieval task. An initial document search is only the starting point of a chain of searches and decisions that need to be made by patent searchers. Keyword-based retrieval is adequate for document searching, but it is not suitable for modelling comprehensive retrieval strategies. DB-like and logical approaches are the state-of-the-art techniques to model strategies, reasoning and decision making. In this paper we present the application of logical retrieval to patent searching. The two grand challenges are expressiveness and scalability, where high degree of expressiveness usually means a loss in scalability. In this paper we report how to maintain scalability while offering the expressiveness of logical retrieval required for solving patent search tasks. We present logical retrieval background, and how to model data-source selection and results' fusion. Moreover, we demonstrate the modelling of a retrieval strategy, a technique by which patent professionals are able to express, store and exchange their strategies and rationales when searching patents or when making decisions. An overview of the architecture and technical details complement the paper, while the evaluation reports preliminary results on how query processing times can be guaranteed, and how quality is affected by trading off responsiveness.
Schwarz-Christoffel Conformal Mapping based Grid Generation for Global Oceanic Circulation Models
NASA Astrophysics Data System (ADS)
Xu, Shiming
2015-04-01
We propose new grid generation algorithms for global ocean general circulation models (OGCMs). Contrary to conventional dipolar or tripolar grids based on analytical forms, the new algorithms are based on Schwarz-Christoffel (SC) conformal mapping with prescribed boundary information. While dealing with the conventional grid design problem of pole relocation, they also address more advanced issues of computational efficiency and the new requirements on OGCM grids arising from the recent trend of high-resolution and multi-scale modeling. The proposed grid generation algorithm could potentially achieve the alignment of grid lines to coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to the grid generation task for regional ocean modeling when a complex land-ocean distribution is present.
Optimization of the computational load of a hypercube supercomputer onboard a mobile robot.
Barhen, J; Toomarian, N; Protopopescu, V
1987-12-01
A combinatorial optimization methodology is developed, which enables the efficient use of hypercube multiprocessors onboard mobile intelligent robots dedicated to time-critical missions. The methodology is implemented in terms of large-scale concurrent algorithms based either on fast simulated annealing, or on nonlinear asynchronous neural networks. In particular, analytic expressions are given for the effect of single-neuron perturbations on the systems' configuration energy. Compact neuromorphic data structures are used to model effects such as precedence constraints, processor idling times, and task-schedule overlaps. Results for a typical robot-dynamics benchmark are presented.
Can Random Mutation Mimic Design?: A Guided Inquiry Laboratory for Undergraduate Students
Kalinowski, Steven T.; Taper, Mark L.; Metz, Anneke M.
2006-01-01
Complex biological structures, such as the human eye, have been interpreted as evidence for a creator for over three centuries. This raises the question of whether random mutation can create such adaptations. In this article, we present an inquiry-based laboratory experiment that explores this question using paper airplanes as a model organism. The main task for students in this investigation is to figure out how to simulate paper airplane evolution (including reproduction, inheritance, mutation, and selection). In addition, the lab requires students to practice analytic thinking and to carefully delineate the implications of their results. PMID:16951065
Optimization of the computational load of a hypercube supercomputer onboard a mobile robot
NASA Technical Reports Server (NTRS)
Barhen, Jacob; Toomarian, N.; Protopopescu, V.
1987-01-01
A combinatorial optimization methodology is developed, which enables the efficient use of hypercube multiprocessors onboard mobile intelligent robots dedicated to time-critical missions. The methodology is implemented in terms of large-scale concurrent algorithms based either on fast simulated annealing, or on nonlinear asynchronous neural networks. In particular, analytic expressions are given for the effect of single-neuron perturbations on the systems' configuration energy. Compact neuromorphic data structures are used to model effects such as precedence constraints, processor idling times, and task-schedule overlaps. Results for a typical robot-dynamics benchmark are presented.
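As a rough illustration of the simulated-annealing branch of this methodology (not the paper's concurrent hypercube implementation), the sketch below anneals a task-to-processor assignment under a configuration energy that penalizes load imbalance; the task lengths, energy form, and cooling schedule are invented for the example.

```python
import math
import random

# Simulated annealing for task-to-processor assignment. The "energy" of a
# configuration is the load imbalance across processors, a stand-in for the
# idling-time and overlap penalties discussed in the abstract.
random.seed(0)
n_tasks, n_procs = 20, 4
work = [random.uniform(1, 5) for _ in range(n_tasks)]      # assumed task costs
assign = [random.randrange(n_procs) for _ in range(n_tasks)]

def energy(a):
    loads = [0.0] * n_procs
    for t, p in enumerate(a):
        loads[p] += work[t]
    return max(loads) - min(loads)   # imbalance ~ processor idling time

T = 1.0
E = energy(assign)
while T > 1e-3:
    t = random.randrange(n_tasks)
    old = assign[t]
    assign[t] = random.randrange(n_procs)      # single-task perturbation
    E_new = energy(assign)
    if E_new > E and random.random() > math.exp((E - E_new) / T):
        assign[t] = old                        # reject uphill move (Metropolis)
    else:
        E = E_new                              # accept move
    T *= 0.999                                 # geometric cooling schedule
```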
Parallel solution of sparse one-dimensional dynamic programming problems
NASA Technical Reports Server (NTRS)
Nicol, David M.
1989-01-01
Parallel computation offers the potential for quickly solving large computational problems. However, it is often a non-trivial task to effectively use parallel computers. Solution methods must sometimes be reformulated to exploit parallelism; the reformulations are often more complex than their slower serial counterparts. We illustrate these points by studying the parallelization of sparse one-dimensional dynamic programming problems, those which do not obviously admit substantial parallelization. We propose a new method for parallelizing such problems, develop analytic models which help us to identify problems which parallelize well, and compare the performance of our algorithm with existing algorithms on a multiprocessor.
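For concreteness, here is the serial form of the kind of recurrence being parallelized: a one-dimensional dynamic program in which each stage depends on only a sparse set of predecessors. The cost function and predecessor sets are illustrative assumptions.

```python
# Serial sparse 1-D dynamic program: f[i] = min over j in pred[i] of
# f[j] + cost(j, i). Sparsity (few predecessors per stage) is what makes
# the parallelization non-obvious.
INF = float("inf")

def solve(n, pred, cost):
    f = [INF] * n
    f[0] = 0.0
    for i in range(1, n):
        f[i] = min(f[j] + cost(j, i) for j in pred[i])  # sparse dependency
    return f

# Example: each stage depends on at most two earlier stages
f = solve(5,
          {1: [0], 2: [0, 1], 3: [1, 2], 4: [2, 3]},
          lambda j, i: float(i - j))
```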
Radom, Marcin; Rybarczyk, Agnieszka; Szawulak, Bartlomiej; Andrzejewski, Hubert; Chabelski, Piotr; Kozak, Adam; Formanowicz, Piotr
2017-12-01
Model development and analysis are fundamental steps in systems biology, and the theory of Petri nets offers a tool for this task. With the rapid development of computer science, a variety of tools for Petri nets have emerged, offering various analytical algorithms. From this follows the problem of using different programs to analyse a single model: many file formats and different representations of results make the analysis much harder. Especially for larger nets, the ability to visualize the results in a proper form is a huge help in understanding their significance. We present a new tool for Petri net development and analysis called Holmes. Our program contains algorithms for model analysis based on different types of Petri nets, e.g. an invariant generator, Maximum Common Transitions (MCT) sets and cluster modules, simulation algorithms, and knockout analysis tools. A very important feature is the ability to visualize the results of almost all analytical modules. The integration of such modules into one graphical environment allows a researcher to fully devote his or her time to model building and analysis. Available at http://www.cs.put.poznan.pl/mradom/Holmes/holmes.html. Contact: piotr@cs.put.poznan.pl. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Cultural differences in attention: Eye movement evidence from a comparative visual search task.
Alotaibi, Albandri; Underwood, Geoffrey; Smith, Alastair D
2017-10-01
Individual differences in visual attention have been linked to thinking style: analytic thinking (common in individualistic cultures) is thought to promote attention to detail and focus on the most important part of a scene, whereas holistic thinking (common in collectivist cultures) promotes attention to the global structure of a scene and the relationship between its parts. However, this theory is primarily based on relatively simple judgement tasks. We compared groups from Great Britain (an individualist culture) and Saudi Arabia (a collectivist culture) on a more complex comparative visual search task, using simple natural scenes. A higher overall number of fixations for Saudi participants, along with longer search times, indicated less efficient search behaviour than British participants. Furthermore, intra-group comparisons of scan-path for Saudi participants revealed less similarity than within the British group. Together, these findings suggest that there is a positive relationship between an analytic cognitive style and controlled attention. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Lou, Yu-Chiung; Lin, Hsiao-Fang; Lin, Chin-Wen
2013-01-01
The aims of the study were (a) to develop a scale to measure university students' task value and (b) to use confirmatory factor analytic techniques to investigate the construct validity of the scale. The questionnaire items were developed based on theoretical considerations and the final version contained 38 items divided into 4 subscales.…
ERIC Educational Resources Information Center
Lockhart, Naorah C.
2017-01-01
Group counselors commonly collaborate in interdisciplinary settings in health care, substance abuse, and juvenile justice. Social network analysis is a methodology rarely used in counseling research yet has potential to examine task group dynamics in new ways. This case study explores the scholarly relationships among 36 members of an…
ERIC Educational Resources Information Center
Utami, Sri; Nursalam; Hargono, Rachmat; Susilaningrum, Rekawati
2016-01-01
The purpose of this study was to analyze the performance of midwives based on task commitment. This was an observational analytic study with a cross-sectional approach. Multistage random sampling was used to determine the public health centers, and proportional random sampling to select participants. The samples were 222 midwives in the public health…
Reducing the framing effect in older and younger adults by encouraging analytic processing.
Thomas, Ayanna K; Millar, Peter R
2012-03-01
The present study explored whether the framing effect could be reduced in older and younger adults using techniques that influenced the accessibility of information relevant to the decision-making process. Accessibility was manipulated indirectly in Experiment 1 by having participants engage in concurrent tasks, and directly in Experiment 2, through an instructions manipulation that required participants to maintain a goal of analytic processing throughout the experimental trial. We tested 120 older and 120 younger adults in Experiment 1. Participants completed 28 decision trials while concurrently either performing a probability calculation task or a memory task. In Experiment 2, we tested 136 older and 136 younger adults. Participants completed 48 decision trials after either having been instructed to "think like a scientist" or base decisions on "gut reactions." Results demonstrated that the framing effect was reduced in older and younger adults in the probability calculation task in Experiment 1 and under the "think like a scientist" instructions manipulation in Experiment 2. These results suggest that when information relevant to unbiased decision making was made more accessible, both older and younger adults were able to reduce susceptibility to the framing effect.
CM-DataONE: A Framework for collaborative analysis of climate model output
NASA Astrophysics Data System (ADS)
Xu, Hao; Bai, Yuqi; Li, Sha; Dong, Wenhao; Huang, Wenyu; Xu, Shiming; Lin, Yanluan; Wang, Bin
2015-04-01
CM-DataONE is a distributed collaborative analysis framework for climate model data which aims to break through the data access barriers of increasing file size and to accelerate the research process. As the data size involved in projects such as the fifth Coupled Model Intercomparison Project (CMIP5) has reached petabytes, conventional methods for analysis and diagnosis of model outputs have become rather time-consuming and redundant. CM-DataONE is developed for data publishers and researchers from relevant areas. It can enable easy access to distributed data and provide extensible analysis functions based on tools such as the NCAR Command Language, NetCDF Operators (NCO), and Climate Data Operators (CDO). CM-DataONE can be easily installed, configured, and maintained. The main web application has two separate parts which communicate with each other through APIs based on the HTTP protocol. The analytic server is designed to be installed in each data node while a data portal can be configured anywhere and connect to the nearest node. Functions such as data query, analytic task submission, status monitoring, visualization, and product downloading are provided to end users by the data portal. Data conforming to the CMIP5 Model Output Format in each peer node can be scanned by the server and mapped to a global information database. A scheduler included in the server is responsible for task decomposition, distribution, and consolidation. Analysis functions are always executed where the data are located. The analysis function package included in the server provides commonly used functions such as EOF analysis, trend analysis, and time series analysis. Functions are coupled with data by XML descriptions and can be easily extended. Various types of results can be obtained by users for further studies. This framework has significantly decreased the amount of data to be transmitted and improved efficiency in model intercomparison jobs by supporting online analysis and multi-node collaboration. To end users, data query is therefore accelerated and the size of data to be downloaded is reduced. Methodology can be easily shared among scientists, avoiding unnecessary replication. Currently, a prototype of CM-DataONE has been deployed on two data nodes of Tsinghua University.
Martian Atmospheric Modeling of Scale Factors for MarsGRAM 2005 and the MAVEN Project
NASA Technical Reports Server (NTRS)
McCullough, Chris
2011-01-01
For spacecraft missions to Mars, especially the navigation of Martian orbiters and landers, an extensive knowledge of the Martian atmosphere is extremely important. The generally accepted NASA standard for modeling is the Mars Global Reference Atmospheric Model (MarsGRAM), which was developed at Marshall Space Flight Center. MarsGRAM is useful for tasks such as performance analysis and operations planning for aerobraking, entry, descent and landing, and aerocapture. Unfortunately, the densities for the Martian atmosphere in MarsGRAM are based on table look-up and not on an analytical algorithm. Also, these values can vary drastically from the densities actually experienced by the spacecraft. This does not have much of an impact on simple integrations but drastically affects its usefulness in other applications, especially those in navigation. For example, the navigation team for the Mars Atmosphere and Volatile EvolutioN (MAVEN) project uses MarsGRAM to target the desired atmospheric density for the orbiter's periapsis passage, its closest approach to the planet. After the satellite's passage through periapsis, the computed density is compared to the MarsGRAM model and a scale factor is assigned to the model to account for the difference. Therefore, large variations in the atmosphere from the model can cause unexpected deviations from the spacecraft's planned trajectory. In order to account for this, an analytic stochastic model of the scale factor's behavior is desired. The development of this model will allow the MAVEN navigation team to determine the probability of various Martian atmospheric variations and their effects on the spacecraft.
Sentiment analysis in twitter data using data analytic techniques for predictive modelling
NASA Astrophysics Data System (ADS)
Razia Sulthana, A.; Jaithunbi, A. K.; Sai Ramesh, L.
2018-04-01
Sentiment analysis refers to the natural language processing task of determining whether a piece of text contains subjective information and what kind of subjective information it expresses. The subjective information represents the attitude behind the text: positive, negative, or neutral. Automatically understanding the opinions behind user-generated content is of great interest. We analyze a huge collection of tweets, treated as big data, classifying the polarity of words, sentences, or entire documents. We use linear regression to model the relationship between a scalar dependent variable Y and one or more explanatory (independent) variables denoted X. We conduct a series of experiments to test the performance of the system.
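A minimal sketch of that modelling step, assuming simple lexicon-count features per tweet; the features, data, and decision thresholds are invented for the example.

```python
import numpy as np

# Ordinary least squares relating a scalar polarity score y to explanatory
# features X, here counts of positive and negative lexicon words per tweet.
X = np.array([[3, 0], [2, 1], [0, 2], [1, 3]], dtype=float)  # [pos, neg] counts
y = np.array([1.0, 0.5, -0.5, -1.0])                          # polarity labels

X1 = np.hstack([X, np.ones((len(X), 1))])          # add intercept column
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)      # least-squares fit of Y ~ X

def polarity(pos_count, neg_count):
    """Classify a new tweet from its lexicon counts via the fitted model."""
    score = beta @ np.array([pos_count, neg_count, 1.0])
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```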
Studies of HZE particle interactions and transport for space radiation protection purposes
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.; Wilson, John W.; Schimmerling, Walter; Wong, Mervyn
1987-01-01
The main emphasis is on developing general methods for accurately predicting high-energy heavy ion (HZE) particle interactions and transport for use by researchers in mission planning studies, in evaluating astronaut self-shielding factors, and in spacecraft shield design and optimization studies. The two research tasks are: (1) to develop computationally fast and accurate solutions to the Boltzmann (transport) equation; and (2) to develop accurate HZE interaction models, from fundamental physical considerations, for use as inputs into these transport codes. Accurate solutions to the HZE transport problem have been formulated through a combination of analytical and numerical techniques. In addition, theoretical models for the input interaction parameters are under development: stopping powers, nuclear absorption cross sections, and fragmentation parameters.
Comprehensive silicon solar cell computer modeling
NASA Technical Reports Server (NTRS)
Lamorte, M. F.
1984-01-01
The development of an efficient, comprehensive Si solar cell modeling program that has the capability of simulation accuracy of 5 percent or less is examined. A general investigation of computerized simulation is provided. Computer simulation programs are subdivided into a number of major tasks: (1) analytical method used to represent the physical system; (2) phenomena submodels that comprise the simulation of the system; (3) coding of the analysis and the phenomena submodels; (4) coding scheme that results in efficient use of the CPU so that CPU costs are low; and (5) modularized simulation program with respect to structures that may be analyzed, addition and/or modification of phenomena submodels as new experimental data become available, and the addition of other photovoltaic materials.
Carrieri, Arthur H; Copper, Jack; Owens, David J; Roese, Erik S; Bottiger, Jerold R; Everly, Robert D; Hung, Kevin C
2010-01-20
An active spectrophotopolarimeter sensor and support system were developed for a military/civilian defense feasibility study concerning the identification and standoff detection of biological aerosols. Plumes of warfare agent surrogates gamma-irradiated Bacillus subtilis and chicken egg white albumen (analytes), Arizona road dust (terrestrial interferent), water mist (atmospheric interferent), and talcum powders (experiment controls) were dispersed inside windowless chambers and interrogated by multiple CO(2) laser beams spanning 9.1-12.0 microm wavelengths (lambda). Molecular vibration and vibration-rotation activities by the subject analyte are fundamentally strong within this "fingerprint" middle infrared spectral region. Distinct polarization-modulations of incident irradiance and backscatter radiance of tuned beams generate the Mueller matrix (M) of subject aerosol. Strings of all 15 normalized elements {M(ij)(lambda)/M(11)(lambda)}, which completely describe physical and geometric attributes of the aerosol particles, are input fields for training hybrid Kohonen self-organizing map feed-forward artificial neural networks (ANNs). The properly trained and validated ANN model performs pattern recognition and type-classification tasks via internal mappings. A typical ANN that mathematically clusters analyte, interferent, and control aerosols with nil overlap of species is illustrated, including sensitivity analysis of performance.
NASA Technical Reports Server (NTRS)
Acton, W. H.; Crabtree, M. S.; Simons, J. C.; Gomer, F. E.; Eckel, J. S.
1983-01-01
Information theoretic analysis and subjective paired-comparison and task ranking techniques were employed in order to scale the workload of 20 communications-related tasks frequently performed by the captain and first officer of transport category aircraft. Tasks were drawn from taped conversations between aircraft and air traffic controllers (ATC). Twenty crewmembers performed subjective message comparisons and task rankings on the basis of workload. Information theoretic results indicated a broad range of task difficulty levels, and substantial differences between captain and first officer workload levels. Preliminary subjective data tended to corroborate these results. A hybrid scale reflecting the results of both the analytical and the subjective techniques is currently being developed. The findings will be used to select representative sets of communications for use in high fidelity simulation.
NASA Astrophysics Data System (ADS)
Kalantari, Faraz; Sen, Anando; Gifford, Howard C.
2014-03-01
SPECT imaging using In-111 ProstaScint is an FDA-approved method for diagnosing prostate cancer metastases within the pelvis. However, conventional medium-energy parallel-hole (MEPAR) collimators produce poor image quality, and we are investigating the use of multipinhole (MPH) imaging as an alternative. This paper presents a method for evaluating MPH designs that makes use of sampling-sensitive (SS) mathematical model observers for tumor detection/localization tasks. Key to our approach is the redefinition of a normal (or background) reference image that is used with scanning model observers. We used this approach to compare different MPH configurations for the task of small-tumor detection in the prostate and surrounding lymph nodes. Four configurations used 10, 20, 30, and 60 pinholes evenly spaced over a complete circular orbit. A fixed-count acquisition protocol was assumed. Spherical tumors were placed within a digital anthropomorphic phantom having a realistic ProstaScint biodistribution. Imaging data sets were generated with an analytical projector, and reconstructed volumes were obtained with the OSEM algorithm. The MPH configurations were compared in a localization ROC (LROC) study with 2D pelvic images and both human and model observers. Regular and SS versions of the scanning channelized nonprewhitening (CNPW) and visual-search (VS) model observers were applied. The SS models demonstrated the highest correlations with the average human-observer results.
Panos, Joseph A.; Hoffman, Joshua T.; Wordeman, Samuel C.; Hewett, Timothy E.
2016-01-01
Background: Correction of neuromuscular impairments after anterior cruciate ligament injury is vital to successful return to sport. Frontal plane knee control during landing is a common measure of lower-extremity neuromuscular control, and asymmetries in neuromuscular control of the knee can predispose injured athletes to additional injury and associated morbidities. Therefore, this study investigated the effects of anterior cruciate ligament injury on knee biomechanics during landing. Methods: Two-dimensional frontal plane video of single leg drop, cross over drop, and drop vertical jump dynamic movement trials was analyzed for twenty injured and reconstructed athletes. The position of the knee joint center was tracked in ImageJ software for 500 milliseconds after landing to calculate medio-lateral knee motion velocities and determine normal fluency, the number of times per second knee velocity changed direction. The inverse of this calculation, analytical fluency, was used to associate larger numerical values with fluent movement. Findings: Analytical fluency was decreased in involved limbs for single leg drop trials (P=0.0018). Importantly, analytical fluency for single leg drop differed compared to cross over drop trials for involved (P<0.001), but not uninvolved limbs (P=0.5029). For involved limbs, analytical fluency values exhibited a stepwise trend in relative magnitudes. Interpretation: Decreased analytical fluency in involved limbs is consistent with previous studies. Fluency asymmetries observed during single leg drop tasks may be indicative of aberrant landing strategies in the involved limb. Analytical fluency differences in unilateral tasks for injured limbs may represent neuromuscular impairment as a result of injury. PMID:26895446
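The fluency metric is easy to state in code. Below is a hedged sketch of the computation as described (direction changes of medio-lateral knee velocity over the first 500 ms after landing, then inverted); the sampling rate is an assumption.

```python
import numpy as np

# Normal fluency: direction changes per second of medio-lateral knee
# velocity in the 500 ms after landing. Analytical fluency is its inverse,
# so larger values correspond to more fluent movement.
def analytical_fluency(knee_x, fs=60.0, window_s=0.5):
    n = int(fs * window_s)
    v = np.diff(knee_x[:n + 1]) * fs                # medio-lateral velocity
    sign_changes = np.sum(np.diff(np.sign(v)) != 0)  # direction reversals
    normal_fluency = sign_changes / window_s         # reversals per second
    return 1.0 / normal_fluency if normal_fluency else np.inf

# Example with a synthetic knee-position trace (assumed 60 Hz capture)
trace = np.cumsum(np.random.default_rng(0).normal(size=61))
fluency = analytical_fluency(trace)
```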
Progressive Visual Analytics: User-Driven Visual Exploration of In-Progress Analytics.
Stolper, Charles D; Perer, Adam; Gotz, David
2014-12-01
As datasets grow and analytic algorithms become more complex, the typical workflow of analysts launching an analytic, waiting for it to complete, inspecting the results, and then re-launching the computation with adjusted parameters is not realistic for many real-world tasks. This paper presents an alternative workflow, progressive visual analytics, which enables an analyst to inspect partial results of an algorithm as they become available and interact with the algorithm to prioritize subspaces of interest. Progressive visual analytics depends on adapting analytical algorithms to produce meaningful partial results and enable analyst intervention without sacrificing computational speed. The paradigm also depends on adapting information visualization techniques to incorporate the constantly refining results without overwhelming analysts and provide interactions to support an analyst directing the analytic. The contributions of this paper include: a description of the progressive visual analytics paradigm; design goals for both the algorithms and visualizations in progressive visual analytics systems; an example progressive visual analytics system (Progressive Insights) for analyzing common patterns in a collection of event sequences; and an evaluation of Progressive Insights and the progressive visual analytics paradigm by clinical researchers analyzing electronic medical records.
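One way to read the algorithm-side requirement: restructure the analytic as an incremental computation that yields partial results and accepts analyst priorities between yields. The generator sketch below is an invented illustration of that pattern, not the Progressive Insights implementation; the `focus` predicate and placeholder analytic step are assumptions.

```python
# Progressive-analytic pattern: a generator yields meaningful partial
# results and can be steered between yields by an analyst-supplied
# priority predicate. The squaring step is a placeholder analytic.
def progressive_analytic(items, chunk=100):
    focus = None                     # subspace the analyst prioritized, if any
    partial = []
    pending = list(items)
    while pending:
        # Prioritized items sort to the front of the work queue
        pending.sort(key=lambda x: (focus is not None and not focus(x)))
        batch, pending = pending[:chunk], pending[chunk:]
        partial.extend(process(batch))           # incremental computation
        focus = yield partial                    # UI renders; may send priority

def process(batch):
    return [x * x for x in batch]    # placeholder per-item analytic step

gen = progressive_analytic(range(1000))
first = next(gen)                      # first partial result for the view
steered = gen.send(lambda x: x > 900)  # analyst prioritizes a subspace
```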
Analytical modeling of intumescent coating thermal protection system in a JP-5 fuel fire environment
NASA Technical Reports Server (NTRS)
Clark, K. J.; Shimizu, A. B.; Suchsland, K. E.; Moyer, C. B.
1974-01-01
The thermochemical response of Coating 313 when exposed to a fuel fire environment was studied to provide a tool for predicting the reaction time. The existing Aerotherm Charring Material Thermal Response and Ablation (CMA) computer program was modified to treat swelling materials. The modified code is now designated Aerotherm Transient Response of Intumescing Materials (TRIM) code. In addition, thermophysical property data for Coating 313 were analyzed and reduced for use in the TRIM code. An input data sensitivity study was performed, and performance tests of Coating 313/steel substrate models were carried out. The end product is a reliable computational model, the TRIM code, which was thoroughly validated for Coating 313. The tasks reported include: generation of input data, development of swell model and implementation in TRIM code, sensitivity study, acquisition of experimental data, comparisons of predictions with data, and predictions with intermediate insulation.
Box truss analysis and technology development. Task 1: Mesh analysis and control
NASA Technical Reports Server (NTRS)
Bachtell, E. E.; Bettadapur, S. S.; Coyner, J. V.
1985-01-01
An analytical tool was developed to model, analyze and predict RF performance of box truss antennas with reflective mesh surfaces. The analysis system is unique in that it integrates custom written programs for cord tied mesh surfaces, thereby drastically reducing the cost of analysis. The analysis system is capable of determining the RF performance of antennas under any type of manufacturing or operating environment by integrating together the various disciplines of design, finite element analysis, surface best fit analysis and RF analysis. The Integrated Mesh Analysis System consists of six separate programs: The Mesh Tie System Model Generator, The Loadcase Generator, The Model Optimizer, The Model Solver, The Surface Topography Solver and The RF Performance Solver. Additionally, a study using the mesh analysis system was performed to determine the effect of on orbit calibration, i.e., surface adjustment, on a typical box truss antenna.
Combined monitoring, decision and control model for the human operator in a command and control desk
NASA Technical Reports Server (NTRS)
Muralidharan, R.; Baron, S.
1978-01-01
A report is given on ongoing efforts to model the human operator in the context of the task during the enroute/return phases in the ground-based control of multiple flights of remotely piloted vehicles (RPVs). The approach employed here uses models that have their analytical bases in control theory and in statistical estimation and decision theory. In particular, it draws heavily on the models and concepts of the optimal control model (OCM) of the human operator. The OCM is being extended into a combined monitoring, decision, and control model (DEMON) of the human operator by infusing decision-theoretic notions that make it suitable for application to problems in which human control actions are infrequent and in which monitoring and decision-making are the operator's main activities. Some results obtained with a specialized version of DEMON for the RPV control problem are included.
NASA Astrophysics Data System (ADS)
Stefanello, M. B.; Degrazia, G. A.; Mortarini, L.; Buligon, L.; Maldaner, S.; Carvalho, J. C.; Acevedo, O. C.; Martins, L. G. N.; Anfossi, D.; Buriol, C.; Roberti, D.
2018-02-01
Describing the effects of wind meandering motions on the dispersion of scalars is a challenging task, since this type of flow represents a physical state characterized by multiple scales. In this study, a Lagrangian stochastic diffusion model is derived to describe scalar transport during the horizontal wind meandering phenomenon that occurs within a planetary boundary layer. The model is derived from the linearization of the Langevin equation, and it employs a heuristic functional form that represents the autocorrelation function of meandering motion. The new solutions, which describe the longitudinal and lateral wind components, were used to simulate tracer experiments that were performed in low-wind speed conditions. The results of the comparison indicate that the new model can effectively reproduce the observed concentrations of the contaminants, and therefore, it can satisfactorily describe enhanced dispersion effects due to the presence of meandering.
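To illustrate the flavor of such a model (not the paper's derived solution), the sketch below integrates a pair of coupled Langevin-type equations for the horizontal wind components; the cross-coupling term produces an oscillating, negative-lobed autocorrelation of the form exp(-p*tau)*cos(q*tau) that is characteristic of meandering. All parameter values and the discretization are invented.

```python
import numpy as np

# Coupled Langevin sketch for meandering horizontal wind components u, v.
# The rotation-like coupling (q) makes the velocity autocorrelation
# oscillate with a negative lobe, the signature of wind meandering.
p, q, sigma, dt, n = 0.05, 0.2, 0.3, 0.1, 20000   # assumed parameters
rng = np.random.default_rng(0)
u = np.zeros(n)
v = np.zeros(n)
for k in range(n - 1):
    du = (-p * u[k] - q * v[k]) * dt + sigma * np.sqrt(dt) * rng.normal()
    dv = (-p * v[k] + q * u[k]) * dt + sigma * np.sqrt(dt) * rng.normal()
    u[k + 1] = u[k] + du
    v[k + 1] = v[k] + dv

# Lagrangian particle positions integrate the sampled velocities
x = np.cumsum(u) * dt
y = np.cumsum(v) * dt
```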
Predicting Operator Execution Times Using CogTool
NASA Technical Reports Server (NTRS)
Santiago-Espada, Yamira; Latorella, Kara A.
2013-01-01
Researchers and developers of NextGen systems can use predictive human performance modeling tools as an initial approach to obtain skilled user performance times analytically, before system testing with users. This paper describes CogTool models for a two-pilot crew executing two different types of datalink clearance acceptance tasks, on two different simulation platforms. The CogTool time estimates for accepting and executing Required Time of Arrival and Interval Management clearances were compared to empirical data observed in video tapes and registered in simulation files. Results indicate no statistically significant difference between the empirical data and the CogTool predictions. A population comparison test found no significant differences between the CogTool estimates and the empirical execution times for any of the four test conditions. We discuss modeling caveats and considerations for applying CogTool to crew performance modeling in advanced cockpit environments.
Stajkovic, Alexander D; Lee, Dongseop; Nyberg, Anthony J
2009-05-01
The authors examined relationships among collective efficacy, group potency, and group performance. Meta-analytic results (based on 6,128 groups, 31,019 individuals, 118 correlations adjusted for dependence, and 96 studies) reveal that collective efficacy was significantly related to group performance (.35). In the proposed nested 2-level model, collective efficacy assessment (aggregation and group discussion) was tested as the 1st-level moderator. It showed significantly different average correlations with group performance (.32 vs. .45), but the group discussion assessment was homogeneous, whereas the aggregation assessment was heterogeneous. Consequently, there was no 2nd-level moderation for the group discussion, and heterogeneity in the aggregation group was accounted for by the 2nd-level moderator, task interdependence (high, moderate, and low levels were significant; the higher the level, the stronger the relationship). The 2nd and 3rd meta-analyses indicated that group potency was related to group performance (.29) and to collective efficacy (.65). When tested in a structural equation modeling analysis based on meta-analytic findings, collective efficacy fully mediated the relationship between group potency and group performance. The authors suggest future research and convert their findings to a probability of success index to help facilitate practice. (c) 2009 APA, all rights reserved.
NASA Technical Reports Server (NTRS)
Pandya, Abhilash; Maida, James; Hasson, Scott; Greenisen, Michael; Woolford, Barbara
1993-01-01
As manned exploration of space continues, analytical evaluation of human strength characteristics is critical. These extraterrestrial environments will spawn issues of human performance which will impact the designs of tools, work spaces, and space vehicles. Computer modeling is an effective method of correlating human biomechanical and anthropometric data with models of space structures and human work spaces. The aim of this study is to provide biomechanical data from isolated joints to be utilized in a computer modeling system for calculating torque resulting from any upper extremity motions: in this study, the ratchet wrench push-pull operation (a typical extravehicular activity task). Established here are mathematical relationships used to calculate maximum torque production of isolated upper extremity joints. These relationships are a function of joint angle and joint velocity.
ERIC Educational Resources Information Center
Liu, Phil D.; Chung, Kevin K. H.; McBride-Chang, Catherine; Tong, Xiuhong
2010-01-01
Among 30 Hong Kong Chinese fourth graders, sensitivities to character and word constructions were examined in judgment tasks at each level. There were three conditions across both tasks: the real condition, consisting of either actual two-character compound Chinese words or real Chinese compound characters; the reversed condition, with either the…
ERIC Educational Resources Information Center
Harrington, Clifford R., Comp.; Glock, Sandra, Comp.
Prepared by the Task Force on Rural Development Research (appointed by the U.S. Department of Agriculture), this analytical directory gives primary emphasis to 133 Rural Development 1 (RD1) research projects which were "active" projects between January 1 and June 30, 1973 in 13 Northeastern state agricultural experiment stations and the…
The European Network of Analytical and Experimental Laboratories for Geosciences
NASA Astrophysics Data System (ADS)
Freda, Carmela; Funiciello, Francesca; Meredith, Phil; Sagnotti, Leonardo; Scarlato, Piergiorgio; Troll, Valentin R.; Willingshofer, Ernst
2013-04-01
Integrating Earth Sciences infrastructures in Europe is the mission of the European Plate Observing System (EPOS). The integration of European analytical, experimental, and analogue laboratories plays a key role in this context and is the task of the EPOS Working Group 6 (WG6). Despite the presence in Europe of high-performance infrastructures dedicated to geosciences, there is still limited collaboration in sharing facilities and best practices. The EPOS WG6 aims to overcome this limitation by pushing towards national and trans-national coordination, efficient use of current laboratory infrastructures, and future aggregation of facilities not yet included. This will be attained through the creation of common access and interoperability policies to foster and simplify personnel mobility. The EPOS ambition is to orchestrate European laboratory infrastructures with diverse, complementary tasks and competences into a single, but geographically distributed, infrastructure for rock physics, palaeomagnetism, analytical and experimental petrology and volcanology, and tectonic modeling. The WG6 is presently organizing its thematic core services within the EPOS distributed research infrastructure with the goal of joining the other EPOS communities (geologists, seismologists, volcanologists, etc.) and stakeholders (engineers, risk managers, and other geosciences investigators) to: 1) develop tools and services to enhance visitor programs that will mutually benefit visitors and hosts (transnational access); 2) improve support and training activities to make facilities equally accessible to students, young researchers, and experienced users (training and dissemination); 3) collaborate in sharing technological and scientific know-how (transfer of knowledge); 4) optimize interoperability of distributed instrumentation by standardizing data collection, archiving, and quality control (data preservation and interoperability); 5) implement a unified e-Infrastructure for data analysis, numerical modelling, and joint development and standardization of numerical tools (e-science implementation); 6) collect and store data in a flexible inventory database accessible within and beyond the Earth Sciences community (open access and outreach); 7) connect to environmental and hazard protection agencies, stakeholders, and the public to raise awareness of geo-hazards and geo-resources (innovation for society). We will inform scientists and industrial stakeholders of the most recent WG6 achievements in EPOS and show how our community is proceeding to design the thematic core services.
Exploring the Use of Computer Simulations in Unraveling Research and Development Governance Problems
NASA Technical Reports Server (NTRS)
Balaban, Mariusz A.; Hester, Patrick T.
2012-01-01
Understanding Research and Development (R&D) enterprise relationships and processes at a governance level is not a simple task, but valuable decision-making insight and evaluation capabilities can be gained from their exploration through computer simulations. This paper discusses current Modeling and Simulation (M&S) methods, addressing their applicability to R&D enterprise governance. Specifically, the authors analyze advantages and disadvantages of the four methodologies used most often by M&S practitioners: System Dynamics (SD), Discrete Event Simulation (DES), Agent Based Modeling (ABM), and formal Analytic Methods (AM) for modeling systems at the governance level. Moreover, the paper describes nesting models using a multi-method approach. Guidance is provided to those seeking to employ modeling techniques in an R&D enterprise for the purposes of understanding enterprise governance. Further, an example is modeled and explored for potential insight. The paper concludes with recommendations regarding opportunities for concentration of future work in modeling and simulating R&D governance relationships and processes.
Enabling the High Level Synthesis of Data Analytics Accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minutoli, Marco; Castellana, Vito G.; Tumeo, Antonino
Conventional High Level Synthesis (HLS) tools mainly target compute-intensive kernels typical of digital signal processing applications. We are developing techniques and architectural templates to enable HLS of data analytics applications. These applications are memory intensive, present fine-grained, unpredictable data accesses, and exhibit irregular, dynamic task parallelism. We discuss an architectural template based around a distributed controller to efficiently exploit thread-level parallelism. We present a memory interface that supports parallel memory subsystems and enables implementing atomic memory operations. We introduce a dynamic task scheduling approach to efficiently execute heavily unbalanced workloads. The templates are validated by synthesizing queries from the Lehigh University Benchmark (LUBM), a well-known SPARQL benchmark.
RLV Turbine Performance Optimization
NASA Technical Reports Server (NTRS)
Griffin, Lisa W.; Dorney, Daniel J.
2001-01-01
A task was developed at NASA/Marshall Space Flight Center (MSFC) to improve turbine aerodynamic performance through the application of advanced design and analysis tools. There are four major objectives of this task: 1) to develop, enhance, and integrate advanced turbine aerodynamic design and analysis tools; 2) to develop the methodology for application of the analytical techniques; 3) to demonstrate the benefits of the advanced turbine design procedure through its application to a relevant turbine design point; and 4) to verify the optimized design and analysis with testing. Final results of the preliminary design and the results of the two-dimensional (2D) detailed design of the first-stage vane of a supersonic turbine suitable for a reusable launch vehicle (RLV) are presented. Analytical techniques for obtaining the results are also discussed.
Association of Individual Characteristics with Teleoperation Performance.
Pan, Dan; Zhang, Yijing; Li, Zhizhong; Tian, Zhiqiang
2016-09-01
A number of space activities (e.g., extravehicular astronaut rescue, cooperation in satellite services, space station supplies, and assembly) are implemented directly or assisted by remote robotic arms. Our study aimed to reveal those individual characteristics which could positively influence or even predict teleoperation performance of such a space robotic arm. Sixty-four male volunteers without robot operation experience were recruited for the study. Their individual characteristics were assessed, including spatial cognitive ability, cognitive style, and personality traits. The experimental tasks were three abstracted teleoperation tasks of a simulated space robotic arm: point aiming, line alignment, and obstacle avoidance. Teleoperation performance was measured from two aspects: task performance (completion time, extra distance moved, operation slips) and safety performance (collisions, joint limitations reached). The Pearson coefficients between individual characteristics and teleoperation performance were examined along with performance prediction models. It was found that the subjects with relatively high mental rotation ability or low neuroticism had both better task and safety performance (|r| = 0.212 ∼ 0.381). Subjects with relatively high perspective taking ability or high agreeableness had better task performance (r = -0.253; r = -0.249). Imagery subjects performed better than verbal subjects regarding both task and safety performance (|r| = 0.236 ∼ 0.290). Compared with analytic subjects, wholist subjects had better safety performance (r = 0.300). Additionally, extraverted subjects had better task performance (r = -0.259), but worse safety performance (r = 0.230). Those with high spatial cognitive ability, imagery and wholist cognitive style, low neuroticism, and high agreeableness were seen to have more advantages in working with the remote robotic arm. These results could be helpful to astronaut selection and training for space station missions. Pan D, Zhang Y, Li Z, Tian Z. Association of individual characteristics with teleoperation performance. Aerosp Med Hum Perform. 2016; 87(9):772-780.
Enterprise Systems Value-Based R&D Portfolio Analytics: Methods, Processes, and Tools
2014-01-14
Enterprise Systems Value-Based R&D Portfolio Analytics: Methods, Processes, and Tools. Final Technical Report SERC-2014-TR-041-1, January 14... by the U.S. Department of Defense through the Systems Engineering Research Center (SERC) under Contract H98230-08-D-0171 (Task Order 0026, RT 51... SERC is a federally funded University Affiliated Research Center managed by Stevens Institute of Technology. Any opinions, findings and
ERIC Educational Resources Information Center
Ramsey-Klee, Diane M.; Richman, Vivian
The purpose of this research is to develop content analytic techniques capable of extracting the differentiating information in narrative performance evaluations for enlisted personnel in order to aid in the process of selecting personnel for advancement, duty assignment, training, or quality retention. Four tasks were performed. The first task…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapol, B.D.; Kornreich, D.E.
Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation as modified in the second year renewal application includes the following three primary tasks. Task 1 on two-dimensional neutron transport is divided into (a) single medium searchlight problem (SLP) and (b) two-adjacent half-space SLP. Task 2 on three-dimensional neutron transport covers (a) point source in arbitrary geometry, (b) single medium SLP, and (c) two-adjacent half-space SLP. Task 3 on code verification includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.
The focusing effect of P-wave in the Moon's and Earth's low-velocity core. Analytical solution
NASA Astrophysics Data System (ADS)
Fatyanov, A. G.; Burmin, V. Yu
2018-04-01
An important aspect of studying the structure of planetary interiors is the question of the presence and state of a core. While for the Earth this task was solved long ago, whether the core of the Moon is in a liquid or solid state remains debatable. If the core of the Moon is liquid, then the velocity of longitudinal waves in it should be lower than in the surrounding mantle. If the core is solid, then most likely the velocity of longitudinal waves in it is higher than in the mantle. Numerical calculations of the wave field allow us to identify criteria for drawing conclusions about the state of the lunar core. In this paper we consider the problem of constructing an analytical solution for wave fields in a layered sphere of arbitrary radius. A stable analytic solution is obtained for the wave fields of longitudinal waves in a three-layer sphere. Calculations of the total wave fields and rays for simplified models of the Earth and the Moon with real parameters are presented. The analytical solution and the ray pattern show that the low-velocity cores of the Earth and the Moon possess the properties of a collecting lens. This leads to the emergence of a wave-field focusing area. As a result, focused waves of considerable amplitude appear on the surfaces of the Earth and the Moon. In the Earth's case, they appear before the first PKP-wave arrival; these are the so-called "precursors", which persist into the subsequent wave arrivals. For the simplified model of the Earth, the maximum amplitude growth is observed in the 147-degree region; for the Moon model, it is around 180°.
NASA Technical Reports Server (NTRS)
Gazda, Daniel B.; Schultz, John R.; Clarke, Mark S.
2007-01-01
Phase separation is one of the most significant obstacles encountered during the development of analytical methods for water quality monitoring in spacecraft environments. Removing air bubbles from water samples prior to analysis is a routine task on Earth; however, in the absence of gravity, this routine task becomes extremely difficult. This paper details the development and initial ground testing of liquid metering centrifuge sticks (LMCS), devices designed to collect and meter a known volume of bubble-free water in microgravity. The LMCS uses centrifugal force to eliminate entrapped air and reproducibly meter liquid sample volumes for analysis with Colorimetric Solid Phase Extraction (C-SPE). C-SPE is a sorption-spectrophotometric platform that is being developed as a potential spacecraft water quality monitoring system. C-SPE utilizes solid phase extraction membranes impregnated with analyte-specific colorimetric reagents to concentrate and complex target analytes in spacecraft water samples. The mass of analyte extracted from the water sample is determined using diffuse reflectance (DR) data collected from the membrane surface and an analyte-specific calibration curve. The analyte concentration can then be calculated from the mass of extracted analyte and the volume of the sample analyzed. Previous flight experiments conducted in microgravity conditions aboard the NASA KC-135 aircraft demonstrated that the inability to collect and meter a known volume of water using a syringe was a limiting factor in the accuracy of C-SPE measurements. Herein, results obtained from ground-based C-SPE experiments using ionic silver as a test analyte and either the LMCS or syringes for sample metering are compared to evaluate the performance of the LMCS. These results indicate very good agreement between the two sample metering methods and clearly illustrate the potential of utilizing centrifugal forces to achieve phase separation and metering of water samples in microgravity.
One-year test-retest reliability of intrinsic connectivity network fMRI in older adults
Guo, Cong C.; Kurth, Florian; Zhou, Juan; Mayer, Emeran A.; Eickhoff, Simon B; Kramer, Joel H.; Seeley, William W.
2014-01-01
“Resting-state” or task-free fMRI can assess intrinsic connectivity network (ICN) integrity in health and disease, suggesting a potential for use of these methods as disease-monitoring biomarkers. Numerous analytical options are available, including model-driven ROI-based correlation analysis and model-free, independent component analysis (ICA). High test-retest reliability will be a necessary feature of a successful ICN biomarker, yet available reliability data remains limited. Here, we examined ICN fMRI test-retest reliability in 24 healthy older subjects scanned roughly one year apart. We focused on the salience network, a disease-relevant ICN not previously subjected to reliability analysis. Most ICN analytical methods proved reliable (intraclass coefficients > 0.4) and could be further improved by wavelet analysis. Seed-based ROI correlation analysis showed high map-wise reliability, whereas graph theoretical measures and temporal concatenation group ICA produced the most reliable individual unit-wise outcomes. Including global signal regression in ROI-based correlation analyses reduced reliability. Our study provides a direct comparison between the most commonly used ICN fMRI methods and potential guidelines for measuring intrinsic connectivity in aging control and patient populations over time. PMID:22446491
Toward Accessing Spatial Structure from Building Information Models
NASA Astrophysics Data System (ADS)
Schultz, C.; Bhatt, M.
2011-08-01
Data about building designs and layouts is becoming increasingly more readily available. In the near future, service personnel (such as maintenance staff or emergency rescue workers) arriving at a building site will have immediate real-time access to enormous amounts of data relating to structural properties, utilities, materials, temperature, and so on. The critical problem for users is the taxing and error-prone task of interpreting such a large body of facts in order to extract salient information. This is necessary for comprehending a situation and deciding on a plan of action, and is a particularly serious issue in time-critical and safety-critical activities such as firefighting. Current unifying building models such as the Industry Foundation Classes (IFC), while being comprehensive, do not directly provide data structures that focus on spatial reasoning and spatial modalities that are required for high-level analytical tasks. The aim of the research presented in this paper is to provide computational tools for higher-level querying and reasoning that shift the cognitive burden of dealing with enormous amounts of data away from the user. The user can then spend more energy and time in planning and decision making in order to accomplish the tasks at hand. We present an overview of our framework that provides users with an enhanced model of "built-up space". In order to test our approach using realistic design data (in terms of both scale and the nature of the building models) we describe how our system interfaces with IFC, and we conduct timing experiments to determine the practicality of our approach. We discuss general computational approaches for deriving higher-level spatial modalities by focusing on the example of route graphs. Finally, we present a firefighting scenario with alternative route graphs to motivate the application of our framework.
Durstewitz, Daniel
2017-06-01
The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however; rather, they were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable recovery of relevant aspects of the nonlinear dynamics underlying observed neuronal time series, and directly link these to computational properties.
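For orientation, a common way to write down a piecewise-linear RNN state space model of the kind described above is sketched below; this generic formulation is an assumption for illustration, and the paper's exact parameterization may differ.

```latex
% Latent piecewise-linear dynamics with ReLU nonlinearity and process noise:
z_t = A z_{t-1} + W \max(z_{t-1}, 0) + h + \varepsilon_t,
      \qquad \varepsilon_t \sim \mathcal{N}(0, \Sigma)
% Linear-Gaussian observation model linking latent states to recordings:
x_t = B z_t + \eta_t, \qquad \eta_t \sim \mathcal{N}(0, \Gamma)
% EM alternates: the E-step infers p(z_{1:T} \mid x_{1:T}) (here via a
% global Laplace approximation); the M-step updates (A, W, h, B, \Sigma, \Gamma).
```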
Composable Analytic Systems for next-generation intelligence analysis
NASA Astrophysics Data System (ADS)
DiBona, Phil; Llinas, James; Barry, Kevin
2015-05-01
Lockheed Martin Advanced Technology Laboratories (LM ATL) is collaborating with Professor James Llinas, Ph.D., of the Center for Multisource Information Fusion at the University at Buffalo (State of NY), researching concepts for a mixed-initiative associate system for intelligence analysts to facilitate reduced analysis and decision times while proactively discovering and presenting relevant information based on the analyst's needs, current tasks, and cognitive state. Today's exploitation and analysis systems have largely been designed for a specific sensor, data type, and operational context, leading to difficulty in directly supporting the analyst's evolving tasking and work product development preferences across complex Operational Environments. Our interactions with analysts illuminate the need to impact the information fusion, exploitation, and analysis capabilities in a variety of ways, including understanding data options, algorithm composition, hypothesis validation, and work product development. Composable Analytic Systems, an analyst-driven approach that increases the flexibility and capability to effectively utilize Multi-INT fusion and analytics tailored to the analyst's mission needs, holds promise to address current and future intelligence analysis needs as US forces engage threats in contested and denied environments.
Development of biomechanical models for human factors evaluations
NASA Technical Reports Server (NTRS)
Woolford, Barbara; Pandya, Abhilash; Maida, James
1993-01-01
Computer aided design (CAD) techniques are now well established and have become the norm in many aspects of aerospace engineering. They enable analytical studies, such as finite element analysis, to be performed to measure performance characteristics of the aircraft or spacecraft long before a physical model is built. However, because of the complexity of human performance, CAD systems for human factors are not in widespread use. The purpose of such a program would be to analyze the performance capability of a crew member given a particular environment and task. This requires the design capabilities to describe the environment's geometry and to describe the task's requirements, which may involve motion and strength. This in turn requires extensive data on human physical performance which can be generalized to many different physical configurations. PLAID is developing into such a program. Begun at Johnson Space Center in 1977, it initially modeled only the geometry of the environment. The physical appearance of a human body was then generated, and the tool took on a new meaning as fit, access, and reach could be checked. Specification of fields-of-view soon followed. This allowed PLAID to be used to predict what the Space Shuttle cameras or crew could see from a given point.
Managing Large Scale Project Analysis Teams through a Web Accessible Database
NASA Technical Reports Server (NTRS)
O'Neil, Daniel A.
2008-01-01
Large scale space programs analyze thousands of requirements while mitigating safety, performance, schedule, and cost risks. These efforts involve a variety of roles with interdependent use cases and goals. For example, study managers and facilitators identify ground-rules and assumptions for a collection of studies required for a program or project milestone. Task leaders derive product requirements from the ground rules and assumptions and describe activities to produce needed analytical products. Disciplined specialists produce the specified products and load results into a file management system. Organizational and project managers provide the personnel and funds to conduct the tasks. Each role has responsibilities to establish information linkages and provide status reports to management. Projects conduct design and analysis cycles to refine designs to meet the requirements and implement risk mitigation plans. At the program level, integrated design and analysis cycle studies are conducted to eliminate every 'to-be-determined' and develop plans to mitigate every risk. At the agency level, strategic studies analyze different approaches to exploration architectures and campaigns. This paper describes a web-accessible database developed by NASA to coordinate and manage tasks at three organizational levels. Other topics in this paper cover integration technologies and techniques for process modeling and enterprise architectures.
Hagemeier, Nicholas E; Murawski, Matthew M
2014-02-12
To develop and validate an instrument that assesses subjective ratings of the perceived value of various postgraduate training paths, using expectancy-value theory as a framework; and to explore differences in value beliefs across type of postgraduate training pursued and type of pharmacy training completed prior to postgraduate training. A survey instrument was developed to sample 4 theoretical domains of subjective task value: intrinsic value, attainment value, utility value, and perceived cost. Retrospective self-report methodology was employed to examine respondents' (N=1,148) subjective task value beliefs specific to their highest level of postgraduate training completed. Exploratory and confirmatory factor analytic techniques were used to evaluate and validate value belief constructs. Intrinsic, attainment, utility, cost, and financial value constructs resulted from exploratory factor analysis. Cross-validation resulted in a 26-item instrument that demonstrated good model fit. Differences in value beliefs were noted across type of postgraduate training pursued and pharmacy training characteristics. The Postgraduate Training Value Instrument demonstrated evidence of reliability and construct validity. The survey instrument can be used to assess value beliefs regarding multiple postgraduate training options in pharmacy and potentially inform targeted recruiting of individuals to those paths best matching their own value beliefs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dasgupta, Aritra; Burrows, Susannah M.; Han, Kyungsik
Scientists working in a particular domain often adhere to conventional data analysis and presentation methods and this leads to familiarity with these methods over time. But does high familiarity always lead to better analytical judgment? This question is especially relevant when visualizations are used in scientific tasks, as there can be discrepancies between visualization best practices and domain conventions. However, there is little empirical evidence of the relationships between scientists' subjective impressions about familiar and unfamiliar visualizations and objective measures of their effect on scientific judgment. To address this gap and to study these factors, we focus on the climate science domain, specifically on visualizations used for comparison of model performance. We present a comprehensive user study with 47 climate scientists where we explored the following factors: i) relationships between scientists' familiarity, their perceived levels of comfort, confidence, accuracy, and objective measures of accuracy, and ii) relationships among domain experience, visualization familiarity, and post-study preference.
Approach of Decision Making Based on the Analytic Hierarchy Process for Urban Landscape Management
NASA Astrophysics Data System (ADS)
Srdjevic, Zorica; Lakicevic, Milena; Srdjevic, Bojan
2013-03-01
This paper proposes a two-stage group decision making approach to urban landscape management and planning supported by the analytic hierarchy process. The proposed approach combines an application of the consensus convergence model and the weighted geometric mean method. The application of the proposed approach is shown on a real urban landscape planning problem with a park-forest in Belgrade, Serbia. Decision makers were policy makers, i.e., representatives of several key national and municipal institutions, and experts coming from different scientific fields. As a result, the most suitable management plan from the set of plans is recognized. It includes both native vegetation renewal in degraded areas of park-forest and continued maintenance of its dominant tourism function. Decision makers included in this research consider the approach to be transparent and useful for addressing landscape management tasks. The central idea of this paper can be understood in a broader sense and easily applied to other decision making problems in various scientific fields.
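As a rough illustration of the aggregation machinery named above, the sketch below combines two judges' pairwise-comparison matrices with a weighted geometric mean and derives priorities with the geometric-mean method; the matrices and weights are hypothetical, and the consensus convergence model that produces the weights is not shown.

```python
# Sketch of the two ingredients named in the abstract, under simplifying
# assumptions: (1) aggregating individual pairwise-comparison matrices with
# a weighted geometric mean, (2) deriving priorities with the geometric
# (row) mean method. Matrices and weights are hypothetical.
import numpy as np

A1 = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])   # decision maker 1
A2 = np.array([[1, 2, 4], [1/2, 1, 3], [1/4, 1/3, 1]])   # decision maker 2
w = np.array([0.6, 0.4])  # hypothetical consensus weights for the two DMs

# Element-wise weighted geometric mean of the judgment matrices
A_group = np.exp(w[0] * np.log(A1) + w[1] * np.log(A2))

# Geometric-mean method: priority of each alternative ~ geometric row mean
g = np.prod(A_group, axis=1) ** (1 / A_group.shape[1])
priorities = g / g.sum()
print(priorities)  # normalized weights of the three alternatives
```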
How input fluctuations reshape the dynamics of a biological switching system
NASA Astrophysics Data System (ADS)
Hu, Bo; Kessler, David A.; Rappel, Wouter-Jan; Levine, Herbert
2012-12-01
An important task in quantitative biology is to understand the role of stochasticity in biochemical regulation. Here, as an extension of our recent work [Phys. Rev. Lett. 107, 148101 (2011)], we study how input fluctuations affect the stochastic dynamics of a simple biological switch. In our model, the on transition rate of the switch is directly regulated by a noisy input signal, which is described as a non-negative mean-reverting diffusion process. This continuous process can be a good approximation of the discrete birth-death process and is much more analytically tractable. Within this setup, we apply the Feynman-Kac theorem to investigate the statistical features of the output switching dynamics. Consistent with our previous findings, the input noise is found to effectively suppress the input-dependent transitions. We show analytically that this effect becomes significant when the input signal fluctuates greatly in amplitude and reverts slowly to its mean.
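For readers who want the model class in symbols, a standard non-negative mean-reverting diffusion of the kind the abstract describes is sketched below; the specific form (a square-root, CIR-type process) and the linear coupling to the switch are illustrative assumptions, not necessarily the paper's exact equations.

```latex
% Non-negative mean-reverting input process (square-root diffusion):
dI_t = \gamma\,(\bar{I} - I_t)\,dt + \sigma\,\sqrt{I_t}\,dW_t
% Input-regulated switch: the on-rate tracks the fluctuating input,
% while the off-rate stays constant:
k_{\mathrm{on}}(t) \propto I_t, \qquad k_{\mathrm{off}} = \mathrm{const}.
```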
Space Storable Propellant Performance Gas/Liquid Like-Doublet Injector Characterization
NASA Technical Reports Server (NTRS)
Falk, A. Y.
1972-01-01
A 30-month applied research program was conducted, encompassing an analytical, design, and experimental effort to relate injector design parameters to simultaneous attainment of high performance and component (injector/thrust chamber) compatibility for gas/liquid space-storable propellants. The gas/liquid propellant combination selected for study was FLOX (82.6% F2)/ambient temperature gaseous methane. The injector pattern characterized was the like-(self)-impinging doublet. Program effort was apportioned into four basic technical tasks: injector and thrust chamber design, injector and thrust chamber fabrication, performance evaluation testing, and data evaluation and reporting. Analytical parametric combustion analyses and cold flow distribution and atomization experiments were conducted with injector segment models to support design of injector/thrust chamber combinations for hot fire evaluation. Hot fire tests were conducted to: (1) optimize performance of the injector core elements, and (2) provide design criteria for the outer zone elements so that injector/thrust chamber compatibility could be achieved with only minimal performance losses.
Scalability Analysis and Use of Compression at the Goddard DAAC and End-to-End MODIS Transfers
NASA Technical Reports Server (NTRS)
Menasce, Daniel A.
1998-01-01
The goal of this task is to analyze the performance of single and multiple FTP transfers between SCFs and the Goddard DAAC. We developed an analytic model to compute the performance of FTP sessions as a function of various key parameters, implemented the model as a program called FTP Analyzer, and carried out validations with real data obtained by running single and multiple FTP transfers between GSFC and the Miami SCF. The input parameters to the model include the mix of FTP sessions (scenario) and, for each FTP session, the file size. The network parameters include the round trip time, packet loss rate, the limiting bandwidth of the network connecting the SCF to a DAAC, TCP's basic timeout, TCP's Maximum Segment Size, and TCP's Maximum Receiver's Window Size. The modeling approach consisted of modeling TCP's overall throughput, computing TCP's delay per FTP transfer, and then solving a queuing network model that includes the FTP clients and servers.
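The sketch below shows the flavor of such a throughput model using the well-known Mathis et al. approximation for loss-limited TCP throughput, capped by the receiver window and the bottleneck link; the formula choice and all parameter values are assumptions for illustration, not the actual FTP Analyzer internals.

```python
# Back-of-envelope sketch (not the paper's FTP Analyzer): steady-state TCP
# throughput from the Mathis et al. approximation, capped by the receiver
# window and the bottleneck link. All parameter values are hypothetical.
mss = 1460 * 8          # maximum segment size, bits
rtt = 0.08              # round trip time, seconds
loss = 0.001            # packet loss rate
rwnd = 64 * 1024 * 8    # max receiver window, bits
link = 10e6             # limiting link bandwidth, bits/s

mathis = (mss / rtt) * (1.22 / loss ** 0.5)   # loss-limited throughput
window_limited = rwnd / rtt                   # window-limited throughput
throughput = min(mathis, window_limited, link)

file_size = 100e6 * 8   # 100 MB file, bits
print(f"throughput ~ {throughput/1e6:.2f} Mb/s, "
      f"transfer ~ {file_size/throughput:.1f} s")
```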
Build-up Approach to Updating the Mock Quiet Spike(TradeMark) Beam Model
NASA Technical Reports Server (NTRS)
Herrera, Claudia Y.; Pak, Chan-gi
2007-01-01
A crucial part of aircraft design is ensuring that the required margin for flutter is satisfied. A trustworthy flutter analysis, which begins by possessing an accurate dynamics model, is necessary for this task. Traditionally, a model was updated manually by fine tuning specific stiffness parameters until the analytical results matched test data. This is a time-consuming iterative process. NASA Dryden Flight Research Center has developed a mode matching code to execute this process in a more efficient manner. Recently, this code was implemented in the F-15B/Quiet Spike(TradeMark) (Gulfstream Aerospace Corporation, Savannah, Georgia) model update. A build-up approach requiring several ground vibration test configurations and a series of model updates was implemented in order to determine the connection stiffness between aircraft and test article. The mode matching code successfully updated various models for the F-15B/Quiet Spike(TradeMark) project to within 1 percent error in frequency, and the modal assurance criterion values ranged from 88.51 to 99.42 percent.
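For reference, the modal assurance criterion mentioned above compares an analytical mode shape with its ground-vibration-test counterpart; the sketch below shows the standard formula with hypothetical mode-shape vectors.

```python
# Sketch of the modal assurance criterion (MAC) used to compare analytical
# and ground-vibration-test mode shapes; the mode-shape vectors here are
# hypothetical.
import numpy as np

def mac(phi_a, phi_e):
    """MAC between an analytical and an experimental (real) mode shape."""
    num = np.abs(phi_a @ phi_e) ** 2
    return num / ((phi_a @ phi_a) * (phi_e @ phi_e))

phi_analytical = np.array([0.12, 0.48, 0.83, 1.00])
phi_test = np.array([0.10, 0.45, 0.86, 1.00])
print(f"MAC = {100 * mac(phi_analytical, phi_test):.2f} %")  # ~99+ %
```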
a New Model for Fuzzy Personalized Route Planning Using Fuzzy Linguistic Preference Relation
NASA Astrophysics Data System (ADS)
Nadi, S.; Houshyaripour, A. H.
2017-09-01
This paper proposes a new model for personalized route planning under uncertain conditions. Personalized routing involves different sources of uncertainty. These uncertainties can arise from users' ambiguity about their preferences, imprecise criteria values, and the modelling process. The proposed model uses Fuzzy Linguistic Preference Relation Analytical Hierarchical Process (FLPRAHP) to analyse users' preferences under uncertainty. Routing is a multi-criteria task, especially in transportation networks, where users wish to optimize their routes based on different criteria. However, due to the lack of knowledge about the preferences of different users and the uncertainties in the criteria values, we propose a new personalized fuzzy routing method based on fuzzy ranking using the center of gravity. The model employs the FLPRAHP method to aggregate uncertain criteria values with respect to uncertain user preferences while improving consistency with the fewest possible comparisons. An illustrative example presents the effectiveness and capability of the proposed model to compute the best personalized route under fuzziness and uncertainty.
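To illustrate the center-of-gravity step named above, the sketch below defuzzifies a single assumed triangular membership function; the fuzzy sets used in the actual model are not given in the abstract.

```python
# Minimal sketch of center-of-gravity (centroid) defuzzification; the
# triangular membership function is an assumed example, not the paper's
# actual fuzzy sets.
import numpy as np

x = np.linspace(0, 10, 1001)                   # universe of discourse
mu = np.clip(1 - np.abs(x - 6) / 3, 0, None)   # triangular membership at 6

centroid = np.trapz(mu * x, x) / np.trapz(mu, x)
print(f"crisp route score = {centroid:.2f}")   # 6.00 by symmetry
```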
Cognitive strategies in the mental rotation task revealed by EEG spectral power.
Gardony, Aaron L; Eddy, Marianna D; Brunyé, Tad T; Taylor, Holly A
2017-11-01
The classic mental rotation task (MRT; Shepard & Metzler, 1971) is commonly thought to measure mental rotation, a cognitive process involving covert simulation of motor rotation. Yet much research suggests that the MRT recruits both motor simulation and other analytic cognitive strategies that depend on visuospatial representation and visual working memory (WM). In the present study, we investigated cognitive strategies in the MRT using time-frequency analysis of EEG and independent component analysis. We scrutinized sensorimotor mu (µ) power reduction, associated with motor simulation, parietal alpha (pα) power reduction, associated with visuospatial representation, and frontal midline theta (fmθ) power enhancement, associated with WM maintenance and manipulation. µ power increased concomitantly with increasing task difficulty, suggesting reduced use of motor simulation, while pα decreased and fmθ power increased, suggesting heightened use of visuospatial representation processing and WM, respectively. These findings suggest that MRT performance involves flexibly trading off between cognitive strategies, namely a motor simulation-based mental rotation strategy and WM-intensive analytic strategies based on task difficulty. Flexible cognitive strategy use may be a domain-general cognitive principle that underlies aptitude and spatial intelligence in a variety of cognitive domains. We close with discussion of the present study's implications as well as future directions. Published by Elsevier Inc.
NASA Technical Reports Server (NTRS)
Stocks, Dana R.
1986-01-01
The Dynamic Gas Temperature Measurement System compensation software accepts digitized data from two different diameter thermocouples and computes a compensated frequency response spectrum for one of the thermocouples. Detailed discussions of the physical system, analytical model, and computer software are presented in this volume and in Volume 1 of this report under Task 3. Computer program software restrictions and test cases are also presented. Compensated and uncompensated data may be presented in either the time or frequency domain. Time domain data are presented as instantaneous temperature vs time. Frequency domain data may be presented in several forms such as power spectral density vs frequency.
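As a sketch of what frequency-domain compensation of a first-order sensor looks like, the code below boosts a measured spectrum by the inverse of an assumed first-order thermocouple response; the time constant, test signal, and sampling rate are placeholders, and the in-situ two-thermocouple estimation of the time constant is not shown.

```python
# Hedged sketch of first-order thermocouple compensation in the frequency
# domain: a thermocouple behaves roughly as a first-order lag with time
# constant tau, so the measured spectrum can be boosted by the inverse of
# H(f) = 1 / (1 + j*2*pi*f*tau). tau here is a placeholder, not an in-situ
# estimate from the two-thermocouple data.
import numpy as np

fs = 1000.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
measured = 0.2 * np.sin(2 * np.pi * 50 * t)   # attenuated 50 Hz fluctuation

tau = 0.01                                    # time constant, s (assumed)
spectrum = np.fft.rfft(measured)
f = np.fft.rfftfreq(t.size, 1 / fs)

compensated = np.fft.irfft(spectrum * (1 + 1j * 2 * np.pi * f * tau), n=t.size)
print(f"peak gain restored: {compensated.max() / measured.max():.2f}x")
```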
Some big ideas for some big problems.
Winter, D D
2000-05-01
Although most psychologists do not see sustainability as a psychological problem, our environmental predicament is caused largely by human behaviors, accompanied by relevant thoughts, feelings, attitudes, and values. The huge task of building sustainable cultures will require a great many psychologists from a variety of backgrounds. In an effort to stimulate the imaginations of a wide spectrum of psychologists to take on the crucial problem of sustainability, this article discusses 4 psychological approaches (neo-analytic, behavioral, social, and cognitive) and outlines some of their insights into environmentally relevant behavior. These models are useful for illuminating ways to increase environmentally responsible behaviors of clients, communities, and professional associations.
An Omnidirectional Vision Sensor Based on a Spherical Mirror Catadioptric System.
Barone, Sandro; Carulli, Marina; Neri, Paolo; Paoli, Alessandro; Razionale, Armando Viviano
2018-01-31
The combination of mirrors and lenses, which defines a catadioptric sensor, is widely used in the computer vision field. The definition of a catadioptric sensor is based on three main features: hardware setup, projection modelling and calibration process. In this paper, a complete description of these aspects is given for an omnidirectional sensor based on a spherical mirror. The projection model of a catadioptric system can be described by the forward projection task (FP, from 3D scene point to 2D pixel coordinates) and backward projection task (BP, from 2D coordinates to 3D direction of the incident light). The forward projection of non-central catadioptric vision systems, typically obtained by using curved mirrors, is usually modelled by using a central approximation and/or by adopting iterative approaches. In this paper, an analytical closed-form solution to compute both forward and backward projection for a non-central catadioptric system with a spherical mirror is presented. In particular, the forward projection is reduced to a 4th order polynomial by determining the reflection point on the mirror surface through the intersection between a sphere and an ellipse. A matrix format of the implemented models, suitable for fast point clouds handling, is also described. A robust calibration procedure is also proposed and applied to calibrate a catadioptric sensor by determining the mirror radius and center with respect to the camera.
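Illustrating the last reduction step: once the reflection-point condition is expressed as a 4th-order polynomial, the forward projection amounts to finding its real roots and keeping the physically valid one. The coefficients below are placeholders, not the paper's derivation.

```python
# Illustrative only: solving an assumed quartic for the reflection point.
import numpy as np

coeffs = [1.0, -2.3, 0.7, 1.1, -0.4]      # hypothetical quartic coefficients
roots = np.roots(coeffs)
real_roots = roots[np.abs(roots.imag) < 1e-9].real

# In the actual sensor model one would keep the root corresponding to a
# reflection point on the visible part of the mirror surface.
print(real_roots)
```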
Aston, Elizabeth R.; Metrik, Jane; Amlung, Michael; Kahler, Christopher W.; MacKillop, James
2016-01-01
Background: Distinct behavioral economic domains, including high perceived drug value (demand) and delay discounting (DD), have been implicated in the initiation of drug use and the progression to dependence. However, it is unclear whether frequent marijuana users conform to a "reinforcer pathology" addiction model wherein marijuana demand and DD jointly increase risk for problematic marijuana use and cannabis dependence (CD). Methods: Participants (n=88, 34% female, 14% cannabis dependent) completed a marijuana purchase task at baseline. A delay discounting task was completed following placebo marijuana cigarette (0% THC) administration during a separate experimental session. Results: Marijuana demand and DD were quantified using area under the curve (AUC). In multiple regression models, demand uniquely predicted frequency of marijuana use while DD did not. In contrast, DD uniquely predicted CD symptom count while demand did not. There were no significant interactions between demand and DD in either model. Conclusions: These findings suggest that frequent marijuana users exhibit key constituents of the reinforcer pathology model: high marijuana demand and steep discounting of delayed rewards. However, demand and DD appear to be independent rather than synergistic risk factors for elevated marijuana use and risk for progression to CD. Findings also provide support for using AUC as a singular marijuana demand metric, particularly when also examining other behavioral economic constructs that apply similar statistical approaches, such as DD, to support analytic methodological convergence. PMID:27810657
McMorris, Terry; Sproule, John; Turner, Anthony; Hale, Beverley J
2011-03-01
The purpose of this study was to compare, using meta-analytic techniques, the effect of acute, intermediate intensity exercise on the speed and accuracy of performance of working memory tasks. It was hypothesized that acute, intermediate intensity exercise would have a significant beneficial effect on response time and that effect sizes for response time and accuracy data would differ significantly. Random-effects meta-analysis showed a significant, beneficial effect size for response time, g=-1.41 (p<0.001) but a significant detrimental effect size, g=0.40 (p<0.01), for accuracy. There was a significant difference between effect sizes (Z(diff)=3.85, p<0.001). It was concluded that acute, intermediate intensity exercise has a strong beneficial effect on speed of response in working memory tasks but a low to moderate, detrimental one on accuracy. There was no support for a speed-accuracy trade-off. It was argued that exercise-induced increases in brain concentrations of catecholamines result in faster processing but increases in neural noise may negatively affect accuracy. 2010 Elsevier Inc. All rights reserved.
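The comparison of the two effect sizes reported above can be sketched as a simple z-test on independent estimates; the standard errors below are assumptions chosen for illustration (the abstract reports only the effect sizes and the resulting Z), so the printed value only approximates the reported Z(diff) = 3.85.

```python
# Sketch of a z-test for the difference between two independent effect
# sizes; standard errors are assumed since the abstract omits them.
import math
from scipy.stats import norm

g_rt, se_rt = -1.41, 0.44     # response time effect (SE assumed)
g_acc, se_acc = 0.40, 0.17    # accuracy effect (SE assumed)

z_diff = (g_acc - g_rt) / math.sqrt(se_rt**2 + se_acc**2)
p = 2 * (1 - norm.cdf(abs(z_diff)))
print(f"Z = {z_diff:.2f}, p = {p:.4f}")
```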
ERIC Educational Resources Information Center
Breyer, F. Jay; Rupp, André A.; Bridgeman, Brent
2017-01-01
In this research report, we present an empirical argument for the use of a contributory scoring approach for the 2-essay writing assessment of the analytical writing section of the "GRE"® test in which human and machine scores are combined for score creation at the task and section levels. The approach was designed to replace a currently…
Task-based data-acquisition optimization for sparse image reconstruction systems
NASA Astrophysics Data System (ADS)
Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.
2017-03-01
Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.
Unified Modeling Language (UML) for hospital-based cancer registration processes.
Shiki, Naomi; Ohno, Yuko; Fujii, Ayumi; Murata, Taizo; Matsumura, Yasushi
2008-01-01
Hospital-based cancer registration involves complex processing steps that span multiple departments. In addition, management techniques and registration procedures differ across medical facilities. Establishing processes for a hospital-based cancer registry requires clarifying the specific functions and labor needed. In recent years, the business modeling technique, in which management evaluation is done by clearly spelling out processes and functions, has been applied to business process analysis. However, there are few analytical reports describing the applications of these concepts to medical-related work. In this study, we initially sought to model hospital-based cancer registration processes using the Unified Modeling Language (UML) to clarify functions. The object of this study was the cancer registry of Osaka University Hospital. We organized the hospital-based cancer registration processes based on interview and observational surveys, and produced an As-Is model using activity, use-case, and class diagrams. Each drafted UML model was fed back to practitioners to check its validity and then improved. We were able to define the workflow for each department using activity diagrams. In addition, by using use-case diagrams we were able to classify each department within the hospital as a system, and thereby specify the core processes and staff that were responsible for each department. The class diagrams were effective in systematically organizing the information to be used for hospital-based cancer registries. Using UML modeling, hospital-based cancer registration processes were broadly classified into three separate processes, namely, registration tasks, quality control, and filing data. An additional 14 functions were also extracted. Many tasks take place within the hospital-based cancer registry office, but the process of providing information spans multiple departments. Moreover, additional tasks were required in comparison to using a standardized system because the hospital-based cancer registration system was constructed with the pre-existing computer system in Osaka University Hospital. Difficulty in utilizing information relevant to cancer registration was shown to increase the task workload. By using UML, we were able to clarify functions and extract the typical processes for a hospital-based cancer registry. Modeling can provide a basis of process analysis for establishment of efficient hospital-based cancer registration processes in each institute.
Homework setting in cognitive behavioral therapy: A study of discursive strategies.
Beckwith, Andrew; Crichton, Jonathan
2014-01-01
In recent years cognitive behavioral therapy (CBT), a form of psychotherapy, has risen to prominence due to a large number of studies attesting to its efficacy. A crucial part of the CBT model is the therapeutic strategy of homework, in which the client undertakes therapeutic tasks between sessions. The focus of this study is on how homework is implemented in sessions of CBT. This is undertaken through theme-orientated discourse analysis of video-recorded CBT sessions with one therapist and a client. Through tracking the focal theme of homework, the analysis focuses on homework as a face-threatening act (Brown and Levinson 1987) and on how discursive strategies are employed to manage this issue. Other analytic themes include the use of frames (Goffman 1974) and constructed dialogue (Tannen 2007). It is the therapist's expertise in putting the therapeutic task of homework into practice that is the subject of this study.
Experimental Equipment Design and Fabrication Study for Delta-G Experiment
NASA Technical Reports Server (NTRS)
1997-01-01
The Research Machine Shop at UAH did not develop any new technology in the performance of the following tasks. All tasks were performed as specified. UAH RMS shall design and fabricate a "poor" model of a silicon-carbide high-temperature crucible, 8 inches in diameter and 4 inches high, for pouring liquid ceramic materials at 1200 C into molds from heating ovens. The crucible shall also be designed with a manipulation fixture to facilitate holding and pouring of the heated liquid material. UAH RMS shall investigate the availability of 400 Hz, high-current (65 volts @ 100 amperes) power systems for use in high-speed rotating disk experiments. UAH RMS shall investigate, develop a methodology for, and experiment on the application of filament-wound carbon fibers to the periphery of ceramic superconductors to withstand high levels of rotational g-forces. UAH RMS shall provide analytical data to verify the resulting improved disc with carbon composite fibers.
Propulsion requirements for communications satellites.
NASA Technical Reports Server (NTRS)
Isley, W. C.; Duck, K. I.
1972-01-01
The concept of characteristic thrust is introduced herein as a means of classifying propulsion system tasks related particularly to geosynchronous communications spacecraft. Approximate analytical models are developed to permit estimation of characteristic thrust for injection error corrections, orbit angle relocation, north-south station keeping, east-west station keeping, spin axis precession control, attitude rate damping, and orbit raising applications. Performance assessment factors are then outlined in terms of characteristic power, characteristic weight, and characteristic volume envelope, which are related to the characteristic thrust. Finally, selected performance curves are shown for power as a function of spacecraft weight, including the influence of duty cycle on north-south station keeping, a 90-degree orbit angle relocation in 14 days, and a comparison of orbit raising tasks from low and intermediate orbits to a final geosynchronous station. Power requirements range from less than 75 watts for north-south station keeping on small payloads to greater than 15 kW for a 180-day orbit raising mission including a 28.5-degree plane change.
Proposed qualification requirements for selected railroad jobs
DOT National Transportation Integrated Search
1975-01-01
This report proposes minimum safety-related knowledge, performance and training requirements for the jobs of railroad engineer, conductor, brakeman and train dispatchers. Analyses performed were primarily based upon job and task analytic documentatio...
Design of impact-resistant boron/aluminum large fan blade
NASA Technical Reports Server (NTRS)
Salemme, C. T.; Yokel, S. A.
1978-01-01
The technical program comprised two tasks. Task 1 encompassed the preliminary boron/aluminum fan blade design effort. Two preliminary designs were developed. An initial design consisted of 32 blades per stage and was based on material properties extracted from manufactured blades. A final design of 36 blades per stage was based on rule-of-mixture material properties. In Task 2, the selected preliminary blade design was refined using more sophisticated analytical tools. Detailed finite element stress analysis and aero performance analysis were carried out to determine blade natural frequencies and directional stresses.
A framework for feature extraction from hospital medical data with applications in risk prediction.
Tran, Truyen; Luo, Wei; Phung, Dinh; Gupta, Sunil; Rana, Santu; Kennedy, Richard Lee; Larkins, Ann; Venkatesh, Svetha
2014-12-30
Feature engineering is a time-consuming component of predictive modeling. We propose a versatile platform to automatically extract features for risk prediction, based on a pre-defined and extensible entity schema. The extraction is independent of disease type or risk prediction task. We contrast auto-extracted features with baselines generated from the Elixhauser comorbidities. Hospital medical records were transformed into event sequences, to which filters were applied to extract feature sets capturing diversity in temporal scales and data types. The features were evaluated on a readmission prediction task, comparing with baseline feature sets generated from the Elixhauser comorbidities. The prediction model was logistic regression with elastic net regularization. Prediction horizons of 1, 2, 3, 6, and 12 months were considered for four diverse diseases: diabetes, COPD, mental disorders and pneumonia, with derivation and validation cohorts defined on non-overlapping data-collection periods. For unplanned readmissions, the auto-extracted feature set, using socio-demographic information and medical records, outperformed baselines derived from the socio-demographic information and Elixhauser comorbidities over all 20 settings (5 prediction horizons over 4 diseases). In particular, for 30-day prediction, the AUCs were: COPD-baseline: 0.60 (95% CI: 0.57, 0.63), auto-extracted: 0.67 (0.64, 0.70); diabetes-baseline: 0.60 (0.58, 0.63), auto-extracted: 0.67 (0.64, 0.69); mental disorders-baseline: 0.57 (0.54, 0.60), auto-extracted: 0.69 (0.64, 0.70); pneumonia-baseline: 0.61 (0.59, 0.63), auto-extracted: 0.70 (0.67, 0.72). The advantages of auto-extracting standard features from complex medical records in a disease- and task-agnostic manner were demonstrated. Auto-extracted features have good predictive power over multiple time horizons. Such feature sets have the potential to form the foundation of complex automated analytic tasks.
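Where the pipeline above may read abstractly, the following minimal sketch illustrates the kind of elastic-net-regularized logistic regression evaluation the authors describe; the data shapes, horizon, and AUC workflow are illustrative assumptions, not the authors' code.

```python
# Minimal sketch: readmission prediction with elastic-net logistic regression,
# evaluated by AUC on a held-out validation cohort (illustrative data only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 200))   # auto-extracted features, derivation cohort
y_train = rng.integers(0, 2, size=1000)  # 30-day unplanned readmission labels
X_valid = rng.normal(size=(400, 200))    # validation cohort (later collection period)
y_valid = rng.integers(0, 2, size=400)

# Elastic net = mixed L1/L2 penalty; 'saga' is the sklearn solver that supports it.
model = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=1.0, max_iter=5000)
model.fit(X_train, y_train)
auc = roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1])
print(f"validation AUC: {auc:.2f}")
```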
Endorsement of formal leaders: an integrative model.
Michener, H A; Lawler, E J
1975-02-01
This experiment develops an integrative, path-analytic model for the endorsement accorded formal leaders. The model contains four independent variables reflecting aspects of group structure (i.e., group success-failure, the payoff distribution, the degree of support by other members for the leader, and the vulnerability of the leader). Also included are two intervening variables reflecting perceptual processes (attributed competence and attributed fairness) and one dependent variable (endorsement). The results indicate that endorsement is greater when the group's success is high, when the payoff distribution is flat rather than hierarchical, and when the leader is not vulnerable to removal from office. Other members' support had no significant impact on endorsement. Analyses further demonstrate that the effect of success-failure on endorsement is mediated by attributed competence, while the effect of the payoff distribution is mediated by attributed fairness. These results suggest that moral and task evaluations are distinct bases of endorsement.
The Ophidia framework: toward cloud-based data analytics for climate change
NASA Astrophysics Data System (ADS)
Fiore, Sandro; D'Anca, Alessandro; Elia, Donatello; Mancini, Marco; Mariello, Andrea; Mirto, Maria; Palazzo, Cosimo; Aloisio, Giovanni
2015-04-01
The Ophidia project is a research effort on big data analytics addressing scientific data analysis challenges in the climate change domain. It provides parallel (server-side) data analysis, an internal storage model, and a hierarchical data organization to manage large amounts of multidimensional scientific data. The Ophidia analytics platform provides several MPI-based parallel operators to manipulate large datasets (data cubes) and array-based primitives to perform data analysis on large arrays of scientific data. The most relevant data analytics use cases implemented in national and international projects target fire danger prevention (OFIDIA), interactions between climate change and biodiversity (EUBrazilCC), climate indicators and remote data analysis (CLIP-C), sea situational awareness (TESSA), and large-scale data analytics on CMIP5 data in NetCDF format, compliant with the Climate and Forecast (CF) convention (ExArch). Two use cases regarding the EU FP7 EUBrazil Cloud Connect and the INTERREG OFIDIA projects will be presented during the talk. In the former case (EUBrazilCC), the Ophidia framework is being extended to integrate scalable VM-based solutions for the management of large volumes of scientific data (both climate and satellite data) in a cloud-based environment to study how climate change affects biodiversity. In the latter (OFIDIA), the data analytics framework is being exploited to provide operational support for processing chains devoted to fire danger prevention. To tackle the project challenges, data analytics workflows consisting of about 130 operators perform, among other things, parallel data analysis, metadata management, virtual file system tasks, map generation, rolling of datasets, and import/export of datasets in NetCDF format. Finally, the entire Ophidia software stack has been deployed at CMCC on 24 nodes (16 cores/node) of the Athena HPC cluster. Moreover, a cloud-based release tested with OpenNebula is also available and running in the private cloud infrastructure of the CMCC Supercomputing Centre.
Flexibility in data interpretation: effects of representational format.
Braithwaite, David W; Goldstone, Robert L
2013-01-01
Graphs and tables differentially support performance on specific tasks. For tasks requiring reading off single data points, tables are as good as or better than graphs, while for tasks involving relationships among data points, graphs often yield better performance. However, the degree to which graphs and tables support flexibility across a range of tasks is not well-understood. In two experiments, participants detected main and interaction effects in line graphs and tables of bivariate data. Graphs led to more efficient performance, but also lower flexibility, as indicated by a larger discrepancy in performance across tasks. In particular, detection of main effects of variables represented in the graph legend was facilitated relative to detection of main effects of variables represented in the x-axis. Graphs may be a preferable representational format when the desired task or analytical perspective is known in advance, but may also induce greater interpretive bias than tables, necessitating greater care in their use and design.
Saleem, Muniba; Barlett, Christopher P; Anderson, Craig A; Hawkins, Ian
2017-04-01
The Tangram Help/Hurt Task is a laboratory-based measure designed to simultaneously assess helpful and hurtful behavior. Across five studies we provide evidence that further establishes the convergent and discriminant validity of the Tangram Help/Hurt Task. Cross-sectional and meta-analytic evidence finds consistently significant associations between helpful and hurtful scores on the Tangram Task and prosocial and aggressive personality traits. Experimental evidence reveals that situational primes known to induce aggressive and prosocial behavior significantly influence helpful and hurtful scores on the Tangram Help/Hurt Task. Additionally, motivation items in all studies indicate that tangram choices are indeed associated with intent of helping and hurting. We discuss the advantages and limitations of the Tangram Help/Hurt Task relative to established measures of helpful and hurtful behavior. Aggr. Behav. 43:133-146, 2017. © 2016 Wiley Periodicals, Inc.
On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models
NASA Astrophysics Data System (ADS)
Xu, S.; Wang, B.; Liu, J.
2015-02-01
In this article we propose two conformal-mapping-based grid generation algorithms for global ocean general circulation models (OGCMs). In contrast to conventional dipolar or tripolar grids based on analytical forms, the new algorithms are based on Schwarz-Christoffel (SC) conformal mapping with prescribed boundary information. While dealing with the basic grid design problem of pole relocation, these new algorithms also address more advanced issues, such as smoothed scaling factors and the new requirements on OGCM grids arising from the recent trend toward high-resolution and multi-scale modeling. The proposed grid generation algorithms could potentially achieve the alignment of grid lines to coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to the grid generation task for regional ocean modeling where complex land-ocean distribution is present.
Probabilistic Reinforcement Learning in Adults with Autism Spectrum Disorders
Solomon, Marjorie; Smith, Anne C.; Frank, Michael J.; Ly, Stanford; Carter, Cameron S.
2017-01-01
Background: Autism spectrum disorders (ASDs) can be conceptualized as disorders of learning; however, there have been few experimental studies taking this perspective. Methods: We examined the probabilistic reinforcement learning performance of 28 adults with ASDs and 30 typically developing adults on a task requiring learning relationships between three stimulus pairs consisting of Japanese characters with feedback that was valid with different probabilities (80%, 70%, and 60%). Both univariate and Bayesian state-space data analytic methods were employed. Hypotheses were based on the extant literature as well as on neurobiological and computational models of reinforcement learning. Results: Both groups learned the task after training. However, there were group differences in early learning in the first task block, where individuals with ASDs acquired the most frequently accurately reinforced stimulus pair (80%) comparably to typically developing individuals; exhibited poorer acquisition of the less frequently reinforced 70% pair, as assessed by state-space learning curves; and outperformed typically developing individuals on the near-chance (60%) pair. Individuals with ASDs also demonstrated deficits in using positive feedback to exploit rewarded choices. Conclusions: Results support the contention that individuals with ASDs are slower learners. Based on neurobiology and on the results of computational modeling, one interpretation of this pattern of findings is that impairments are related to deficits in flexible updating of reinforcement history as mediated by the orbitofrontal cortex, with spared functioning of the basal ganglia. This hypothesis about the pathophysiology of learning in ASDs can be tested using functional magnetic resonance imaging. PMID:21425243
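As a rough illustration of the probabilistic-selection paradigm described above, the sketch below simulates a learner on three stimulus pairs with 80%, 70%, and 60% valid feedback using a simple delta-rule value update and softmax choice; the learning rate, temperature, and trial count are illustrative assumptions, not parameters from the study.

```python
# Sketch: delta-rule learning on three probabilistic stimulus pairs (80/70/60%).
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, n_trials = 0.1, 3.0, 120           # assumed learning rate, softmax temperature
pairs = {"AB": 0.8, "CD": 0.7, "EF": 0.6}        # P(first stimulus is the rewarded one)
values = {p: np.zeros(2) for p in pairs}         # learned value of each stimulus in a pair

for _ in range(n_trials):
    for pair, p_valid in pairs.items():
        v = values[pair]
        # Softmax over two options reduces to a logistic of the value difference.
        p_choose_0 = 1.0 / (1.0 + np.exp(-beta * (v[0] - v[1])))
        choice = 0 if rng.random() < p_choose_0 else 1
        # Feedback is "valid" (first option rewarded) with probability p_valid.
        correct = 0 if rng.random() < p_valid else 1
        reward = 1.0 if choice == correct else 0.0
        v[choice] += alpha * (reward - v[choice])  # delta-rule update

for pair, v in values.items():
    print(pair, np.round(v, 2))
```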
Ceriotti, Ferruccio; Fernandez-Calle, Pilar; Klee, George G; Nordin, Gunnar; Sandberg, Sverre; Streichert, Thomas; Vives-Corrons, Joan-Lluis; Panteghini, Mauro
2017-02-01
This paper, prepared by the EFLM Task and Finish Group on Allocation of laboratory tests to different models for performance specifications (TFG-DM), deals with criteria for allocating measurands to the different models for analytical performance specifications (APS) recognized in the 1st EFLM Strategic Conference Consensus Statement. Model 1, based on the effect of APS on clinical outcome, is the model of choice for measurands that have a central role in the decision-making of a specific disease or clinical situation and where cut-off/decision limits are established for diagnosing, screening or monitoring. Total cholesterol, glucose, HbA1c, serum albumin and cardiac troponins represent practical examples. Model 2 is based on components of biological variation and should be applied to measurands that do not have a central role in a specific disease or clinical situation, but where the concentration of the measurand is in a steady state. This is best achieved for measurands under strict homeostatic control in order to preserve their concentrations in the body fluid of interest, but it can also be applied to other measurands that are in a steady state in biological fluids. In this case, it is expected that the "noise" produced by the measurement procedure will not significantly alter the signal provided by the concentration of the measurand. This model especially applies to electrolytes and minerals in blood plasma (sodium, potassium, chloride, bicarbonate, calcium, magnesium, inorganic phosphate) and to creatinine, cystatin C, uric acid and total protein in plasma. Model 3, based on the state of the art of the measurement, should be used for all measurands that cannot be included in models 1 or 2.
Multibody modeling and verification
NASA Technical Reports Server (NTRS)
Wiens, Gloria J.
1989-01-01
A summary of a ten-week project on flexible multibody modeling, verification and control is presented. Emphasis was on the need for experimental verification. A literature survey was conducted to gather information on existing experimental work related to flexible multibody systems. The first portion of the assigned task encompassed the modeling aspects of flexible multibodies that can undergo large angular displacements. Research in the area of modeling was also surveyed, with special attention given to the component mode approach. Resulting from this is a research plan on various modeling aspects to be investigated over the next year. The relationship between large angular displacements, boundary conditions, mode selection, and system modes is of particular interest. The other portion of the assigned task was the generation of a test plan for experimental verification of analytical and/or computer analysis techniques used for flexible multibody systems. Based on current and expected frequency ranges of flexible multibody systems to be used in space applications, an initial test article was selected and designed. A preliminary TREETOPS computer analysis was run to ensure frequency content in the low frequency range, 0.1 to 50 Hz. The initial specifications of experimental measurement and instrumentation components were also generated. Resulting from this effort is the initial multi-phase plan for a Ground Test Facility of Flexible Multibody Systems for Modeling Verification and Control. The plan focuses on the Multibody Modeling and Verification (MMV) Laboratory. General requirements of the Unobtrusive Sensor and Effector (USE) and the Robot Enhancement (RE) laboratories were considered during the laboratory development.
NASA Astrophysics Data System (ADS)
Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2017-06-01
Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM, including both the regularization strength that controls overall smoothness and the directional weights that permit control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF), each carrying dependencies on TCM and regularization. For the single-location optimization, the local detectability index (d′) of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP using a minimum variance criterion yielded worse task-based performance when applied to MBIR than an unmodulated strategy. Moreover, task-driven TCM designs for MBIR were found to have the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views. Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS that results from the data weighting in MBIR. Directional penalty design was found to reinforce the same trend. The task-driven approaches outperform conventional approaches, with a maximum improvement in d′ of 13% given by the joint optimization of TCM and regularization. This work demonstrates that the TCM optimal for MBIR is distinct from conventional strategies proposed for FBP reconstruction, and that strategies optimal for FBP are suboptimal and may even reduce performance when applied to MBIR. The task-driven imaging framework offers a promising approach for optimizing acquisition and reconstruction for MBIR that can improve imaging performance and/or dose utilization beyond conventional imaging strategies.
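To make the CMA-ES step concrete, here is a minimal sketch of optimizing a handful of acquisition/regularization parameters against a scalar detectability objective, assuming the widely used `cma` Python package; the objective function is a stand-in placeholder, not the paper's NPS/MTF-based d′ model.

```python
# Sketch: CMA-ES search over imaging parameters (assumes the `cma` package,
# installable via pip). The objective is a placeholder for -d' (CMA-ES minimizes).
import cma
import numpy as np

def neg_detectability(params):
    # Placeholder objective: in the paper this would evaluate d' from the
    # Fourier/analytical NPS and MTF models under the given TCM/regularization.
    tcm_weights, log_reg_strength = params[:-1], params[-1]
    return np.sum((tcm_weights - 0.5) ** 2) + 0.1 * log_reg_strength ** 2

x0 = np.full(9, 0.5)                    # 8 TCM knots + 1 regularization knob (illustrative)
es = cma.CMAEvolutionStrategy(x0, 0.2)  # initial mean and step size
es.optimize(neg_detectability)
print("best parameters:", np.round(es.result.xbest, 3))
```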
Task-driven orbit design and implementation on a robotic C-arm system for cone-beam CT
NASA Astrophysics Data System (ADS)
Ouadah, S.; Jacobson, M.; Stayman, J. W.; Ehtiati, T.; Weiss, C.; Siewerdsen, J. H.
2017-03-01
Purpose: This work applies task-driven optimization to the design of non-circular orbits that maximize imaging performance for a particular imaging task. First implementation of task-driven imaging on a clinical robotic C-arm system is demonstrated, and a framework for orbit calculation is described and evaluated. Methods: We implemented a task-driven imaging framework to optimize orbit parameters that maximize detectability index d'. This framework utilizes a specified Fourier domain task function and an analytical model for system spatial resolution and noise. Two experiments were conducted to test the framework. First, a simple task was considered consisting of frequencies lying entirely on the fz-axis (e.g., discrimination of structures oriented parallel to the central axial plane), and a "circle + arc" orbit was incorporated into the framework as a means to improve sampling of these frequencies, and thereby increase task-based detectability. The orbit was implemented on a robotic C-arm (Artis Zeego, Siemens Healthcare). A second task considered visualization of a cochlear implant simulated within a head phantom, with spatial frequency response emphasizing high-frequency content in the (fy, fz) plane of the cochlea. An optimal orbit was computed using the task-driven framework, and the resulting image was compared to that for a circular orbit. Results: For the fz-axis task, the circle + arc orbit was shown to increase d' by a factor of 1.20, with an improvement of 0.71 mm in a 3D edge-spread measurement for edges located far from the central plane and a decrease in streak artifacts compared to a circular orbit. For the cochlear implant task, the resulting orbit favored complementary views of high tilt angles in a 360° orbit, and d' was increased by a factor of 1.83. Conclusions: This work shows that a prospective definition of imaging task can be used to optimize source-detector orbit and improve imaging performance. The method was implemented for execution of non-circular, task-driven orbits on a clinical robotic C-arm system. The framework is sufficiently general to include both acquisition parameters (e.g., orbit, kV, and mA selection) and reconstruction parameters (e.g., a spatially varying regularizer).
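For readers unfamiliar with the Fourier-domain figure of merit used here, the following sketch computes a non-prewhitening-observer detectability index d′ from discretized MTF, NPS, and task-function arrays; the Gaussian forms of those arrays are illustrative assumptions standing in for the paper's analytical system models.

```python
# Sketch: non-prewhitening (NPW) detectability index from MTF, NPS, and a task
# function W on a 2D frequency grid. Gaussian MTF/NPS/W shapes are illustrative.
import numpy as np

n, df = 128, 0.05                        # grid size and frequency bin (mm^-1)
f = (np.arange(n) - n // 2) * df
fx, fy = np.meshgrid(f, f)
rho = np.hypot(fx, fy)

mtf = np.exp(-(rho / 1.0) ** 2)                        # assumed system MTF
nps = 1e-6 * (0.1 + rho) * np.exp(-(rho / 1.5) ** 2)   # assumed (ramp-like) NPS
w_task = np.exp(-((rho - 0.6) / 0.2) ** 2)             # mid-frequency discrimination task

# NPW observer: d'^2 = [sum(MTF^2 W^2) df^2]^2 / [sum(NPS MTF^2 W^2) df^2]
num = (np.sum((mtf * w_task) ** 2) * df**2) ** 2
den = np.sum(nps * (mtf * w_task) ** 2) * df**2
d_prime = np.sqrt(num / den)
print(f"d' = {d_prime:.1f}")
```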
Liquid hydrogen turbopump rapid start program. [thermal preconditioning using coatings]
NASA Technical Reports Server (NTRS)
Wong, G. S.
1973-01-01
The purpose of this program was to analyze, test, and evaluate methods of achieving rapid start of a liquid hydrogen feed system (inlet duct and turbopump) using a minimum of thermal preconditioning time and propellant. The program was divided into four tasks. Task 1 comprises analytical studies of the testing conducted in the other three tasks. Task 2 describes the results of laboratory testing of coating samples and the successful adherence of a KX-635 coating to the internal surfaces of the feed system tested in Task 4. Task 3 presents results of testing an uncoated feed system. Tank pressure was varied to determine the effect of flowrate on preconditioning. The discharge volume and the discharge pressure which initiates opening of the discharge valve were varied to determine the effect on deadhead (no through-flow) start transients. Task 4 describes results of testing a similar, internally coated feed system and illustrates the savings in preconditioning time and propellant resulting from the coatings.
Propagation Modeling and Analysis of Molecular Motors in Molecular Communication.
Chahibi, Youssef; Akyildiz, Ian F; Balasingham, Ilangko
2016-12-01
Molecular motor networks (MMNs) are networks constructed from molecular motors to enable nanomachines to perform coordinated tasks of sensing, computing, and actuation at the nano- and micro-scales. Living cells are naturally enabled with this same mechanism to establish point-to-point communication between different locations inside the cell. Similar to a railway system, the cytoplasm contains an intricate infrastructure of tracks, named microtubules, interconnecting different internal components of the cell. Motor proteins, such as kinesin and dynein, are able to travel along these tracks directionally, carrying with them large molecules that would otherwise be unreliably transported across the cytoplasm using free diffusion. Molecular communication has been previously proposed for the design and study of MMNs. However, the topological aspects of MMNs, including the effects of branches, have been ignored in the existing studies. In this paper, a physical end-to-end model for MMNs is developed, considering the location of the transmitter node, the network topology, and the receiver nodes. The end-to-end gain and group delay are considered as the performance measures, and analytical expressions for them are derived. The analytical model is validated by Monte-Carlo simulations and the performance of MMNs is analyzed numerically. It is shown that, depending on their nature and position, MMN nodes create impedance effects that are critical for the overall performance. This model could be applied to assist the design of artificial MMNs and to study cargo transport in neurofilaments to elucidate brain diseases related to microtubule jamming.
Panteghini, Mauro; Ceriotti, Ferruccio; Jones, Graham; Oosterhuis, Wytze; Plebani, Mario; Sandberg, Sverre
2017-10-26
Measurements in clinical laboratories produce results needed in the diagnosis and monitoring of patients. These results are always characterized by some uncertainty. What quality is needed and what measurement errors can be tolerated without jeopardizing patient safety should therefore be defined and specified for each analyte having clinical use. When these specifications are defined, the total examination process will be "fit for purpose" and the laboratory professionals should then set up rules to control the measuring systems to ensure they perform within specifications. The laboratory community has used different models to set performance specifications (PS). Recently, it was felt that there was a need to revisit different models and, at the same time, to emphasize the presuppositions for using the different models. Therefore, in 2014 the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) organized a Strategic Conference in Milan. It was felt that there was a need for more detailed discussions on, for instance, PS for EQAS, which measurands should use which models to set PS, and how to set PS for the extra-analytical phases. There was also a need to critically evaluate the quality of data on biological variation studies and to further discuss the use of the total error (TE) concept. Consequently, EFLM established five Task and Finish Groups (TFGs) to address each of these topics. The TFGs are finishing their activity in 2017, and the content of this paper includes deliverables from these groups.
NASA Astrophysics Data System (ADS)
Adrich, Przemysław
2016-05-01
In Part I of this work, existing methods and problems in dual foil electron beam forming system design are presented. On this basis, a new method of designing these systems is introduced. The motivation behind this work is to eliminate the shortcomings of the existing design methods and improve the overall efficiency of the dual foil design process. The existing methods are based on approximate analytical models applied in an unrealistically simplified geometry. Designing a dual foil system with these methods is a rather labor-intensive task, as corrections to account for effects not included in the analytical models have to be calculated separately and accounted for in an iterative procedure. To eliminate these drawbacks, the new design method is based entirely on Monte Carlo modeling in a realistic geometry, using physics models that include all relevant processes. In our approach, an optimal configuration of the dual foil system is found by means of a systematic, automated scan of system performance as a function of the foil parameters. The new method, while computationally intensive, minimizes the involvement of the designer and considerably shortens the overall design time. The results are of high quality, as all the relevant physics and geometry details are naturally accounted for. To demonstrate the feasibility of practical implementation of the new method, specialized software tools were developed and applied to solve a real-life design problem, as described in Part II of this work.
Stakeholder perspectives on decision-analytic modeling frameworks to assess genetic services policy.
Guzauskas, Gregory F; Garrison, Louis P; Stock, Jacquie; Au, Sylvia; Doyle, Debra Lochner; Veenstra, David L
2013-01-01
Genetic services policymakers and insurers often make coverage decisions in the absence of complete evidence of clinical utility and under budget constraints. We evaluated genetic services stakeholder opinions on the potential usefulness of decision-analytic modeling to inform coverage decisions, and asked them to identify genetic tests for decision-analytic modeling studies. We presented an overview of decision-analytic modeling to members of the Western States Genetic Services Collaborative Reimbursement Work Group and state Medicaid representatives and conducted directed content analysis and an anonymous survey to gauge their attitudes toward decision-analytic modeling. Participants also identified and prioritized genetic services for prospective decision-analytic evaluation. Participants expressed dissatisfaction with current processes for evaluating insurance coverage of genetic services. Some participants expressed uncertainty about their comprehension of decision-analytic modeling techniques. All stakeholders reported openness to using decision-analytic modeling for genetic services assessments. Participants were most interested in application of decision-analytic concepts to multiple-disorder testing platforms, such as next-generation sequencing and chromosomal microarray. Decision-analytic modeling approaches may provide a useful decision tool to genetic services stakeholders and Medicaid decision-makers.
NASA Technical Reports Server (NTRS)
Baaklini, George Y.; Smith, Kevin; Raulerson, David; Gyekenyesi, Andrew L.; Sawicki, Jerzy T.; Brasche, Lisa
2003-01-01
Tools for Engine Diagnostics is a major task in the Propulsion System Health Management area of the Single Aircraft Accident Prevention project under NASA's Aviation Safety Program. The major goal of the Aviation Safety Program is to reduce fatal aircraft accidents by 80 percent within 10 years and by 90 percent within 25 years. The goal of the Propulsion System Health Management area is to eliminate propulsion system malfunctions as a primary or contributing factor in the cause of aircraft accidents. The purpose of Tools for Engine Diagnostics, a two-year-old task, is to establish and improve tools for engine diagnostics and prognostics that measure the deformation and damage of rotating engine components at the ground level and that perform intermittent or continuous on-wing monitoring. In this work, nondestructive-evaluation- (NDE-) based technology is combined with model-dependent disk spin experimental simulation systems, like finite element modeling (FEM) and modal norms, to monitor and predict rotor damage in real time. Fracture-mechanics-based time-dependent fatigue crack growth and damage-mechanics-based life estimation are being developed, and their potential use investigated. In addition, wireless eddy current and advanced acoustics are being developed for on-wing and just-in-time NDE engine inspection to provide deeper access and higher sensitivity, to extend on-wing capabilities, and to improve inspection readiness. In the long run, these methods could establish a base for prognostic sensing while an engine is running, without any overt actions, like inspections. This damage-detection strategy includes experimentally acquired vibration-, eddy-current-, and capacitance-based displacement measurements and analytically computed FEM-, modal-norms-, and conventional-rotordynamics-based models of well-defined damage and critical mass imbalances in rotating disks and rotors.
High-frequency phase shift measurement greatly enhances the sensitivity of QCM immunosensors.
March, Carmen; García, José V; Sánchez, Ángel; Arnau, Antonio; Jiménez, Yolanda; García, Pablo; Manclús, Juan J; Montoya, Ángel
2015-03-15
In spite of being widely used for in-liquid biosensing applications, sensitivity improvement of conventional (5-20 MHz) quartz crystal microbalance (QCM) sensors remains an unsolved, challenging task. With the help of a new electronic characterization approach based on phase change measurements at a constant fixed frequency, a highly sensitive and versatile high fundamental frequency (HFF) QCM immunosensor has been successfully developed and tested for use in pesticide (carbaryl and thiabendazole) analysis. The analytical performance of several immunosensors was compared in competitive immunoassays, taking the carbaryl insecticide as the model analyte. The highest sensitivity was exhibited by the 100 MHz HFF-QCM carbaryl immunosensor. When results were compared with those reported for 9 MHz QCM, analytical parameters clearly showed an improvement of one order of magnitude in sensitivity (estimated as the I50 value) and two orders of magnitude in the limit of detection (LOD): 30 μg L(-1) vs 0.66 μg L(-1) for the I50 value and 11 μg L(-1) vs 0.14 μg L(-1) for the LOD, at 9 and 100 MHz, respectively. For the fungicide thiabendazole, the I50 value was roughly the same as that previously reported for SPR under the same biochemical conditions, whereas the LOD improved by a factor of 2. The analytical performance achieved by high-frequency QCM immunosensors surpasses that of conventional QCM and SPR, closely approaching the most sensitive ELISAs. The developed 100 MHz QCM immunosensor strongly improves sensitivity in biosensing and can therefore be considered a very promising new analytical tool for in-liquid applications where highly sensitive detection is required. Copyright © 2014 Elsevier B.V. All rights reserved.
Rurkhamet, Busagarin; Nanthavanij, Suebsak
2004-12-01
One important factor that leads to the development of musculoskeletal disorders (MSD) and cumulative trauma disorders (CTD) among visual display terminal (VDT) users is their work posture. While operating a VDT, a user's body posture is strongly influenced by the task, the VDT workstation settings, and the layout of computer accessories. This paper presents an analytic and rule-based decision support tool called EQ-DeX (an ergonomics and quantitative design expert system) that is developed to provide valid and practical recommendations regarding the adjustment of a VDT workstation and the arrangement of computer accessories. The paper explains the structure and components of EQ-DeX, its input data, rules, and adjustment and arrangement algorithms. From input information such as gender, age, body height, and task, EQ-DeX uses analytic and rule-based algorithms to estimate quantitative settings of a computer table and a chair, as well as locations of computer accessories such as the monitor, document holder, keyboard, and mouse. With input and output screens designed using usability principles, the interactions between the user and EQ-DeX are convenient. Examples are also presented to demonstrate the recommendations generated by EQ-DeX.
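A minimal sketch of the kind of analytic, rule-based adjustment logic described above follows, assuming a simple anthropometric proportion for seat height; the proportions, thresholds, and rules are hypothetical placeholders, not EQ-DeX's actual algorithms.

```python
# Sketch: rule-based VDT workstation recommendation (hypothetical rules, not EQ-DeX's).
from dataclasses import dataclass

@dataclass
class UserProfile:
    body_height_cm: float
    task: str                      # e.g., "data_entry", "word_processing"

def recommend_settings(user: UserProfile) -> dict:
    # Assumed anthropometric proportions: seat height near popliteal height,
    # approximated here as fixed fractions of stature (illustrative only).
    seat_height = 0.25 * user.body_height_cm
    table_height = seat_height + 0.17 * user.body_height_cm
    rec = {"seat_height_cm": round(seat_height, 1),
           "table_height_cm": round(table_height, 1)}
    # Rule-based accessory placement: document holder beside the monitor for
    # data-entry tasks, monitor top near eye level for all tasks.
    if user.task == "data_entry":
        rec["document_holder"] = "adjacent to monitor, same height and distance"
    rec["monitor_top"] = "at or slightly below eye level"
    return rec

print(recommend_settings(UserProfile(body_height_cm=170, task="data_entry")))
```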
On μe-scattering at NNLO in QED
NASA Astrophysics Data System (ADS)
Mastrolia, P.; Passera, M.; Primo, A.; Schubert, U.; Torres Bobadilla, W. J.
2018-05-01
We report on the current status of the analytic evaluation of the two-loop corrections to μe scattering in Quantum Electrodynamics, presenting the state-of-the-art techniques that have been developed to address this challenging task.
Video game training does not enhance cognitive ability: A comprehensive meta-analytic investigation.
Sala, Giovanni; Tatlidil, K Semir; Gobet, Fernand
2018-02-01
As a result of considerable potential scientific and societal implications, the possibility of enhancing cognitive ability by training has been one of the most influential topics of cognitive psychology in the last two decades. However, substantial research into the psychology of expertise and a recent series of meta-analytic reviews have suggested that various types of cognitive training (e.g., working memory training) benefit performance only in the trained tasks. The lack of skill generalization from one domain to different ones (that is, far transfer) has been documented in various fields of research such as working memory training, music, brain training, and chess. Video game training is another activity that has been claimed by many researchers to foster a broad range of cognitive abilities such as visual processing, attention, spatial ability, and cognitive control. We tested these claims with three random-effects meta-analytic models. The first meta-analysis (k = 310) examined the correlation between video game skill and cognitive ability. The second meta-analysis (k = 315) dealt with the differences between video game players and nonplayers in cognitive ability. The third meta-analysis (k = 359) investigated the effects of video game training on participants' cognitive ability. Small or null overall effect sizes were found in all three models. These outcomes show that overall cognitive ability and video game skill are only weakly related. Importantly, we found no evidence of a causal relationship between playing video games and enhanced cognitive ability. Video game training thus represents no exception to the general difficulty of obtaining far transfer. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Performance and Architecture Lab Modeling Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-06-19
Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model, an executable program, is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.
NASA Technical Reports Server (NTRS)
Ng, Y. S.; Lee, J. H.
1989-01-01
The Superfluid Helium On-Orbit Transfer Flight Experiment (SHOOT) is designed to demonstrate the techniques and components required for orbital superfluid helium (He II) replenishment of observatories and satellites. One of the tasks planned in the experiment is to cool a warm cryogen tank and a warm transfer line to liquid helium temperature. A mathematical model, based on single-phase vapor flow heat transfer, has been developed to predict the cooldown time, component temperature histories, and helium consumption rate for various initial conditions of the components and for thermomechanical pump heater powers of 2 W and 0.5 W. This paper discusses the model and the analytical results, which can be used for planning the experiment operations and determining the pump heater power required for the cooldown operation.
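As a loose illustration of the kind of cooldown estimate such a model produces, the sketch below integrates a lumped-capacitance tank cooled by cold helium vapor; the masses, flow rate, and property values are invented placeholders, not the SHOOT model's inputs, and a constant wall specific heat is a major simplification.

```python
# Sketch: lumped-capacitance cooldown of a warm tank by cold helium vapor,
# assuming near-unity heat-exchange effectiveness (all values illustrative):
#   dT/dt = -mdot * cp_gas * (T - T_gas_in) / (M * c_wall)
M, c_wall = 20.0, 300.0        # tank mass [kg], wall specific heat [J/(kg K)] (assumed constant)
mdot, cp = 2e-4, 5193.0        # vapor flow [kg/s], helium gas cp [J/(kg K)]
T, T_in, T_target = 300.0, 4.2, 10.0
dt, t = 1.0, 0.0               # explicit Euler time step [s]

while T > T_target:
    T += -mdot * cp * (T - T_in) / (M * c_wall) * dt
    t += dt

print(f"cooldown time ~ {t/3600:.1f} h, helium used ~ {mdot*t:.1f} kg")
```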
Photogrammetry Applied to Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
Liu, Tian-Shu; Cattafesta, L. N., III; Radeztsky, R. H.; Burner, A. W.
2000-01-01
In image-based measurements, quantitative image data must be mapped to three-dimensional object space. Analytical photogrammetric methods, which may be used to accomplish this task, are discussed from the viewpoint of experimental fluid dynamicists. The Direct Linear Transformation (DLT) for camera calibration, used in pressure-sensitive paint, is summarized. An optimization method for camera calibration is developed that can determine the camera calibration parameters, including those describing lens distortion, from a single image. Combined with the DLT, this method allows rapid and comprehensive in-situ camera calibration and is therefore particularly useful for quantitative flow visualization and other measurements, such as model attitude and deformation, in production wind tunnels. The paper also includes a brief description of typical photogrammetric applications to temperature- and pressure-sensitive paint measurements and model deformation measurements in wind tunnels.
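As a concrete illustration of the DLT step discussed above, the sketch below recovers the 11 DLT parameters from known 3D-2D point correspondences by linear least squares; the point data are synthetic stand-ins, and lens-distortion terms are omitted.

```python
# Sketch: Direct Linear Transformation (DLT) camera calibration by least squares.
import numpy as np

def dlt_calibrate(xyz, uv):
    """Solve the 11 DLT parameters from >= 6 3D-2D point correspondences."""
    rows, rhs = [], []
    for (X, Y, Z), (u, v) in zip(xyz, uv):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z]); rhs.append(u)
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z]); rhs.append(v)
    L, *_ = np.linalg.lstsq(np.asarray(rows, float), np.asarray(rhs, float), rcond=None)
    return L

def dlt_project(L, xyz):
    X, Y, Z = xyz
    den = L[8]*X + L[9]*Y + L[10]*Z + 1.0
    return np.array([(L[0]*X + L[1]*Y + L[2]*Z + L[3]) / den,
                     (L[4]*X + L[5]*Y + L[6]*Z + L[7]) / den])

# Synthetic check: project points with a known L, then recover it.
rng = np.random.default_rng(2)
L_true = np.concatenate([rng.normal(size=8), rng.normal(scale=0.05, size=3)])
pts3d = rng.uniform(1.0, 2.0, size=(8, 3))
pts2d = np.array([dlt_project(L_true, p) for p in pts3d])
L_est = dlt_calibrate(pts3d, pts2d)
print("max parameter error:", np.abs(L_est - L_true).max())
```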
Optimal Micropatterns in 2D Transport Networks and Their Relation to Image Inpainting
NASA Astrophysics Data System (ADS)
Brancolini, Alessio; Rossmanith, Carolin; Wirth, Benedikt
2018-04-01
We consider two different variational models of transport networks: the so-called branched transport problem and the urban planning problem. Based on a novel relation to Mumford-Shah image inpainting and techniques developed in that field, we show for a two-dimensional situation that both highly non-convex network optimization tasks can be transformed into a convex variational problem, which may be very useful from analytical and numerical perspectives. As applications of the convex formulation, we use it to perform numerical simulations (to our knowledge this is the first numerical treatment of urban planning), and we prove a lower bound for the network cost that matches a known upper bound (in terms of how the cost scales in the model parameters) which helps better understand optimal networks and their minimal costs.
Optimization of Thermal Object Nonlinear Control Systems by Energy Efficiency Criterion.
NASA Astrophysics Data System (ADS)
Velichkin, Vladimir A.; Zavyalov, Vladimir A.
2018-03-01
This article presents the results of an analysis of the control of thermal objects (heat exchangers, dryers, heat treatment chambers, etc.). The results were used to determine a mathematical model of a generalized thermal control object. An appropriate optimality criterion was chosen to make the control more energy-efficient. The mathematical programming task was formulated based on the chosen optimality criterion, the control object's mathematical model, and the technological constraints. The “maximum energy efficiency” criterion made it possible to avoid solving a system of nonlinear differential equations and to solve the formulated mathematical programming problem analytically. It should be noted that, in the case under review, the search for the optimal control and optimal trajectory reduces to solving an algebraic system of equations. In addition, it is shown that the optimal trajectory does not depend on the dynamic characteristics of the control object.
Fusing Social Media and Mobile Analytics for Urban Sense-Making
2017-05-09
Final report AFRL-AFOSR-JP-TR-2017-0037, Archan Misra, Singapore Management University (81 Victoria Street, Singapore 188065), AFOSR grant FA2386-14-1-0002.
ANALYTiC: An Active Learning System for Trajectory Classification.
Soares Junior, Amilcar; Renso, Chiara; Matwin, Stan
2017-01-01
The increasing availability and use of positioning devices has resulted in large volumes of trajectory data. However, semantic annotations for such data are typically added by domain experts, which is a time-consuming task. Machine-learning algorithms can help infer semantic annotations from trajectory data by learning from sets of labeled data. Specifically, active learning approaches can minimize the set of trajectories to be annotated while preserving good performance measures. The ANALYTiC web-based interactive tool visually guides users through this annotation process.
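The following minimal sketch shows the uncertainty-sampling style of active-learning loop described above, applied to generic trajectory feature vectors; the data, classifier choice, and batch size are illustrative assumptions rather than details of the ANALYTiC tool.

```python
# Sketch: pool-based active learning with uncertainty sampling (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X_pool = rng.normal(size=(500, 12))                      # trajectory feature vectors
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)   # hidden "expert" labels

labeled = list(rng.choice(len(X_pool), size=10, replace=False))
unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

for _ in range(5):                                       # five annotation rounds
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_pool[labeled], y_pool[labeled])
    proba = clf.predict_proba(X_pool[unlabeled])
    # Query the trajectories the model is least certain about (prob nearest 0.5).
    uncertainty = 1.0 - np.abs(proba[:, 1] - 0.5) * 2.0
    query = [unlabeled[i] for i in np.argsort(uncertainty)[-10:]]
    labeled += query                                     # simulate expert annotation
    unlabeled = [i for i in unlabeled if i not in query]

print("labeled examples:", len(labeled))
```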
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.
1990-01-01
The design, implementation, and empirical evaluation of task-analytic models and intelligent aids for operators in the control of complex dynamic systems, specifically aerospace systems, are studied. Three related activities are included: (1) models of operator decision making in complex and predominantly automated space systems were developed and used; (2) the Operator Function Model (OFM) was used to represent operator activities; and (3) the Operator Function Model Expert System (OFMspert), a stand-alone knowledge-based system, was developed that interacts with a human operator in a manner similar to a human assistant in the control of aerospace systems. OFMspert is an architecture for an operator's assistant that uses the OFM as its system and operator knowledge base and a blackboard paradigm of problem solving to dynamically generate expectations about upcoming operator activities and to interpret actual operator actions. An experiment validated OFMspert's intent-inferencing capability and showed that it inferred the intentions of operators in ways comparable to both a human expert and the operators themselves. OFMspert was also augmented with control capabilities. An interface allowed the operator to interact with OFMspert, delegating as much or as little control responsibility as the operator chose. With its design based on the OFM, OFMspert's control capabilities were available at multiple levels of abstraction and allowed the operator a great deal of discretion over the amount and level of delegated control. An experiment showed that overall system performance was comparable for teams consisting of two human operators and teams consisting of a human operator and OFMspert.
Electroencephalographic monitoring of complex mental tasks
NASA Technical Reports Server (NTRS)
Guisado, Raul; Montgomery, Richard; Montgomery, Leslie; Hickey, Chris
1992-01-01
Outlined here is the development of neurophysiological procedures to monitor operators during the performance of cognitive tasks. Our approach included the use of electroencephalographic (EEG) and rheoencephalographic (REG) techniques to determine changes in cortical function associated with cognition and with the operator's state. A two-channel tetrapolar REG, a single-channel forearm impedance plethysmograph, a Lead I electrocardiogram (ECG), and a 21-channel EEG were used to measure subject responses to various visual-motor cognitive tasks. Testing, analytical, and display procedures for EEG and REG monitoring were developed that extend the state of the art and provide a valuable tool for the study of cerebral circulatory and neural activity during cognition.
NASA Astrophysics Data System (ADS)
Zakynthinaki, M. S.; Stirling, J. R.
2007-01-01
Stochastic optimization is applied to the problem of optimizing the fit of a model to the time series of raw physiological (heart rate) data. The physiological response to exercise has been recently modeled as a dynamical system. Fitting the model to a set of raw physiological time series data is, however, not a trivial task. For this reason and in order to calculate the optimal values of the parameters of the model, the present study implements the powerful stochastic optimization method ALOPEX IV, an algorithm that has been proven to be fast, effective and easy to implement. The optimal parameters of the model, calculated by the optimization method for the particular athlete, are very important as they characterize the athlete's current condition. The present study applies the ALOPEX IV stochastic optimization to the modeling of a set of heart rate time series data corresponding to different exercises of constant intensity. An analysis of the optimization algorithm, together with an analytic proof of its convergence (in the absence of noise), is also presented.
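To give a flavor of the ALOPEX-style correlation update used for such a fit, here is a minimal sketch that minimizes a least-squares cost between a toy exponential heart-rate response model and noisy data; the model form, step size, and noise schedule are illustrative assumptions, not the paper's ALOPEX IV implementation.

```python
# Sketch: ALOPEX-style stochastic optimization of model parameters against a
# heart-rate time series (toy exponential-response model, not the paper's).
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 200)
p_true = np.array([60.0, 90.0, 2.0])            # baseline HR, amplitude, time constant

def model(p):
    return p[0] + p[1] * (1.0 - np.exp(-t / p[2]))

data = model(p_true) + rng.normal(scale=2.0, size=t.size)

def cost(p):
    return np.mean((model(p) - data) ** 2)

p = np.array([50.0, 50.0, 1.0])                 # initial guess
dp = rng.normal(scale=0.02, size=3)
c_prev = cost(p)
step = 0.02
for k in range(20000):
    sigma = 0.02 * (1.0 - k / 20000) + 1e-4     # annealed noise ("temperature")
    p_new = p + dp
    p_new[2] = max(p_new[2], 0.1)               # keep the time constant physical
    c_new = cost(p_new)
    # ALOPEX correlation rule: per parameter, keep moving the same way if the
    # cost went down, reverse if it went up, plus noise to escape local minima.
    dp = -step * np.sign(dp * (c_new - c_prev)) + rng.normal(scale=sigma, size=3)
    p, c_prev = p_new, c_new

print("estimated parameters (rough fit):", np.round(p, 2))
```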
MAIN software for density averaging, model building, structure refinement and validation
Turk, Dušan
2013-01-01
MAIN is software that has been designed to interactively perform the complex tasks of macromolecular crystal structure determination and validation. Using MAIN, it is possible to perform density modification, manual and semi-automated or automated model building and rebuilding, real- and reciprocal-space structure optimization and refinement, map calculations and various types of molecular structure validation. The prompt availability of various analytical tools and the immediate visualization of molecular and map objects allow a user to efficiently progress towards the completed refined structure. The extraordinary depth perception of molecular objects in three dimensions that is provided by MAIN is achieved by the clarity and contrast of colours and the smooth rotation of the displayed objects. MAIN allows simultaneous work on several molecular models and various crystal forms. The strength of MAIN lies in its manipulation of averaged density maps and molecular models when noncrystallographic symmetry (NCS) is present. Using MAIN, it is possible to optimize NCS parameters and envelopes and to refine the structure in single or multiple crystal forms. PMID:23897458
Zhan, Hanyu; Voelz, David G; Cho, Sang-Yeon; Xiao, Xifeng
2015-11-20
The estimation of the refractive index from optical scattering off a target's surface is an important task for remote sensing applications. Optical polarimetry is an approach that shows promise for refractive index estimation. However, this estimation often relies on polarimetric models that are limited to specular targets involving single-surface scattering. Here, an analytic model is developed for the degree of polarization (DOP) associated with reflection from a rough surface that includes the effect of diffuse scattering. A multiplicative factor is derived to account for the diffuse component, and evaluation of the model indicates that diffuse scattering can significantly affect the DOP values. The scattering model is used in a new approach for refractive index estimation from a series of DOP values that involves jointly estimating n, k, and ρ_d with a nonlinear equation solver. The approach is shown to work well with simulation data and additive noise. When applied to laboratory-measured DOP values, the approach produces significantly improved index estimation results relative to reference values.
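The sketch below illustrates the joint-estimation step with scipy's nonlinear least squares; the DOP model function here is a deliberately simplified placeholder (not the paper's derived rough-surface expression), so only the fitting pattern, not the physics, should be read from it.

```python
# Sketch: jointly estimating (n, k, rho_d) from DOP-vs-angle data with nonlinear
# least squares. dop_model is a simplified placeholder, NOT the paper's model.
import numpy as np
from scipy.optimize import least_squares

def dop_model(params, theta):
    n, k, rho_d = params
    # Placeholder: a specular-like angular shape scaled down by a diffuse fraction.
    specular = np.sin(theta) ** 2 / (np.sin(theta) ** 2 + (n**2 + k**2))
    return (1.0 - rho_d) * specular

theta = np.deg2rad(np.linspace(20, 70, 12))
rng = np.random.default_rng(5)
dop_meas = dop_model([1.5, 0.1, 0.3], theta) + rng.normal(scale=0.002, size=theta.size)

fit = least_squares(lambda p: dop_model(p, theta) - dop_meas,
                    x0=[1.3, 0.05, 0.2],
                    bounds=([1.0, 0.0, 0.0], [3.0, 2.0, 1.0]))
print("estimated n, k, rho_d:", np.round(fit.x, 3))
```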
EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.
Hadinia, M; Jafari, R; Soleimani, M
2016-06-01
This paper presents the application of the hybrid finite element-element free Galerkin (FE-EFG) method to the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete electrode model. Finite element (FE) and element-free Galerkin (EFG) methods are accurate numerical techniques. However, the FE technique is burdened by the meshing task, and the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to take advantage of both FE and EFG methods, the complete-electrode model of the forward problem is solved, and an iterative regularized Gauss-Newton method is adopted to solve the inverse problem. The proposed method is applied to compute the Jacobian in the inverse problem. Using 2D circular homogeneous models, the numerical results are validated against analytical and experimental results, and the performance of the hybrid FE-EFG method relative to the FE method is illustrated. Results of image reconstruction are presented for a human chest experimental phantom.
NASA Technical Reports Server (NTRS)
Freeman, William T.; Ilcewicz, L. B.; Swanson, G. D.; Gutowski, T.
1992-01-01
A conceptual and preliminary designers' cost prediction model has been initiated. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer-aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will serve as input to the cost prediction model, providing the designer with a database and a deterministic cost methodology that allows designs to be traded and synthesized with both cost and weight as objective functions for optimization. The approach, goals, plans, and progress are presented for the development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).
NASA Astrophysics Data System (ADS)
Chetan; Narasimhulu, A.; Ghosh, S.; Rao, P. V.
2015-07-01
Machinability of titanium is poor due to its low thermal conductivity and high chemical affinity. The low thermal conductivity of titanium alloy is undesirable from the standpoint of the cutting tool, causing extensive tool wear. The main task of this work is to predict the various wear mechanisms involved during machining of a Ti alloy (Ti6Al4V) and to formulate an analytical mathematical tool wear model for it. It has been found from various experiments that adhesive and diffusion wear are the dominant wear mechanisms during machining of the Ti alloy with a PVD-coated tungsten carbide tool. It is also clear from the experiments that tool wear increases with increases in cutting parameters such as speed, feed, and depth of cut. The wear model was validated by carrying out dry machining of the Ti alloy at suitable cutting conditions. It has been found that the wear model is able to predict the flank wear suitably under gentle cutting conditions.
JT8D and JT9D jet engine performance improvement program. Task 1: Feasibility analysis
NASA Technical Reports Server (NTRS)
Gaffin, W. O.; Webb, D. E.
1979-01-01
JT8D and JT9D component performance improvement concepts which have a high probability of incorporation into production engines were identified and ranked. An evaluation method based on airline payback period was developed for the purpose of identifying the most promising concepts. The method used available test data and analytical models along with conceptual/preliminary designs to predict the performance improvements, weight, installation characteristics, cost for new production and retrofit, maintenance cost, and qualitative characteristics of candidate concepts. These results were used to arrive at the concept payback period, which is the time required for an airline to recover the investment cost of concept implementation.
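As a back-of-envelope illustration of the payback-period ranking used in such an evaluation, the sketch below computes payback as implementation cost divided by net annual savings; all concept names, prices, and engine figures are invented placeholders, not values from the study.

```python
# Sketch: ranking performance-improvement concepts by airline payback period.
# All numbers below are invented placeholders for illustration.
concepts = [
    # (name, implementation cost per engine [$], fuel saved per engine [gal/yr],
    #  change in annual maintenance cost [$/yr])
    ("revised fan outer air seal",        12_000,  9_000,  -500),
    ("improved HPT blade tip clearance",  40_000, 21_000, 1_200),
    ("low-loss exhaust mixer",            90_000, 15_000,     0),
]
fuel_price = 1.0  # $/gal, placeholder

for name, cost, fuel_gal, d_maint in concepts:
    annual_saving = fuel_gal * fuel_price - d_maint   # fuel savings net of maintenance
    payback_years = cost / annual_saving              # time to recover investment
    print(f"{name:35s} payback = {payback_years:4.1f} yr")
```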
Split-field FDTD method for oblique incidence study of periodic dispersive metallic structures.
Baida, F I; Belkhir, A
2009-08-15
The study of periodic structures illuminated by a normally incident plane wave is a simple task that can be numerically simulated by the finite-difference time-domain (FDTD) method. For off-normal incidence, by contrast, a substantially modified algorithm must be developed to bypass the frequency dependence that appears in the periodic boundary conditions. Having recently implemented such an FDTD algorithm for purely dielectric materials, we here extend it to the study of metallic structures whose dispersion can be described by analytical models. The accuracy of our code is demonstrated through comparisons with already-published results for 1D and 3D structures.
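The frequency dependence mentioned above comes from the Bloch (pseudo-periodic) boundary condition; the following standard relations (textbook material, not taken from the paper itself) show why oblique incidence is awkward in the time domain and how split-field variables remove the problem:

```latex
% Pseudo-periodic (Bloch) condition for period a and incidence angle \theta:
\tilde{E}(x+a,\omega) \;=\; \tilde{E}(x,\omega)\, e^{-j k_x a},
\qquad k_x = \frac{\omega}{c}\,\sin\theta .
% Since k_x depends on \omega, the phase factor becomes a convolution in time.
% Split-field FDTD therefore advances transformed, strictly periodic fields:
P = E\, e^{j k_x x}, \qquad Q = H\, e^{j k_x x}.
```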
Katz, Deirdre A; Peckins, Melissa K
2017-12-01
Intraindividual variability in stress responsivity and the interrelationship of multiple neuroendocrine systems make a multisystem analytic approach to examining the human stress response challenging. The present study uses an efficient social-evaluative stress paradigm, the Group Public Speaking Task for Adolescents (GPST-A), to examine the hypothalamic-pituitary-adrenocortical (HPA)-axis and Autonomic Nervous System (ANS) reactivity profiles of 54 adolescents with salivary cortisol and salivary alpha-amylase (sAA). First, we account for between-individual differences in the latency of peak hormone concentrations. Second, we use a two-piece multilevel growth curve model with landmark registration to examine the reactivity and recovery periods of the stress response separately. This analytic approach increases the model's sensitivity to detecting trajectory differences in the reactivity and recovery phases of the stress response and allows for interindividual variation in the timing of participants' peak response following a social-evaluative stressor. The GPST-A evoked typical cortisol and sAA responses in both males and females. Males' cortisol concentrations were significantly higher than females' during each phase of the response. We found no gender difference in the sAA response; however, the rate of increase in sAA, as well as overall sAA secretion across the study, was associated with steeper rates of cortisol reactivity and recovery. This study demonstrates a way to model the response trajectories of salivary biomarkers of the HPA-axis and ANS within a multisystem approach to neuroendocrine research, enabling researchers to draw conclusions about the reactivity and recovery phases of the HPA-axis and ANS responses separately. As the study of the human stress response progresses toward a multisystem analytic approach, it is critical that individual variability in peak latency be taken into consideration and that modeling techniques capture individual variability in the stress response, so that accurate conclusions can be drawn about the separate phases of the response. Copyright © 2017 Elsevier Ltd. All rights reserved.
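A minimal sketch of the two-piece (reactivity/recovery) multilevel growth model with landmark registration is given below, assuming a long-format pandas DataFrame with columns `id`, `time`, `cortisol`, and a person-specific `peak_time` landmark; the column names and the linear two-piece form are assumptions for illustration, not the authors' exact specification.

```python
import statsmodels.formula.api as smf

def two_piece_growth(df):
    """Two-piece multilevel growth curve with landmark registration (sketch).
    Time is split at each participant's own peak latency so that reactivity
    and recovery receive separate fixed and random slopes."""
    df = df.copy()  # df: pandas DataFrame with id, time, cortisol, peak_time
    df["react"] = (df["time"] - df["peak_time"]).clip(upper=0)  # pre-peak piece
    df["recov"] = (df["time"] - df["peak_time"]).clip(lower=0)  # post-peak piece
    model = smf.mixedlm("cortisol ~ react + recov", df,
                        groups=df["id"], re_formula="~react + recov")
    return model.fit()
```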
NASA Astrophysics Data System (ADS)
Ramful, Ajay; Ho, Siew Yin; Lowrie, Tom
2015-12-01
This inquiry presents two fine-grained case studies of students demonstrating different levels of cognitive functioning in relation to bilateral symmetry and reflection. The two students were asked to solve four sets of tasks and articulate their reasoning in task-based interviews. The first participant, Brittany, focused essentially on three criteria for performing reflection, namely (1) equidistance, (2) congruence of sides, and (3) 'exactly opposite' (the intuitive counterpart of perpendicularity). The second participant, Sara, focused on perpendicularity and equidistance, as in the normative procedure. Brittany's inadequate knowledge of reflection shaped her actions and served as a validation for her solutions. Intuitively, her visual strategies took over as a fallback measure to maintain congruence of sides in the absence of a formal notion of perpendicularity. In this paper, we address some of the well-known constraints that students encounter in dealing with bilateral symmetry and reflection, particularly situations involving an inclined line of symmetry. Importantly, we attempt to show how visual and analytical strategies interact in the production of a reflected image. Our findings highlight the necessity of giving more explicit attention to the notion of perpendicularity in bilateral symmetry and reflection tasks.
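The normative procedure the second student used (perpendicularity plus equidistance) can be written down directly; the sketch below reflects a point across an arbitrary inclined mirror line given by a point and a direction, with illustrative names:

```python
import numpy as np

def reflect_point(p, a, d):
    """Reflect point p across the line through a with direction d:
    drop a perpendicular from p to the line (perpendicularity), then place
    the image the same distance beyond the foot (equidistance)."""
    p, a = np.asarray(p, float), np.asarray(a, float)
    d = np.asarray(d, float) / np.linalg.norm(d)
    foot = a + np.dot(p - a, d) * d      # perpendicular foot on the mirror line
    return 2.0 * foot - p                # equidistant image on the other side

# inclined mirror line y = x through the origin sends (2, 0) to (0, 2)
print(reflect_point([2.0, 0.0], [0.0, 0.0], [1.0, 1.0]))
```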
Vaxenburg, Roman; Wyche, Isis; Svoboda, Karel; Efros, Alexander L.
2018-01-01
Vibrations are important cues for tactile perception across species. Whisker-based sensation in mice is a powerful model system for investigating mechanisms of tactile perception. However, the role vibration plays in whisker-based sensation remains unsettled, in part due to difficulties in modeling the vibration of whiskers. Here, we develop an analytical approach to calculate the vibrations of whiskers striking objects. We use this approach to quantify vibration forces during active whisker touch at a range of locations along the whisker. The frequency and amplitude of vibrations evoked by contact are strongly dependent on the position of contact along the whisker. The magnitude of vibrational shear force and bending moment is comparable to quasi-static forces. The fundamental vibration frequencies are in a detectable range for mechanoreceptor properties and below the maximum spike rates of primary sensory afferents. These results suggest two dynamic cues exist that rodents can use for object localization: vibration frequency and comparison of vibrational to quasi-static force magnitude. These complement the use of quasi-static force angle as a distance cue, particularly for touches close to the follicle, where whiskers are stiff and force angles hardly change during touch. Our approach also provides a general solution to calculation of whisker vibrations in other sensing tasks. PMID:29584719
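For intuition about the frequency scales involved, the sketch below evaluates the classical Euler-Bernoulli cantilever frequencies for a uniform beam; the paper's analytical model additionally handles whisker taper and the object contact point, so this is a simplified stand-in, and the numerical values are order-of-magnitude placeholders rather than whisker data.

```python
import numpy as np

def cantilever_frequencies(E, I, rho, A, L, n=3):
    """First n natural frequencies (Hz) of a uniform Euler-Bernoulli
    cantilever: f_n = (beta_n L / L)^2 * sqrt(E I / (rho A)) / (2 pi)."""
    betaL = np.array([1.8751, 4.6941, 7.8548])[:n]   # clamped-free mode roots
    return (betaL / L) ** 2 * np.sqrt(E * I / (rho * A)) / (2.0 * np.pi)

# illustrative values for a ~30 mm hair-like fiber of circular cross-section
E, rho, L, r = 3e9, 1.2e3, 0.03, 40e-6
I, A = np.pi * r**4 / 4.0, np.pi * r**2
print(cantilever_frequencies(E, I, rho, A, L))   # fundamental ~ tens of Hz
```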
Blair, Clancy; Raver, C Cybele; Berry, Daniel J
2014-02-01
In the current article, we contrast 2 analytical approaches to estimating the relation of parenting to executive function development in a sample of 1,292 children assessed longitudinally between 36 and 60 months of age. Children were administered a newly developed and validated battery of 6 executive function tasks tapping inhibitory control, working memory, and attention shifting. Residualized change analysis indicated that higher quality parenting, as reflected in higher scores on widely used measures of parenting at both earlier and later time points, predicted more positive gain in executive function at 60 months. Latent change score models, in which parenting and executive function over time were held to standards of longitudinal measurement invariance, provided additional evidence of the association between change in parenting quality and change in executive function. In these models, cross-lagged paths indicated that in addition to parenting predicting change in executive function, executive function bidirectionally predicted change in parenting quality. Results were robust to the addition of covariates, including child sex, race, maternal education, and household income-to-need. Strengths and drawbacks of the 2 analytic approaches are discussed, and the findings are considered in light of emerging methodological innovations for testing the extent to which executive function is malleable and open to the influence of experience.
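The first of the two analytic approaches, residualized change, amounts to regressing the later score on its own baseline plus the predictor; a minimal sketch (column names assumed, not the authors' code) is:

```python
import statsmodels.formula.api as smf

def residualized_change(df):
    """Residualized change (sketch): with the 36-month baseline in the model,
    the parenting coefficient predicts *change* in executive function by 60
    months. Covariates (sex, race, education, income) could be appended."""
    return smf.ols("ef_60 ~ ef_36 + parenting", data=df).fit()
```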
Modeling human pilot cue utilization with applications to simulator fidelity assessment.
Zeyada, Y; Hess, R A
2000-01-01
An analytical investigation to model the manner in which pilots perceive and utilize visual, proprioceptive, and vestibular cues in a ground-based flight simulator was undertaken. Data from a NASA Ames Research Center vertical motion simulator study of a simple, single-degree-of-freedom rotorcraft bob-up/down maneuver were employed in the investigation. The study was part of a larger research effort that has the creation of a methodology for determining flight simulator fidelity requirements as its ultimate goal. The study utilized a closed-loop feedback structure of the pilot/simulator system that included the pilot, the cockpit inceptor, the dynamics of the simulated vehicle, and the motion system. With the exception of time delays that accrued in visual scene production in the simulator, visual scene effects were not included in this study. Pilot/vehicle analysis and fuzzy-inference identification were employed to study the changes in fidelity that occurred as the characteristics of the motion system were varied over five configurations. The data from three of the five pilots who participated in the experimental study were analyzed in the fuzzy-inference identification. Results indicate that both the analytical pilot/vehicle analysis and the fuzzy-inference identification can be used to identify changes in simulator fidelity for the task examined.
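The abstract does not spell out the pilot model's structure, but pilot/vehicle analyses of this kind are conventionally grounded in the McRuer crossover model, which constrains the combined pilot-plus-vehicle open-loop dynamics near the crossover frequency; the relation below is the standard form, offered only as background:

```latex
% McRuer crossover model: near the crossover frequency \omega_c,
Y_p(j\omega)\, Y_c(j\omega) \;\approx\; \frac{\omega_c\, e^{-j\omega\tau_e}}{j\omega},
% where Y_p is the pilot describing function, Y_c the controlled-element
% (vehicle) dynamics, and \tau_e the effective pilot time delay.
```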
Lassnig, R.; Striedinger, B.; Hollerer, M.; Fian, A.; Stadlober, B.; Winkler, A.
2015-01-01
The fabrication of organic thin film transistors with highly reproducible characteristics presents a very challenging task. We have prepared and analyzed model pentacene thin film transistors under ultra-high vacuum conditions, employing surface analytical tools and methods. Intentionally contaminating the gold contacts and SiO2 channel area with carbon through repeated adsorption, dissociation, and desorption of pentacene proved to be very advantageous in the creation of devices with stable and reproducible parameters. We mainly focused on the device properties, such as mobility and threshold voltage, as a function of film morphology and preparation temperature. At 300 K, pentacene displays Stranski-Krastanov growth, whereas at 200 K fine-grained, layer-like film growth takes place, which predominantly influences the threshold voltage. Temperature dependent mobility measurements demonstrate good agreement with the established multiple trapping and release model, which in turn indicates a predominant concentration of shallow traps in the crystal grains and at the oxide-semiconductor interface. Mobility and threshold voltage measurements as a function of coverage reveal that up to four full monolayers contribute to the overall charge transport. A significant influence on the effective mobility also stems from the access resistance at the gold contact-semiconductor interface, which is again strongly influenced by the temperature dependent, characteristic film growth mode. PMID:25814770
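The multiple trapping and release (MTR) picture invoked above has a standard closed form; the sketch below uses it with placeholder parameters, since the paper's fitted values are not given in the abstract:

```python
import numpy as np

def mtr_mobility(T, mu0=1.0, Nc_over_Nt=0.1, Et_eV=0.05):
    """MTR effective mobility: mu_eff = mu0 * (Nc/Nt) * exp(-Et / kT),
    i.e. thermally activated release from shallow traps of depth Et.
    All parameter values are hypothetical."""
    k_eV = 8.617e-5                       # Boltzmann constant, eV/K
    return mu0 * Nc_over_Nt * np.exp(-Et_eV / (k_eV * T))

# activated transport: mobility falls as the device is cooled from 300 K to 200 K
print(mtr_mobility(300.0), mtr_mobility(200.0))
```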
Studying Upper-Limb Kinematics Using Inertial Sensors Embedded in Mobile Phones
Bennett, Paul
2015-01-01
Background: In recent years, there has been great interest in analyzing upper-limb kinematics. Inertial measurement with mobile phones is a convenient and portable method for studying humerus kinematics in terms of angular mobility and linear acceleration. Objective: The aim of this analysis was to study upper-limb kinematics via mobile phones through six physical properties corresponding to angular mobility and acceleration in the three axes of space. Methods: This cross-sectional study recruited healthy young adult subjects. Humerus kinematics was studied in 10 young adults with the iPhone 4 as they performed flexion and abduction analytical tasks. Mobility angle and linear acceleration in each axis (yaw, pitch, and roll) were obtained with the iPhone 4, which was placed on the right half of the body of each subject, on the middle third of the humerus, slightly posterior. Descriptive statistics were calculated. Results: Descriptive graphics of the analytical tasks performed were obtained. The largest range of motion was found in the pitch angle, and the largest acceleration was found in the y-axis in both analytical tasks. Focusing on tridimensional kinematics, a larger range of motion and acceleration was found in abduction (209.69 degrees and 23.31 degrees per second, respectively). A very strong correlation was also found between angular mobility and linear acceleration in abduction (r=.845) and flexion (r=.860). Conclusions: The use of an iPhone for humerus tridimensional kinematics is feasible. This supports use of the mobile phone as a device to analyze upper-limb kinematics and to facilitate the evaluation of the patient. PMID:28582241
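The two summary quantities reported (range of motion and the mobility-acceleration correlation) are straightforward to compute from IMU traces; a sketch with assumed equal-length 1-D inputs:

```python
import numpy as np

def rom_and_correlation(angle_deg, accel):
    """Range of motion of an angle trace and its Pearson correlation with a
    linear-acceleration trace (sketch; input format assumed)."""
    angle_deg, accel = np.asarray(angle_deg, float), np.asarray(accel, float)
    rom = angle_deg.max() - angle_deg.min()       # range of motion, degrees
    r = np.corrcoef(angle_deg, accel)[0, 1]       # Pearson r
    return rom, r
```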
Gray, Stephen J; Gallo, David A
2016-02-01
Belief in paranormal psychic phenomena is widespread in the United States, with over a third of the population believing in extrasensory perception (ESP). Why do some people believe, while others are skeptical? According to the cognitive differences hypothesis, individual differences in the way people process information about the world can contribute to the creation of psychic beliefs, such as differences in memory accuracy (e.g., selectively remembering a fortune teller's correct predictions) or analytical thinking (e.g., relying on intuition rather than scrutinizing evidence). While this hypothesis is prevalent in the literature, few have attempted to empirically test it. Here, we provided the most comprehensive test of the cognitive differences hypothesis to date. In 3 studies, we used online screening to recruit groups of strong believers and strong skeptics, matched on key demographics (age, sex, and years of education). These groups were then tested in laboratory and online settings using multiple cognitive tasks and other measures. Our cognitive testing showed that there were no consistent group differences on tasks of episodic memory distortion, autobiographical memory distortion, or working memory capacity, but skeptics consistently outperformed believers on several tasks tapping analytical or logical thinking as well as vocabulary. These findings demonstrate cognitive similarities and differences between these groups and suggest that differences in analytical thinking and conceptual knowledge might contribute to the development of psychic beliefs. We also found that psychic belief was associated with greater life satisfaction, demonstrating benefits associated with psychic beliefs and highlighting the role of both cognitive and noncognitive factors in understanding these individual differences.
Rewards and creative performance: a meta-analytic test of theoretically derived hypotheses.
Byron, Kris; Khazanchi, Shalini
2012-07-01
Although many scholars and practitioners are interested in understanding how to motivate individuals to be more creative, whether and how rewards affect creativity remain unclear. We argue that the conflicting evidence may be due to differences between studies in terms of reward conditions and the context in which rewards are offered. Specifically, we examine 5 potential moderators of the rewards-creative performance relationship: (a) the reward contingency, (b) the extent to which participants are provided information about their past or current creative performance, (c) the extent to which the reward and context offer choice or impose control, (d) the extent to which the context serves to enhance task engagement, and (e) the extent to which the performance tasks are complex. Using random-effects models, we meta-analyzed 60 experimental and nonexperimental studies (including 69 independent samples) that examined the rewards-creativity relationship with children or adults. Our results suggest that creativity-contingent rewards tend to increase creative performance, and are more positively related to creative performance when individuals are given more positive, contingent, and task-focused performance feedback and are provided more choice (and are less controlled). In contrast, performance-contingent or completion-contingent rewards tend to have a slight negative effect on creative performance.
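Random-effects pooling of the kind used here is commonly done with the DerSimonian-Laird estimator; the compact sketch below is a stand-in for whatever estimator the authors actually used, which the abstract does not specify:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling (sketch): estimate the
    between-study variance tau^2 from Cochran's Q, then pool effects with
    inverse-variance weights 1 / (v_i + tau^2)."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v
    ybar = np.sum(w * y) / np.sum(w)             # fixed-effect mean
    Q = np.sum(w * (y - ybar) ** 2)              # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)      # between-study variance
    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)         # pooled random-effects mean
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu, se, tau2

# three toy studies: effect sizes with sampling variances
print(dersimonian_laird([0.30, 0.10, 0.45], [0.02, 0.03, 0.05]))
```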
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholtz, Jean; Plaisant, Catherine; Whiting, Mark A.
The evaluation of visual analytics environments was a topic in Illuminating the Path [Thomas 2005] as a critical aspect of moving research into practice. For a thorough understanding of the utility of the systems available, evaluation involves assessing not only the visualizations, interactions, or data processing algorithms themselves, but also the complex processes that a tool is meant to support (such as exploratory data analysis and reasoning, communication through visualization, or collaborative data analysis [Lam 2012; Carpendale 2007]). Researchers and practitioners in the field have long identified many of the challenges faced when planning, conducting, and executing an evaluation of a visualization tool or system [Plaisant 2004]. Evaluation is needed to verify that algorithms and software systems work correctly and that they represent improvements over the current infrastructure. Additionally, to effectively transfer new software into a working environment, it is necessary to ensure that the software has utility for the end-users and that it can be incorporated into the end-users' infrastructure and work practices. Evaluation test beds require datasets, tasks, metrics, and evaluation methodologies. As noted in [Thomas 2005], it is difficult and expensive for any one researcher to set up an evaluation test bed, so in many cases evaluation is set up for communities of researchers or for various research projects or programs. Examples of successful community evaluations can be found in [Chinchor 1993; Voorhees 2007; FRGC 2012]. As visual analytics environments are intended to facilitate the work of human analysts, one aspect of evaluation needs to focus on the utility of the software to the end-user. This requires representative users, representative tasks, and metrics that measure the utility to the end-user. This is all the more difficult because the test methodology now requires access to representative end-users who can participate in the evaluation. In many cases the sensitive nature of data and tasks and difficult access to busy analysts put even more of a burden on researchers to complete this type of evaluation. User-centered design goes beyond evaluation and starts with the user [Beyer 1997; Shneiderman 2009]. Having some knowledge of the type of data, tasks, and work practices helps researchers and developers know the correct paths to pursue in their work. When access to the end-users is problematic at best and impossible at worst, user-centered design becomes difficult. Researchers are unlikely to work on the types of problems faced by inaccessible users. Commercial vendors have difficulty evaluating and improving their products when they cannot observe real users working with them. In well-established fields such as web site design or office software design, user-interface guidelines have been developed based on the results of empirical studies or the experience of experts. Guidelines can speed up the design process and replace some of the need for observation of actual users [heuristics review references]. In 2006, when the visual analytics community was initially getting organized, no such guidelines existed.
Therefore, we were faced with the problem of developing an evaluation framework for the field of visual analytics that would provide representative situations and datasets, representative tasks and utility metrics, and a test methodology that would include a surrogate for representative users, increase interest in conducting research in the field, and provide sufficient feedback to researchers so that they could improve their systems.
Chase, Henry W; Clos, Mareike; Dibble, Sofia; Fox, Peter; Grace, Anthony A; Phillips, Mary L; Eickhoff, Simon B
2015-06-01
Previous studies, predominantly in experimental animals, have suggested the presence of a differentiation of function across the hippocampal formation. In rodents, ventral regions are thought to be involved in emotional behavior while dorsal regions mediate cognitive or spatial processes. Using a combination of modeling the co-occurrence of significant activations across thousands of neuroimaging experiments and subsequent data-driven clustering of these data, we were able to provide evidence of distinct subregions within a region corresponding to the human subiculum, a critical hub within the hippocampal formation. This connectivity-based model consists of a bilateral anterior region, as well as separate posterior and intermediate regions in each hemisphere. Functional connectivity assessed by both meta-analytic and resting fMRI approaches revealed that more anterior regions were more strongly connected to the default mode network, and more posterior regions were more strongly connected to 'task positive' regions. In addition, our analysis revealed that the anterior subregion was functionally connected to the ventral striatum, midbrain, and amygdala, a circuit that is central to models of stress and motivated behavior. Analysis of a behavioral taxonomy provided evidence for a role for each subregion in mnemonic processing, as well as implicating the anterior subregion in emotional and visual processing and the right posterior subregion in reward processing. These findings lend support to models that posit anterior-posterior differentiation of function within the human hippocampal formation and complement other early steps toward a comparative (cross-species) model of the region. Copyright © 2015 Elsevier Inc. All rights reserved.
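Connectivity-based parcellation of the sort described (cluster voxels by their meta-analytic co-activation profiles) can be sketched in a few lines; the data here are simulated stand-ins, and the choice of k-means with three clusters simply mirrors the number of subregions reported, not the authors' actual clustering pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# rows = seed voxels, columns = co-activation with each experiment (simulated)
coactivation = rng.random((500, 200))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coactivation)
print(np.bincount(labels))    # voxel count per putative subregion
```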
Modeling and Simulation of an UAS Collision Avoidance Systems
NASA Technical Reports Server (NTRS)
Oliveros, Edgardo V.; Murray, A. Jennifer
2010-01-01
This paper describes the modeling and simulation of an Unmanned Aircraft Systems (UAS) collision avoidance system capable of representing different types of scenarios for UAS collision avoidance. Commercial and military piloted aircraft currently utilize various systems for collision avoidance, such as the Traffic Alert and Collision Avoidance System (TCAS), Automatic Dependent Surveillance-Broadcast (ADS-B), radar, and Electro-Optical and Infrared sensors (EO-IR). The integration of information from these systems is done by the pilot in the aircraft to determine the best course of action. In order to operate optimally in the National Airspace System (NAS), UAS have to work in a manner similar or equivalent to a piloted aircraft by applying the principle of "detect-see and avoid" (DSA) to other air traffic. Hence, we have taken these existing sensor technologies into consideration in order to meet the challenge of researching the modeling and simulation of an approximated DSA system. A schematic model for a UAS Collision Avoidance System (CAS) has been developed as a closed-loop block diagram for that purpose. We have found that the most suitable software to carry out this task is the Satellite Tool Kit (STK) from Analytical Graphics Inc. (AGI). We have used the Aircraft Mission Modeler (AMM) for modeling and simulation of a scenario where a UAS is placed on a possible collision path with an initial intruder and then with a second intruder, but is able to avoid them by executing a right turn maneuver and then climbing. Radars have also been modeled with specific characteristics for the UAS and both intruders. The software provides analytical tools, graphical user interfaces, and data-control tools that allow the operator to simulate different conditions. Extensive simulations have been carried out and returned excellent results.
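A core computation in any such collision avoidance logic is the straight-line closest point of approach (CPA) between ownship and an intruder; the sketch below is a minimal constant-velocity conflict test, not the STK/AMM simulation itself:

```python
import numpy as np

def closest_point_of_approach(p_own, v_own, p_intr, v_intr):
    """Time (s) and miss distance (m) at CPA under constant velocities."""
    dp = np.asarray(p_intr, float) - np.asarray(p_own, float)
    dv = np.asarray(v_intr, float) - np.asarray(v_own, float)
    dv2 = np.dot(dv, dv)
    t = max(-np.dot(dp, dv) / dv2, 0.0) if dv2 > 0 else 0.0  # CPA not in the past
    miss = np.linalg.norm(dp + dv * t)                       # separation at CPA
    return t, miss

# head-on encounter at co-altitude: CPA in 50 s with zero miss distance
print(closest_point_of_approach([0, 0, 1000], [100, 0, 0],
                                [10000, 0, 1000], [-100, 0, 0]))
```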
Aston, Elizabeth R; Metrik, Jane; Amlung, Michael; Kahler, Christopher W; MacKillop, James
2016-12-01
Distinct behavioral economic domains, including high perceived drug value (demand) and delay discounting (DD), have been implicated in the initiation of drug use and the progression to dependence. However, it is unclear whether frequent marijuana users conform to a "reinforcer pathology" addiction model wherein marijuana demand and DD jointly increase risk for problematic marijuana use and cannabis dependence (CD). Participants (n=88, 34% female, 14% cannabis dependent) completed a marijuana purchase task at baseline. A delay discounting task was completed following placebo marijuana cigarette (0% THC) administration during a separate experimental session. Marijuana demand and DD were quantified using area under the curve (AUC). In multiple regression models, demand uniquely predicted frequency of marijuana use while DD did not. In contrast, DD uniquely predicted CD symptom count while demand did not. There were no significant interactions between demand and DD in either model. These findings suggest that frequent marijuana users exhibit key constituents of the reinforcer pathology model: high marijuana demand and steep discounting of delayed rewards. However, demand and DD appear to be independent rather than synergistic risk factors for elevated marijuana use and risk for progression to CD. Findings also provide support for using AUC as a singular marijuana demand metric, particularly when also examining other behavioral economic constructs that apply similar statistical approaches, such as DD, to support analytic methodological convergence. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
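The AUC metric endorsed here is typically the normalized trapezoidal area of Myerson, Green, and Warusawitharana (2001), applicable to both demand and discounting curves; a minimal sketch with toy data:

```python
import numpy as np

def auc_normalized(x, y):
    """Normalized area under the curve: both axes rescaled to [0, 1], then
    summed trapezoids, so AUC lies between 0 and 1 (sketch)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xn = (x - x.min()) / (x.max() - x.min())
    yn = y / y.max()
    return np.trapz(yn, xn)

# delay-discounting example: subjective value of a $100 reward by delay
delays = [0, 1, 7, 30, 90, 365]        # days
values = [100, 95, 80, 60, 40, 20]     # indifference points, $
print(auc_normalized(delays, values))  # closer to 0 = steeper discounting
```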
NASA Technical Reports Server (NTRS)
Logan, J. R.; Pulvermacher, M. K.
1991-01-01
The Range Scheduling Aid (RSA) is presented in the form of viewgraphs. The following subject areas are covered: satellite control network; current and new approaches to range scheduling; MITRE tasking; RSA features; RSA display; constraint-based analytic capability; RSA architecture; and RSA benefits.
Cooperation and Patience: The Key To A High Quality, Sustainable GIS
DOT National Transportation Integrated Search
1998-09-16
Geographic Information Systems (GIS) provides a powerful tool to transportation planners and engineers for a variety of analytical tasks. However, even with the advent of PC-based GIS systems and strong state and federal support, transportation...
Dyslexia, an Imbalance in Cerebral Information-Processing Strategies.
ERIC Educational Resources Information Center
Aaron, P. G.
1978-01-01
Twenty-eight reading disabled children (in grades 2-4) were divided, on the basis of the nature of errors made in a writing-from-dictation task, into two groups: analytic-sequential deficient and holistic-simultaneous deficient. (Author/PHR)
Technical engineering services in support of the Nike-Tomahawk sounding rocket vehicle system
NASA Technical Reports Server (NTRS)
1972-01-01
Task assignments in support of the Nike-Tomahawk vehicles, which were completed from May, 1970 through November 1972 are reported. The services reported include: analytical, design and drafting, fabrication and modification, and field engineering.