A General Cross-Layer Cloud Scheduling Framework for Multiple IoT Computer Tasks.
Wu, Guanlin; Bao, Weidong; Zhu, Xiaomin; Zhang, Xiongtao
2018-05-23
The diversity of IoT services and applications brings enormous challenges to improving the performance of scheduling multiple computer tasks in cross-layer cloud computing systems. Unfortunately, commonly employed frameworks fail to adapt to the new patterns of the cross-layer cloud. To solve this issue, we design a new computer task scheduling framework for multiple IoT services in cross-layer cloud computing systems. Specifically, we first analyze the features of the cross-layer cloud and of the computer tasks. Then, we design the scheduling framework based on this analysis and present detailed models to illustrate the procedures for using the framework. With the proposed framework, IoT services deployed in cross-layer cloud computing systems can dynamically select suitable algorithms and use resources more effectively to finish computer tasks with different objectives. Finally, algorithms are given based on the framework, and extensive experiments validate its effectiveness and superiority.
Naïve and Robust: Class-Conditional Independence in Human Classification Learning
ERIC Educational Resources Information Center
Jarecki, Jana B.; Meder, Björn; Nelson, Jonathan D.
2018-01-01
Humans excel in categorization. Yet from a computational standpoint, learning a novel probabilistic classification task involves severe computational challenges. The present paper investigates one way to address these challenges: assuming class-conditional independence of features. This feature independence assumption simplifies the inference…
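As a minimal illustration of the class-conditional independence assumption discussed above (toy priors and likelihood tables, not data from the study), the posterior over classes factorizes into per-feature terms:

    # Minimal naive Bayes sketch: class-conditional independence lets the joint
    # likelihood factor into a product of per-feature likelihoods.
    # The priors and likelihood tables below are illustrative, not from the paper.

    priors = {"A": 0.5, "B": 0.5}                     # P(class)
    likelihood = {                                    # P(feature_i = 1 | class)
        "A": [0.8, 0.3, 0.6],
        "B": [0.2, 0.7, 0.4],
    }

    def posterior(features):
        """Return P(class | features) for a binary feature vector."""
        unnorm = {}
        for c, prior in priors.items():
            p = prior
            for p_i, x in zip(likelihood[c], features):
                p *= p_i if x == 1 else (1.0 - p_i)   # independence assumption
            unnorm[c] = p
        z = sum(unnorm.values())
        return {c: v / z for c, v in unnorm.items()}

    print(posterior([1, 0, 1]))   # e.g. {'A': ~0.93, 'B': ~0.07}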
2008-02-27
between the PHY layer and, for example, a host PC computer. The PC wants to generate and receive a sequence of data packets. The PC may also want to send...the testbed is quite similar. Given the intense computational requirements of SVD and other matrix mode operations needed to support eigen spreading a...platform for real time operation. This task is probably the major challenge in the development of the testbed. All compute intensive tasks will be
The Effects of Study Tasks in a Computer-Based Chemistry Learning Environment
NASA Astrophysics Data System (ADS)
Urhahne, Detlef; Nick, Sabine; Poepping, Anna Christin; Schulz, Sarah Jayne
2013-12-01
The present study examines the effects of different study tasks on the acquisition of knowledge about acids and bases in a computer-based learning environment. Three different task formats were selected to create three treatment conditions: learning with gap-fill and matching tasks, learning with multiple-choice tasks, and learning only from text and figures without any additional tasks. Participants were 196 ninth-grade students who learned with a self-developed multimedia program in a pretest-posttest control group design. Research results reveal that gap-fill and matching tasks were most effective in promoting knowledge acquisition, followed by multiple-choice tasks, and no tasks at all. The findings are in line with previous research on this topic. The effects can possibly be explained by the generation-recognition model, which predicts that gap-fill and matching tasks trigger more encompassing learning processes than multiple-choice tasks. It is concluded that instructional designers should incorporate more challenging study tasks for enhancing the effectiveness of computer-based learning environments.
Dynamically allocating sets of fine-grained processors to running computations
NASA Technical Reports Server (NTRS)
Middleton, David
1988-01-01
Researchers explore an approach to using general purpose parallel computers which involves mapping hardware resources onto computations instead of mapping computations onto hardware. Problems such as processor allocation, task scheduling and load balancing, which have traditionally proven to be challenging, change significantly under this approach and may become amenable to new attacks. Researchers describe the implementation of this approach used by the FFP Machine whose computation and communication resources are repeatedly partitioned into disjoint groups that match the needs of available tasks from moment to moment. Several consequences of this system are examined.
Coiera, E
2016-11-10
Anyone with knowledge of information systems has experienced frustration when it comes to system implementation or use. Unanticipated challenges arise frequently and unanticipated consequences may follow. Working from first principles, we seek to understand why information technology (IT) is often challenging, identify which IT endeavors are more likely to succeed, and predict the best role that technology can play in different tasks and settings. The fundamental purpose of IT is to enhance our ability to undertake tasks, supplying new information that changes what we decide and ultimately what occurs in the world. The value of this information (VOI) can be calculated at different stages of the decision-making process and will vary depending on how technology is used. We can imagine a task space that describes the relative benefits of task completion by humans or computers and that contains specific areas where humans or computers are superior. There is a third area where neither is strong and a final joint workspace where humans and computers working in partnership produce the best results. By understanding that information has value and that VOI can be quantified, we can make decisions about how best to support the work we do. Evaluation of the expected utility of task completion by humans or computers should allow us to decide whether solutions should depend on technology, humans, or a partnership between the two.
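Since the abstract frames VOI as the difference that information makes to expected utility, the following minimal sketch (hypothetical probabilities, actions, and utilities, not taken from the paper) compares the expected utility of acting with and without the information:

    # Hedged sketch of value of information (VOI) as expected-utility gain.
    # States, probabilities and utilities are hypothetical placeholders.

    p_state = {"disease": 0.2, "healthy": 0.8}
    utility = {                        # utility[action][state]
        "treat":    {"disease": 0.9, "healthy": 0.6},
        "no_treat": {"disease": 0.1, "healthy": 1.0},
    }

    def expected_utility(action, p):
        return sum(p[s] * utility[action][s] for s in p)

    # Without further information: pick the single best action for all cases.
    eu_without = max(expected_utility(a, p_state) for a in utility)

    # With (perfect) information: pick the best action per state.
    eu_with = sum(p_state[s] * max(utility[a][s] for a in utility) for s in p_state)

    voi = eu_with - eu_without
    print(f"EU without info: {eu_without:.2f}, with info: {eu_with:.2f}, VOI: {voi:.2f}")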
Climate Ocean Modeling on Parallel Computers
NASA Technical Reports Server (NTRS)
Wang, P.; Cheng, B. N.; Chao, Y.
1998-01-01
Ocean modeling plays an important role in both understanding the current climatic conditions and predicting future climate change. However, modeling the ocean circulation at various spatial and temporal scales is a very challenging computational task.
Dashboard Task Monitor for Managing ATLAS User Analysis on the Grid
NASA Astrophysics Data System (ADS)
Sargsyan, L.; Andreeva, J.; Jha, M.; Karavakis, E.; Kokoszkiewicz, L.; Saiz, P.; Schovancova, J.; Tuckett, D.; Atlas Collaboration
2014-06-01
The organization of the distributed user analysis on the Worldwide LHC Computing Grid (WLCG) infrastructure is one of the most challenging tasks among the computing activities at the Large Hadron Collider. The Experiment Dashboard offers a solution that not only monitors but also manages (kill, resubmit) user tasks and jobs via a web interface. The ATLAS Dashboard Task Monitor provides analysis users with a tool that is independent of the operating system and Grid environment. This contribution describes the functionality of the application and its implementation details, in particular authentication, authorization and audit of the management operations.
Experience with a UNIX based batch computing facility for H1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhards, R.; Kruener-Marquis, U.; Szkutnik, Z.
1994-12-31
A UNIX based batch computing facility for the H1 experiment at DESY is described. The ultimate goal is to replace the DESY IBM mainframe by a multiprocessor SGI Challenge series computer, using the UNIX operating system, for most of the computing tasks in H1.
ERIC Educational Resources Information Center
Lu, Zhihong; Wang, Yanfei
2014-01-01
The effective design of test items within a computer-based language test (CBLT) for developing English as a foreign language (EFL) learners' listening and speaking skills has become an increasingly challenging task for both test users and test designers compared with that of pencil-and-paper tests in the past. It needs to fit integrated oral…
Satellite Tasking via a Tablet Computer
2015-09-01
connectivity have helped to overcome the challenges of information delivery, but there remains the challenge of real-time information. This thesis examines the...
Opportunistic Computing with Lobster: Lessons Learned from Scaling up to 25k Non-Dedicated Cores
NASA Astrophysics Data System (ADS)
Wolf, Matthias; Woodard, Anna; Li, Wenzhao; Hurtado Anampa, Kenyi; Yannakopoulos, Anna; Tovar, Benjamin; Donnelly, Patrick; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas
2017-10-01
We previously described Lobster, a workflow management tool for exploiting volatile opportunistic computing resources for computation in HEP. We will discuss the various challenges that have been encountered while scaling up the simultaneous CPU core utilization and the software improvements required to overcome these challenges. Categories: Workflows can now be divided into categories based on their required system resources. This allows the batch queueing system to optimize assignment of tasks to nodes with the appropriate capabilities. Within each category, limits can be specified for the number of running jobs to regulate the utilization of communication bandwidth. System resource specifications for a task category can now be modified while a project is running, avoiding the need to restart the project if resource requirements differ from the initial estimates. Lobster now implements time limits on each task category to voluntarily terminate tasks. This allows partially completed work to be recovered. Workflow dependency specification: One workflow often requires data from other workflows as input. Rather than waiting for earlier workflows to be completed before beginning later ones, Lobster now allows dependent tasks to begin as soon as sufficient input data has accumulated. Resource monitoring: Lobster utilizes a new capability in Work Queue to monitor the system resources each task requires in order to identify bottlenecks and optimally assign tasks. The capability of the Lobster opportunistic workflow management system for HEP computation has been significantly increased. We have demonstrated efficient utilization of 25 000 non-dedicated cores and achieved a data input rate of 30 Gb/s and an output rate of 500GB/h. This has required new capabilities in task categorization, workflow dependency specification, and resource monitoring.
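As an illustration of the task-categorization idea described above (and not Lobster's actual configuration syntax), a category might bundle per-task resource limits, a cap on concurrently running jobs, and a walltime limit; the field names below are hypothetical:

    # Hypothetical sketch of per-category resource specifications in the spirit
    # of the categorization described above; field names are illustrative, not
    # Lobster's actual configuration format.

    categories = {
        "simulation": {"cores": 4, "memory_mb": 8000, "disk_mb": 20000,
                       "max_running": 2000, "walltime_s": 4 * 3600},
        "merge":      {"cores": 1, "memory_mb": 2000, "disk_mb": 50000,
                       "max_running": 200,  "walltime_s": 1 * 3600},
    }

    def can_dispatch(category, running, node):
        """Dispatch a task only if the category's job cap and the node's
        capabilities allow it."""
        spec = categories[category]
        return (running[category] < spec["max_running"]
                and node["cores"] >= spec["cores"]
                and node["memory_mb"] >= spec["memory_mb"])

    print(can_dispatch("merge", {"merge": 150, "simulation": 1800},
                       {"cores": 8, "memory_mb": 16000}))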
LUNGx Challenge for computerized lung nodule classification
Armato, Samuel G.; Drukker, Karen; Li, Feng; ...
2016-12-19
The purpose of this work is to describe the LUNGx Challenge for the computerized classification of lung nodules on diagnostic computed tomography (CT) scans as benign or malignant and report the performance of participants’ computerized methods along with that of six radiologists who participated in an observer study performing the same Challenge task on the same dataset. The Challenge provided sets of calibration and testing scans, established a performance assessment process, and created an infrastructure for case dissemination and result submission. We present ten groups that applied their own methods to 73 lung nodules (37 benign and 36 malignant) that were selected to achieve approximate size matching between the two cohorts. Area under the receiver operating characteristic curve (AUC) values for these methods ranged from 0.50 to 0.68; only three methods performed statistically better than random guessing. The radiologists’ AUC values ranged from 0.70 to 0.85; three radiologists performed statistically better than the best-performing computer method. The LUNGx Challenge compared the performance of computerized methods in the task of differentiating benign from malignant lung nodules on CT scans, placed in the context of the performance of radiologists on the same task. Lastly, the continued public availability of the Challenge cases will provide a valuable resource for the medical imaging research community.
LUNGx Challenge for computerized lung nodule classification
Armato, Samuel G.; Drukker, Karen; Li, Feng; Hadjiiski, Lubomir; Tourassi, Georgia D.; Engelmann, Roger M.; Giger, Maryellen L.; Redmond, George; Farahani, Keyvan; Kirby, Justin S.; Clarke, Laurence P.
2016-01-01
The purpose of this work is to describe the LUNGx Challenge for the computerized classification of lung nodules on diagnostic computed tomography (CT) scans as benign or malignant and report the performance of participants’ computerized methods along with that of six radiologists who participated in an observer study performing the same Challenge task on the same dataset. The Challenge provided sets of calibration and testing scans, established a performance assessment process, and created an infrastructure for case dissemination and result submission. Ten groups applied their own methods to 73 lung nodules (37 benign and 36 malignant) that were selected to achieve approximate size matching between the two cohorts. Area under the receiver operating characteristic curve (AUC) values for these methods ranged from 0.50 to 0.68; only three methods performed statistically better than random guessing. The radiologists’ AUC values ranged from 0.70 to 0.85; three radiologists performed statistically better than the best-performing computer method. The LUNGx Challenge compared the performance of computerized methods in the task of differentiating benign from malignant lung nodules on CT scans, placed in the context of the performance of radiologists on the same task. The continued public availability of the Challenge cases will provide a valuable resource for the medical imaging research community. PMID:28018939
ERIC Educational Resources Information Center
Kortsarts, Yana; Fischbach, Adam; Rufinus, Jeff; Utell, Janine M.; Yoon, Suk-Chung
2010-01-01
Developing and applying oral and written communication skills in the undergraduate computer science and computer information systems curriculum--one of the ABET accreditation requirements--is a very challenging and, at the same time, a rewarding task that provides various opportunities to enrich the undergraduate computer science and computer…
Parallel task processing of very large datasets
NASA Astrophysics Data System (ADS)
Romig, Phillip Richardson, III
This research concerns the use of distributed computer technologies for the analysis and management of very large datasets. Improvements in sensor technology, an emphasis on global change research, and greater access to data warehouses all increase the number of non-traditional users of remotely sensed data. We present a framework for distributed solutions to the challenges of datasets which exceed the online storage capacity of individual workstations. This framework, called parallel task processing (PTP), incorporates both the task- and data-level parallelism exemplified by many image processing operations. An implementation based on the principles of PTP, called Tricky, is also presented. Additionally, we describe the challenges and practical issues in modeling the performance of parallel task processing with large datasets. We present a mechanism for estimating the running time of each unit of work within a system and an algorithm that uses these estimates to simulate the execution environment and produce estimated runtimes. Finally, we describe and discuss experimental results which validate the design. Specifically, the system (a) is able to perform computation on datasets which exceed the capacity of any one disk, (b) provides reduction of overall computation time as a result of the task distribution even with the additional cost of data transfer and management, and (c) in the simulation mode accurately predicts the performance of the real execution environment.
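The performance-modeling idea sketched in the abstract, estimating a per-unit running time and replaying the distribution of work to predict overall runtime, can be illustrated as follows; the cost model and numbers are hypothetical, not Tricky's:

    import heapq

    # Illustrative sketch: estimate a runtime for each work unit, then simulate
    # greedy dispatch to the least-loaded worker to predict overall runtime.
    # The cost model (bytes / throughput + transfer time) is hypothetical.

    def estimate_unit_time(unit_bytes, throughput_bps, transfer_s):
        return unit_bytes / throughput_bps + transfer_s

    def simulate_makespan(unit_times, n_workers):
        finish = [0.0] * n_workers          # per-worker finish times (min-heap)
        heapq.heapify(finish)
        for t in unit_times:
            earliest = heapq.heappop(finish)
            heapq.heappush(finish, earliest + t)
        return max(finish)

    units = [estimate_unit_time(b, 50e6, 2.0) for b in [4e9, 1e9, 6e9, 3e9, 2e9]]
    print(f"predicted runtime on 3 workers: {simulate_makespan(units, 3):.1f} s")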
From an Executive Network to Executive Control: A Computational Model of the "n"-Back Task
ERIC Educational Resources Information Center
Chatham, Christopher H.; Herd, Seth A.; Brant, Angela M.; Hazy, Thomas E.; Miyake, Akira; O'Reilly, Randy; Friedman, Naomi P.
2011-01-01
A paradigmatic test of executive control, the n-back task, is known to recruit a widely distributed parietal, frontal, and striatal "executive network," and is thought to require an equally wide array of executive functions. The mapping of functions onto substrates in such a complex task presents a significant challenge to any theoretical…
Generic algorithms for high performance scalable geocomputing
NASA Astrophysics Data System (ADS)
de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek
2016-04-01
During the last decade, the characteristics of computing hardware have changed a lot. For example, instead of a single general purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available for developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g.: threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires the abstraction of the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle distribution of the compute tasks to CPU cores internally. In the resulting model the low-level details of how this is done are separated from the model-specific logic representing the modeled system. This contrasts with practices in which code for distributing compute tasks is mixed with model-specific code, and results in a better maintainable model. For flexibility and efficiency, the algorithms are configurable at compile time with respect to the following aspects: data type, value type, no-data handling, input value domain handling, and output value range handling. This makes the algorithms usable in very different contexts, without the need for making intrusive changes to existing models when using them. Applications that benefit from using the Fern library include the construction of forward simulation models in (global) hydrology (e.g. PCR-GLOBWB (Van Beek et al. 2011)), ecology, geomorphology, or land use change (e.g. PLUC (Verstegen et al. 2014)) and manipulation of hyper-resolution land surface data such as digital elevation models and remote sensing data. Using the Fern library, we have also created an add-on to the PCRaster Python Framework (Karssenberg et al. 2010) allowing its users to speed up their spatio-temporal models, sometimes by changing just a single line of Python code in their model. In our presentation we will give an overview of the design of the algorithms, providing examples of different contexts where they can be used to replace existing sequential algorithms, including the PCRaster environmental modeling software (www.pcraster.eu). We will show how the algorithms can be configured to behave differently when necessary.
References: Karssenberg, D., Schmitz, O., Salamon, P., De Jong, K. and Bierkens, M.F.P., 2010. A software framework for construction of process-based stochastic spatio-temporal models and data assimilation. Environmental Modelling & Software, 25, pp. 489-502 (Best Paper Award 2010: Software and Decision Support). Van Beek, L.P.H., Wada, Y. and Bierkens, M.F.P., 2011. Global monthly water stress: 1. Water balance and water availability. Water Resources Research, 47. Verstegen, J.A., Karssenberg, D., van der Hilst, F. and Faaij, A.P.C., 2014. Identifying a land use change cellular automaton by Bayesian data assimilation. Environmental Modelling & Software, 53, pp. 121-136.
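To make the idea of hiding task distribution behind a library call concrete, here is a minimal sketch (not the Fern API) of a 3x3 focal mean applied to row blocks of a raster in a thread pool, with a one-row halo so block edges are computed correctly:

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    # Sketch only (not the Fern API): apply a 3x3 focal mean to a raster by
    # splitting it into row blocks, processing the blocks in a thread pool, and
    # keeping the distribution logic behind a single function call.

    def focal_mean(block):
        padded = np.pad(block, 1, mode="edge")
        out = np.zeros_like(block, dtype=float)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                out += padded[1 + dr:1 + dr + block.shape[0],
                              1 + dc:1 + dc + block.shape[1]]
        return out / 9.0

    def parallel_focal_mean(grid, n_blocks=4):
        edges = np.linspace(0, grid.shape[0], n_blocks + 1, dtype=int)
        # Each block carries a one-row halo so the filter is correct at block edges.
        jobs = [(grid[max(a - 1, 0):min(b + 1, grid.shape[0])], a, b)
                for a, b in zip(edges[:-1], edges[1:])]
        with ThreadPoolExecutor() as pool:
            results = list(pool.map(lambda j: focal_mean(j[0]), jobs))
        trimmed = [r[(1 if a > 0 else 0):r.shape[0] - (1 if b < grid.shape[0] else 0)]
                   for r, (_, a, b) in zip(results, jobs)]
        return np.vstack(trimmed)

    grid = np.random.rand(1000, 1000)
    print(parallel_focal_mean(grid).shape)   # (1000, 1000)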
Editorial: Cognitive Architectures, Model Comparison and AGI
NASA Astrophysics Data System (ADS)
Lebiere, Christian; Gonzalez, Cleotilde; Warwick, Walter
2010-12-01
Cognitive Science and Artificial Intelligence share compatible goals of understanding and possibly generating broadly intelligent behavior. In order to determine if progress is made, it is essential to be able to evaluate the behavior of complex computational models, especially those built on general cognitive architectures, and compare it to benchmarks of intelligent behavior such as human performance. Significant methodological challenges arise, however, when trying to extend approaches used to compare model and human performance from tightly controlled laboratory tasks to complex tasks involving more open-ended behavior. This paper describes a model comparison challenge built around a dynamic control task, the Dynamic Stocks and Flows. We present and discuss distinct approaches to evaluating performance and comparing models. Lessons drawn from this challenge are discussed in light of the challenge of using cognitive architectures to achieve Artificial General Intelligence.
Incrementally Dissociating Syntax and Semantics
ERIC Educational Resources Information Center
Brennan, Jonathan R.
2010-01-01
A basic challenge for research into the neurobiology of language is understanding how the brain combines words to make complex representations. Linguistic theory divides this task into several computations including syntactic structure building and semantic composition. The close relationship between these computations, however, poses a strong…
Does Mood Change How We Organize Digital Files?
ERIC Educational Resources Information Center
Massey, Charlotte
2017-01-01
Retrieving files from one's computer is done daily and is an essential part of completing most tasks at work, yet surprisingly little research has examined the ways that people structure and organize their files. Management of personal digital information is a challenging task that users approach idiosyncratically. Large individual differences…
Costa Ferrer, Raquel; Serrano Rosa, Miguel Ángel; Zornoza Abad, Ana; Salvador Fernández-Montejo, Alicia
2010-11-01
The cardiovascular (CV) response to social challenge and stress is associated with the etiology of cardiovascular diseases. New ways of communication, time pressure and different types of information are common in our society. In this study, the cardiovascular response to two different tasks (open vs. closed information) was examined employing different communication channels (computer-mediated vs. face-to-face) and with different pace control (self vs. external). Our results indicate that there was a higher CV response in the computer-mediated condition, on the closed information task and in the externally paced condition. The role of these factors should be considered when studying the consequences of social stress and their underlying mechanisms.
VM Capacity-Aware Scheduling within Budget Constraints in IaaS Clouds
Thanasias, Vasileios; Lee, Choonhwa; Hanif, Muhammad; Kim, Eunsam; Helal, Sumi
2016-01-01
Recently, cloud computing has drawn significant attention from both industry and academia, bringing unprecedented changes to computing and information technology. The Infrastructure-as-a-Service (IaaS) model offers new abilities such as the elastic provisioning and relinquishing of computing resources in response to workload fluctuations. However, because the demand for resources dynamically changes over time, provisioning resources in a way that a given budget is efficiently utilized while maintaining sufficient performance remains a key challenge. This paper addresses the problem of task scheduling and resource provisioning for a set of tasks running on IaaS clouds; it presents novel provisioning and scheduling algorithms capable of executing tasks within a given budget, while minimizing the slowdown due to the budget constraint. Our simulation study demonstrates a substantial reduction of up to 70% in the overall task slowdown rate by the proposed algorithms. PMID:27501046
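One simple way to picture budget-aware provisioning (which is not the algorithm proposed in the paper) is a greedy rule that keeps renting the most cost-effective VM type the remaining hourly budget allows while tasks are waiting; the prices and speeds below are hypothetical:

    # Hedged sketch of budget-aware provisioning: while tasks wait and budget
    # remains, rent the VM type with the best speed-per-cost ratio that still
    # fits the remaining budget. Prices/speeds are hypothetical.

    vm_types = [  # (name, cost per hour, relative speed)
        ("small",  0.05, 1.0),
        ("medium", 0.10, 2.2),
        ("large",  0.20, 4.8),
    ]

    def provision(pending_tasks, budget_per_hour):
        rented, spent = [], 0.0
        while pending_tasks > len(rented):
            affordable = [v for v in vm_types if spent + v[1] <= budget_per_hour]
            if not affordable:
                break
            best = max(affordable, key=lambda v: v[2] / v[1])  # speed per dollar
            rented.append(best[0])
            spent += best[1]
        return rented, spent

    print(provision(pending_tasks=10, budget_per_hour=0.55))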
Possibilities and Challenges of Learning German in a Multimodal Environment: A Case Study
ERIC Educational Resources Information Center
Abrams, Zsuzsanna Ittzes
2016-01-01
Despite a growing body of research on task-based language learning (TBLT) (Samuda & Bygate, 2008; Ellis, 2003), there is still little information available regarding the pedagogical design behind tasks and how they are implemented (Samuda & Bygate, 2008). Scholars in computer-mediated second language (L2) learning have called for research…
Probabilistic brains: knowns and unknowns
Pouget, Alexandre; Beck, Jeffrey M; Ma, Wei Ji; Latham, Peter E
2015-01-01
There is strong behavioral and physiological evidence that the brain both represents probability distributions and performs probabilistic inference. Computational neuroscientists have started to shed light on how these probabilistic representations and computations might be implemented in neural circuits. One particularly appealing aspect of these theories is their generality: they can be used to model a wide range of tasks, from sensory processing to high-level cognition. To date, however, these theories have only been applied to very simple tasks. Here we discuss the challenges that will emerge as researchers start focusing their efforts on real-life computations, with a focus on probabilistic learning, structural learning and approximate inference. PMID:23955561
ERIC Educational Resources Information Center
West, Patti; Rutstein, Daisy Wise; Mislevy, Robert J.; Liu, Junhui; Choi, Younyoung; Levy, Roy; Crawford, Aaron; DiCerbo, Kristen E.; Chappel, Kristina; Behrens, John T.
2010-01-01
A major issue in the study of learning progressions (LPs) is linking student performance on assessment tasks to the progressions. This report describes the challenges faced in making this linkage using Bayesian networks to model LPs in the field of computer networking. The ideas are illustrated with exemplar Bayesian networks built on Cisco…
On the performances of computer vision algorithms on mobile platforms
NASA Astrophysics Data System (ADS)
Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.
2012-01-01
Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task since these devices have poor image sensors and optics as well as limited processing power. In this paper we have considered different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performances of the involved mobile platforms: Nokia N900, LG Optimus One, Samsung Galaxy SII.
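Face detection, one of the classic tasks listed above, runs in a few lines with OpenCV's bundled Haar cascade; the sketch assumes the opencv-python package and a local test image named frame.jpg (on a handset the frame would come from the camera pipeline instead):

    import cv2

    # Minimal face-detection sketch using OpenCV's bundled Haar cascade.
    # Assumes opencv-python is installed and 'frame.jpg' exists.

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    frame = cv2.imread("frame.jpg")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Smaller images and a coarser scale step keep the cost low on weak hardware.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5,
                                     minSize=(40, 40))
    print(f"detected {len(faces)} face(s): {faces}")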
Computational approaches to protein inference in shotgun proteomics
2012-01-01
Shotgun proteomics has recently emerged as a powerful approach to characterizing proteomes in biological samples. Its overall objective is to identify the form and quantity of each protein in a high-throughput manner by coupling liquid chromatography with tandem mass spectrometry. As a consequence of its high throughput nature, shotgun proteomics faces challenges with respect to the analysis and interpretation of experimental data. Among such challenges, the identification of proteins present in a sample has been recognized as an important computational task. This task generally consists of (1) assigning experimental tandem mass spectra to peptides derived from a protein database, and (2) mapping assigned peptides to proteins and quantifying the confidence of identified proteins. Protein identification is fundamentally a statistical inference problem with a number of methods proposed to address its challenges. In this review we categorize current approaches into rule-based, combinatorial optimization and probabilistic inference techniques, and present them using integer programming and Bayesian inference frameworks. We also discuss the main challenges of protein identification and propose potential solutions with the goal of spurring innovative research in this area. PMID:23176300
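A common rule-based baseline for step (2), finding a minimal set of proteins that explains the identified peptides, is a greedy set cover; the sketch below uses toy peptide-protein mappings and is meant only to illustrate the combinatorial flavor of the problem:

    # Greedy set-cover sketch for parsimonious protein inference: repeatedly pick
    # the protein that explains the most not-yet-covered identified peptides.
    # Peptide-to-protein mappings here are toy data.

    protein_to_peptides = {
        "P1": {"pepA", "pepB", "pepC"},
        "P2": {"pepB", "pepD"},
        "P3": {"pepC"},
        "P4": {"pepD", "pepE"},
    }

    def minimal_protein_set(identified_peptides):
        uncovered = set(identified_peptides)
        selected = []
        while uncovered:
            best = max(protein_to_peptides,
                       key=lambda p: len(protein_to_peptides[p] & uncovered))
            gained = protein_to_peptides[best] & uncovered
            if not gained:           # remaining peptides map to no known protein
                break
            selected.append(best)
            uncovered -= gained
        return selected

    print(minimal_protein_set({"pepA", "pepB", "pepC", "pepD", "pepE"}))
    # ['P1', 'P4'] explains all five peptides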
Sort-Mid tasks scheduling algorithm in grid computing.
Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M
2015-11-01
Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers has been to develop variant scheduling algorithms for achieving optimality, and these have shown good performance for task scheduling with regard to resource selection. However, using the full power of resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The first step is to obtain, for each task, the average value of its sorted list of completion times. Then, the maximum average is obtained. Finally, the task with the maximum average is allocated to the machine that has the minimum completion time. The allocated task is deleted and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan.
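Read literally, the allocation loop described above can be sketched as follows (our paraphrase with a hypothetical completion-time matrix, not the authors' code):

    # Sketch of the Sort-Mid allocation loop as described in the abstract
    # (a paraphrase, not the authors' code). etc[t] is a hypothetical list of
    # expected completion times of task t on each machine.

    etc = {
        "t1": [14.0,  9.0, 20.0],
        "t2": [ 6.0, 11.0,  4.0],
        "t3": [18.0, 16.0, 22.0],
    }

    def sort_mid(etc, n_machines):
        ready = [0.0] * n_machines            # current load of each machine
        schedule = {}
        pending = dict(etc)
        while pending:
            # Average of each task's sorted completion-time list; pick the maximum.
            averages = {t: sum(sorted(times)) / len(times)
                        for t, times in pending.items()}
            task = max(averages, key=averages.get)
            # Allocate it to the machine where it would complete earliest.
            machine = min(range(n_machines),
                          key=lambda m: ready[m] + pending[task][m])
            ready[machine] += pending[task][machine]
            schedule[task] = machine
            del pending[task]
        return schedule, max(ready)           # assignment and resulting makespan

    print(sort_mid(etc, 3))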
Integrated Speech and Language Technology for Intelligence, Surveillance, and Reconnaissance (ISR)
2017-07-01
applying submodularity techniques to address computing challenges posed by large datasets in speech and language processing. MT and speech tools were...aforementioned research-oriented activities, the IT system administration team provided necessary support to laboratory computing and network operations...operations of SCREAM Lab computer systems and networks. Other miscellaneous activities in relation to Task Order 29 are presented in an additional fourth
Applying Human Computation Methods to Information Science
ERIC Educational Resources Information Center
Harris, Christopher Glenn
2013-01-01
Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…
ERIC Educational Resources Information Center
Blikstein, Paulo; Worsley, Marcelo
2016-01-01
New high-frequency multimodal data collection technologies and machine learning analysis techniques could offer new insights into learning, especially when students have the opportunity to generate unique, personalized artifacts, such as computer programs, robots, and solutions to engineering challenges. To date most of the work on learning analytics…
Computational Natural Language Inference: Robust and Interpretable Question Answering
ERIC Educational Resources Information Center
Sharp, Rebecca Reynolds
2017-01-01
We address the challenging task of "computational natural language inference," by which we mean bridging two or more natural language texts while also providing an explanation of how they are connected. In the context of question answering (i.e., finding short answers to natural language questions), this inference connects the question…
Pagan, Marino
2014-01-01
The responses of high-level neurons tend to be mixtures of many different types of signals. While this diversity is thought to allow for flexible neural processing, it presents a challenge for understanding how neural responses relate to task performance and to neural computation. To address these challenges, we have developed a new method to parse the responses of individual neurons into weighted sums of intuitive signal components. Our method computes the weights by projecting a neuron's responses onto a predefined orthonormal basis. Once determined, these weights can be combined into measures of signal modulation; however, in their raw form these signal modulation measures are biased by noise. Here we introduce and evaluate two methods for correcting this bias, and we report that an analytically derived approach produces performance that is robust and superior to a bootstrap procedure. Using neural data recorded from inferotemporal cortex and perirhinal cortex as monkeys performed a delayed-match-to-sample target search task, we demonstrate how the method can be used to quantify the amounts of task-relevant signals in heterogeneous neural populations. We also demonstrate how these intuitive quantifications of signal modulation can be related to single-neuron measures of task performance (d′). PMID:24920017
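The core step, projecting a neuron's responses onto a predefined orthonormal basis, is a single matrix product; the sketch below uses a random orthonormal basis and synthetic responses to show the mechanics and does not reproduce the paper's analytic bias correction:

    import numpy as np

    # Sketch of parsing responses into weights on a predefined orthonormal basis.
    # Responses and basis are synthetic; the bias correction described in the
    # paper is not reproduced here.

    rng = np.random.default_rng(0)
    n_conditions = 8

    # A predefined orthonormal basis (columns), e.g. obtained via QR decomposition.
    basis, _ = np.linalg.qr(rng.normal(size=(n_conditions, n_conditions)))

    # One neuron's mean firing rates across the 8 task conditions.
    responses = rng.normal(loc=10.0, scale=2.0, size=n_conditions)

    # Weights are obtained by projecting the response vector onto each basis vector.
    weights = basis.T @ responses

    # Signal modulation attributed to a subset of components (e.g. components 1-3),
    # summarized here simply as the sum of their squared weights.
    modulation = np.sum(weights[1:4] ** 2)
    print(weights.round(2), round(float(modulation), 2))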
Flexible Description Language for HPC based Processing of Remote Sense Data
NASA Astrophysics Data System (ADS)
Nandra, Constantin; Gorgan, Dorian; Bacu, Victor
2016-04-01
When talking about Big Data, the most challenging aspect lies in processing them in order to gain new insight, find new patterns and gain knowledge from them. This problem is likely most apparent in the case of Earth Observation (EO) data. With ever higher numbers of data sources and increasing data acquisition rates, dealing with EO data is indeed a challenge [1]. Geoscientists should address this challenge by using flexible and efficient tools and platforms. To address this trend, the BigEarth project [2] aims to combine the advantages of high performance computing solutions with flexible processing description methodologies in order to reduce both task execution times and task definition time and effort. As a component of the BigEarth platform, WorDeL (Workflow Description Language) [3] is intended to offer a flexible, compact and modular approach to the task definition process. WorDeL, unlike other description alternatives such as Python or shell scripts, is oriented towards the description of topologies, using them as abstractions for the processing programs. This feature is intended to make it an attractive alternative for users lacking programming experience. By promoting modular designs, WorDeL not only makes the processing descriptions more user-readable and intuitive, but also helps organize the processing tasks into independent sub-tasks, which can be executed in parallel on multi-processor platforms in order to improve execution times. As a BigEarth platform [4] component, WorDeL represents the means by which the user interacts with the system, describing processing algorithms in terms of existing operators and workflows [5], which are ultimately translated into sets of executable commands. The WorDeL language has been designed to help in the definition of compute-intensive, batch tasks which can be distributed and executed on high-performance, cloud or grid-based architectures in order to improve the processing time. Main references for further information: [1] Gorgan, D., "Flexible and Adaptive Processing of Earth Observation Data over High Performance Computation Architectures", International Conference and Exhibition Satellite 2015, August 17-19, Houston, Texas, USA. [2] BigEarth project - flexible processing of big earth data over high performance computing architectures. http://cgis.utcluj.ro/bigearth, (2014). [3] Nandra, C., Gorgan, D., "Workflow Description Language for Defining Big Earth Data Processing Tasks", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp. 461-468, (2015). [4] Bacu, V., Stefan, T., Gorgan, D., "Adaptive Processing of Earth Observation Data on Cloud Infrastructures Based on Workflow Description", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp. 444-454, (2015). [5] Mihon, D., Bacu, V., Colceriu, V., Gorgan, D., "Modeling of Earth Observation Use Cases through the KEOPS System", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp. 455-460, (2015).
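To make the "topology of operators" idea concrete without reproducing WorDeL's own syntax, the sketch below (with hypothetical operator names) represents a workflow as a dependency graph and groups the operators into levels of mutually independent sub-tasks that could run in parallel:

    from collections import defaultdict, deque

    # Illustrative sketch only: a workflow as a dependency graph of named
    # operators, grouped into levels of mutually independent sub-tasks.
    # Operator names are hypothetical and do not reflect WorDeL syntax.

    edges = {                       # operator -> operators that consume its output
        "read_band4": ["ndvi"],
        "read_band8": ["ndvi"],
        "ndvi": ["threshold"],
        "threshold": ["export"],
        "export": [],
    }

    def parallel_levels(edges):
        indegree = defaultdict(int)
        for src, dsts in edges.items():
            indegree.setdefault(src, 0)
            for d in dsts:
                indegree[d] += 1
        ready = deque(sorted(n for n, deg in indegree.items() if deg == 0))
        levels = []
        while ready:
            level = list(ready)
            ready.clear()
            levels.append(level)
            for n in level:
                for d in edges.get(n, []):
                    indegree[d] -= 1
                    if indegree[d] == 0:
                        ready.append(d)
        return levels

    print(parallel_levels(edges))
    # [['read_band4', 'read_band8'], ['ndvi'], ['threshold'], ['export']]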
Tools and techniques for computational reproducibility.
Piccolo, Stephen R; Frampton, Michael B
2016-07-11
When reporting research findings, scientists document the steps they followed so that others can verify and build upon the research. When those steps have been described in sufficient detail that others can retrace the steps and obtain similar results, the research is said to be reproducible. Computers play a vital role in many research disciplines and present both opportunities and challenges for reproducibility. Computers can be programmed to execute analysis tasks, and those programs can be repeated and shared with others. The deterministic nature of most computer programs means that the same analysis tasks, applied to the same data, will often produce the same outputs. However, in practice, computational findings often cannot be reproduced because of complexities in how software is packaged, installed, and executed, and because of limitations associated with how scientists document analysis steps. Many tools and techniques are available to help overcome these challenges; here we describe seven such strategies. With a broad scientific audience in mind, we describe the strengths and limitations of each approach, as well as the circumstances under which each might be applied. No single strategy is sufficient for every scenario; thus we emphasize that it is often useful to combine approaches.
Has computational creativity successfully made it "Beyond the Fence" in musical theatre?
NASA Astrophysics Data System (ADS)
Jordanous, Anna
2017-10-01
A significant test for software is to task it with replicating human performance, as done recently with creative software and the commercial project Beyond the Fence (undertaken for a television documentary Computer Says Show). The remit of this project was to use computer software as much as possible to produce "the world's first computer-generated musical". Several creative systems were used to generate this musical, which was performed in London's West End in 2016. This paper considers the challenge of evaluating this project. Current computational creativity evaluation methods are ill-suited to evaluating projects that involve creative input from multiple systems and people. Following recent inspiration within computational creativity research from interaction design, here the DECIDE evaluation framework is applied to evaluate the Beyond the Fence project. Evaluation finds that the project was reasonably successful at achieving the task of using computational generation to produce a credible musical. Lessons have been learned for future computational creativity projects though, particularly for affording creative software more agency and enabling software to interact with other creative partners. Upon reflection, the DECIDE framework emerges as a useful evaluation "checklist" (if not a tangible operational methodology) for evaluating multiple creative systems participating in a creative task.
Comparative Modeling of Proteins: A Method for Engaging Students' Interest in Bioinformatics Tools
ERIC Educational Resources Information Center
Badotti, Fernanda; Barbosa, Alan Sales; Reis, André Luiz Martins; do Valle, Ítalo Faria; Ambrósio, Lara; Bitar, Mainá
2014-01-01
The huge increase in data being produced in the genomic era has produced a need to incorporate computers into the research process. Sequence generation, its subsequent storage, interpretation, and analysis are now entirely computer-dependent tasks. Universities from all over the world have been challenged to seek a way of encouraging students to…
ERIC Educational Resources Information Center
Thomas, Ally
2016-01-01
With the advent of the newly developed Common Core State Standards and the Next Generation Science Standards, innovative assessments, including technology-enhanced items and tasks, will be needed to meet the challenges of developing valid and reliable assessments in a world of computer-based testing. In a recent critique of the next generation…
NASA Astrophysics Data System (ADS)
Slater, Timothy F.; Loranz, D.; Prather, E. E.
2006-12-01
One of the most persistent challenges for astronomy teachers is to deeply and meaningfully assess students’ conceptual and quantitative understanding of astronomy topics. In an effort to uncover students’ actual understanding, members and affiliates of the Conceptual Astronomy and Physics Education Research (CAPER) Team at the University of Arizona and Truckee Meadows Community College are creating and field-testing innovative approaches to assessment. Leveraging the highly successful work from physics education research, we are creating a series of tasks in which students categorize a list of descriptions of common astronomical events or phenomena, or sort vocabulary terms into context-rich categories or conceptually rich sentences. These intellectually challenging tasks are being created to span the entire domain of topics in introductory astronomy for non-science-majoring undergraduates. When completed, these sorting tasks and vocabulary-in-context activities will be deliverable via a drag-and-drop computer interface.
Simplified Key Management for Digital Access Control of Information Objects
2016-07-02
0001, Task BC-5-2283, “Architecture, Design of Services for Air Force Wide Distributed Systems,” for USAF HQ USAF SAF/CIO A6. The views, opinions...Challenges for Cloud Computing,” Lecture Notes in Engineering and Computer Science: Proceedings World Congress on Engineering and Computer Science 2011...
Now and next-generation sequencing techniques: future of sequence analysis using cloud computing.
Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav
2012-01-01
Advances in the field of sequencing techniques have resulted in the greatly accelerated production of huge sequence datasets. This presents immediate challenges in database maintenance at datacenters. It provides additional computational challenges in data mining and sequence analysis. Together these represent a significant overburden on traditional stand-alone computer resources, and to reach effective conclusions quickly and efficiently, the virtualization of the resources and computation on a pay-as-you-go concept (together termed "cloud computing") has recently appeared. The collective resources of the datacenter, including both hardware and software, can be available publicly, being then termed a public cloud, the resources being provided in a virtual mode to the clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment is created in the cloud corresponding to the computational and data storage needs of the user via the internet. The task is then performed, the results transmitted to the user, and the environment finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing, and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection are discussed with reference to traditional workflows.
NASA Astrophysics Data System (ADS)
Halbrügge, Marc
2010-12-01
This paper describes the creation of a cognitive model submitted to the ‘Dynamic Stocks and Flows’ (DSF) modeling challenge. This challenge aims at comparing computational cognitive models for human behavior during an open ended control task. Participants in the modeling competition were provided with a simulation environment and training data for benchmarking their models while the actual specification of the competition task was withheld. To meet this challenge, the cognitive model described here was designed and optimized for generalizability. Only two simple assumptions about human problem solving were used to explain the empirical findings of the training data. In-depth analysis of the data set prior to the development of the model led to the dismissal of correlations or other parametric statistics as goodness-of-fit indicators. A new statistical measurement based on rank orders and sequence matching techniques is being proposed instead. This measurement, when being applied to the human sample, also identifies clusters of subjects that use different strategies for the task. The acceptability of the fits achieved by the model is verified using permutation tests.
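The permutation test mentioned at the end can be sketched in a few lines: re-pair model and human sequences at random many times and ask how often a fit as good as the observed one arises by chance. The similarity score below is a simple placeholder, not the rank-order and sequence-matching measure proposed in the paper:

    import random

    # Generic permutation-test sketch: how often does a randomly re-paired
    # "model vs. human" similarity reach the observed similarity? The similarity
    # function (fraction of matching decisions) is a placeholder measure.

    def similarity(a, b):
        return sum(x == y for x, y in zip(a, b)) / len(a)

    def permutation_p_value(model_seq, human_seq, n_perm=10000, seed=1):
        rng = random.Random(seed)
        observed = similarity(model_seq, human_seq)
        count = 0
        for _ in range(n_perm):
            shuffled = human_seq[:]
            rng.shuffle(shuffled)
            if similarity(model_seq, shuffled) >= observed:
                count += 1
        return (count + 1) / (n_perm + 1)

    model = ["inc", "dec", "inc", "inc", "dec", "inc", "dec", "dec"]
    human = ["inc", "dec", "inc", "dec", "dec", "inc", "dec", "inc"]
    print(permutation_p_value(model, human))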
A general method for assessing brain-computer interface performance and its limitations
NASA Astrophysics Data System (ADS)
Hill, N. Jeremy; Häuser, Ann-Katrin; Schalk, Gerwin
2014-04-01
Objective. When researchers evaluate brain-computer interface (BCI) systems, we want quantitative answers to questions such as: How good is the system’s performance? How good does it need to be? and: Is it capable of reaching the desired level in future? In response to the current lack of objective, quantitative, study-independent approaches, we introduce methods that help to address such questions. We identified three challenges: (I) the need for efficient measurement techniques that adapt rapidly and reliably to capture a wide range of performance levels; (II) the need to express results in a way that allows comparison between similar but non-identical tasks; (III) the need to measure the extent to which certain components of a BCI system (e.g. the signal processing pipeline) not only support BCI performance, but also potentially restrict the maximum level it can reach. Approach. For challenge (I), we developed an automatic staircase method that adjusted task difficulty adaptively along a single abstract axis. For challenge (II), we used the rate of information gain between two Bernoulli distributions: one reflecting the observed success rate, the other reflecting chance performance estimated by a matched random-walk method. This measure includes Wolpaw’s information transfer rate as a special case, but addresses the latter’s limitations including its restriction to item-selection tasks. To validate our approach and address challenge (III), we compared four healthy subjects’ performance using an EEG-based BCI, a ‘Direct Controller’ (a high-performance hardware input device), and a ‘Pseudo-BCI Controller’ (the same input device, but with control signals processed by the BCI signal processing pipeline). Main results. Our results confirm the repeatability and validity of our measures, and indicate that our BCI signal processing pipeline reduced attainable performance by about 33% (21 bits min-1). Significance. Our approach provides a flexible basis for evaluating BCI performance and its limitations, across a wide range of tasks and task difficulties.
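For reference, Wolpaw's information transfer rate, which the abstract names as a special case of the proposed measure, can be computed in a few lines; the generalized rate-of-information-gain measure itself is not reproduced here, and the example values are arbitrary:

    import math

    # Wolpaw's information transfer rate (bits per selection), named in the
    # abstract as a special case of the proposed measure. n_choices is the
    # number of possible selections, p the observed accuracy; example values
    # are arbitrary.

    def wolpaw_bits_per_selection(n_choices, p):
        if p <= 0 or p >= 1:
            return math.log2(n_choices) if p == 1 else 0.0
        return (math.log2(n_choices)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n_choices - 1)))

    bits = wolpaw_bits_per_selection(n_choices=4, p=0.85)
    selections_per_min = 10
    print(f"{bits:.2f} bits/selection -> {bits * selections_per_min:.1f} bits/min")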
2012-01-01
Background: In rehabilitation, training intensity is usually adapted to optimize the trained system to attain better performance (overload principle). However, in balance rehabilitation, the level of intensity required during training exercises to optimize improvement in balance has rarely been studied, probably due to the difficulty in quantifying the stability level during these exercises. The goal of the present study was to test whether the stabilizing/destabilizing forces model could be used to analyze how stability is challenged during several exergames, that are more and more used in balance rehabilitation, and a dynamic functional task, such as gait. Methods: Seven healthy older adults were evaluated with three-dimensional motion analysis during gait at natural and fast speed, and during three balance exergames (50/50 Challenge, Ski Slalom and Soccer). Mean and extreme values for stabilizing force, destabilizing force and the ratio of the two forces (stability index) were computed from kinematic and kinetic data to determine the mean and least level of dynamic, postural and overall balance stability, respectively. Results: Mean postural stability was lower (lower mean destabilizing force) during the 50/50 Challenge game than during all the other tasks, but peak postural instability moments were less challenging during this game than during any of the other tasks, as shown by the minimum destabilizing force values. Dynamic stability was progressively more challenged (higher mean and maximum stabilizing force) from the 50/50 Challenge to the Soccer and Slalom games, to the natural gait speed task and to the fast gait speed task, increasing the overall stability difficulty (mean and minimum stability index) in the same manner. Conclusions: The stabilizing/destabilizing forces model can be used to rate the level of balance requirements during different tasks such as gait or exergames. The results of our study showed that postural stability did not differ much between the evaluated tasks (except for the 50/50 Challenge), compared to dynamic stability, which was significantly less challenged during the games than during the functional tasks. Games with greater centre of mass displacements and changes in the base of support are likely to stimulate balance control enough to see improvements in balance during dynamic functional tasks, and could be tested in pathological populations with the approach used here. PMID:22607025
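Given time series of the two forces, the reported summary quantities reduce to means, extremes, and their ratio; the sketch below shows only that bookkeeping on made-up samples, since computing the forces themselves from kinematic and kinetic data is outside its scope:

    # Bookkeeping sketch for the reported summary measures, on made-up force
    # samples; deriving the stabilizing/destabilizing forces from kinematics
    # and kinetics is outside the scope of this illustration.

    stabilizing = [110.0, 150.0, 95.0, 180.0, 130.0]     # N, per time frame
    destabilizing = [60.0, 45.0, 80.0, 45.0, 55.0]       # N, per time frame

    stability_index = [s / d for s, d in zip(stabilizing, destabilizing)]

    summary = {
        "mean_stabilizing": sum(stabilizing) / len(stabilizing),    # dynamic stability
        "max_stabilizing": max(stabilizing),                        # extreme value
        "mean_destabilizing": sum(destabilizing) / len(destabilizing),  # postural stability
        "min_destabilizing": min(destabilizing),                    # extreme value
        "mean_index": sum(stability_index) / len(stability_index),  # overall stability
        "min_index": min(stability_index),                          # least overall stability
    }
    print(summary)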
Two-Level Verification of Data Integrity for Data Storage in Cloud Computing
NASA Astrophysics Data System (ADS)
Xu, Guangwei; Chen, Chunlin; Wang, Hongya; Zang, Zhuping; Pang, Mugen; Jiang, Ping
Data storage in cloud computing can save capital expenditure and relieve the burden of storage management for users. As loss or corruption of stored files may happen, many researchers focus on the verification of data integrity. However, massive numbers of users often bring large numbers of verification tasks for the auditor. Moreover, users also need to pay an extra fee for these verification tasks beyond the storage fee. Therefore, we propose a two-level verification of data integrity to alleviate these problems. The key idea is to routinely verify the data integrity by users and arbitrate the challenge between the user and cloud provider by the auditor according to the MACs and ϕ values. Extensive performance simulations show that the proposed scheme significantly decreases the auditor's verification tasks and the ratio of wrong arbitration.
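The user-side routine check described above amounts to recomputing a MAC over a retrieved block and comparing it with the value recorded at upload time; the sketch below uses Python's standard hmac module and does not reproduce the paper's exact MAC and ϕ-value construction:

    import hmac, hashlib

    # Minimal sketch of a user-side integrity check: recompute an HMAC over the
    # retrieved block and compare it with the MAC recorded at upload time.
    # The paper's exact MAC / phi-value construction is not reproduced here.

    def make_tag(key: bytes, block: bytes) -> bytes:
        return hmac.new(key, block, hashlib.sha256).digest()

    def verify_block(key: bytes, block: bytes, stored_tag: bytes) -> bool:
        return hmac.compare_digest(make_tag(key, block), stored_tag)

    key = b"user-secret-key"
    block = b"file chunk 0017 contents"
    tag = make_tag(key, block)                  # kept by the user at upload time

    print(verify_block(key, block, tag))                    # True: block intact
    print(verify_block(key, block + b" tampered", tag))     # False: escalate to auditor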
Abdullahi, Mohammed; Ngadi, Md Asri
2016-01-01
Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of the task scheduling problem has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic features of cloud resources make optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploration ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduced the makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Results of the simulation showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.
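The simulated-annealing ingredient added to SOS is the probabilistic acceptance of worse neighbouring schedules; the sketch below shows that acceptance rule on a toy task-to-VM assignment with makespan as the fitness, rather than the full SASOS algorithm, and the ETC matrix is hypothetical:

    import math, random

    # Generic simulated-annealing acceptance sketch for a task-to-VM assignment,
    # using makespan as fitness. This illustrates the SA ingredient only, not
    # the full SASOS algorithm; the ETC matrix below is hypothetical.

    etc = [[8, 5, 12], [6, 9, 4], [10, 7, 7], [3, 6, 5]]   # etc[task][vm]

    def makespan(assign):
        loads = [0.0] * len(etc[0])
        for task, vm in enumerate(assign):
            loads[vm] += etc[task][vm]
        return max(loads)

    def anneal(seed=0, t0=10.0, cooling=0.95, steps=500):
        rng = random.Random(seed)
        current = [rng.randrange(len(etc[0])) for _ in etc]
        best, temp = list(current), t0
        for _ in range(steps):
            neighbour = list(current)
            neighbour[rng.randrange(len(etc))] = rng.randrange(len(etc[0]))
            delta = makespan(neighbour) - makespan(current)
            if delta <= 0 or rng.random() < math.exp(-delta / temp):
                current = neighbour        # accept better, or worse with probability
            if makespan(current) < makespan(best):
                best = list(current)
            temp *= cooling
        return best, makespan(best)

    print(anneal())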
Abdullahi, Mohammed; Ngadi, Md Asri
2016-01-01
Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in the cloud computing environment. The current formulation of the task scheduling problem has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic features of cloud resources make optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and quality of solution of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploration ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan. PMID:27348127
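The simulated-annealing ingredient can be illustrated with a minimal sketch: a task-to-VM mapping is perturbed, and worse mappings are occasionally accepted with a temperature-dependent probability, with makespan as the fitness. The neighbourhood move, cooling schedule, and fitness are illustrative assumptions, not the authors' SASOS implementation.

```python
import math
import random

def makespan(assignment, task_len, vm_speed):
    """Completion time of the most loaded VM for a task->VM mapping."""
    load = [0.0] * len(vm_speed)
    for task, vm in enumerate(assignment):
        load[vm] += task_len[task] / vm_speed[vm]
    return max(load)

def sa_refine(assignment, task_len, vm_speed, temp=10.0, cooling=0.95, steps=2000):
    """SA-style refinement: move one task to another VM; accept worse
    solutions with probability exp(-delta / temperature)."""
    best = list(assignment)
    best_cost = cost = makespan(best, task_len, vm_speed)
    current = list(best)
    for _ in range(steps):
        cand = list(current)
        cand[random.randrange(len(cand))] = random.randrange(len(vm_speed))
        cand_cost = makespan(cand, task_len, vm_speed)
        if cand_cost < cost or random.random() < math.exp((cost - cand_cost) / temp):
            current, cost = cand, cand_cost
            if cost < best_cost:
                best, best_cost = list(current), cost
        temp *= cooling
    return best, best_cost

task_len = [random.randint(10, 100) for _ in range(50)]
vm_speed = [1.0, 1.5, 2.0, 2.5]
start = [random.randrange(len(vm_speed)) for _ in task_len]
print(makespan(start, task_len, vm_speed), sa_refine(start, task_len, vm_speed)[1])
```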
Challenging Technology, and Technology Infusion into 21st Century
NASA Technical Reports Server (NTRS)
Chau, S. N.; Hunter, D. J.
2001-01-01
In preparing for the space exploration challenges of the next century, the National Aeronautics and Space Administration (NASA) Center for Integrated Space Micro-Systems (CISM) is chartered to develop advanced spacecraft systems that can be adapted for a large spectrum of future space missions. Enabling this task are revolutions in the miniaturization of electrical, mechanical, and computational functions. On the other hand, these revolutionary technologies usually have much lower readiness levels than those required by flight projects. The mission of the Advanced Micro Spacecraft (AMS) task in CISM is to bridge the readiness gap between advanced technologies and flight projects. Additional information is contained in the original extended abstract.
Now and Next-Generation Sequencing Techniques: Future of Sequence Analysis Using Cloud Computing
Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav
2012-01-01
Advances in the field of sequencing techniques have resulted in the greatly accelerated production of huge sequence datasets. This presents immediate challenges in database maintenance at datacenters. It provides additional computational challenges in data mining and sequence analysis. Together these represent a significant overburden on traditional stand-alone computer resources, and to reach effective conclusions quickly and efficiently, the virtualization of the resources and computation on a pay-as-you-go concept (together termed “cloud computing”) has recently appeared. The collective resources of the datacenter, including both hardware and software, can be available publicly, being then termed a public cloud, the resources being provided in a virtual mode to the clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment is created in the cloud corresponding to the computational and data storage needs of the user via the internet. The task is then performed, the results transmitted to the user, and the environment finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing, and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection are discussed with reference to traditional workflows. PMID:23248640
Editorial: Challenges for the usability of AR and VR for clinical neurosurgical procedures.
de Ribaupierre, Sandrine; Eagleson, Roy
2017-10-01
There are a number of challenges that must be faced when trying to develop AR and VR-based neurosurgical simulators, surgical navigation platforms, and "Smart OR" systems. Trying to simulate an operating room environment and surgical tasks in Augmented and Virtual Reality is a challenge many are attempting to solve, in order to train surgeons or help them operate. What are some of the needs of the surgeon, and what are the challenges encountered (human-computer interface, perception, workflow, etc.)? We discuss these tradeoffs and conclude with critical remarks.
ERIC Educational Resources Information Center
Stiller, Klaus D.; Köster, Annamaria
2016-01-01
Online learning has gained importance in education over the last 20 years, but the well-known problem of high dropout rates still persists. According to the multi-dimensional learning tasks model, the cognitive (over)load of learners is essential to attrition when dealing with five challenges (e.g. technology, user interface) of an online training…
Methods for rapidly processing angular masks of next-generation galaxy surveys
NASA Astrophysics Data System (ADS)
Swanson, M. E. C.; Tegmark, Max; Hamilton, Andrew J. S.; Hill, J. Colin
2008-07-01
As galaxy surveys become larger and more complex, keeping track of the completeness, magnitude limit and other survey parameters as a function of direction on the sky becomes an increasingly challenging computational task. For example, typical angular masks of the Sloan Digital Sky Survey contain about N = 300000 distinct spherical polygons. Managing masks with such large numbers of polygons becomes intractably slow, particularly for tasks that run in O(N^2) time with a naive algorithm, such as finding which polygons overlap each other. Here we present a `divide-and-conquer' solution to this challenge: we first split the angular mask into pre-defined regions called `pixels', such that each polygon is in only one pixel, and then perform further computations, such as checking for overlap, on the polygons within each pixel separately. This reduces O(N^2) tasks to O(N), and also reduces the important task of determining in which polygon(s) a point on the sky lies from O(N) to O(1), resulting in significant computational speedup. Additionally, we present a method to efficiently convert any angular mask to and from the popular HEALPIX format. This method can be generically applied to convert to and from any desired spherical pixelization. We have implemented these techniques in a new version of the MANGLE software package, which is freely available at http://space.mit.edu/home/tegmark/mangle/, along with complete documentation and example applications. These new methods should prove quite useful to the astronomical community, and since MANGLE is a generic tool for managing angular masks on a sphere, it has the potential to benefit terrestrial mapmaking applications as well.
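The divide-and-conquer idea can be illustrated with a toy index: polygons are pre-binned into pixels so that a point-in-polygon query only tests the polygons in the point's pixel. Unlike MANGLE, which splits true spherical polygons along pixel boundaries so each piece lies in exactly one pixel, this sketch uses bounding boxes on a simple RA/Dec grid purely for illustration.

```python
from collections import defaultdict

# Each "polygon" is approximated by its bounding box in (RA, Dec) degrees,
# which is enough to show the pixel-based lookup; the real code works with
# spherical polygons and its own pixelization schemes.
def pixel_of(ra, dec, pix_deg=10.0):
    return (int(ra // pix_deg), int((dec + 90.0) // pix_deg))

def build_index(polygons, pix_deg=10.0):
    """Assign every polygon to the pixels its bounding box touches."""
    index = defaultdict(list)
    for pid, (ra0, ra1, dec0, dec1) in enumerate(polygons):
        for px in range(int(ra0 // pix_deg), int(ra1 // pix_deg) + 1):
            for py in range(int((dec0 + 90) // pix_deg), int((dec1 + 90) // pix_deg) + 1):
                index[(px, py)].append(pid)
    return index

def polygons_containing(ra, dec, polygons, index, pix_deg=10.0):
    """Only the polygons sharing the point's pixel are tested, not all N."""
    hits = []
    for pid in index.get(pixel_of(ra, dec, pix_deg), []):
        ra0, ra1, dec0, dec1 = polygons[pid]
        if ra0 <= ra <= ra1 and dec0 <= dec <= dec1:
            hits.append(pid)
    return hits

polys = [(10, 15, -5, 0), (12, 20, -2, 3), (200, 210, 40, 50)]
idx = build_index(polys)
print(polygons_containing(13.0, -1.0, polys, idx))  # -> [0, 1]
```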
Enterprise Cloud Architecture for Chinese Ministry of Railway
NASA Astrophysics Data System (ADS)
Shan, Xumei; Liu, Hefeng
Enterprises like the PRC Ministry of Railways (MOR) are facing various challenges ranging from a highly distributed computing environment to low legacy system utilization, and Cloud Computing is increasingly regarded as one workable solution to address this. This article describes a full-scale cloud solution with Intel Tashi as the virtual machine infrastructure layer, Hadoop HDFS as the computing platform, and a self-developed SaaS interface, gluing the virtual machines and HDFS with the Xen hypervisor. As a result, on-demand computing task application and deployment are addressed for MOR's real working scenarios at the end of the article.
NASA Astrophysics Data System (ADS)
Wolf, Matthias; Woodard, Anna; Li, Wenzhao; Hurtado Anampa, Kenyi; Tovar, Benjamin; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas
2017-10-01
The University of Notre Dame (ND) CMS group operates a modest-sized Tier-3 site suitable for local, final-stage analysis of CMS data. However, through the ND Center for Research Computing (CRC), Notre Dame researchers have opportunistic access to roughly 25k CPU cores of computing and a 100 Gb/s WAN network link. To understand the limits of what might be possible in this scenario, we undertook to use these resources for a wide range of CMS computing tasks from user analysis through large-scale Monte Carlo production (including both detector simulation and data reconstruction). We will discuss the challenges inherent in effectively utilizing CRC resources for these tasks and the solutions deployed to overcome them.
Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola
2016-01-01
Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems are lacking in the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system, which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
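A minimal sketch in plain Luigi, the system SciLuigi extends, showing how dependent modelling steps can be chained and parameterized. The task names, file targets, and parameter are invented for illustration, and the flow-based, network-of-components style that SciLuigi layers on top of Luigi is not shown here.

```python
import luigi

class PrepareData(luigi.Task):
    """Hypothetical preprocessing step producing a training file."""
    def output(self):
        return luigi.LocalTarget("train.csv")

    def run(self):
        with self.output().open("w") as f:
            f.write("feature,label\n1,0\n2,1\n")

class TrainModel(luigi.Task):
    """Hypothetical modelling step that depends on PrepareData."""
    c = luigi.FloatParameter(default=1.0)  # e.g. a parameter to tune

    def requires(self):
        return PrepareData()

    def output(self):
        return luigi.LocalTarget(f"model_c{self.c}.txt")

    def run(self):
        with self.input().open() as fin, self.output().open("w") as fout:
            fout.write(f"trained on {len(fin.readlines())} rows with C={self.c}\n")

if __name__ == "__main__":
    # Run several parameter settings; Luigi resolves the dependency graph.
    luigi.build([TrainModel(c=c) for c in (0.1, 1.0, 10.0)], local_scheduler=True)
```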
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, Harold C.; Ibanez, Daniel Alejandro
This report documents the ASC/ATDM Kokkos deliverable "Production Portable Dynamic Task DAG Capability." This capability enables applications to create and execute a dynamic task DAG: a collection of heterogeneous computational tasks with a directed acyclic graph (DAG) of "execute after" dependencies, where tasks and their dependencies are dynamically created and destroyed as tasks execute. The Kokkos task scheduler executes the dynamic task DAG on the target execution resource, e.g. a multicore CPU, a manycore CPU such as Intel's Knights Landing (KNL), or an NVIDIA GPU. Several major technical challenges had to be addressed during development of Kokkos' Task DAG capability: (1) portability to a GPU with its simplified hardware and micro-runtime, (2) thread-scalable memory allocation and deallocation from a bounded pool of memory, (3) a thread-scalable scheduler for the dynamic task DAG, and (4) usability by applications.
Computing Nash equilibria through computational intelligence methods
NASA Astrophysics Data System (ADS)
Pavlidis, N. G.; Parsopoulos, K. E.; Vrahatis, M. N.
2005-03-01
Nash equilibrium constitutes a central solution concept in game theory. The task of detecting the Nash equilibria of a finite strategic game remains a challenging problem to date. This paper investigates the effectiveness of three computational intelligence techniques, namely covariance matrix adaptation evolution strategies, particle swarm optimization, as well as differential evolution, to compute Nash equilibria of finite strategic games as global minima of a real-valued, nonnegative function. An issue of particular interest is to detect more than one Nash equilibrium of a game. The performance of the considered computational intelligence methods on this problem is investigated using multistart and deflection.
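A small sketch of the general formulation: equilibrium detection is treated as global minimization of a nonnegative regret-style function, here solved with SciPy's differential evolution on a 2x2 matching-pennies game. The particular objective and the softmax parametrization of mixed strategies are illustrative choices, not necessarily the function used in the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Payoff matrices for matching pennies: row player A, column player B.
A = np.array([[1, -1], [-1, 1]], dtype=float)
B = -A  # zero-sum

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def regret(x):
    """Nonnegative function that is zero exactly at a Nash equilibrium:
    each player's best-response payoff minus their current payoff."""
    p, q = softmax(x[:2]), softmax(x[2:])   # mixed strategies
    ua, ub = p @ A @ q, p @ B @ q           # current expected payoffs
    best_a = max(A @ q)                     # best pure response of the row player
    best_b = max(p @ B)                     # best pure response of the column player
    return (best_a - ua) + (best_b - ub)

bounds = [(-5, 5)] * 4
res = differential_evolution(regret, bounds, seed=0, tol=1e-10)
p, q = softmax(res.x[:2]), softmax(res.x[2:])
print(np.round(p, 3), np.round(q, 3), res.fun)  # expect ~[0.5 0.5] [0.5 0.5], ~0
```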
Computational characterization of ordered nanostructured surfaces
NASA Astrophysics Data System (ADS)
Mohieddin Abukhdeir, Nasser
2016-08-01
A vital and challenging task for materials researchers is to determine relationships between material characteristics and desired properties. While the measurement and assessment of material properties can be complex, quantitatively characterizing their structure is frequently a more challenging task. This issue is magnified for materials researchers in the areas of nanoscience and nanotechnology, where material structure is further complicated by phenomena such as self-assembly, collective behavior, and measurement uncertainty. Recent progress has been made in this area for both self-assembled and nanostructured surfaces due to increasing accessibility of imaging techniques at the nanoscale. In this context, recent advances in nanomaterial surface structure characterization are reviewed including the development of new theory and image processing methods.
Multi-task learning with group information for human action recognition
NASA Astrophysics Data System (ADS)
Qian, Li; Wu, Song; Pu, Nan; Xu, Shulin; Xiao, Guoqiang
2018-04-01
Human action recognition is an important and challenging task in computer vision research, due to the variations in human motion performance, interpersonal differences and recording settings. In this paper, we propose a novel multi-task learning framework with group information (MTL-GI) for accurate and efficient human action recognition. Specifically, we firstly obtain group information through calculating the mutual information according to the latent relationship between Gaussian components and action categories, and clustering similar action categories into the same group by affinity propagation clustering. Additionally, in order to explore the relationships of related tasks, we incorporate group information into multi-task learning. Experimental results evaluated on two popular benchmarks (UCF50 and HMDB51 datasets) demonstrate the superiority of our proposed MTL-GI framework.
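The grouping step can be sketched in isolation: given a similarity matrix between action categories, affinity propagation clusters similar categories into groups. The matrix below is made up; in the paper it would be derived from mutual information between Gaussian components and action categories.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

categories = ["walk", "run", "jump", "wave", "clap"]

# Hypothetical similarity matrix between action categories (higher = more similar).
S = np.array([
    [1.0, 0.8, 0.6, 0.1, 0.1],
    [0.8, 1.0, 0.7, 0.1, 0.1],
    [0.6, 0.7, 1.0, 0.2, 0.1],
    [0.1, 0.1, 0.2, 1.0, 0.9],
    [0.1, 0.1, 0.1, 0.9, 1.0],
])

ap = AffinityPropagation(affinity="precomputed", random_state=0)
labels = ap.fit_predict(S)

groups = {}
for cat, lab in zip(categories, labels):
    groups.setdefault(lab, []).append(cat)
print(groups)  # e.g. locomotion-like actions grouped apart from gesture-like actions
```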
A Symbolic Model of the Nonconscious Acquisition of Information.
ERIC Educational Resources Information Center
Ling, Charles X.; Marinov, Marin
1994-01-01
Challenges Smolensky's theory that human intuitive/nonconscious cognitive processes can only be accurately explained in terms of subsymbolic computations in artificial neural networks. Symbolic learning models of two cognitive tasks involving nonconscious acquisition of information are presented: learning production rules and artificial finite…
Using the cloud to speed-up calibration of watershed-scale hydrologic models (Invited)
NASA Astrophysics Data System (ADS)
Goodall, J. L.; Ercan, M. B.; Castronova, A. M.; Humphrey, M.; Beekwilder, N.; Steele, J.; Kim, I.
2013-12-01
This research focuses on using the cloud to address computational challenges associated with hydrologic modeling. One example is calibration of a watershed-scale hydrologic model, which can take days of execution time on typical computers. While parallel algorithms for model calibration exist and some researchers have used multi-core computers or clusters to run these algorithms, these solutions do not fully address the challenge because (i) calibration can still be too time consuming even on multicore personal computers and (ii) few in the community have the time and expertise needed to manage a compute cluster. Given this, another option for addressing this challenge that we are exploring through this work is the use of the cloud for speeding-up calibration of watershed-scale hydrologic models. The cloud used in this capacity provides a means for renting a specific number and type of machines for only the time needed to perform a calibration model run. The cloud allows one to precisely balance the duration of the calibration with the financial costs so that, if the budget allows, the calibration can be performed more quickly by renting more machines. Focusing specifically on the SWAT hydrologic model and a parallel version of the DDS calibration algorithm, we show significant speed-ups across a range of watershed sizes using up to 256 cores to perform a model calibration. The tool provides a simple web-based user interface and the ability to monitor the calibration job submission process during the calibration process. Finally, this talk concludes with initial work to leverage the cloud for other tasks associated with hydrologic modeling, including tasks related to preparing inputs for constructing place-based hydrologic models.
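A minimal sketch of the parallel-evaluation idea, with a stand-in objective instead of a SWAT run and a simplified DDS-style perturbation rule. Cloud provisioning, job submission, and the actual parallel DDS variant used in this work are not modelled; the parameter ranges and worker count are invented.

```python
import random
from multiprocessing import Pool

BOUNDS = [(0.0, 1.0), (0.0, 100.0), (-2.0, 2.0)]  # hypothetical SWAT parameter ranges

def objective(params):
    """Stand-in for one SWAT model run scored against observations
    (e.g. 1 - Nash-Sutcliffe efficiency); a real run takes minutes to hours."""
    return sum((p - (lo + hi) / 2) ** 2 for p, (lo, hi) in zip(params, BOUNDS))

def perturb(best, frac, r=0.2):
    """Simplified DDS-style move: perturb each parameter with probability
    `frac`, with a step proportional to the parameter range."""
    cand = list(best)
    for i, (lo, hi) in enumerate(BOUNDS):
        if random.random() < frac:
            cand[i] = min(hi, max(lo, cand[i] + random.gauss(0, r * (hi - lo))))
    return cand

if __name__ == "__main__":
    best = [random.uniform(lo, hi) for lo, hi in BOUNDS]
    best_score = objective(best)
    with Pool(8) as pool:                 # 8 local workers; a cloud run might rent 256 cores
        for it in range(1, 51):
            frac = 1.0 - it / 51          # perturb fewer parameters as the search proceeds
            candidates = [perturb(best, frac) for _ in range(8)]
            for cand, score in zip(candidates, pool.map(objective, candidates)):
                if score < best_score:
                    best, best_score = cand, score
    print(best, best_score)
```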
Goal-Directed Decision Making with Spiking Neurons
Friedrich, Johannes; Lengyel, Máté
2016-01-01
Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. SIGNIFICANCE STATEMENT Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level. PMID:26843636
Nickerson, David; Atalag, Koray; de Bono, Bernard; Geiger, Jörg; Goble, Carole; Hollmann, Susanne; Lonien, Joachim; Müller, Wolfgang; Regierer, Babette; Stanford, Natalie J; Golebiewski, Martin; Hunter, Peter
2016-04-06
Reconstructing and understanding the Human Physiome virtually is a complex mathematical problem, and a highly demanding computational challenge. Mathematical models spanning from the molecular level through to whole populations of individuals must be integrated, then personalized. This requires interoperability with multiple disparate and geographically separated data sources, and myriad computational software tools. Extracting and producing knowledge from such sources, even when the databases and software are readily available, is a challenging task. Despite the difficulties, researchers must frequently perform these tasks so that available knowledge can be continually integrated into the common framework required to realize the Human Physiome. Software and infrastructures that support the communities that generate these, together with their underlying standards to format, describe and interlink the corresponding data and computer models, are pivotal to the Human Physiome being realized. They provide the foundations for integrating, exchanging and re-using data and models efficiently, and correctly, while also supporting the dissemination of growing knowledge in these forms. In this paper, we explore the standards, software tooling, repositories and infrastructures that support this work, and detail what makes them vital to realizing the Human Physiome.
A haptic interface for virtual simulation of endoscopic surgery.
Rosenberg, L B; Stredney, D
1996-01-01
Virtual reality can be described as a convincingly realistic and naturally interactive simulation in which the user is given a first-person illusion of being immersed within a computer-generated environment. While virtual reality systems offer great potential to reduce the cost and increase the quality of medical training, many technical challenges must be overcome before such simulation platforms offer effective alternatives to more traditional training means. A primary challenge in developing effective virtual reality systems is designing the human interface hardware which allows rich sensory information to be presented to users in natural ways. When simulating a given manual procedure, task-specific human interface requirements dictate task-specific human interface hardware. The following paper explores the design of human interface hardware that satisfies the task-specific requirements of virtual reality simulation of endoscopic surgical procedures. Design parameters were derived through direct cadaver studies and interviews with surgeons. The final hardware design is presented.
A novel strategy for load balancing of distributed medical applications.
Logeswaran, Rajasvaran; Chen, Li-Choo
2012-04-01
Current trends in medicine, specifically in the electronic handling of medical applications, ranging from digital imaging, paperless hospital administration and electronic medical records, telemedicine, to computer-aided diagnosis, create a burden on the network. Distributed Service Architectures, such as the Intelligent Network (IN), Telecommunication Information Networking Architecture (TINA) and Open Service Access (OSA), are able to meet this new challenge. Distribution enables computational tasks to be spread among multiple processors; hence, performance is an important issue. This paper proposes a novel approach to load balancing, the Random Sender Initiated Algorithm, for the distribution of tasks among several nodes sharing the same computational object (CO) instances in Distributed Service Architectures. Simulations illustrate that the proposed algorithm produces better network performance than the benchmark load balancing algorithms, the Random Node Selection Algorithm and the Shortest Queue Algorithm, especially under medium and heavily loaded conditions.
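A toy discrete simulation contrasting a sender-initiated random transfer policy with shortest-queue assignment. The arrival process, service rates, and overload threshold are invented for illustration and do not reproduce the paper's Distributed Service Architecture model or its benchmark results.

```python
import random

def simulate(policy, nodes=5, steps=5000, arrival_p=0.8, threshold=4, seed=1):
    """Return the time-averaged total number of queued tasks under a policy."""
    rng = random.Random(seed)
    queues = [0] * nodes
    total = 0
    for _ in range(steps):
        if rng.random() < arrival_p:                 # a new task arrives at a random node
            target = rng.randrange(nodes)
            if policy == "shortest_queue":
                target = min(range(nodes), key=lambda i: queues[i])
            elif policy == "random_sender" and queues[target] > threshold:
                # overloaded sender pushes the new task to a randomly chosen peer
                target = rng.randrange(nodes)
            queues[target] += 1
        for i in range(nodes):                       # probabilistic service at each node
            if queues[i] and rng.random() < 0.2:
                queues[i] -= 1
        total += sum(queues)
    return total / steps

for policy in ("none", "random_sender", "shortest_queue"):
    print(policy, round(simulate(policy), 2))
```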
Computer-task testing of rhesus monkeys (Macaca mulatta) in the social milieu.
Washburn, D A; Harper, S; Rumbaugh, D M
1994-07-01
Previous research has demonstrated that a behavior and performance testing paradigm, in which rhesus monkeys (Macaca mulatta) manipulate a joystick to respond to computer-generated stimuli, provides environmental enrichment and supports the psychological well-being of captive research animals. The present study was designed to determine whether computer-task activity would be affected by pair-housing animals that had previously been tested only in their single-animal home cages. No differences were observed in productivity or performance levels as a function of housing condition, even when the animals were required to "self-identify" prior to performing each trial. The data indicate that cognitive challenge and control are as preferred by the animals as social opportunities, and that, together with comfort/health considerations, each must be addressed for the assurance of psychological well-being.
Big Data in the Earth Observing System Data and Information System
NASA Technical Reports Server (NTRS)
Lynnes, Chris; Baynes, Katie; McInerney, Mark
2016-01-01
Approaches that are being pursued for the Earth Observing System Data and Information System (EOSDIS) data system to address the challenges of Big Data were presented to the NASA Big Data Task Force. Cloud prototypes are underway to tackle the volume challenge of Big Data. However, advances in computer hardware or cloud won't help (much) with variety. Rather, interoperability standards, conventions, and community engagement are the key to addressing variety.
The Challenges of Human-Autonomy Teaming
NASA Technical Reports Server (NTRS)
Vera, Alonso
2017-01-01
Machine intelligence is improving rapidly based on advances in big data analytics, deep learning algorithms, networked operations, and continuing exponential growth in computing power (Moore's Law). This growth in the power and applicability of increasingly intelligent systems will change the roles of humans, shifting them to tasks where adaptive problem solving, reasoning and decision-making are required. This talk will address the challenges involved in engineering autonomous systems that function effectively with humans in aeronautics domains.
Reinforcement Learning and Episodic Memory in Humans and Animals: An Integrative Framework.
Gershman, Samuel J; Daw, Nathaniel D
2017-01-03
We review the psychology and neuroscience of reinforcement learning (RL), which has experienced significant progress in the past two decades, enabled by the comprehensive experimental study of simple learning and decision-making tasks. However, one challenge in the study of RL is computational: The simplicity of these tasks ignores important aspects of reinforcement learning in the real world: (a) State spaces are high-dimensional, continuous, and partially observable; this implies that (b) data are relatively sparse and, indeed, precisely the same situation may never be encountered twice; furthermore, (c) rewards depend on the long-term consequences of actions in ways that violate the classical assumptions that make RL tractable. A seemingly distinct challenge is that, cognitively, theories of RL have largely involved procedural and semantic memory, the way in which knowledge about action values or world models extracted gradually from many experiences can drive choice. This focus on semantic memory leaves out many aspects of memory, such as episodic memory, related to the traces of individual events. We suggest that these two challenges are related. The computational challenge can be dealt with, in part, by endowing RL systems with episodic memory, allowing them to (a) efficiently approximate value functions over complex state spaces, (b) learn with very little data, and (c) bridge long-term dependencies between actions and rewards. We review the computational theory underlying this proposal and the empirical evidence to support it. Our proposal suggests that the ubiquitous and diverse roles of memory in RL may function as part of an integrated learning system.
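The episodic ingredient can be sketched as nearest-neighbour value estimation over stored experiences, in the spirit of episodic control: values generalize from very few samples without gradual parametric updates. The distance metric, averaging rule, and toy task are illustrative choices, not a model from the review.

```python
import numpy as np

class EpisodicValueMemory:
    """Store (state, return) pairs and estimate values by k-nearest neighbours."""

    def __init__(self, k=5):
        self.k = k
        self.states, self.returns = [], []

    def add(self, state, ret):
        self.states.append(np.asarray(state, dtype=float))
        self.returns.append(float(ret))

    def value(self, state):
        """Average the returns of the k most similar stored states."""
        if not self.states:
            return 0.0
        dists = np.linalg.norm(np.array(self.states) - np.asarray(state, float), axis=1)
        nearest = np.argsort(dists)[: self.k]
        return float(np.mean(np.array(self.returns)[nearest]))

# A handful of episodes is enough to generalize to nearby states (data-efficient),
# unlike incremental parametric updates that need many repetitions.
memory = EpisodicValueMemory(k=3)
rng = np.random.default_rng(0)
for _ in range(20):
    s = rng.uniform(-1, 1, size=2)
    memory.add(s, ret=1.0 if s[0] > 0 else 0.0)   # toy task: the right half is rewarding
print(memory.value([0.5, 0.1]), memory.value([-0.5, 0.1]))
```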
Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program
NASA Technical Reports Server (NTRS)
Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.
2010-01-01
The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the Aeroelasticity Branch will examine other experimental efforts within the Subsonic Fixed Wing (SFW) program (such as testing of the NASA Common Research Model (CRM)) and other NASA programs and assess aeroelasticity issues and research topics.
Biamonte, Jacob; Wittek, Peter; Pancotti, Nicola; Rebentrost, Patrick; Wiebe, Nathan; Lloyd, Seth
2017-09-13
Fuelled by increasing computer power and algorithmic advances, machine learning techniques have become powerful tools for finding patterns in data. Quantum systems produce atypical patterns that classical systems are thought not to produce efficiently, so it is reasonable to postulate that quantum computers may outperform classical computers on machine learning tasks. The field of quantum machine learning explores how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and software challenges are still considerable.
Andersen, Pia; Lindgaard, Anne-Mette; Prgomet, Mirela; Creswick, Nerida; Westbrook, Johanna I
2009-01-01
Background Selecting the right mix of stationary and mobile computing devices is a significant challenge for system planners and implementers. There is very limited research evidence upon which to base such decisions. Objective We aimed to investigate the relationships between clinician role, clinical task, and selection of a computer hardware device in hospital wards. Methods Twenty-seven nurses and eight doctors were observed for a total of 80 hours as they used a range of computing devices to access a computerized provider order entry system on two wards at a major Sydney teaching hospital. Observers used a checklist to record the clinical tasks completed, devices used, and location of the activities. Field notes were also documented during observations. Semi-structured interviews were conducted after observation sessions. Assessment of the physical attributes of three devices—stationary PCs, computers on wheels (COWs) and tablet PCs—was made. Two types of COWs were available on the wards: generic COWs (laptops mounted on trolleys) and ergonomic COWs (an integrated computer and cart device). Heuristic evaluation of the user interfaces was also carried out. Results The majority (93.1%) of observed nursing tasks were conducted using generic COWs. Most nursing tasks were performed in patients’ rooms (57%) or in the corridors (36%), with a small percentage at a patient’s bedside (5%). Most nursing tasks related to the preparation and administration of drugs. Doctors on ward rounds conducted 57.3% of observed clinical tasks on generic COWs and 35.9% on tablet PCs. On rounds, 56% of doctors’ tasks were performed in the corridors, 29% in patients’ rooms, and 3% at the bedside. Doctors not on a ward round conducted 93.6% of tasks using stationary PCs, most often within the doctors’ office. Nurses and doctors were observed performing workarounds, such as transcribing medication orders from the computer to paper. Conclusions The choice of device was related to clinical role, nature of the clinical task, degree of mobility required, including where task completion occurs, and device design. Nurses’ work, and clinical tasks performed by doctors during ward rounds, require highly mobile computer devices. Nurses and doctors on ward rounds showed a strong preference for generic COWs over all other devices. Tablet PCs were selected by doctors for only a small proportion of clinical tasks. Even when using mobile devices clinicians completed a very low proportion of observed tasks at the bedside. The design of the devices and ward space configurations place limitations on how and where devices are used and on the mobility of clinical work. In such circumstances, clinicians will initiate workarounds to compensate. In selecting hardware devices, consideration should be given to who will be using the devices, the nature of their work, and the physical layout of the ward. PMID:19674959
Scaffolding Learning by Modelling: The Effects of Partially Worked-out Models
ERIC Educational Resources Information Center
Mulder, Yvonne G.; Bollen, Lars; de Jong, Ton; Lazonder, Ard W.
2016-01-01
Creating executable computer models is a potentially powerful approach to science learning. Learning by modelling is also challenging because students can easily get overwhelmed by the inherent complexities of the task. This study investigated whether offering partially worked-out models can facilitate students' modelling practices and promote…
Methodological Reflections: Designing and Understanding Computer-Supported Collaborative Learning
ERIC Educational Resources Information Center
Hamalainen, Raija
2012-01-01
Learning involves more than just a small group of participants, which makes designing and managing collaborative learning processes in higher education a challenging task. As a result, emerging concerns in current research have pointed increasingly to teacher orchestrated learning processes in naturalistic learning settings. In line with this…
The assessment of risk from dermal exposure for thousands of chemicals, such as consumer products, due to their potential to enter the environment as contaminants is a daunting task. A strategy has been developed to integrate high-throughput technologies with toxicity, known as ...
Security: Progress and Challenges
ERIC Educational Resources Information Center
Luker, Mark A.
2004-01-01
The Homepage column in the March/April 2003 issue of "EDUCAUSE Review" explained the national implication of security vulnerabilities in higher education and the role of the EDUCAUSE/Internet2 Computer and Network Security Task Force in representing the higher education sector in the development of the National Strategy to Secure Cyberspace. Among…
Quantum Computation: Entangling with the Future
NASA Technical Reports Server (NTRS)
Jiang, Zhang
2017-01-01
Commercial applications of quantum computation have become viable due to the rapid progress of the field in the recent years. Efficient quantum algorithms are discovered to cope with the most challenging real-world problems that are too hard for classical computers. Manufactured quantum hardware has reached unprecedented precision and controllability, enabling fault-tolerant quantum computation. Here, I give a brief introduction on what principles in quantum mechanics promise its unparalleled computational power. I will discuss several important quantum algorithms that achieve exponential or polynomial speedup over any classical algorithm. Building a quantum computer is a daunting task, and I will talk about the criteria and various implementations of quantum computers. I conclude the talk with near-future commercial applications of a quantum computer.
Plasmonic computing of spatial differentiation
NASA Astrophysics Data System (ADS)
Zhu, Tengfeng; Zhou, Yihan; Lou, Yijie; Ye, Hui; Qiu, Min; Ruan, Zhichao; Fan, Shanhui
2017-05-01
Optical analog computing offers high-throughput low-power-consumption operation for specialized computational tasks. Traditionally, optical analog computing in the spatial domain uses a bulky system of lenses and filters. Recent developments in metamaterials enable the miniaturization of such computing elements down to a subwavelength scale. However, the required metamaterial consists of a complex array of meta-atoms, and direct demonstration of image processing is challenging. Here, we show that the interference effects associated with surface plasmon excitations at a single metal-dielectric interface can perform spatial differentiation. And we experimentally demonstrate edge detection of an image without any Fourier lens. This work points to a simple yet powerful mechanism for optical analog computing at the nanoscale.
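For reference, the operation the device performs optically, first-order spatial differentiation, has a simple digital analogue that highlights edges. The finite-difference kernel below is a generic choice for illustration, not a model of the plasmonic transfer function.

```python
import numpy as np

def spatial_derivative(image, axis=1):
    """Central-difference approximation of the image derivative along one axis."""
    return (np.roll(image, -1, axis=axis) - np.roll(image, 1, axis=axis)) / 2.0

# Synthetic image: a bright square on a dark background.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0

edges = np.abs(spatial_derivative(img, axis=1)) + np.abs(spatial_derivative(img, axis=0))
print(edges.max(), np.count_nonzero(edges))  # nonzero only along the square's boundary
```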
Object recognition based on Google's reverse image search and image similarity
NASA Astrophysics Data System (ADS)
Horváth, András.
2015-12-01
Image classification is one of the most challenging tasks in computer vision, and a general multiclass classifier could solve many different tasks in image processing. Classification is usually done by shallow learning for predefined objects, which is a difficult task and very different from human vision, which is based on continuous learning of object classes; humans require years to learn a large taxonomy of objects that are neither disjoint nor independent. In this paper I present a system based on the Google image similarity algorithm and the Google image database, which can classify a large set of different objects in a human-like manner, identifying related classes and taxonomies.
Low latency network and distributed storage for next generation HPC systems: the ExaNeSt project
NASA Astrophysics Data System (ADS)
Ammendola, R.; Biagioni, A.; Cretaro, P.; Frezza, O.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Paolucci, P. S.; Pastorelli, E.; Pisani, F.; Simula, F.; Vicini, P.; Navaridas, J.; Chaix, F.; Chrysos, N.; Katevenis, M.; Papaeustathiou, V.
2017-10-01
With processor architecture evolution, the HPC market has undergone a paradigm shift. The adoption of low-cost, Linux-based clusters extended the reach of HPC from its roots in modelling and simulation of complex physical systems to a broader range of industries, from biotechnology, cloud computing, computer analytics and big data challenges to manufacturing sectors. In this perspective, the near future HPC systems can be envisioned as composed of millions of low-power computing cores, densely packed — meaning cooling by appropriate technology — with a tightly interconnected, low latency and high performance network and equipped with a distributed storage architecture. Each of these features — dense packing, distributed storage and high performance interconnect — represents a challenge, made all the harder by the need to solve them at the same time. These challenges lie as stumbling blocks along the road towards Exascale-class systems; the ExaNeSt project acknowledges them and tasks itself with investigating ways around them.
Granata, C; Pino, M; Legouverneur, G; Vidal, J-S; Bidaud, P; Rigaud, A-S
2013-01-01
Socially assistive robotics for elderly care is a growing field. However, although robotics has the potential to support the elderly in daily tasks by offering specific services, the development of usable interfaces is still a challenge. Since several factors, such as age- or disease-related changes in perceptual or cognitive abilities and familiarity with computer technologies, influence technology use, they must be considered when designing interfaces for these users. This paper presents findings from usability testing of two different services provided by a socially assistive robot intended for elderly people with cognitive impairment: a grocery shopping list and an agenda application. The main goal of this study is to identify the usability problems of the robot interface for target end-users as well as to isolate the human factors that affect the use of the technology by the elderly. Socio-demographic characteristics and computer experience were examined as factors that could have an influence on task performance. A group of 11 elderly persons with Mild Cognitive Impairment and a group of 11 cognitively healthy elderly individuals took part in this study. Performance measures (task completion time and number of errors) were collected. Cognitive profile, age and computer experience were found to impact task performance. Participants with cognitive impairment completed the tasks while committing more errors than cognitively healthy elderly participants. Younger participants and those with previous computer experience were faster at completing the tasks, confirming previous findings in the literature. The overall results suggested that the interfaces and contents of the services assessed were usable by older adults with cognitive impairment. However, some usability problems were identified and should be addressed to better meet the needs and capacities of target end-users.
2011-11-01
based perception of each team member's behavior and physiology with the goal of predicting unobserved variables (e.g., cognitive state). Along with...sensing technologies are showing promise as enablers of computer-based perception of each team member's behavior and physiology with the goal...an essential element of team performance. The perception that other team members may be unable to perform their tasks is detrimental to trust and
NASA Astrophysics Data System (ADS)
Traverso, A.; Lopez Torres, E.; Fantacci, M. E.; Cerello, P.
2017-05-01
Lung cancer is one of the most lethal types of cancer, because its early diagnosis is not good enough. In fact, the detection of pulmonary nodules, potential lung cancers, in Computed Tomography scans is a very challenging and time-consuming task for radiologists. To support radiologists, researchers have developed Computer-Aided Diagnosis (CAD) systems for the automated detection of pulmonary nodules in chest Computed Tomography scans. Despite the high level of technological development and the proven benefits on the overall detection performance, the usage of Computer-Aided Diagnosis in clinical practice is far from being a common procedure. In this paper we investigate the causes underlying this discrepancy and present a solution to tackle it: the M5L WEB- and Cloud-based on-demand Computer-Aided Diagnosis. In addition, we show how the combination of traditional image processing techniques with state-of-the-art classification algorithms allows one to build a system whose performance can be much better than that of any Computer-Aided Diagnosis developed so far. This outcome opens the possibility to use the CAD as clinical decision support for radiologists.
Improved Safety Margin Characterization of Risk from Loss of Offsite Power
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Paul
Original intent: The original intent of this task was "support of the Risk-Informed Safety Margin Characteristic (RISMC) methodology in order" "to address … efficiency of computation so that more accurate and cost-effective techniques can be used to address safety margin characterizations" (S. M. Hess et al., "Risk-Informed Safety Margin Characterization," Procs. ICONE17, Brussels, July 2009, CD format). It was intended that "in Task 1 itself this improvement will be directed toward upon the very important issue of Loss of Offsite Power (LOOP) events," more specifically toward the challenge of efficient computation of the multidimensional nonrecovery integral that has been discussed by many previous contributors to the theory of nuclear safety. It was further envisioned that "three different computational approaches will be explored," corresponding to the three subtasks listed below; deliverables were tied to the individual subtasks.
Exploring Asynchronous Many-Task Runtime Systems toward Extreme Scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knight, Samuel; Baker, Gavin Matthew; Gamell, Marc
2015-10-01
Major exascale computing reports indicate a number of software challenges to meet the dramatic change of system architectures in the near future. While a several-orders-of-magnitude increase in parallelism is the most commonly cited of those, hurdles also include performance heterogeneity of compute nodes across the system, increased imbalance between computational capacity and I/O capabilities, frequent system interrupts, and complex hardware architectures. Asynchronous task-parallel programming models show great promise in addressing these issues, but are not yet fully understood nor developed sufficiently for computational science and engineering application codes. We address these knowledge gaps through quantitative and qualitative exploration of leading candidate solutions in the context of engineering applications at Sandia. In this poster, we evaluate the MiniAero code ported to three leading candidate programming models (Charm++, Legion and UINTAH) to examine the feasibility of these models that permit insertion of new programming model elements into an existing code base.
Performing a global barrier operation in a parallel computer
Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E
2014-12-09
Executing computing tasks on a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier.
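A thread-based sketch of the described two-level scheme: every task on a "node" joins its local barrier, and only the designated master task joins the global barrier once the local barrier completes. A real parallel computer would use the machine's collective network rather than Python threads; the node and task counts are arbitrary.

```python
import threading

NODES, TASKS_PER_NODE = 3, 4
global_barrier = threading.Barrier(NODES)                       # one master per node
local_barriers = [threading.Barrier(TASKS_PER_NODE) for _ in range(NODES)]

def task(node, rank):
    is_master = (rank == 0)                 # task 0 plays the role of the master task
    # ... task-local computation would happen here ...
    if is_master:
        local_barriers[node].wait()         # returns once every local task has arrived
        global_barrier.wait()               # the master then joins the global barrier
        print(f"node {node}: global barrier complete")
    else:
        local_barriers[node].wait()         # non-masters only join the local barrier

threads = [threading.Thread(target=task, args=(n, r))
           for n in range(NODES) for r in range(TASKS_PER_NODE)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```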
Silberstein, M.; Tzemach, A.; Dovgolevsky, N.; Fishelson, M.; Schuster, A.; Geiger, D.
2006-01-01
Computation of LOD scores is a valuable tool for mapping disease-susceptibility genes in the study of Mendelian and complex diseases. However, computation of exact multipoint likelihoods of large inbred pedigrees with extensive missing data is often beyond the capabilities of a single computer. We present a distributed system called “SUPERLINK-ONLINE,” for the computation of multipoint LOD scores of large inbred pedigrees. It achieves high performance via the efficient parallelization of the algorithms in SUPERLINK, a state-of-the-art serial program for these tasks, and through the use of the idle cycles of thousands of personal computers. The main algorithmic challenge has been to efficiently split a large task for distributed execution in a highly dynamic, nondedicated running environment. Notably, the system is available online, which allows computationally intensive analyses to be performed with no need for either the installation of software or the maintenance of a complicated distributed environment. As the system was being developed, it was extensively tested by collaborating medical centers worldwide on a variety of real data sets, some of which are presented in this article. PMID:16685644
Exploiting Locality in Quantum Computation for Quantum Chemistry.
McClean, Jarrod R; Babbush, Ryan; Love, Peter J; Aspuru-Guzik, Alán
2014-12-18
Accurate prediction of chemical and material properties from first-principles quantum chemistry is a challenging task on traditional computers. Recent developments in quantum computation offer a route toward highly accurate solutions with polynomial cost; however, this solution still carries a large overhead. In this Perspective, we aim to bring together known results about the locality of physical interactions from quantum chemistry with ideas from quantum computation. We show that the utilization of spatial locality combined with the Bravyi-Kitaev transformation offers an improvement in the scaling of known quantum algorithms for quantum chemistry and provides numerical examples to help illustrate this point. We combine these developments to improve the outlook for the future of quantum chemistry on quantum computers.
Overview of the INEX 2008 Book Track
NASA Astrophysics Data System (ADS)
Kazai, Gabriella; Doucet, Antoine; Landoni, Monica
This paper provides an overview of the INEX 2008 Book Track. Now in its second year, the track aimed at broadening its scope by investigating topics of interest in the fields of information retrieval, human computer interaction, digital libraries, and eBooks. The main topics of investigation were defined around challenges for supporting users in reading, searching, and navigating the full texts of digitized books. Based on these themes, four tasks were defined: 1) The Book Retrieval task aimed at comparing traditional and book-specific retrieval approaches, 2) the Page in Context task aimed at evaluating the value of focused retrieval approaches for searching books, 3) the Structure Extraction task aimed to test automatic techniques for deriving structure from OCR and layout information, and 4) the Active Reading task aimed to explore suitable user interfaces for eBooks enabling reading, annotation, review, and summary across multiple books. We report on the setup and results of each of these tasks.
Task conflict and proactive control: A computational theory of the Stroop task.
Kalanthroff, Eyal; Davelaar, Eddy J; Henik, Avishai; Goldfarb, Liat; Usher, Marius
2018-01-01
The Stroop task is a central experimental paradigm used to probe cognitive control by measuring the ability of participants to selectively attend to task-relevant information and inhibit automatic task-irrelevant responses. Research has revealed variability in both experimental manipulations and individual differences. Here, we focus on a particular source of Stroop variability, the reverse-facilitation (RF; faster responses to nonword neutral stimuli than to congruent stimuli), which has recently been suggested as a signature of task conflict. We first review the literature that shows RF variability in the Stroop task, both with regard to experimental manipulations and to individual differences. We suggest that task conflict variability can be understood as resulting from the degree of proactive control that subjects recruit in advance of the Stroop stimulus. When proactive control is high, task conflict does not arise (or is resolved very quickly), resulting in regular Stroop facilitation. When proactive control is low, task conflict emerges, leading to a slow-down in congruent and incongruent (but not in neutral) trials and thus to Stroop RF. To support this suggestion, we present a computational model of the Stroop task, which includes the resolution of task conflict and its modulation by proactive control. Results show that our model (a) accounts for the variability in Stroop RF reported in the experimental literature, and (b) solves a challenge faced by previous Stroop models: accounting for reaction time distributional properties. Finally, we discuss theoretical implications for Stroop measures and control deficits observed in some psychopathologies. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
High performance computing environment for multidimensional image analysis
Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo
2007-01-01
Background: The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. Results: We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478× speedup. Conclusion: Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets. PMID:17634099
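As a rough, single-machine illustration of the decomposition idea (not the authors' Blue Gene/L implementation), the sketch below splits a 3D volume into slabs, filters each slab together with a one-voxel halo borrowed from its nearest neighbours, and keeps only the slab interior; the final assertion confirms that the blocked result matches a global median filter.

```python
# Minimal single-machine sketch of halo-based block decomposition for a
# 3D median filter; illustrative only, not the Blue Gene/L implementation.
import numpy as np
from scipy.ndimage import median_filter

def blocked_median_filter(volume, n_blocks=4, size=3):
    halo = size // 2                       # overlap needed so blocks are independent
    z = volume.shape[0]
    edges = np.linspace(0, z, n_blocks + 1, dtype=int)
    out = np.empty_like(volume)
    for b in range(n_blocks):
        lo, hi = edges[b], edges[b + 1]
        lo_h, hi_h = max(0, lo - halo), min(z, hi + halo)   # slab plus halo from neighbours
        filtered = median_filter(volume[lo_h:hi_h], size=size)
        out[lo:hi] = filtered[lo - lo_h: (lo - lo_h) + (hi - lo)]
    return out

vol = np.random.rand(64, 64, 64).astype(np.float32)
assert np.allclose(blocked_median_filter(vol), median_filter(vol, size=3))
```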
Computerized Education in a Low Birth Rate Society.
ERIC Educational Resources Information Center
Asimov, Isaac
1979-01-01
In Asimov's scenario of the future, the percentage of older people will increase and computer technology will relieve humanity of many tasks. The concept of education as a lifelong, natural, and enjoyable process will thus become paramount, and university extension and continuing education institutions will be challenged to meet the needs of an…
The Magnitude Response Learning Tool for DSP Education: A Case Study
ERIC Educational Resources Information Center
Kulmer, Florian; Wurzer, Christian Gun; Geiger, Bernhard C.
2016-01-01
Many concepts in digital signal processing are intuitive, despite being mathematically challenging. The lecturer not only has to teach the complicated math but should also help students develop intuition about the concept. To aid the lecturer in this task, the Magnitude Response Learning Tool has been introduced, a computer-based learning game…
Secure distributed genome analysis for GWAS and sequence comparison computation.
Zhang, Yihua; Blanton, Marina; Almashaqbeh, Ghada
2015-01-01
The rapid increase in the availability and volume of genomic data makes significant advances in biomedical research possible, but sharing of genomic data poses challenges due to the highly sensitive nature of such data. To address the challenges, a competition for secure distributed processing of genomic data was organized by the iDASH research center. In this work we propose techniques for securing computation with real-life genomic data for minor allele frequency and chi-squared statistics computation, as well as distance computation between two genomic sequences, as specified by the iDASH competition tasks. We put forward novel optimizations, including a generalization of a version of mergesort, which might be of independent interest. We provide implementation results of our techniques based on secret sharing that demonstrate practicality of the suggested protocols and also report on performance improvements due to our optimization techniques. This work describes our techniques, findings, and experimental results developed and obtained as part of iDASH 2015 research competition to secure real-life genomic computations and shows feasibility of securely computing with genomic data in practice.
Continuing challenges for computer-based neuropsychological tests.
Letz, Richard
2003-08-01
A number of issues critical to the development of computer-based neuropsychological testing systems that remain continuing challenges to their widespread use in occupational and environmental health are reviewed. Several computer-based neuropsychological testing systems have been developed over the last 20 years, and they have contributed substantially to the study of neurologic effects of a number of environmental exposures. However, many are no longer supported and do not run on contemporary personal computer operating systems. Issues that are continuing challenges for development of computer-based neuropsychological tests in environmental and occupational health are discussed: (1) some current technological trends that generally make test development more difficult; (2) lack of availability of usable speech recognition of the type required for computer-based testing systems; (3) implementing computer-based procedures and tasks that are improvements over, not just adaptations of, their manually-administered predecessors; (4) implementing tests of a wider range of memory functions than the limited range now available; (5) paying more attention to motivational influences that affect the reliability and validity of computer-based measurements; and (6) increasing the usability of and audience for computer-based systems. Partial solutions to some of these challenges are offered. The challenges posed by current technological trends are substantial and generally beyond the control of testing system developers. Widespread acceptance of the "tablet PC" and implementation of accurate small vocabulary, discrete, speaker-independent speech recognition would enable revolutionary improvements to computer-based testing systems, particularly for testing memory functions not covered in existing systems. Dynamic, adaptive procedures, particularly ones based on item-response theory (IRT) and computerized-adaptive testing (CAT) methods, will be implemented in new tests that will be more efficient, reliable, and valid than existing test procedures. These additional developments, along with implementation of innovative reporting formats, are necessary for more widespread acceptance of the testing systems.
Mehmood, Irfan; Sajjad, Muhammad; Baik, Sung Wook
2014-01-01
Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially in remote monitoring health-services. However, during the WCE process, the large amount of captured video data demands a great deal of computation to analyze and retrieve informative video frames. In order to facilitate efficient WCE data collection and browsing tasks, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use Jeffrey-divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist the classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve computational challenges, a mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data. PMID:25225874
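For the redundancy-elimination step, a minimal sketch of the histogram comparison is given below, using one common definition of the Jeffrey divergence (the symmetrised, smoothed KL form used in image retrieval); the bin count and threshold value are illustrative placeholders, not values from the paper.

```python
# Minimal sketch of redundancy checking via Jeffrey divergence between
# colour histograms; the threshold and bin count are illustrative placeholders.
import numpy as np

def jeffrey_divergence(p, q, eps=1e-12):
    """One common form: sum_i p*log(p/m) + q*log(q/m), with m = (p+q)/2."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    m = 0.5 * (p + q) + eps
    return float(np.sum(p * np.log((p + eps) / m) + q * np.log((q + eps) / m)))

def is_redundant(frame_a, frame_b, bins=16, threshold=0.05):
    ha, _ = np.histogramdd(frame_a.reshape(-1, 3), bins=bins, range=[(0, 256)] * 3)
    hb, _ = np.histogramdd(frame_b.reshape(-1, 3), bins=bins, range=[(0, 256)] * 3)
    return jeffrey_divergence(ha.ravel(), hb.ravel()) < threshold

a = np.random.randint(0, 256, (120, 160, 3))
print(is_redundant(a, a))   # identical frames -> divergence 0 -> True
```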
Robust and efficient overset grid assembly for partitioned unstructured meshes
NASA Astrophysics Data System (ADS)
Roget, Beatrice; Sitaraman, Jayanarayanan
2014-03-01
This paper presents a method to perform efficient and automated Overset Grid Assembly (OGA) on a system of overlapping unstructured meshes in a parallel computing environment where all meshes are partitioned into multiple mesh-blocks and processed on multiple cores. The main task of the overset grid assembler is to identify, in parallel, among all points in the overlapping mesh system, at which points the flow solution should be computed (field points), interpolated (receptor points), or ignored (hole points). Point containment search or donor search, an algorithm to efficiently determine the cell that contains a given point, is the core procedure necessary for accomplishing this task. Donor search is particularly challenging for partitioned unstructured meshes because of the complex irregular boundaries that are often created during partitioning.
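The elementary test at the heart of donor search can be sketched as follows: a point lies inside a tetrahedral cell if its barycentric coordinates with respect to the cell's vertices are all non-negative. This is an illustrative building block only; the paper's contribution is the parallel search strategy for partitioned meshes built around such tests.

```python
# Illustrative core of a donor-search test: is point p inside tetrahedron (a, b, c, d)?
import numpy as np

def barycentric_coords(p, a, b, c, d):
    # Solve p = a + T @ [u, v, w] with columns (b - a, c - a, d - a).
    T = np.column_stack((b - a, c - a, d - a))
    u, v, w = np.linalg.solve(T, p - a)
    return np.array([1.0 - u - v - w, u, v, w])

def point_in_tet(p, a, b, c, d, tol=1e-12):
    return bool(np.all(barycentric_coords(p, a, b, c, d) >= -tol))

a, b, c, d = map(np.array, ([0., 0, 0], [1., 0, 0], [0., 1, 0], [0., 0, 1]))
print(point_in_tet(np.array([0.2, 0.2, 0.2]), a, b, c, d))  # True
print(point_in_tet(np.array([0.9, 0.9, 0.9]), a, b, c, d))  # False
```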
NASA Astrophysics Data System (ADS)
Li, Guoliang; Xing, Lining; Chen, Yingwu
2017-11-01
The autonomy of self-scheduling on Earth observation satellites and the increasing scale of satellite networks have attracted much attention from researchers in the last decades. In reality, the limited onboard computational resource presents a challenge for online scheduling algorithms. This study considered the online scheduling problem for a single autonomous Earth observation satellite within a satellite network environment. It especially addressed the case in which urgent tasks arrive stochastically during the scheduling horizon. We described the problem and proposed a hybrid online scheduling mechanism with revision and progressive techniques to solve it. The mechanism includes two decision policies: a when-to-schedule policy combining periodic scheduling with event-driven rescheduling triggered by a critical cumulative number of urgent tasks, and a how-to-schedule policy combining progressive and revision approaches to accommodate two categories of task, normal tasks and urgent tasks. On this basis, we developed two heuristic (re)scheduling algorithms and compared them with other generally used techniques. Computational experiments indicated that the percentage of urgent tasks scheduled under the proposed mechanism is much higher than under a purely periodic scheduling mechanism, and that the specific performance depends strongly on several mechanism-relevant and task-relevant factors. For the online scheduling itself, the modified weighted shortest imaging time first and dynamic profit system benefit heuristics outperformed the others on total profit and on the percentage of successfully scheduled urgent tasks.
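The when-to-schedule policy described above can be illustrated with a small sketch: rescheduling fires either when the periodic interval has elapsed or when the number of accumulated urgent tasks reaches a critical count. The class name, interval, and threshold below are ours, not the paper's.

```python
# Illustrative sketch of a hybrid when-to-schedule policy: reschedule either
# periodically or as soon as enough urgent tasks have accumulated.
# Names and numeric values are illustrative, not from the paper.
class WhenToSchedulePolicy:
    def __init__(self, period_s=600.0, critical_count=3):
        self.period_s = period_s              # periodic rescheduling interval
        self.critical_count = critical_count  # critical cumulative number of urgent tasks
        self.last_schedule_time = 0.0
        self.pending_urgent = 0

    def on_urgent_task_arrival(self):
        self.pending_urgent += 1

    def should_reschedule(self, now_s):
        periodic = (now_s - self.last_schedule_time) >= self.period_s
        event_driven = self.pending_urgent >= self.critical_count
        return periodic or event_driven

    def mark_rescheduled(self, now_s):
        self.last_schedule_time = now_s
        self.pending_urgent = 0

policy = WhenToSchedulePolicy()
for _ in range(3):
    policy.on_urgent_task_arrival()
print(policy.should_reschedule(now_s=120.0))   # True: urgent-task count hit the threshold
```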
Optimization of tomographic reconstruction workflows on geographically distributed resources
Bicer, Tekin; Gürsoy, Doǧa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T.
2016-01-01
New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, we focus on time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Moreover, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks. PMID:27359149
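The three-stage performance model can be sketched as simple additive bookkeeping: estimated workflow time is the sum of transfer, queue, and compute stages. The linear forms and all numbers below are illustrative assumptions, not the fitted models or measured values from the paper.

```python
# Minimal sketch of the three-stage execution-time model described above:
# total time = data transfer + queue wait + reconstruction compute.
# The linear forms and numbers are illustrative assumptions, not the paper's fits.
def estimate_workflow_time(dataset_gb, transfer_gbps, expected_queue_s,
                           n_tasks, task_compute_s, n_nodes):
    transfer_s = dataset_gb * 8.0 / transfer_gbps           # storage -> compute transfer
    queue_s = expected_queue_s                               # wait time at the cluster
    compute_s = n_tasks * task_compute_s / n_nodes           # ideally parallel reconstruction
    return transfer_s + queue_s + compute_s

# Compare two hypothetical geographically distributed resources.
for name, bw, queue, nodes in [("site_A", 10.0, 900.0, 64), ("site_B", 1.0, 60.0, 256)]:
    t = estimate_workflow_time(dataset_gb=200, transfer_gbps=bw, expected_queue_s=queue,
                               n_tasks=2048, task_compute_s=30.0, n_nodes=nodes)
    print(name, round(t, 1), "s")
```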
Parallel fuzzy connected image segmentation on GPU
Zhuge, Ying; Cao, Yong; Udupa, Jayaram K.; Miller, Robert W.
2011-01-01
Purpose: Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implementation on NVIDIA's Compute Unified Device Architecture (CUDA) platform for segmenting medical image data sets. Methods: In the FC algorithm, there are two major computational tasks: (i) computing the fuzzy affinity relations and (ii) computing the fuzzy connectedness relations. These two tasks are implemented as CUDA kernels and executed on the GPU. A dramatic improvement in speed for both tasks is achieved as a result. Results: Our experiments based on three data sets of small, medium, and large data size demonstrate the efficiency of the parallel algorithm, which achieves a speed-up factor of 24.4×, 18.1×, and 10.3×, respectively, for the three data sets on the NVIDIA Tesla C1060 over the implementation of the algorithm on CPU, and takes 0.25, 0.72, and 15.04 s, respectively, for the three data sets. Conclusions: The authors developed a parallel algorithm of the widely used fuzzy connected image segmentation method on NVIDIA GPUs, which are far more cost- and speed-effective than both clusters of workstations and multiprocessing systems. A near-interactive speed of segmentation has been achieved, even for the large data set. PMID:21859037
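To make the first of the two computational tasks concrete, the sketch below computes fuzzy affinities between 6-connected neighbouring voxels on the CPU, using a common Gaussian-of-intensity-difference form; it is an assumption-laden illustration, not the authors' affinity function or CUDA kernel.

```python
# Minimal CPU sketch of fuzzy affinity between 6-connected neighbours, using a
# common Gaussian-of-intensity-difference form. Illustrative only; this is not
# the paper's affinity function and not the CUDA kernel.
import numpy as np

def affinity_along_axis(volume, axis, sigma=10.0):
    a = np.take(volume, range(0, volume.shape[axis] - 1), axis=axis).astype(np.float64)
    b = np.take(volume, range(1, volume.shape[axis]), axis=axis).astype(np.float64)
    return np.exp(-((a - b) ** 2) / (2.0 * sigma ** 2))   # high affinity for similar intensities

vol = np.random.randint(0, 256, (32, 32, 32))
aff_z, aff_y, aff_x = (affinity_along_axis(vol, ax) for ax in (0, 1, 2))
print(aff_z.shape, aff_y.shape, aff_x.shape)   # (31, 32, 32) (32, 31, 32) (32, 32, 31)
```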
Multi-task feature selection in microarray data by binary integer programming.
Lan, Liang; Vucetic, Slobodan
2013-12-20
A major challenge in microarray classification is that the number of features is typically orders of magnitude larger than the number of examples. In this paper, we propose a novel feature filter algorithm to select the feature subset with maximal discriminative power and minimal redundancy by solving a quadratic objective function with binary integer constraints. To improve the computational efficiency, the binary integer constraints are relaxed and a low-rank approximation to the quadratic term is applied. The proposed feature selection algorithm was extended to solve multi-task microarray classification problems. We compared the single-task version of the proposed feature selection algorithm with 9 existing feature selection methods on 4 benchmark microarray data sets. The empirical results show that the proposed method achieved the most accurate predictions overall. We also evaluated the multi-task version of the proposed algorithm on 8 multi-task microarray datasets. The multi-task feature selection algorithm resulted in significantly higher accuracy than when using the single-task feature selection methods.
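The flavour of the objective (maximal relevance, minimal redundancy, a budget of k features) can be illustrated with the sketch below. It uses a greedy surrogate in place of the paper's relaxed binary integer program and low-rank approximation, and all names and values are ours.

```python
# Illustrative sketch of relevance-minus-redundancy feature scoring. A greedy
# surrogate stands in for the paper's relaxed quadratic integer program and
# low-rank approximation; parameters and data are made up.
import numpy as np

def select_features(X, y, k=10, alpha=0.5):
    Xc = (X - X.mean(0)) / (X.std(0) + 1e-12)
    yc = (y - y.mean()) / (y.std() + 1e-12)
    relevance = np.abs(Xc.T @ yc) / len(y)               # |correlation| with the class label
    redundancy = np.abs(np.corrcoef(Xc, rowvar=False))   # feature-feature correlation (the Q term)
    # Greedy surrogate for: max_x relevance.T x - alpha * x.T Q x, x binary, sum(x) = k
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        penalty = redundancy[:, selected].mean(axis=1)
        score = relevance - alpha * penalty
        score[selected] = -np.inf
        selected.append(int(np.argmax(score)))
    return selected

X = np.random.randn(100, 500)   # 100 samples, 500 features (microarray-like shape)
y = (X[:, 3] + 0.1 * np.random.randn(100) > 0).astype(float)
print(select_features(X, y, k=5))
```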
Nicholas, Marjorie; Sinotte, Michele P.; Helm-Estabrooks, Nancy
2011-01-01
Learning how to use a computer-based communication system can be challenging for people with severe aphasia even if the system is not word-based. This study explored cognitive and linguistic factors relative to how they affected individual patients’ ability to communicate expressively using C-Speak Aphasia (CSA), an alternative communication computer program that is primarily picture-based. Ten individuals with severe non-fluent aphasia received at least six months of training with CSA. To assess carryover of training, untrained functional communication tasks (i.e., answering autobiographical questions, describing pictures, making telephone calls, describing a short video, and two writing tasks) were repeatedly probed in two conditions: 1) using CSA in addition to natural forms of communication, and 2) using only natural forms of communication, e.g., speaking, writing, gesturing, drawing. Four of the ten participants communicated more information on selected probe tasks using CSA than they did without the computer. Response to treatment was also examined in relation to baseline measures of non-linguistic executive function skills, pictorial semantic abilities, and auditory comprehension. Only non-linguistic executive function skills were significantly correlated with treatment response. PMID:21506045
Bag-of-visual-ngrams for histopathology image classification
NASA Astrophysics Data System (ADS)
López-Monroy, A. Pastor; Montes-y-Gómez, Manuel; Escalante, Hugo Jair; Cruz-Roa, Angel; González, Fabio A.
2013-11-01
This paper describes an extension of the Bag-of-Visual-Words (BoVW) representation for image categorization (IC) of histopathology images. This representation is one of the most widely used approaches in several high-level computer vision tasks. However, the BoVW representation has an important limitation: it disregards spatial information among visual words. This information may be useful to capture discriminative visual patterns in specific computer vision tasks. In order to overcome this problem we propose the use of visual n-grams. N-gram-based representations are very popular in the field of natural language processing (NLP), in particular within text mining and information retrieval. We propose building a codebook of n-grams and then representing images by histograms of visual n-grams. We evaluate our proposal in the challenging task of classifying histopathology images. The novelty of our proposal lies in the fact that we use n-grams as attributes for a classification model (together with visual words, i.e., 1-grams). This is common practice within NLP, although, to the best of our knowledge, this idea has not been explored yet within computer vision. We report experimental results on a database of histopathology images where our proposed method outperforms the traditional BoVW formulation.
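The histogram-building step can be sketched as follows, assuming local descriptors have already been quantised into visual word ids and ordered by some spatial scan of the image (the ordering scheme is our assumption, not necessarily the paper's): unigrams and bigrams of word ids are counted against a fixed n-gram codebook.

```python
# Illustrative bag-of-visual-n-grams histogram: combine visual words (1-grams)
# with visual bigrams, assuming descriptors were already quantised into word
# ids and ordered by some spatial scan of the image (our assumption).
from collections import Counter

def visual_ngram_histogram(word_sequence, vocabulary, max_n=2):
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(word_sequence) - n + 1):
            counts[tuple(word_sequence[i:i + n])] += 1
    # Fixed-length feature vector in vocabulary order (unseen n-grams count 0).
    return [counts[gram] for gram in vocabulary]

words = [3, 7, 7, 1, 3, 7]                                   # quantised visual words for one image
vocab = [(3,), (7,), (1,), (3, 7), (7, 7), (7, 1), (1, 3)]   # codebook of 1- and 2-grams
print(visual_ngram_histogram(words, vocab))                  # [2, 3, 1, 2, 1, 1, 1]
```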
ERIC Educational Resources Information Center
Damsa, Crina I.; Nerland, Monika
2016-01-01
The two case studies reported in this article contribute to a better understanding of how inquiry tasks and activities are employed as resourceful means for learning in higher professional education. An observation-based approach was used to explore characteristics of and challenges in students' participation in collaborative inquiry activities in…
Extending Talk on a Prescribed Discussion Topic in a Learner-Native Speaker eTandem Learning Task
ERIC Educational Resources Information Center
Black, Emily
2017-01-01
Opportunities for language learners to access authentic input and engage in consequential interactions with native speakers of their target language abound in this era of computer mediated communication. Synchronous audio/video calling software represents one opportunity to access such input and address the challenges of developing pragmatic and…
Enhancing Mobile Working Memory Training by Using Affective Feedback
ERIC Educational Resources Information Center
Schaaff, Kristina
2013-01-01
The objective of this paper is to propose a novel approach to enhance working memory (WM) training for mobile devices by using information about the arousal level of a person. By the example of an adaptive n-back task, we combine methodologies from different disciplines to tackle this challenge: mobile learning, affective computing and cognitive…
Web Exclusive--Is the Sky the Limit to Educational Improvement?
ERIC Educational Resources Information Center
Schleicher, Andreas
2012-01-01
Today, education systems need to enable people to become lifelong learners, to manage complex ways of thinking and complex ways of working that computers can't take over easily. The task for educators and policy makers is to ensure that countries rise to this challenge. High performing education systems like Finland's and Singapore's tend to…
Computing Platforms for Big Biological Data Analytics: Perspectives and Challenges.
Yin, Zekun; Lan, Haidong; Tan, Guangming; Lu, Mian; Vasilakos, Athanasios V; Liu, Weiguo
2017-01-01
The last decade has witnessed an explosion in the amount of available biological sequence data, due to the rapid progress of high-throughput sequencing projects. However, the biological data amount is becoming so great that traditional data analysis platforms and methods can no longer meet the need to rapidly perform data analysis tasks in life sciences. As a result, both biologists and computer scientists are facing the challenge of gaining a profound insight into the deepest biological functions from big biological data. This in turn requires massive computational resources. Therefore, high performance computing (HPC) platforms are highly needed as well as efficient and scalable algorithms that can take advantage of these platforms. In this paper, we survey the state-of-the-art HPC platforms for big biological data analytics. We first list the characteristics of big biological data and popular computing platforms. Then we provide a taxonomy of different biological data analysis applications and a survey of the way they have been mapped onto various computing platforms. After that, we present a case study to compare the efficiency of different computing platforms for handling the classical biological sequence alignment problem. At last we discuss the open issues in big biological data analytics.
Dedovic, Katarina; Renwick, Robert; Mahani, Najmeh Khalili; Engert, Veronika; Lupien, Sonia J.; Pruessner, Jens C.
2005-01-01
Objective We developed a protocol for inducing moderate psychologic stress in a functional imaging setting and evaluated the effects of stress on physiology and brain activation. Methods The Montreal Imaging Stress Task (MIST), derived from the Trier Mental Challenge Test, consists of a series of computerized mental arithmetic challenges, along with social evaluative threat components that are built into the program or presented by the investigator. To allow the effects of stress and mental arithmetic to be investigated separately, the MIST has 3 test conditions (rest, control and experimental), which can be presented in either a block or an event-related design, for use with functional magnetic resonance imaging (fMRI) or positron emission tomography (PET). In the rest condition, subjects look at a static computer screen on which no tasks are shown. In the control condition, a series of mental arithmetic tasks are displayed on the computer screen, and subjects submit their answers by means of a response interface. In the experimental condition, the difficulty and time limit of the tasks are manipulated to be just beyond the individual's mental capacity. In addition, in this condition the presentation of the mental arithmetic tasks is supplemented by a display of information on individual and average performance, as well as expected performance. Upon completion of each task, the program presents a performance evaluation to further increase the social evaluative threat of the situation. Results In 2 independent studies using PET and a third independent study using fMRI, with a total of 42 subjects, levels of salivary free cortisol for the whole group were significantly increased under the experimental condition, relative to the control and rest conditions. Performing mental arithmetic was linked to activation of motor and visual association cortices, as well as brain structures involved in the performance of these tasks (e.g., the angular gyrus). Conclusions We propose the MIST as a tool for investigating the effects of perceiving and processing psychosocial stress in functional imaging studies. PMID:16151536
Challenges & Roadmap for Beyond CMOS Computing Simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodrigues, Arun F.; Frank, Michael P.
Simulating HPC systems is a difficult task and the emergence of “Beyond CMOS” architectures and execution models will increase that difficulty. This document presents a “tutorial” on some of the simulation challenges faced by conventional and non-conventional architectures (Section 1) and goals and requirements for simulating Beyond CMOS systems (Section 2). These provide background for proposed short- and long-term roadmaps for simulation efforts at Sandia (Sections 3 and 4). Additionally, a brief explanation of a proof-of-concept integration of a Beyond CMOS architectural simulator is presented (Section 2.3).
XPRESS: eXascale PRogramming Environment and System Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brightwell, Ron; Sterling, Thomas; Koniges, Alice
The XPRESS Project is one of four major projects of the DOE Office of Science Advanced Scientific Computing Research X-stack Program initiated in September 2012. The purpose of XPRESS is to devise an innovative system software stack to enable practical and useful exascale computing around the end of the decade, with near-term contributions to the efficient and scalable operation of trans-petaflops performance systems in the next two to three years, both for DOE mission-critical applications. To this end, XPRESS directly addresses the critical computing challenges of efficiency, scalability, and programmability through introspective methods of dynamic adaptive resource management and task scheduling.
Applications of intelligent computer-aided training
NASA Technical Reports Server (NTRS)
Loftin, R. B.; Savely, Robert T.
1991-01-01
Intelligent computer-aided training (ICAT) systems simulate the behavior of an experienced instructor observing a trainee, responding to help requests, diagnosing and remedying trainee errors, and proposing challenging new training scenarios. This paper presents a generic ICAT architecture that supports the efficient development of ICAT systems for varied tasks. In addition, details of ICAT projects, built with this architecture, that deliver specific training for Space Shuttle crew members, ground support personnel, and flight controllers are presented. Concurrently with the creation of specific ICAT applications, a general-purpose software development environment for ICAT systems is being built. The widespread use of such systems for both ground-based and on-orbit training will serve to preserve task and training expertise, support the training of large numbers of personnel in a distributed manner, and ensure the uniformity and verifiability of training experiences.
Gerjets, Peter; Walter, Carina; Rosenstiel, Wolfgang; Bogdan, Martin; Zander, Thorsten O.
2014-01-01
According to Cognitive Load Theory (CLT), one of the crucial factors for successful learning is the type and amount of working-memory load (WML) learners experience while studying instructional materials. Optimal learning conditions are characterized by providing challenges for learners without inducing cognitive over- or underload. Thus, presenting instruction in a way that WML is constantly held within an optimal range with regard to learners' working-memory capacity might be a good method to provide these optimal conditions. The current paper elaborates how digital learning environments, which achieve this goal can be developed by combining approaches from Cognitive Psychology, Neuroscience, and Computer Science. One of the biggest obstacles that needs to be overcome is the lack of an unobtrusive method of continuously assessing learners' WML in real-time. We propose to solve this problem by applying passive Brain-Computer Interface (BCI) approaches to realistic learning scenarios in digital environments. In this paper we discuss the methodological and theoretical prospects and pitfalls of this approach based on results from the literature and from our own research. We present a strategy on how several inherent challenges of applying BCIs to WML and learning can be met by refining the psychological constructs behind WML, by exploring their neural signatures, by using these insights for sophisticated task designs, and by optimizing algorithms for analyzing electroencephalography (EEG) data. Based on this strategy we applied machine-learning algorithms for cross-task classifications of different levels of WML to tasks that involve studying realistic instructional materials. We obtained very promising results that yield several recommendations for future work. PMID:25538544
Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades.
Orchard, Garrick; Jayawant, Ajinkya; Cohen, Gregory K; Thakor, Nitish
2015-01-01
Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collecting and labeling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches.
Toward integration of in vivo molecular computing devices: successes and challenges
Hayat, Sikander; Hinze, Thomas
2008-01-01
The computing power unleashed by biomolecule based massively parallel computational units has been the focus of many interdisciplinary studies that couple state of the art ideas from mathematical logic, theoretical computer science, bioengineering, and nanotechnology to fulfill some computational task. The output can influence, for instance, release of a drug at a specific target, gene expression, cell population, or be a purely mathematical entity. Analysis of the results of several studies has led to the emergence of a general set of rules concerning the implementation and optimization of in vivo computational units. Taking two recent studies on in vivo computing as examples, we discuss the impact of mathematical modeling and simulation in the field of synthetic biology and on in vivo computing. The impact of the emergence of gene regulatory networks and the potential of proteins acting as “circuit wires” on the problem of interconnecting molecular computing device subunits is also highlighted. PMID:19404433
Towards Modeling False Memory With Computational Knowledge Bases.
Li, Justin; Kohanyi, Emma
2017-01-01
One challenge to creating realistic cognitive models of memory is the inability to account for the vast common-sense knowledge of human participants. Large computational knowledge bases such as WordNet and DBpedia may offer a solution to this problem but may pose other challenges. This paper explores some of these difficulties through a semantic network spreading activation model of the Deese-Roediger-McDermott false memory task. In three experiments, we show that these knowledge bases only capture a subset of human associations, while irrelevant information introduces noise and makes efficient modeling difficult. We conclude that the contents of these knowledge bases must be augmented and, more important, that the algorithms must be refined and optimized, before large knowledge bases can be widely used for cognitive modeling. Copyright © 2016 Cognitive Science Society, Inc.
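The modeling approach can be illustrated with a toy spreading-activation sketch of the false-memory (DRM) effect: activation from studied list words spreads to associated neighbours in a semantic network, so an unstudied but strongly associated lure ends up highly active. The tiny hand-built network, decay value, and step count below are ours, not drawn from WordNet, DBpedia, or the paper.

```python
# Toy spreading-activation sketch of the DRM false-memory effect over a tiny
# hand-built semantic network; the network and parameters are illustrative.
associations = {                       # word -> associated neighbours
    "bed":    ["sleep", "rest", "pillow"],
    "dream":  ["sleep", "night"],
    "tired":  ["sleep", "rest"],
    "pillow": ["bed", "sleep"],
}

def spread_activation(studied, network, decay=0.5, steps=2):
    activation = {w: 1.0 for w in studied}          # studied items start fully active
    for _ in range(steps):
        new = dict(activation)
        for word, act in activation.items():
            for neighbour in network.get(word, []):
                new[neighbour] = new.get(neighbour, 0.0) + decay * act  # pass on decayed activation
        activation = new
    return activation

act = spread_activation(["bed", "dream", "tired", "pillow"], associations)
print(sorted(act.items(), key=lambda kv: -kv[1])[:3])  # the unstudied lure "sleep" ranks highly
```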
NASA Astrophysics Data System (ADS)
Zhou, Jianfeng; Xu, Benda; Peng, Chuan; Yang, Yang; Huo, Zhuoxi
2015-08-01
AIRE-Linux is a dedicated Linux system for astronomers. Modern astronomy faces two big challenges: massive raw observational data covering the whole electromagnetic spectrum, and data processing that demands professional skills beyond the abilities of an individual or even a small team. AIRE-Linux, a specially designed Linux distribution delivered to users as Virtual Machine (VM) images in Open Virtualization Format (OVF), is intended to help astronomers confront these challenges. Most astronomical software packages, such as IRAF, MIDAS, CASA, and Heasoft, will be integrated into AIRE-Linux. Astronomers can easily configure and customize the system and use just what they need. When incorporated into cloud computing platforms, AIRE-Linux will be able to handle data-intensive and compute-intensive tasks for astronomers. Currently, a beta version of AIRE-Linux is ready for download and testing.
ERIC Educational Resources Information Center
Olsson, Jan
2018-01-01
This study investigates how students' reasoning contributes to their utilization of computer-generated feedback. Sixteen 16-year-old students solved a linear function task designed to present a challenge to them using dynamic software, GeoGebra, for assistance. The data were analysed with respect both to character of reasoning and to the use of…
Multicore job scheduling in the Worldwide LHC Computing Grid
NASA Astrophysics Data System (ADS)
Forti, A.; Pérez-Calero Yzquierdo, A.; Hartmann, T.; Alef, M.; Lahiff, A.; Templon, J.; Dal Pra, S.; Gila, M.; Skipsey, S.; Acosta-Silva, C.; Filipcic, A.; Walker, R.; Walker, C. J.; Traynor, D.; Gadrat, S.
2015-12-01
After the successful first run of the LHC, data taking is scheduled to restart in Summer 2015 with experimental conditions leading to increased data volumes and event complexity. In order to process the data generated in such scenario and exploit the multicore architectures of current CPUs, the LHC experiments have developed parallelized software for data reconstruction and simulation. However, a good fraction of their computing effort is still expected to be executed as single-core tasks. Therefore, jobs with diverse resources requirements will be distributed across the Worldwide LHC Computing Grid (WLCG), making workload scheduling a complex problem in itself. In response to this challenge, the WLCG Multicore Deployment Task Force has been created in order to coordinate the joint effort from experiments and WLCG sites. The main objective is to ensure the convergence of approaches from the different LHC Virtual Organizations (VOs) to make the best use of the shared resources in order to satisfy their new computing needs, minimizing any inefficiency originated from the scheduling mechanisms, and without imposing unnecessary complexities in the way sites manage their resources. This paper describes the activities and progress of the Task Force related to the aforementioned topics, including experiences from key sites on how to best use different batch system technologies, the evolution of workload submission tools by the experiments and the knowledge gained from scale tests of the different proposed job submission strategies.
Deep Learning for Computer Vision: A Brief Review
Doulamis, Nikolaos; Doulamis, Anastasios; Protopapadakis, Eftychios
2018-01-01
Over the last years deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein. PMID:29487619
An element search ant colony technique for solving virtual machine placement problem
NASA Astrophysics Data System (ADS)
Srija, J.; Rani John, Rose; Kanaga, Grace Mary, Dr.
2017-09-01
Data centres in the cloud environment play a key role in providing infrastructure for ubiquitous computing, pervasive computing, mobile computing, etc. These computing paradigms try to utilize the available resources in order to provide services. Hence, maintaining high resource utilization without wasting power has become a challenging task for researchers. In this paper we propose a direct guidance ant colony system for effective mapping of virtual machines to physical machines with maximal resource utilization and minimal power consumption. The proposed algorithm is compared with an existing ant colony approach for solving the virtual machine placement problem and proves to provide better results than the existing technique.
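The underlying idea can be illustrated with a generic ant-colony placement sketch: each ant builds a placement by choosing hosts with probability weighted by pheromone and a consolidation heuristic, and the best placement reinforces the pheromone trail. This is not the paper's element-search or direct-guidance variant; the demands, capacities, and parameters are made up.

```python
# Generic ant-colony sketch for virtual machine placement (pheromone-weighted,
# capacity-aware host selection); not the paper's direct-guidance variant.
import random

VMS   = [2, 4, 1, 3, 2]          # CPU demand of each VM (illustrative)
HOSTS = [8, 8, 8]                # CPU capacity of each host (illustrative)

def build_placement(pheromone):
    free = HOSTS[:]
    placement = []
    for vm, demand in enumerate(VMS):
        candidates = [h for h in range(len(HOSTS)) if free[h] >= demand]
        # Prefer hosts with strong pheromone and little leftover capacity (consolidation).
        weights = [pheromone[vm][h] * (1.0 / (1.0 + free[h] - demand)) for h in candidates]
        host = random.choices(candidates, weights=weights)[0]
        free[host] -= demand
        placement.append(host)
    used_hosts = sum(1 for h in range(len(HOSTS)) if free[h] < HOSTS[h])  # hosts powered on
    return placement, used_hosts

def aco_place(n_ants=20, n_iter=30, rho=0.1):
    pher = [[1.0] * len(HOSTS) for _ in VMS]
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            placement, cost = build_placement(pher)
            if cost < best_cost:
                best, best_cost = placement, cost
        for vm, host in enumerate(best):                 # evaporate and reinforce best-so-far
            for h in range(len(HOSTS)):
                pher[vm][h] = (1 - rho) * pher[vm][h] + (rho if h == host else 0.0)
    return best, best_cost

print(aco_place())   # e.g. a placement using the fewest powered-on hosts
```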
NASA Astrophysics Data System (ADS)
Ando, K.; Fujita, S.; Ito, J.; Yuasa, S.; Suzuki, Y.; Nakatani, Y.; Miyazaki, T.; Yoda, H.
2014-05-01
Most parts of present computer systems are made of volatile devices, and the power supplied to them to avoid information loss causes huge energy waste. We can eliminate this meaningless energy loss by utilizing the non-volatile function of advanced spin-transfer torque magnetoresistive random-access memory (STT-MRAM) technology and create a new type of computer, i.e., normally off computers. The critical tasks for achieving normally off computers are implementations of STT-MRAM technologies in the main memory and low-level cache memories. STT-MRAM technology for main-memory applications has been successfully developed using perpendicular STT-MRAMs, and faster STT-MRAM technologies for cache-memory applications are now being developed. The present status of STT-MRAMs and the challenges that remain for normally off computers are discussed.
Scholl, Jacqueline; Klein-Flügge, Miriam
2017-09-28
Recent research in cognitive neuroscience has begun to uncover the processes underlying increasingly complex voluntary behaviours, including learning and decision-making. Partly this success has been possible by progressing from simple experimental tasks to paradigms that incorporate more ecological features. More specifically, the premise is that to understand cognitions and brain functions relevant for real life, we need to introduce some of the ecological challenges that we have evolved to solve. This often entails an increase in task complexity, which can be managed by using computational models to help parse complex behaviours into specific component mechanisms. Here we propose that using computational models with tasks that capture ecologically relevant learning and decision-making processes may provide a critical advantage for capturing the mechanisms underlying symptoms of disorders in psychiatry. As a result, it may help develop mechanistic approaches towards diagnosis and treatment. We begin this review by mapping out the basic concepts and models of learning and decision-making. We then move on to consider specific challenges that emerge in realistic environments and describe how they can be captured by tasks. These include changes of context, uncertainty, reflexive/emotional biases, cost-benefit decision-making, and balancing exploration and exploitation. Where appropriate we highlight future or current links to psychiatry. We particularly draw examples from research on clinical depression, a disorder that greatly compromises motivated behaviours in real-life, but where simpler paradigms have yielded mixed results. Finally, we highlight several paradigms that could be used to help provide new insights into the mechanisms of psychiatric disorders. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
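As a minimal example of the kind of computational model the review refers to, the sketch below implements a delta-rule (Rescorla-Wagner-style) learner with a softmax choice rule, whose inverse-temperature parameter governs the exploration-exploitation balance discussed above. The two-armed environment and all parameter values are illustrative.

```python
# Minimal example of a learning/decision model of the kind discussed above:
# a delta-rule learner with softmax action selection. Illustrative parameters.
import math, random

def softmax_choice(values, beta):
    weights = [math.exp(beta * v) for v in values]
    total = sum(weights)
    r, cum = random.random() * total, 0.0
    for action, w in enumerate(weights):
        cum += w
        if r <= cum:
            return action
    return len(values) - 1

def simulate(n_trials=200, alpha=0.2, beta=3.0, reward_probs=(0.8, 0.2)):
    q = [0.0, 0.0]                                   # learned values of the two options
    choices = []
    for _ in range(n_trials):
        a = softmax_choice(q, beta)                  # beta trades off exploration vs exploitation
        reward = 1.0 if random.random() < reward_probs[a] else 0.0
        q[a] += alpha * (reward - q[a])              # prediction-error (delta-rule) update
        choices.append(a)
    return q, sum(choices[-50:]) / 50.0              # final values, late-trial rate of choosing option 1

print(simulate())   # option 0 (richer) should dominate late choices, so the rate is low
```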
A Blueprint for Demonstrating Quantum Supremacy with Superconducting Qubits
NASA Technical Reports Server (NTRS)
Kechedzhi, Kostyantyn
2018-01-01
Long coherence times and high fidelity control recently achieved in scalable superconducting circuits have paved the way for a growing number of experimental studies of many-qubit quantum coherent phenomena in these devices. Although full implementation of quantum error correction and fault-tolerant quantum computation remains a challenge, near-term pre-error-correction devices could allow new fundamental experiments despite the inevitable accumulation of errors. One such open question, foundational for quantum computing, is achieving so-called quantum supremacy: an experimental demonstration of a computational task that takes polynomial time on a quantum computer whereas the best classical algorithm would require exponential time and/or resources. It is possible to formulate such a task for a quantum computer consisting of fewer than 100 qubits. The computational task we consider is to provide approximate samples from a non-trivial quantum distribution. This is a generalization, to the case of superconducting circuits, of the ideas behind the boson sampling protocol for quantum optics introduced by Arkhipov and Aaronson. In this presentation we discuss a proof-of-principle demonstration of such a sampling task on a 9-qubit chain of superconducting gmon qubits developed by Google. We discuss a theoretical analysis of the driven evolution of the device, resulting in output approximating samples from a uniform distribution in the Hilbert space, a quantum chaotic state. We analyze quantum chaotic characteristics of the output of the circuit and the time required to generate a sufficiently complex quantum distribution. We demonstrate that the classical simulation of the sampling output requires exponential resources by connecting the task of calculating the output amplitudes to the sign problem of the Quantum Monte Carlo method. We also discuss the detailed theoretical modeling required to achieve high fidelity control and calibration of the multi-qubit unitary evolution in the device. We use a novel cross-entropy statistical metric as a figure of merit to verify the output and calibrate the device controls. Finally, we demonstrate the statistics of the wave function amplitudes generated on the 9-gmon chain and verify the quantum chaotic nature of the generated quantum distribution. This verifies the implementation of the quantum supremacy protocol.
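The verification idea can be illustrated with the linear cross-entropy benchmarking (XEB) variant of such a metric, which is a simplified stand-in and not necessarily the exact cross-entropy figure of merit used in the work above: the ideal output probabilities of the observed bitstrings are averaged and compared against the value expected for a uniform sampler.

```python
# Illustration of cross-entropy-style verification using the linear XEB variant:
# F = 2^n * <p_ideal(x_i)> - 1. A simplified stand-in; the exact metric in the
# work above may differ.
import numpy as np

def linear_xeb_fidelity(sampled_bitstrings, ideal_probs, n_qubits):
    p = np.array([ideal_probs[b] for b in sampled_bitstrings])
    return (2 ** n_qubits) * p.mean() - 1.0   # ~1 for ideal sampling, ~0 for uniform noise

n = 9
rng = np.random.default_rng(0)
amps = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)   # random (chaotic-like) state
ideal = np.abs(amps) ** 2
ideal /= ideal.sum()

ideal_samples = rng.choice(2**n, size=5000, p=ideal)        # sampler following the ideal output
noise_samples = rng.integers(0, 2**n, size=5000)            # completely depolarised sampler
print(round(linear_xeb_fidelity(ideal_samples, ideal, n), 2))   # close to 1
print(round(linear_xeb_fidelity(noise_samples, ideal, n), 2))   # close to 0
```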
Model-Based Systems Engineering Approach to Managing Mass Margin
NASA Technical Reports Server (NTRS)
Chung, Seung H.; Bayer, Todd J.; Cole, Bjorn; Cooke, Brian; Dekens, Frank; Delp, Christopher; Lam, Doris
2012-01-01
When designing a flight system from concept through implementation, one of the fundamental systems engineering tasks is managing the mass margin and a mass equipment list (MEL) of the flight system. While generating a MEL and computing a mass margin is conceptually a trivial task, maintaining consistent and correct MELs and mass margins can be challenging due to the current practices of maintaining duplicate information in various forms, such as diagrams and tables, and in various media, such as files and emails. We have overcome this challenge through a model-based systems engineering (MBSE) approach within which we allow only a single-source-of-truth. In this paper we describe the modeling patterns used to capture the single-source-of-truth and the views that have been developed for the Europa Habitability Mission (EHM) project, a mission concept study, at the Jet Propulsion Laboratory (JPL).
Integration of prior knowledge into dense image matching for video surveillance
NASA Astrophysics Data System (ADS)
Menze, M.; Heipke, C.
2014-08-01
Three-dimensional information from dense image matching is a valuable input for a broad range of vision applications. While reliable approaches exist for dedicated stereo setups they do not easily generalize to more challenging camera configurations. In the context of video surveillance the typically large spatial extent of the region of interest and repetitive structures in the scene render the application of dense image matching a challenging task. In this paper we present an approach that derives strong prior knowledge from a planar approximation of the scene. This information is integrated into a graph-cut based image matching framework that treats the assignment of optimal disparity values as a labelling task. Introducing the planar prior heavily reduces ambiguities together with the search space and increases computational efficiency. The results provide a proof of concept of the proposed approach. It allows the reconstruction of dense point clouds in more general surveillance camera setups with wider stereo baselines.
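The role of the planar prior can be sketched with a much-simplified disparity assignment. The code below adds a quadratic penalty for deviating from the plane-predicted disparity to a matching cost volume and then picks the best label per pixel with a winner-take-all rule; this replaces the paper's graph-cut optimization with the simplest possible labelling step, and the weight is an illustrative choice.

```python
import numpy as np

def wta_disparity_with_planar_prior(cost_volume, plane_disp, weight=0.1):
    """Pick per-pixel disparities from a matching cost volume, biased towards
    the disparity predicted by a planar approximation of the scene.

    cost_volume: (H, W, D) array of matching costs for D disparity hypotheses.
    plane_disp:  (H, W) array of disparities predicted by the scene plane.
    """
    H, W, D = cost_volume.shape
    d_range = np.arange(D)[None, None, :]                      # (1, 1, D) hypothesis grid
    prior = weight * (d_range - plane_disp[:, :, None]) ** 2   # quadratic planar-prior penalty
    return np.argmin(cost_volume + prior, axis=2)              # winner-take-all labelling
```

The prior term plays the same role as in the graph-cut formulation: it shrinks the effective search range around the plane and suppresses ambiguous matches on repetitive structures.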
Visual Activities for Assessing Non-science Majors’ Understanding in Introductory Astronomy
NASA Astrophysics Data System (ADS)
Loranz, Daniel; Prather, E. E.; Slater, T. F.
2006-12-01
One of the most ardent challenges for astronomy teachers is to deeply and meaningfully assess students’ conceptual and quantitative understanding of astronomy topics. In an effort to uncover students’ actual understanding, members and affiliates of the Conceptual Astronomy and Physics Education Research (CAPER) Team at the University of Arizona and Truckee Meadows Community College are creating and field-testing innovative approaches to assessment. Leveraging from the highly successful work on interactive lecture demonstrations from astronomy and physics education research, we are creating a series of conceptually rich questions that are matched to visually captivating and purposefully interactive astronomical animations. These conceptually challenging tasks are being created to span the entire domain of topics in introductory astronomy for non-science majoring undergraduates. When completed, these sorting tasks and vocabulary-in-context activities will be able to be delivered via a drag-and-drop computer interface.
NASA Astrophysics Data System (ADS)
Swetnam, T. L.; Pelletier, J. D.; Merchant, N.; Callahan, N.; Lyons, E.
2015-12-01
Earth science is making rapid advances through effective utilization of large-scale data repositories such as aerial LiDAR and access to NSF-funded cyberinfrastructures (e.g. the OpenTopography.org data portal, iPlant Collaborative, and XSEDE). Scaling analysis tasks that are traditionally developed using desktops, laptops or computing clusters to effectively leverage national and regional scale cyberinfrastructure poses unique challenges and barriers to adoption. To address some of these challenges, in Fall 2014 'Applied Cyberinfrastructure Concepts', a project-based learning course (ISTA 420/520) at the University of Arizona, focused on developing scalable models of 'Effective Energy and Mass Transfer' (EEMT, MJ m-2 yr-1) for use by the NSF Critical Zone Observatories (CZO) project. EEMT is a quantitative measure of the flux of available energy to the critical zone, and its computation involves inputs that have broad applicability (e.g. solar insolation). The course comprised 25 students with varying levels of computational skills and no prior domain background in the geosciences, who collaborated with domain experts to develop the scalable workflow. The original workflow, relying on the open-source QGIS platform on a laptop, was scaled to effectively utilize cloud environments (OpenStack), UA Campus HPC systems, iRODS, and other XSEDE and OSG resources. The project utilizes public data, e.g. DEMs produced by OpenTopography.org and climate data from Daymet, which are processed using GDAL, GRASS and SAGA and the Makeflow and Work Queue task management software packages. Students were placed into collaborative groups to develop the separate aspects of the project. They were allowed to change teams, alter workflows, and design and develop novel code. The students were able to identify all necessary dependencies, recompile source onto the target execution platforms, and demonstrate a functional workflow, which was further improved upon by one of the group leaders over Spring 2015. All of the code, documentation and workflow description are currently available on GitHub, and a public data portal is in development. We present a case study of how students reacted to the challenge of a real science problem, their interactions with end-users, what went right, and what could be done better in the future.
Cognitive Modeling of Individual Variation in Reference Production and Comprehension
Hendriks, Petra
2016-01-01
A challenge for most theoretical and computational accounts of linguistic reference is the observation that language users vary considerably in their referential choices. Part of the variation observed among and within language users and across tasks may be explained from variation in the cognitive resources available to speakers and listeners. This paper presents a computational model of reference production and comprehension developed within the cognitive architecture ACT-R. Through simulations with this ACT-R model, it is investigated how cognitive constraints interact with linguistic constraints and features of the linguistic discourse in speakers’ production and listeners’ comprehension of referring expressions in specific tasks, and how this interaction may give rise to variation in referential choice. The ACT-R model of reference explains and predicts variation among language users in their referential choices as a result of individual and task-related differences in processing speed and working memory capacity. Because of limitations in their cognitive capacities, speakers sometimes underspecify or overspecify their referring expressions, and listeners sometimes choose incorrect referents or are overly liberal in their interpretation of referring expressions. PMID:27092101
Predicting protein structures with a multiplayer online game.
Cooper, Seth; Khatib, Firas; Treuille, Adrien; Barbero, Janos; Lee, Jeehyung; Beenen, Michael; Leaver-Fay, Andrew; Baker, David; Popović, Zoran; Players, Foldit
2010-08-05
People exert large amounts of problem-solving effort playing computer games. Simple image- and text-recognition tasks have been successfully 'crowd-sourced' through games, but it is not clear if more complex scientific problems can be solved with human-directed computing. Protein structure prediction is one such problem: locating the biologically relevant native conformation of a protein is a formidable computational challenge given the very large size of the search space. Here we describe Foldit, a multiplayer online game that engages non-scientists in solving hard prediction problems. Foldit players interact with protein structures using direct manipulation tools and user-friendly versions of algorithms from the Rosetta structure prediction methodology, while they compete and collaborate to optimize the computed energy. We show that top-ranked Foldit players excel at solving challenging structure refinement problems in which substantial backbone rearrangements are necessary to achieve the burial of hydrophobic residues. Players working collaboratively develop a rich assortment of new strategies and algorithms; unlike computational approaches, they explore not only the conformational space but also the space of possible search strategies. The integration of human visual problem-solving and strategy development capabilities with traditional computational algorithms through interactive multiplayer games is a powerful new approach to solving computationally-limited scientific problems.
Mesoscale Models of Fluid Dynamics
NASA Astrophysics Data System (ADS)
Boghosian, Bruce M.; Hadjiconstantinou, Nicolas G.
During the last half century, enormous progress has been made in the field of computational materials modeling, to the extent that in many cases computational approaches are used in a predictive fashion. Despite this progress, modeling of general hydrodynamic behavior remains a challenging task. One of the main challenges stems from the fact that hydrodynamics manifests itself over a very wide range of length and time scales. On one end of the spectrum, one finds the fluid's "internal" scale characteristic of its molecular structure (in the absence of quantum effects, which we omit in this chapter). On the other end, the "outer" scale is set by the characteristic sizes of the problem's domain. The resulting scale separation or lack thereof as well as the existence of intermediate scales are key to determining the optimal approach. Successful treatments require a judicious choice of the level of description which is a delicate balancing act between the conflicting requirements of fidelity and manageable computational cost: a coarse description typically requires models for underlying processes occurring at smaller length and time scales; on the other hand, a fine-scale model will incur a significantly larger computational cost.
OpenWebGlobe 2: Visualization of Complex 3D Geodata in the (Mobile) Web Browser
NASA Astrophysics Data System (ADS)
Christen, M.
2016-06-01
Providing worldwide high-resolution data for virtual globes involves compute- and storage-intensive data processing tasks. Furthermore, rendering complex 3D geodata, such as 3D city models with an extremely high polygon count and a vast amount of textures, at interactive framerates is still a very challenging task, especially on mobile devices. This paper presents an approach for processing, caching and serving massive geospatial data in a cloud-based environment for large scale, out-of-core, highly scalable 3D scene rendering on a web based virtual globe. Cloud computing is used for processing large amounts of geospatial data and also for providing 2D and 3D map data to a large number of (mobile) web clients. In this paper the approach for processing, rendering and caching very large datasets in the currently developed virtual globe "OpenWebGlobe 2" is shown, which displays 3D geodata on nearly every device.
ERIC Educational Resources Information Center
Long, Ju
2007-01-01
Improving learning effectiveness has always been a constant challenge in software education and training. One of the primary tasks educators face is to motivate learners to perform to their best abilities. Using computer games is one means to encourage learners to learn (Klawe, 1994). When games are used in general education, they could enhance…
Overview of the gene ontology task at BioCreative IV.
Mao, Yuqing; Van Auken, Kimberly; Li, Donghui; Arighi, Cecilia N; McQuilton, Peter; Hayman, G Thomas; Tweedie, Susan; Schaeffer, Mary L; Laulederkind, Stanley J F; Wang, Shur-Jen; Gobeill, Julien; Ruch, Patrick; Luu, Anh Tuan; Kim, Jung-Jae; Chiang, Jung-Hsien; Chen, Yu-De; Yang, Chia-Jung; Liu, Hongfang; Zhu, Dongqing; Li, Yanpeng; Yu, Hong; Emadzadeh, Ehsan; Gonzalez, Graciela; Chen, Jian-Ming; Dai, Hong-Jie; Lu, Zhiyong
2014-01-01
Gene ontology (GO) annotation is a common task among model organism databases (MODs) for capturing gene function data from journal articles. It is a time-consuming and labor-intensive task, and is thus often considered as one of the bottlenecks in literature curation. There is a growing need for semiautomated or fully automated GO curation techniques that will help database curators to rapidly and accurately identify gene function information in full-length articles. Despite multiple attempts in the past, few studies have proven to be useful with regard to assisting real-world GO curation. The shortage of sentence-level training data and opportunities for interaction between text-mining developers and GO curators has limited the advances in algorithm development and corresponding use in practical circumstances. To this end, we organized a text-mining challenge task for literature-based GO annotation in BioCreative IV. More specifically, we developed two subtasks: (i) to automatically locate text passages that contain GO-relevant information (a text retrieval task) and (ii) to automatically identify relevant GO terms for the genes in a given article (a concept-recognition task). With the support from five MODs, we provided teams with >4000 unique text passages that served as the basis for each GO annotation in our task data. Such evidence text information has long been recognized as critical for text-mining algorithm development but was never made available because of the high cost of curation. In total, seven teams participated in the challenge task. From the team results, we conclude that the state of the art in automatically mining GO terms from literature has improved over the past decade while much progress is still needed for computer-assisted GO curation. Future work should focus on addressing remaining technical challenges for improved performance of automatic GO concept recognition and incorporating practical benefits of text-mining tools into real-world GO annotation. http://www.biocreative.org/tasks/biocreative-iv/track-4-GO/. Published by Oxford University Press 2014. This work is written by US Government employees and is in the public domain in the US.
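As a rough illustration of the concept-recognition subtask (identifying relevant GO terms in an article), the sketch below performs naive dictionary matching of GO term names and synonyms against a text passage. It is far simpler than the systems fielded by the participating teams, and the miniature lexicon is invented for the example.

```python
import re

# Hypothetical miniature GO lexicon: identifier -> names/synonyms.
GO_LEXICON = {
    "GO:0006915": ["apoptotic process", "apoptosis"],
    "GO:0006412": ["translation", "protein biosynthesis"],
}

def recognize_go_terms(passage):
    """Return GO identifiers whose names or synonyms occur in the passage."""
    hits = set()
    text = passage.lower()
    for go_id, names in GO_LEXICON.items():
        for name in names:
            if re.search(r"\b" + re.escape(name) + r"\b", text):
                hits.add(go_id)
    return sorted(hits)

print(recognize_go_terms("Bcl-2 family proteins regulate apoptosis in neurons."))
# ['GO:0006915']
```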
Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus
2016-05-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.
Iterative Importance Sampling Algorithms for Parameter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grout, Ray W; Morzfeld, Matthias; Day, Marcus S.
In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is a challenging task. Several sampling algorithms have been proposed over the past years that take an iterative approach to constructing a proposal distribution. We investigate the applicability of such algorithms by applying them to two realistic and challenging test problems, one in subsurface flow, and one in combustion modeling. More specifically, we implement importance sampling algorithms that iterate over the mean and covariance matrix of Gaussian or multivariate t-proposal distributions. Our implementation leverages massively parallel computers, and we present strategies to initialize the iterations using 'coarse' MCMC runs or Gaussian mixture models.
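The iteration over the mean and covariance of a Gaussian proposal can be sketched as follows. This is a generic adaptive importance sampler applied to a toy unnormalized posterior, not the implementation used for the subsurface-flow or combustion problems; the target, dimensions and iteration counts are placeholders.

```python
import numpy as np

def log_post(theta):
    """Toy unnormalized log-posterior (stand-in for model plus data misfit)."""
    return -0.5 * np.sum((theta - 2.0) ** 2 / 0.25, axis=-1)

def adaptive_is(dim=2, n=5000, iters=5, seed=0):
    rng = np.random.default_rng(seed)
    mean, cov = np.zeros(dim), np.eye(dim)        # initial proposal (e.g. from a coarse MCMC run)
    for _ in range(iters):
        samples = rng.multivariate_normal(mean, cov, size=n)
        diff = samples - mean
        log_q = -0.5 * np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
        log_w = log_post(samples) - log_q         # importance weights (up to a constant)
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        mean = w @ samples                        # weighted moment updates for the next proposal
        cov = (samples - mean).T @ ((samples - mean) * w[:, None]) + 1e-6 * np.eye(dim)
    return mean, cov

print(adaptive_is())   # mean approaches [2, 2]; covariance approaches 0.25 * I
```

Because each batch of samples is drawn independently, the expensive posterior evaluations inside one iteration can be farmed out across cores, which is the scaling property the abstract highlights.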
Efficient Mining of Interesting Patterns in Large Biological Sequences
Rashid, Md. Mamunur; Karim, Md. Rezaul; Jeong, Byeong-Soo
2012-01-01
Pattern discovery in biological sequences (e.g., DNA sequences) is one of the most challenging tasks in computational biology and bioinformatics. So far, in most approaches, the number of occurrences is a major measure of determining whether a pattern is interesting or not. In computational biology, however, a pattern that is not frequent may still be considered very informative if its actual support frequency exceeds the prior expectation by a large margin. In this paper, we propose a new interesting measure that can provide meaningful biological information. We also propose an efficient index-based method for mining such interesting patterns. Experimental results show that our approach can find interesting patterns within an acceptable computation time. PMID:23105928
Efficient mining of interesting patterns in large biological sequences.
Rashid, Md Mamunur; Karim, Md Rezaul; Jeong, Byeong-Soo; Choi, Ho-Jin
2012-03-01
Pattern discovery in biological sequences (e.g., DNA sequences) is one of the most challenging tasks in computational biology and bioinformatics. So far, in most approaches, the number of occurrences is a major measure of determining whether a pattern is interesting or not. In computational biology, however, a pattern that is not frequent may still be considered very informative if its actual support frequency exceeds the prior expectation by a large margin. In this paper, we propose a new interesting measure that can provide meaningful biological information. We also propose an efficient index-based method for mining such interesting patterns. Experimental results show that our approach can find interesting patterns within an acceptable computation time.
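The idea of rating a pattern by how far its observed support exceeds its prior expectation can be illustrated with a simple observed-to-expected ratio under an i.i.d. background model. This is a generic sketch of that style of measure, not the authors' exact definition or their index-based mining algorithm.

```python
from math import prod

def expected_occurrences(pattern, sequence, base_freq):
    """Expected count of `pattern` in `sequence` under an i.i.d. base model.

    base_freq: single-symbol probabilities, e.g. {'A': 0.25, 'C': 0.25, ...}.
    """
    positions = len(sequence) - len(pattern) + 1
    p_pattern = prod(base_freq[ch] for ch in pattern)
    return positions * p_pattern

def surprise(pattern, sequence, base_freq):
    """Observed / expected support: values well above 1 flag 'interesting' patterns."""
    observed = sum(sequence[i:i + len(pattern)] == pattern
                   for i in range(len(sequence) - len(pattern) + 1))
    return observed / expected_occurrences(pattern, sequence, base_freq)

dna = "ACGTACGTACGTTTTT"
print(surprise("ACGT", dna, {c: 0.25 for c in "ACGT"}))   # >> 1: over-represented motif
```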
Using a MaxEnt Classifier for the Automatic Content Scoring of Free-Text Responses
NASA Astrophysics Data System (ADS)
Sukkarieh, Jana Z.
2011-03-01
Criticisms against multiple-choice item assessments in the USA have prompted researchers and organizations to move towards constructed-response (free-text) items. Constructed-response (CR) items pose many challenges to the education community, one of which is that they are expensive to score by humans. At the same time, there has been widespread movement towards computer-based assessment and hence, assessment organizations are competing to develop automatic content scoring engines for such item types, which we view as a textual entailment task. This paper describes how MaxEnt Modeling is used to help solve the task. MaxEnt has been used in many natural language tasks but this is the first application of the MaxEnt approach to textual entailment and automatic content scoring.
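MaxEnt modeling over a fixed feature set is equivalent to (multinomial) logistic regression, so a minimal stand-in for the kind of classifier described can be sketched with scikit-learn. The features, training pairs and labels below are invented for illustration and are far cruder than the engine's actual feature set.

```python
from sklearn.linear_model import LogisticRegression

def features(reference, response):
    """Tiny illustrative feature vector for a (reference answer, student response) pair."""
    ref, res = set(reference.lower().split()), set(response.lower().split())
    overlap = len(ref & res) / max(len(ref), 1)
    return [overlap, len(res) / max(len(ref), 1), float("not" in res)]

# Invented training pairs labelled 1 (response entails the reference concept) or 0.
pairs = [
    ("water boils at 100 degrees", "it boils at 100 degrees celsius", 1),
    ("water boils at 100 degrees", "water freezes at 0 degrees", 0),
    ("plants need sunlight to grow", "sunlight is needed for plants to grow", 1),
    ("plants need sunlight to grow", "plants do not need light", 0),
]
X = [features(r, s) for r, s, _ in pairs]
y = [label for _, _, label in pairs]

clf = LogisticRegression().fit(X, y)   # MaxEnt model over the feature set
print(clf.predict([features("water boils at 100 degrees", "boils around 100 degrees")]))
```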
Networked Virtual Organizations: A Chance for Small and Medium Sized Enterprises on Global Markets
NASA Astrophysics Data System (ADS)
Cellary, Wojciech
Networked Virtual Organizations (NVOs) are a right answer to challenges of globalized, diversified, and dynamic contemporary economy. NVOs need more than e-trade and outsourcing, namely, they need out-tasking and e-collaboration. To out-task, but retain control on the way a task is performed by an external partner, two integrations are required: (1) integration of computer management systems of enterprises cooperating within an NVO; and (2) integration of cooperating representatives of NVO member enterprises into a virtual team. NVOs provide a particular chance to Small and Medium size Enterprises (SMEs) to find their place on global markets and to play a significant role on them. Requirements for SMEs to be able to successfully join an NVO are analyzed in the paper.
DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.
Zhou, Xichuan; Li, Shengli; Tang, Fang; Hu, Shengdong; Lin, Zhi; Zhang, Lei
2017-07-18
Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task, e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to explore the sparsity of neural connections. By adaptively removing the majority of neural connections and robustly representing the reserved connections using binary integers, the proposed algorithm could save up to 99.9% memory utility and computational resources without undermining classification accuracy. An efficient sparse-mapping-memory-based hardware architecture is proposed to fully take advantage of the algorithmic optimization. Different from traditional Von Neumann architecture, the deep-adaptive network on chip (DANoC) brings communication and computation in close proximity to avoid power-hungry parameter transfers between on-board memory and on-chip computational units. Experiments over different image classification benchmarks show that the DANoC system achieves competitively high accuracy and efficiency comparing with the state-of-the-art approaches.
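The two central ideas, removing most connections and representing the survivors with binary integers, can be sketched in a few lines of NumPy. This is a conceptual illustration only, not the deep adaptive network training procedure or the DANoC hardware mapping; the keep ratio and layer sizes are arbitrary.

```python
import numpy as np

def sparsify_and_binarize(weights, keep_ratio=0.01):
    """Keep only the largest-magnitude weights and store their signs as int8.

    Returns (flat indices of kept weights, int8 sign values, shared scale factor).
    """
    flat = weights.ravel()
    k = max(1, int(keep_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the k strongest connections
    scale = np.abs(flat[idx]).mean()               # one scale shared by the whole layer
    signs = np.sign(flat[idx]).astype(np.int8)     # +1 / -1 stored as binary integers
    return idx, signs, scale

def sparse_binary_matvec(idx, signs, scale, shape, x):
    """Apply the compressed weight matrix to an input vector x."""
    rows, cols = np.unravel_index(idx, shape)
    y = np.zeros(shape[0])
    np.add.at(y, rows, scale * signs * x[cols])
    return y

W = np.random.randn(256, 784)
idx, signs, scale = sparsify_and_binarize(W, keep_ratio=0.01)
y = sparse_binary_matvec(idx, signs, scale, W.shape, np.random.randn(784))
```

Storing only indices, signs and one scale per layer is what makes the on-chip memory budget tractable, at the cost of an approximation whose accuracy the paper evaluates empirically.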
Sofer, Imri; Crouzet, Sébastien M.; Serre, Thomas
2015-01-01
Observers can rapidly perform a variety of visual tasks such as categorizing a scene as open, as outdoor, or as a beach. Although we know that different tasks are typically associated with systematic differences in behavioral responses, to date, little is known about the underlying mechanisms. Here, we implemented a single integrated paradigm that links perceptual processes with categorization processes. Using a large image database of natural scenes, we trained machine-learning classifiers to derive quantitative measures of task-specific perceptual discriminability based on the distance between individual images and different categorization boundaries. We showed that the resulting discriminability measure accurately predicts variations in behavioral responses across categorization tasks and stimulus sets. We further used the model to design an experiment, which challenged previous interpretations of the so-called “superordinate advantage.” Overall, our study suggests that observed differences in behavioral responses across rapid categorization tasks reflect natural variations in perceptual discriminability. PMID:26335683
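The distance-to-boundary notion of perceptual discriminability can be sketched with a linear classifier: the signed distance of an image's feature vector from the learned categorization boundary serves as its discriminability score. The synthetic features below are placeholders for the natural-scene descriptors used in the study.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Synthetic stand-in for image features from two scene categories.
X = np.vstack([rng.normal(-1, 1, (200, 10)), rng.normal(1, 1, (200, 10))])
y = np.array([0] * 200 + [1] * 200)

clf = LinearSVC().fit(X, y)

def discriminability(features):
    """Signed distance from the learned categorization boundary (in margin units)."""
    return clf.decision_function(features) / np.linalg.norm(clf.coef_)

print(discriminability(X[:5]))
# Images far from the boundary would be predicted to be categorized faster and more accurately.
```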
Ferreira Junior, José Raniery; Oliveira, Marcelo Costa; de Azevedo-Marques, Paulo Mazzoncini
2016-12-01
Lung cancer is the leading cause of cancer-related deaths in the world, and its main manifestation is pulmonary nodules. Detection and classification of pulmonary nodules are challenging tasks that must be done by qualified specialists, but image interpretation errors make those tasks difficult. In order to aid radiologists with those hard tasks, it is important to integrate computer-based tools with the lesion detection, pathology diagnosis, and image interpretation processes. However, computer-aided diagnosis research faces the problem of not having enough shared medical reference data for the development, testing, and evaluation of computational methods for diagnosis. In order to minimize this problem, this paper presents a public nonrelational document-oriented cloud-based database of pulmonary nodules characterized by 3D texture attributes, identified by experienced radiologists and classified in nine different subjective characteristics by the same specialists. Our goal with the development of this database is to improve computer-aided lung cancer diagnosis and pulmonary nodule detection and classification research through the deployment of this database in a cloud Database as a Service framework. Pulmonary nodule data were provided by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), image descriptors were acquired by a volumetric texture analysis, and the database schema was developed using a document-oriented Not only Structured Query Language (NoSQL) approach. The proposed database now holds 379 exams, 838 nodules, and 8237 images, of which 4029 are CT scans and 4208 are manually segmented nodules, and it is hosted in a MongoDB instance on a cloud infrastructure.
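A document-oriented schema of the kind described can be sketched with pymongo. The field names, attribute list and connection string below are illustrative guesses rather than the actual schema or endpoint of the published database.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # stand-in for the cloud DBaaS endpoint
nodules = client["lung_cad"]["nodules"]

# One nodule stored as a self-contained document: texture attributes and the
# radiologists' subjective ratings live together, so no joins are required.
doc = {
    "exam_id": "LIDC-IDRI-0001",
    "nodule_id": 1,
    "texture_attributes": {"energy": 0.12, "entropy": 4.7, "contrast": 310.5},
    "subjective_ratings": {"malignancy": 3, "spiculation": 2, "texture": 5},
    "segmented_image_ids": ["img_0421", "img_0422"],
}
nodules.insert_one(doc)

# Example query: candidate nodules rated likely malignant with high-entropy texture.
for n in nodules.find({"subjective_ratings.malignancy": {"$gte": 4},
                       "texture_attributes.entropy": {"$gt": 5.0}}):
    print(n["exam_id"], n["nodule_id"])
```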
Learning-based stochastic object models for use in optimizing imaging systems
NASA Astrophysics Data System (ADS)
Dolly, Steven R.; Anastasio, Mark A.; Yu, Lifeng; Li, Hua
2017-03-01
It is widely known that the optimization of imaging systems based on objective, or task-based, measures of image quality via computer simulation requires use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in anatomy within a specified ensemble of patients remains a challenging task. Because they are established by use of image data corresponding to a single patient, previously reported numerical anatomical models lack the ability to accurately model inter-patient variations in anatomy. In certain applications, however, databases of high-quality volumetric images are available that can facilitate this task. In this work, a novel and tractable methodology for learning a SOM from a set of volumetric training images is developed. The proposed method is based upon geometric attribute distribution (GAD) models, which characterize the inter-structural centroid variations and the intra-structural shape variations of each individual anatomical structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations learned from training data. By use of the GAD models, random organ shapes and positions can be generated and integrated to form an anatomical phantom. The randomness in organ shape and position will reflect the variability of anatomy present in the training data. To demonstrate the methodology, a SOM corresponding to the pelvis of an adult male was computed and a corresponding ensemble of phantoms was created. Additionally, computer-simulated X-ray projection images corresponding to the phantoms were computed, from which tomographic images were reconstructed.
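The general recipe, learning principal attribute variations from training shapes and then sampling new plausible instances, can be sketched with a PCA-style statistical shape model. This is a generic illustration, not the specific GAD formulation, and the training data below are random placeholders for landmark coordinates.

```python
import numpy as np

def fit_shape_model(training_shapes):
    """training_shapes: (N, M) array, each row a flattened organ shape descriptor."""
    mean = training_shapes.mean(axis=0)
    U, s, Vt = np.linalg.svd(training_shapes - mean, full_matrices=False)
    var = (s ** 2) / (len(training_shapes) - 1)   # variance along each principal mode
    return mean, Vt, var

def sample_shape(mean, modes, var, rng, n_modes=5, clip=3.0):
    """Draw a random plausible shape by perturbing the mean along the leading modes."""
    sd = np.sqrt(var[:n_modes])
    b = np.clip(rng.normal(0.0, sd), -clip * sd, clip * sd)
    return mean + b @ modes[:n_modes]

rng = np.random.default_rng(1)
train = rng.normal(size=(40, 60))             # placeholder shape vectors for one organ
mean, modes, var = fit_shape_model(train)
phantom_organ = sample_shape(mean, modes, var, rng)
```

Sampling many organs this way and assembling them yields phantoms whose variability mirrors the training ensemble, which is the property the abstract emphasizes.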
A systematic mapping study of process mining
NASA Astrophysics Data System (ADS)
Maita, Ana Rocío Cárdenas; Martins, Lucas Corrêa; López Paz, Carlos Ramón; Rafferty, Laura; Hung, Patrick C. K.; Peres, Sarajane Marques; Fantinato, Marcelo
2018-05-01
This study systematically assesses the process mining scenario from 2005 to 2014. The analysis of 705 papers evidenced 'discovery' (71%) as the main type of process mining addressed and 'categorical prediction' (25%) as the main mining task solved. The most applied traditional techniques are 'graph structure-based' ones (38%). Specifically concerning computational intelligence and machine learning techniques, we concluded that they have received little attention. The most applied are 'evolutionary computation' (9%) and 'decision tree' (6%), respectively. Process mining challenges, such as balancing among robustness, simplicity, accuracy and generalization, could benefit from a larger use of such techniques.
Learning from Failures: Archiving and Designing with Failure and Risk
NASA Technical Reports Server (NTRS)
VanWie, Michael; Bohm, Matt; Barrientos, Francesca; Turner, Irem; Stone, Robert
2005-01-01
Identifying and mitigating risks during conceptual design remains an ongoing challenge. This work presents the results of collaborative efforts between The University of Missouri-Rolla and NASA Ames Research Center to examine how an early stage mission design team at NASA addresses risk, and, how a computational support tool can assist these designers in their tasks. Results of our observations are given in addition to a brief example of our implementation of a repository based computational tool that allows users to browse and search through archived failure and risk data as related to either physical artifacts or functionality.
Qin, Zhongyuan; Zhang, Xinshuai; Feng, Kerong; Zhang, Qunfang; Huang, Jie
2014-01-01
With the rapid development and widespread adoption of wireless sensor networks (WSNs), security has become an increasingly prominent problem. How to establish a session key in node communication is a challenging task for WSNs. Considering the limitations in WSNs, such as low computing capacity, small memory, power supply limitations and price, we propose an efficient identity-based key management (IBKM) scheme, which exploits the Bloom filter to authenticate the communication sensor node with storage efficiency. The security analysis shows that IBKM can prevent several attacks effectively with acceptable computation and communication overhead. PMID:25264955
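The role of the Bloom filter, storing the network's node identities compactly so that a sensor can check whether a claimed identity is legitimate before proceeding with key establishment, can be illustrated with a minimal pure-Python filter. The hash construction and sizes are arbitrary here and carry none of the scheme's actual key-management logic.

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

# The base station preloads the filter with legitimate node identities.
members = BloomFilter()
for node_id in ["node-001", "node-002", "node-017"]:
    members.add(node_id)

print("node-002" in members)   # True: proceed with key establishment
print("node-999" in members)   # False (up to a small false-positive rate): reject
```

The attraction for constrained sensor nodes is that membership checks cost a few hashes and a fixed, small bit array instead of a full identity list.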
A computer vision for animal ecology.
Weinstein, Ben G
2018-05-01
A central goal of animal ecology is to observe species in the natural world. The cost and challenge of data collection often limit the breadth and scope of ecological study. Ecologists often use image capture to bolster data collection in time and space. However, the ability to process these images remains a bottleneck. Computer vision can greatly increase the efficiency, repeatability and accuracy of image review. Computer vision uses image features, such as colour, shape and texture to infer image content. I provide a brief primer on ecological computer vision to outline its goals, tools and applications to animal ecology. I reviewed 187 existing applications of computer vision and divided articles into ecological description, counting and identity tasks. I discuss recommendations for enhancing the collaboration between ecologists and computer scientists and highlight areas for future growth of automated image analysis. © 2017 The Author. Journal of Animal Ecology © 2017 British Ecological Society.
TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling
NASA Astrophysics Data System (ADS)
Nelson, J.; Jones, N.; Ames, D. P.
2015-12-01
Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive they require more computational power to run. Purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet there still remains a significant technical barrier to leverage these resources. This barrier inhibits many decision makers and even trained engineers from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, that have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch the jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage the open source, computing-resource management, and job management software, HTCondor, to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.
Mapping university students' epistemic framing of computational physics using network analysis
NASA Astrophysics Data System (ADS)
Bodin, Madelen
2012-06-01
Solving physics problems in university physics education using a computational approach requires knowledge and skills in several domains, for example, physics, mathematics, programming, and modeling. These competences are in turn related to students' beliefs about the domains as well as about learning. These knowledge and beliefs components are referred to here as epistemic elements, which together represent the students' epistemic framing of the situation. The purpose of this study was to investigate university physics students' epistemic framing when solving and visualizing a physics problem using a particle-spring model system. Students' epistemic framings are analyzed before and after the task using a network analysis approach on interview transcripts, producing visual representations as epistemic networks. The results show that students change their epistemic framing from a modeling task, with expectancies about learning programming, to a physics task, in which they are challenged to use physics principles and conservation laws in order to troubleshoot and understand their simulations. This implies that the task, even though it is not introducing any new physics, helps the students to develop a more coherent view of the importance of using physics principles in problem solving. The network analysis method used in this study is shown to give intelligible representations of the students' epistemic framing and is proposed as a useful method of analysis of textual data.
Computing, Information and Communications Technology (CICT) Website
NASA Technical Reports Server (NTRS)
Hardman, John; Tu, Eugene (Technical Monitor)
2002-01-01
The Computing, Information and Communications Technology Program (CICT) was established in 2001 to ensure NASA's continuing leadership in emerging technologies. It is a coordinated, Agency-wide effort to develop and deploy key enabling technologies for a broad range of mission-critical tasks. The NASA CICT program is designed to address Agency-specific computing, information, and communications technology requirements beyond the projected capabilities of commercially available solutions. The areas of technical focus have been chosen for their impact on NASA's missions, their national importance, and the technical challenge they provide to the Program. In order to meet its objectives, the CICT Program is organized into the following four technology focused projects: 1) Computing, Networking and Information Systems (CNIS); 2) Intelligent Systems (IS); 3) Space Communications (SC); 4) Information Technology Strategic Research (ITSR).
A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network
NASA Astrophysics Data System (ADS)
Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien
2017-03-01
With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices’ non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing.
A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network.
Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien
2017-03-21
With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices' non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing.
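The basic mechanism, letting input features set oscillator frequencies and reading out how strongly the weakly coupled network synchronizes, can be sketched with a Kuramoto-style simulation. This is a generic toy model of oscillator-based pattern matching, not the nano-oscillator architecture or coupling scheme proposed in the paper; frequencies, coupling strength and the templates are invented for the example.

```python
import numpy as np

def match_score(pattern, template, coupling=0.5, steps=2000, dt=0.01, seed=0):
    """Degree of synchronization of a weakly coupled oscillator network.

    Each oscillator's natural frequency is offset by the mismatch between one
    input feature and the stored template feature; similar patterns give nearly
    identical frequencies, which the weak coupling can pull into synchrony.
    """
    rng = np.random.default_rng(seed)
    omega = 10.0 + 5.0 * (np.asarray(pattern) - np.asarray(template))
    theta = rng.uniform(0, 2 * np.pi, len(pattern))
    rs = []
    for t in range(steps):
        z = np.exp(1j * theta).mean()
        r, psi = np.abs(z), np.angle(z)                       # Kuramoto order parameter
        theta += dt * (omega + coupling * r * np.sin(psi - theta))
        if t >= steps - 500:
            rs.append(r)
    return float(np.mean(rs))                                  # ~1 means synchronized

template = [0.2, 0.8, 0.5, 0.9]
print(match_score([0.21, 0.79, 0.5, 0.88], template))   # near 1: pattern recognized
print(match_score([0.9, 0.1, 0.8, 0.2], template))      # lower: mismatch prevents synchrony
```

The readout is a collective quantity (the order parameter), which is why such schemes can tolerate noise and device-to-device variability better than approaches that require precise tuning of individual couplings.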
Research in Parallel Algorithms and Software for Computational Aerosciences
NASA Technical Reports Server (NTRS)
Domel, Neal D.
1996-01-01
Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.
A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network
Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien
2017-01-01
With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices’ non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing. PMID:28322262
Research in Parallel Algorithms and Software for Computational Aerosciences
NASA Technical Reports Server (NTRS)
Domel, Neal D.
1996-01-01
Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.
Bringing computational models of bone regeneration to the clinic.
Carlier, Aurélie; Geris, Liesbet; Lammens, Johan; Van Oosterwyck, Hans
2015-01-01
Although the field of bone regeneration has experienced great advancements in the last decades, integrating all the relevant, patient-specific information into a personalized diagnosis and optimal treatment remains a challenging task due to the large number of variables that affect bone regeneration. Computational models have the potential to cope with this complexity and to improve the fundamental understanding of the bone regeneration processes as well as to predict and optimize the patient-specific treatment strategies. However, the current use of computational models in daily orthopedic practice is very limited or inexistent. We have identified three key hurdles that limit the translation of computational models of bone regeneration from bench to bed side. First, there exists a clear mismatch between the scope of the existing and the clinically required models. Second, most computational models are confronted with limited quantitative information of insufficient quality thereby hampering the determination of patient-specific parameter values. Third, current computational models are only corroborated with animal models, whereas a thorough (retrospective and prospective) assessment of the computational model will be crucial to convince the health care providers of the capabilities thereof. These challenges must be addressed so that computational models of bone regeneration can reach their true potential, resulting in the advancement of individualized care and reduction of the associated health care costs. © 2015 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Korallo, Liliya; Foreman, Nigel; Boyd-Davis, Stephen; Moar, Magnus; Coulson, Mark
2012-01-01
Studies examined the potential use of VEs in teaching historical chronology to 127 children of primary school age (8-9 years). The use of passive fly-through VEs had been found, in an earlier study, to be disadvantageous with this age group when tested for their subsequent ability to place displayed sequential events in correct chronological…
NASA Astrophysics Data System (ADS)
Russo, James; Hopkins, Sarah
2017-09-01
The current study considered young students' (7 and 8 years old) experiences and perceptions of mathematics lessons involving challenging (i.e. cognitively demanding) tasks. We used the Constant Comparative Method to analyse the interview responses ( n = 73) regarding what work artefacts students were most proud of creating and why. Five themes emerged that characterised student reflections: enjoyment, effort, learning, productivity and meaningful mathematics. Overall, there was evidence that students embraced struggle and persisted when engaged in mathematics lessons involving challenging tasks and, moreover, that many students enjoyed the process of being challenged. In the second section of the paper, the lesson structure preferences of a subset of participants ( n = 23) when learning with challenging tasks are considered. Overall, more students preferred the teach-first lesson structure to the task-first lesson structure, primarily because it activated their cognition to prepare them for work on the challenging task. However, a substantial minority of students (42 %) instead endorsed the task-first lesson structure, with several students explaining they preferred this structure precisely because it was so cognitively demanding. Other reasons for preferring the task-first structure included that it allowed the focus of the lesson to be on the challenging task and the subsequent discussion of student work. A key implication of these combined findings is that, for many students, work on challenging tasks appeared to remain cognitively demanding irrespective of the structure of the lesson.
NASA Astrophysics Data System (ADS)
Sapra, Karan; Gupta, Saurabh; Atchley, Scott; Anantharaj, Valentine; Miller, Ross; Vazhkudai, Sudharshan
2016-04-01
Efficient resource utilization is critical for improved end-to-end computing and workflow of scientific applications. Heterogeneous node architectures, such as the GPU-enabled Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), present us with further challenges. In many HPC applications on Titan, the accelerators are the primary compute engines while the CPUs orchestrate the offloading of work onto the accelerators and move the output back to the main memory. On the other hand, in applications that do not exploit GPUs, CPU usage is dominant while the GPUs idle. We utilized the Heterogeneous Functional Partitioning (HFP) runtime framework, which can optimize usage of resources on a compute node to expedite an application's end-to-end workflow. This approach is different from existing techniques for in-situ analyses in that it provides a framework for on-the-fly, on-node analysis by dynamically exploiting under-utilized resources therein. We have implemented in the Community Earth System Model (CESM) a new concurrent diagnostic processing capability enabled by the HFP framework. Various single-variate statistics, such as means and distributions, are computed in-situ by launching HFP tasks on the GPU via the node-local HFP daemon. Since our current configuration of CESM does not use GPU resources heavily, we can move these tasks to the GPU using the HFP framework. Each rank running the atmospheric model in CESM pushes the variables of interest via HFP function calls to the HFP daemon. This node-local daemon is responsible for receiving the data from the main program and launching the designated analytics tasks on the GPU. We have implemented these analytics tasks in C and use OpenACC directives to enable GPU acceleration. This methodology is also advantageous while executing GPU-enabled configurations of CESM, when the CPUs would otherwise be idle during portions of the runtime. In our implementation results, we demonstrate that it is more efficient to use the HFP framework to offload the tasks to GPUs instead of doing it in the main application. We observe increased resource utilization and overall productivity with this approach by using the HFP framework for end-to-end workflow.
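The kind of single-variate statistics being pushed to the daemon (running means and distributions) can be computed incrementally, which is what makes on-the-fly, in-situ analysis possible without retaining the full time series. The sketch below uses Welford's online algorithm plus a fixed-bin histogram in Python as a conceptual stand-in; the actual HFP analytics tasks are C/OpenACC kernels launched on the GPU, and the variable, grid size and bin range here are invented.

```python
import numpy as np

class RunningStats:
    """Streaming mean/variance (Welford) and a fixed-bin histogram for one variable."""

    def __init__(self, lo, hi, nbins=64):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.edges = np.linspace(lo, hi, nbins + 1)
        self.hist = np.zeros(nbins, dtype=np.int64)

    def push(self, chunk):
        """Fold one timestep's field values (e.g. a 2D slab, flattened) into the stats."""
        for x in np.asarray(chunk, dtype=float).ravel():
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)
        self.hist += np.histogram(chunk, bins=self.edges)[0]

    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = RunningStats(lo=180.0, hi=330.0)        # e.g. surface temperature in kelvin
for step in range(10):                          # stand-in for per-timestep pushes from a rank
    stats.push(np.random.default_rng(step).normal(288.0, 5.0, size=(96, 144)))
print(stats.mean, stats.variance)
```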
ERIC Educational Resources Information Center
Russo, James; Hopkins, Sarah
2017-01-01
This paper outlines a seven-step process for developing problem-solving tasks informed by cognitive load theory. Through an example of a task developed for Year 2 students, we show how this approach can be used to produce challenging mathematical tasks that aim to optimise cognitive load for each student.
People with chronic low back pain have poorer balance than controls in challenging tasks.
da Silva, Rubens A; Vieira, Edgar R; Fernandes, Karen B P; Andraus, Rodrigo A; Oliveira, Marcio R; Sturion, Leandro A; Calderon, Mariane G
2018-06-01
To compare the balance of individuals with and without chronic low back pain during five tasks. The participants were 20 volunteers, 10 with and 10 without nonspecific chronic low back pain, mean age 34 years, 50% females. The participants completed the following balance tasks on a force platform in random order: (1) two-legged stance with eyes open, (2) two-legged stance with eyes closed, (3) semi-tandem with eyes open, (4) semi-tandem with eyes closed and (5) one-legged stance with eyes open. The participants completed three 60-s trials of tasks 1-4, and three 30-s trials of task 5 with 30-s rests between trials. The center of pressure area, velocity and frequency in the antero-posterior and medio-lateral directions were computed during each task, and compared between groups and tasks. Participants with chronic low back pain presented significantly larger center of pressure area and higher velocity than the healthy controls (p < 0.001). There were significant differences among tasks for all center of pressure variables (p < 0.001). Semi-tandem (tasks 3 and 4) and one-leg stance (task 5) were more sensitive to identify balance impairments in the chronic low back pain group than two-legged stance tasks 1 and 2 (effect size >1.37 vs. effect size <0.64). There were no significant interactions between groups and tasks. Individuals with chronic low back pain presented poorer postural control using center of pressure measurements than the healthy controls, mainly during more challenging balance tasks such as semi-tandem and one-legged stance conditions. Implications for Rehabilitation People with chronic low back had poorer balance than those without it. Balance tasks need to be sensitive to capture impairments. Balance assessments during semi-tandem and one-legged stance were the most sensitive tasks to determine postural control deficit in people with chronic low back. Balance assessment should be included during rehabilitation programs for individuals with chronic low back pain for better clinical decision making related to balance re-training as necessary.
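The centre of pressure measures reported (sway area and velocity in the antero-posterior and medio-lateral directions) can be computed from force-platform time series as sketched below. The 95% confidence-ellipse area and mean-velocity formulas are standard posturography choices and may differ in detail from the exact definitions used in the study; the simulated sway data are placeholders.

```python
import numpy as np

def cop_metrics(ap, ml, fs=100.0):
    """Sway metrics from centre-of-pressure time series (metres), sampled at fs Hz."""
    ap, ml = np.asarray(ap) - np.mean(ap), np.asarray(ml) - np.mean(ml)

    # 95% confidence ellipse area from the eigenvalues of the AP/ML covariance matrix.
    eigvals = np.linalg.eigvalsh(np.cov(np.vstack([ap, ml])))
    area = np.pi * 5.991 * np.sqrt(eigvals[0] * eigvals[1])   # chi-square(2 dof, 0.95) = 5.991

    # Mean velocity: total path length divided by trial duration, per direction.
    duration = len(ap) / fs
    vel_ap = np.sum(np.abs(np.diff(ap))) / duration
    vel_ml = np.sum(np.abs(np.diff(ml))) / duration
    return {"area_m2": area, "vel_ap_m_s": vel_ap, "vel_ml_m_s": vel_ml}

rng = np.random.default_rng(0)
trial = cop_metrics(ap=np.cumsum(rng.normal(0, 1e-4, 6000)),   # 60 s of simulated sway
                    ml=np.cumsum(rng.normal(0, 1e-4, 6000)))
print(trial)
```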
A General-purpose Framework for Parallel Processing of Large-scale LiDAR Data
NASA Astrophysics Data System (ADS)
Li, Z.; Hodgson, M.; Li, W.
2016-12-01
Light detection and ranging (LiDAR) technologies have proven efficient for quickly obtaining very detailed Earth surface data over a large spatial extent. Such data are important for scientific discovery in the Earth and ecological sciences and for natural disaster and environmental applications. However, handling LiDAR data poses grand geoprocessing challenges due to data intensity and computational intensity. Previous studies have achieved notable success in parallel processing of LiDAR data to address these challenges. However, these studies either relied on high performance computers and specialized hardware (GPUs) or focused mostly on finding customized solutions for some specific algorithms. We developed a general-purpose scalable framework coupled with a sophisticated data decomposition and parallelization strategy to efficiently handle big LiDAR data. Specifically, 1) a tile-based spatial index is proposed to manage big LiDAR data in the scalable and fault-tolerant Hadoop distributed file system, 2) two spatial decomposition techniques are developed to enable efficient parallelization of different types of LiDAR processing tasks, and 3) by coupling existing LiDAR processing tools with Hadoop, this framework is able to conduct a variety of LiDAR data processing tasks in parallel in a highly scalable distributed computing environment. The performance and scalability of the framework are evaluated with a series of experiments conducted on a real LiDAR dataset using a proof-of-concept prototype system. The results show that the proposed framework 1) is able to handle massive LiDAR data more efficiently than standalone tools; and 2) provides almost linear scalability in terms of either increased workload (data volume) or increased computing nodes with both spatial decomposition strategies. We believe that the proposed framework provides valuable references on developing a collaborative cyberinfrastructure for processing big earth science data in a highly scalable environment.
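The tile-based decomposition at the heart of such a framework can be illustrated with a few lines of Python that bucket LiDAR points into fixed-size tiles keyed by grid coordinates. In the actual system the tiles would become keys/files in HDFS and the points would be parsed from LAS records; here they are generated randomly, and the tile size is arbitrary.

```python
import numpy as np
from collections import defaultdict

def tile_points(points, tile_size=100.0):
    """Group (x, y, z) points into square tiles of side `tile_size` (map units).

    Returns a dict mapping (tile_ix, tile_iy) -> array of points in that tile,
    so each tile can be processed independently by a worker.
    """
    keys = np.floor(points[:, :2] / tile_size).astype(int)
    tiles = defaultdict(list)
    for key, pt in zip(map(tuple, keys), points):
        tiles[key].append(pt)
    return {k: np.array(v) for k, v in tiles.items()}

# Toy cloud: 100,000 random points over a 5 km x 5 km area.
rng = np.random.default_rng(0)
n = 100_000
cloud = np.column_stack([rng.uniform(0, 5000, n),
                         rng.uniform(0, 5000, n),
                         rng.uniform(200, 400, n)])
tiles = tile_points(cloud, tile_size=500.0)
print(len(tiles), "tiles; tile (0, 0) holds", len(tiles[(0, 0)]), "points")
```

Because each tile is self-contained, the per-tile work maps directly onto independent map tasks, which is where the near-linear scalability reported above comes from.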
ERIC Educational Resources Information Center
Perkins, Karen
2016-01-01
The topics of decimals and polygons were taught to two classes by using challenging tasks, rather than the more conventional textbook approach. Students were given a pre-test and a post-test. A comparison between the two classes on the pre- and post-test was made. Prior to teaching through challenging tasks, students were surveyed about their…
A FPGA-based architecture for real-time image matching
NASA Astrophysics Data System (ADS)
Wang, Jianhui; Zhong, Sheng; Xu, Wenhui; Zhang, Weijun; Cao, Zhiguo
2013-10-01
Image matching is a fundamental task in computer vision. It is used to establish correspondence between two images taken at different viewpoints or at different times from the same scene. However, its large computational complexity has been a challenge for most embedded systems. This paper proposes a single-FPGA-based image matching system, which consists of SIFT feature detection, BRIEF descriptor extraction and BRIEF matching. It optimizes the FPGA architecture for SIFT feature detection to reduce FPGA resource utilization. Moreover, we also implement BRIEF description and matching on the FPGA. The proposed system can implement image matching at 30 fps (frames per second) for 1280x720 images. Its processing speed can meet the demands of most real-life computer vision applications.
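A software sketch of the detect-describe-match pipeline is shown below using OpenCV's ORB (a FAST detector with a BRIEF-like binary descriptor) and brute-force Hamming matching. This is a CPU stand-in for the paper's SIFT-detection/BRIEF-description FPGA pipeline, useful only for illustrating the data flow; the image paths are placeholders.

```python
import cv2

def match_images(img_path_a, img_path_b, max_matches=50):
    """Detect keypoints, extract binary descriptors and match them by Hamming distance."""
    img_a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)            # FAST detector + BRIEF-like binary descriptor
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    return kp_a, kp_b, matches[:max_matches]

# Usage (paths are placeholders):
# kp_a, kp_b, matches = match_images("frame_left.png", "frame_right.png")
```

Binary descriptors are attractive for FPGAs precisely because the Hamming-distance matching step reduces to XOR and popcount operations.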
Sex differences on a computerized mental rotation task disappear with computer familiarization.
Roberts, J E; Bell, M A
2000-12-01
The area of cognitive research that has produced the most consistent sex differences is spatial ability. Particularly, men consistently perform better on mental rotation tasks than do women. This study examined the effects of familiarization with a computer on performance of a computerized two-dimensional mental rotation task. Two groups of college students (N=44) performed the rotation task, with one group performing a color-matching task that allowed them to be familiarized with the computer prior to the rotation task. Among the participants who only performed the rotation task, the 11 men performed better than the 11 women. Among the participants who performed the computer familiarization task before the rotation task, however, there were no sex differences on the mental rotation task between the 10 men and 12 women. These data indicate that sex differences on this two-dimensional task may reflect familiarization with the computer, not the mental rotation component of the task. Further research with larger samples and an increased range of task difficulty is encouraged.
Neurobiomimetic constructs for intelligent unmanned systems and robotics
NASA Astrophysics Data System (ADS)
Braun, Jerome J.; Shah, Danelle C.; DeAngelus, Marianne A.
2014-06-01
This paper discusses a paradigm we refer to as neurobiomimetic, which involves emulating aspects and processes of brain neuroanatomy and neurobiology. Neurobiomimetic constructs include rudimentary and down-scaled computational representations of brain regions, sub-regions, and synaptic connectivity. Many different instances of neurobiomimetic constructs are possible, depending on aspects such as the initial conditions of synaptic connectivity, the number of neuron elements in regions, connectivity specifics, and more; we refer to these instances as "animats". While downscaled for computational feasibility, the animats are very large constructs; those implemented in this work contain over 47,000 neuron elements and over 720,000 synaptic connections. The paper outlines aspects of the implemented animats, a spatial memory and learning cognitive task, the virtual-reality environment constructed to study an animat performing that task, and the results. In a broad sense, we argue that the neurobiomimetic paradigm pursued in this work constitutes a particularly promising path to artificial cognition and intelligent unmanned systems. Biological brains readily cope with the challenges of real-life tasks that consistently prove beyond even the most sophisticated algorithmic approaches known. At the crossover point of neuroscience, cognitive science, and computer science, paradigms such as the one pursued in this work aim to mimic the mechanisms of biological brains and, as such, may lead to machines with abilities closer to those of biological species.
Event-Based Computation of Motion Flow on a Neuromorphic Analog Neural Platform
Giulioni, Massimiliano; Lagorce, Xavier; Galluppi, Francesco; Benosman, Ryad B.
2016-01-01
Estimating the speed and direction of moving objects is a crucial component of agents behaving in a dynamic world. Biological organisms perform this task by means of the neural connections originating from their retinal ganglion cells. In artificial systems the optic flow is usually extracted by comparing activity of two or more frames captured with a vision sensor. Designing artificial motion flow detectors which are as fast, robust, and efficient as the ones found in biological systems is however a challenging task. Inspired by the architecture proposed by Barlow and Levick in 1965 to explain the spiking activity of the direction-selective ganglion cells in the rabbit's retina, we introduce an architecture for robust optical flow extraction with an analog neuromorphic multi-chip system. The task is performed by a feed-forward network of analog integrate-and-fire neurons whose inputs are provided by contrast-sensitive photoreceptors. Computation is supported by the precise time of spike emission, and the extraction of the optical flow is based on time lag in the activation of nearby retinal neurons. Mimicking ganglion cells our neuromorphic detectors encode the amplitude and the direction of the apparent visual motion in their output spiking pattern. Hereby we describe the architectural aspects, discuss its latency, scalability, and robustness properties and demonstrate that a network of mismatched delicate analog elements can reliably extract the optical flow from a simple visual scene. This work shows how precise time of spike emission used as a computational basis, biological inspiration, and neuromorphic systems can be used together for solving specific tasks. PMID:26909015
Boyd, Hazel C; Evans, Nina M; Orpwood, Roger D; Harris, Nigel D
2017-05-01
Objectives: To investigate the relative effectiveness of different prompts for people with dementia during multistep tasks in the home, to inform prompting technology design. Methods: Nine pairs of participants (one with dementia and a partner or relative) participated at home. The participants with mild to moderate dementia (5M/4F, aged 73-86 years) functioned at the Planned or Exploratory levels of the Pool Activity Level instrument. A touchscreen computer displayed different prompts during two set tasks: "card-and-envelope" and "CD player." The trials were scored to establish the relative effectiveness of the prompts. Individual tasks were also explored. Results: Text and audio prompts were each more effective than video or picture prompts for a card-and-envelope task, but this was not seen in a CD player task. The differences may be related to the type of actions within the tasks; the card-and-envelope actions were easier to convey verbally; the CD player actions lent themselves to visual prompts. Conclusions: Designers of technology-based prompts for people with dementia should consider that the effectiveness of different prompts is likely to be task dependent. Familiar, unambiguous language can increase the success of tailored prompts. There are significant practical challenges associated with choosing and deconstructing everyday tasks at home.
The UAB Informatics Institute and 2016 CEGS N-GRID de-identification shared task challenge.
Bui, Duy Duc An; Wyatt, Mathew; Cimino, James J
2017-11-01
Clinical narratives (the text notes found in patients' medical records) are important information sources for secondary use in research. However, to protect patient privacy, they must be de-identified prior to use. Manual de-identification is considered the gold standard approach but is tedious, expensive, slow, and impractical for large-scale clinical data. Automated or semi-automated de-identification using computer algorithms is a potentially promising alternative. The Informatics Institute of the University of Alabama at Birmingham is applying de-identification to clinical data drawn from the UAB hospital's electronic medical records system before releasing them for research. We participated in the de-identification regular track of the shared task challenge organized by the Centers of Excellence in Genomic Science (CEGS) Neuropsychiatric Genome-Scale and RDoC Individualized Domains (N-GRID) to gain experience developing our own automatic de-identification tool. We focused on the popular and successful methods from previous challenges: rule-based, dictionary-matching, and machine-learning approaches. We also explored new techniques, such as disambiguation rules and term ambiguity measurement, and used a multi-pass sieve framework at a micro level. For the challenge's primary measure (strict entity), our submissions achieved competitive results (f-measures: 87.3%, 87.1%, and 86.7%). For our preferred measure (binary token HIPAA), our submissions achieved superior results (f-measures: 93.7%, 93.6%, and 93%). These encouraging results give us the confidence to improve the tool and use it for the real de-identification task at the UAB Informatics Institute. Copyright © 2017 Elsevier Inc. All rights reserved.
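To illustrate the rule-based and dictionary-matching flavor of de-identification referred to above (not the authors' actual system), a minimal Python sketch follows; the regular expressions and name dictionary are purely illustrative.

```python
import re

# Illustrative PHI patterns; a production system would use many more rules.
PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}
# Tiny illustrative dictionary of known patient names (assumed to be available).
NAME_DICTIONARY = {"john doe", "jane roe"}

def deidentify(text):
    """Replace PHI spans with category tags using rules plus a name dictionary."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub("[%s]" % label, text)
    for name in NAME_DICTIONARY:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    return text

note = "John Doe (MRN: 1234567) seen on 03/14/2016; call 555-123-4567."
print(deidentify(note))
# [NAME] ([MRN]) seen on [DATE]; call [PHONE].
```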
Supporting Teachers in Structuring Mathematics Lessons Involving Challenging Tasks
ERIC Educational Resources Information Center
Sullivan, Peter; Askew, Mike; Cheeseman, Jill; Clarke, Doug; Mornane, Angela; Roche, Anne; Walker, Nadia
2015-01-01
The following is a report on an investigation into ways of supporting teachers in converting challenging mathematics tasks into classroom lessons and supporting students in engaging with those tasks. Groups of primary and secondary teachers, respectively, were provided with documentation of ten lessons built around challenging tasks. Teachers…
Srikesavan, Cynthia Swarnalatha; Shay, Barbara; Robinson, David B; Szturm, Tony
2013-03-09
Pain, fatigue, joint damage, stiffness, and reduced joint range of motion and muscle strength significantly restrict the ability of people with rheumatoid arthritis or osteoarthritis of the hand to participate in home, work, and community life. Given the modest evidence for the therapeutic effectiveness of conventional hand exercises, a task-oriented training program based on real-life object manipulation has been developed for people with arthritis. An innovative, computer-based gaming platform has also been designed that allows a broad range of common objects to be seamlessly transformed into therapeutic input devices through instrumentation with a motion-sense mouse. Personalized objects are selected to target specific training goals such as graded finger mobility, strength, endurance, or fine/gross dexterous functions. Movements and object manipulation tasks that replicate common situations in everyday living are then used to control and play any computer game, making practice challenging and engaging. The ongoing study is a 6-week, single-center, parallel-group, equally allocated and assessor-blinded pilot randomized controlled trial. Thirty people with rheumatoid arthritis or osteoarthritis affecting the hand will be randomized to receive either conventional hand exercises or the task-oriented training. The purpose is to obtain a preliminary estimate of the therapeutic effectiveness and feasibility of the task-oriented training program. Performance-based and self-reported hand function and exercise compliance are the study outcomes. Changes in outcomes (pre to post intervention) within each group will be assessed by the paired Student t test or Wilcoxon signed-rank test, and between groups (control versus experimental) post intervention using the unpaired Student t test or Mann-Whitney U test. The study findings will inform decisions on feasibility, safety, and completion rate, and will also provide preliminary data on the treatment effects of the task-oriented training compared with conventional hand exercises in people with rheumatoid arthritis or osteoarthritis of the hand. ClinicalTrials.gov: NCT01635582.
Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus
2016-01-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922
Symbolic Computation of Strongly Connected Components Using Saturation
NASA Technical Reports Server (NTRS)
Zhao, Yang; Ciardo, Gianfranco
2010-01-01
Finding strongly connected components (SCCs) in the state-space of discrete-state models is a critical task in formal verification of LTL and fair CTL properties, but the potentially huge number of reachable states and SCCs constitutes a formidable challenge. This paper is concerned with computing the sets of states in SCCs or terminal SCCs of asynchronous systems. Because of its advantages in many applications, we employ saturation on two previously proposed approaches: the Xie-Beerel algorithm and transitive closure. First, saturation speeds up state-space exploration when computing each SCC in the Xie-Beerel algorithm. Then, our main contribution is a novel algorithm to compute the transitive closure using saturation. Experimental results indicate that our improved algorithms achieve a clear speedup over previous algorithms in some cases. With the help of the new transitive closure computation algorithm, up to 10^150 SCCs can be explored within a few seconds.
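The paper operates on symbolic (decision-diagram) state spaces; as a plain-Python illustration of the underlying Xie-Beerel idea, the sketch below computes each SCC as the intersection of forward and backward reachable sets on an explicit graph.

```python
def reachable(graph, start, reverse=False):
    """Nodes reachable from `start`; graph maps node -> set of successors."""
    if reverse:
        rev = {v: set() for v in graph}
        for u, nbrs in graph.items():
            for v in nbrs:
                rev[v].add(u)
        graph = rev
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in graph[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def sccs(graph):
    """Xie-Beerel-style decomposition: SCC(v) = forward(v) & backward(v)."""
    remaining, result = set(graph), []
    while remaining:
        v = next(iter(remaining))
        sub = {u: graph[u] & remaining for u in remaining}  # restrict to remaining states
        scc = reachable(sub, v) & reachable(sub, v, reverse=True)
        result.append(scc)
        remaining -= scc
    return result

g = {1: {2}, 2: {3}, 3: {1}, 4: {3}}
print(sccs(g))  # e.g. [{1, 2, 3}, {4}] (order depends on iteration)
```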
NASA Astrophysics Data System (ADS)
Ghaly, Michael; Du, Yong; Links, Jonathan M.; Frey, Eric C.
2016-03-01
In SPECT imaging, collimators are a major factor limiting image quality and largely determine the noise and resolution of SPECT images. In this paper, we seek the collimator with the optimal tradeoff between image noise and resolution with respect to performance on two tasks related to myocardial perfusion SPECT: perfusion defect detection and joint detection and localization. We used the Ideal Observer (IO) operating on realistic background-known-statistically (BKS) and signal-known-exactly (SKE) data. The areas under the receiver operating characteristic (ROC) and localization ROC (LROC) curves (AUCd, AUCd+l), respectively, were used as the figures of merit for both tasks. We used a previously developed population of 54 phantoms based on the eXtended Cardiac Torso Phantom (XCAT) that included variations in gender, body size, heart size and subcutaneous adipose tissue level. For each phantom, organ uptakes were varied randomly based on distributions observed in patient data. We simulated perfusion defects at six different locations with extents and severities of 10% and 25%, respectively, which represented challenging but clinically relevant defects. The extent and severity are, respectively, the perfusion defect’s fraction of the myocardial volume and reduction of uptake relative to the normal myocardium. Projection data were generated using an analytical projector that modeled attenuation, scatter, and collimator-detector response effects, a 9% energy resolution at 140 keV, and a 4 mm full-width at half maximum (FWHM) intrinsic spatial resolution. We investigated a family of eight parallel-hole collimators that spanned a large range of sensitivity-resolution tradeoffs. For each collimator and defect location, the IO test statistics were computed using a Markov Chain Monte Carlo (MCMC) method for an ensemble of 540 pairs of defect-present and -absent images that included the aforementioned anatomical and uptake variability. Sets of test statistics were computed for both tasks and analyzed using ROC and LROC analysis methodologies. The results of this study suggest that collimators with somewhat poorer resolution and higher sensitivity than those of a typical low-energy high-resolution (LEHR) collimator were optimal for both defect detection and joint detection and localization tasks in myocardial perfusion SPECT for the range of defect sizes investigated. This study also indicates that optimizing instrumentation for a detection task may provide near-optimal performance on the more challenging detection-localization task.
NASA Astrophysics Data System (ADS)
Hassan, A. H.; Fluke, C. J.; Barnes, D. G.
2012-09-01
Upcoming and future astronomy research facilities will systematically generate terabyte-sized data sets, moving astronomy into the petascale data era. While such facilities will provide astronomers with unprecedented levels of accuracy and coverage, the increases in dataset size and dimensionality will pose serious computational challenges for many current astronomy data analysis and visualization tools. With such data sizes, even simple data analysis tasks (e.g. calculating a histogram or computing data minimum/maximum) may not be achievable without access to a supercomputing facility. To effectively handle such dataset sizes, which exceed today's single-machine memory and processing limits, we present a framework that exploits the distributed power of GPUs and many-core CPUs, with the goal of providing data analysis and visualization tasks as a service for astronomers. By mixing shared and distributed memory architectures, our framework effectively utilizes the underlying hardware infrastructure, handling both batched and real-time data analysis and visualization tasks. Offering such functionality as a service in a “software as a service” manner will reduce the total cost of ownership, provide an easy-to-use tool to the wider astronomical community, and enable a more optimized utilization of the underlying hardware infrastructure.
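A minimal sketch of the partial-result pattern such a framework relies on is shown below: each worker computes a histogram and local min/max for its chunk, and a reduce step merges them. The chunking and bin choices are assumptions for illustration; the real system distributes the chunks across GPU/CPU nodes.

```python
import numpy as np

def partial_stats(chunk, bins):
    """Per-worker step: histogram counts plus local min/max for one data chunk."""
    counts, _ = np.histogram(chunk, bins=bins)
    return counts, chunk.min(), chunk.max()

def merged_stats(chunks, bins):
    """Reduce step: combine per-chunk partial results into global statistics."""
    counts = np.zeros(len(bins) - 1, dtype=np.int64)
    lo, hi = np.inf, -np.inf
    for chunk in chunks:   # in the framework each chunk would live on its own node/GPU
        c, cmin, cmax = partial_stats(chunk, bins)
        counts += c
        lo, hi = min(lo, cmin), max(hi, cmax)
    return counts, lo, hi

rng = np.random.default_rng(1)
data = rng.normal(size=1_000_000)              # stand-in for a large survey column
chunks = np.array_split(data, 8)               # stand-in for distribution over 8 workers
counts, lo, hi = merged_stats(chunks, np.linspace(-8, 8, 81))
print(counts.sum(), round(lo, 3), round(hi, 3))
```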
Tian, Yu; Bian, Yulong; Han, Piguo; Wang, Peng; Gao, Fengqiang; Chen, Yingmin
2017-01-01
Flow is the experience of effortless attention, reduced self-consciousness, and a deep sense of control that typically occurs during the optimal performance of challenging tasks. On the basis of the person–artifact–task model, we selected computer games (tasks) with varying levels of difficulty (difficult, medium, and easy) and shyness (personality) as flow precursors to study the physiological activity of users in a flow state. Cardiac and respiratory activity and mean changes in skin conductance (SC) were measured continuously while the participants (n = 40) played the games. Moreover, the associations between self-reported psychological flow and physiological measures were investigated through a series of repeated-measures analyses. The results showed that the flow experience is related to a faster respiratory rate, deeper respiration, moderate heart rate (HR), moderate HR variability, and moderate SC. The main effect of shyness was non-significant, whereas the interaction of shyness and difficulty influenced the flow experience. These findings are discussed in relation to current models of arousal and valence. The results indicate that the flow state is a state of moderate mental effort that arises through the increased parasympathetic modulation of sympathetic activity. PMID:28725206
Generalizing the dynamic field theory of spatial cognition across real and developmental time scales
Simmering, Vanessa R.; Spencer, John P.; Schutte, Anne R.
2008-01-01
Within cognitive neuroscience, computational models are designed to provide insights into the organization of behavior while adhering to neural principles. These models should provide sufficient specificity to generate novel predictions while maintaining the generality needed to capture behavior across tasks and/or time scales. This paper presents one such model, the Dynamic Field Theory (DFT) of spatial cognition, showing new simulations that provide a demonstration proof that the theory generalizes across developmental changes in performance in four tasks—the Piagetian A-not-B task, a sandbox version of the A-not-B task, a canonical spatial recall task, and a position discrimination task. Model simulations demonstrate that the DFT can accomplish both specificity—generating novel, testable predictions—and generality—spanning multiple tasks across development with a relatively simple developmental hypothesis. Critically, the DFT achieves generality across tasks and time scales with no modification to its basic structure and with a strong commitment to neural principles. The only change necessary to capture development in the model was an increase in the precision of the tuning of receptive fields as well as an increase in the precision of local excitatory interactions among neurons in the model. These small quantitative changes were sufficient to move the model through a set of quantitative and qualitative behavioral changes that span the age range from 8 months to 6 years and into adulthood. We conclude by considering how the DFT is positioned in the literature, the challenges on the horizon for our framework, and how a dynamic field approach can yield new insights into development from a computational cognitive neuroscience perspective. PMID:17716632
Demystifying Multitask Deep Neural Networks for Quantitative Structure-Activity Relationships.
Xu, Yuting; Ma, Junshui; Liaw, Andy; Sheridan, Robert P; Svetnik, Vladimir
2017-10-23
Deep neural networks (DNNs) are complex computational models that have found great success in many artificial intelligence applications, such as computer vision [1,2] and natural language processing [3,4]. In the past four years, DNNs have also generated promising results for quantitative structure-activity relationship (QSAR) tasks [5,6]. Previous work showed that DNNs can routinely make better predictions than traditional methods, such as random forests, on a diverse collection of QSAR data sets. It was also found that multitask DNN models (those trained on and predicting multiple QSAR properties simultaneously) outperform DNNs trained separately on the individual data sets in many, but not all, tasks. To date there has been no satisfactory explanation of why the QSAR of one task embedded in a multitask DNN can borrow information from other unrelated QSAR tasks. Thus, using multitask DNNs in a way that consistently provides a predictive advantage becomes a challenge. In this work, we explored why multitask DNNs make a difference in predictive performance. Our results show that during prediction a multitask DNN does borrow "signal" from molecules with similar structures in the training sets of the other tasks. However, whether this borrowing leads to better or worse predictive performance depends on whether the activities are correlated. On the basis of this, we have developed a strategy to use multitask DNNs that incorporate prior domain knowledge to select training sets with correlated activities, and we demonstrate its effectiveness on several examples.
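A minimal PyTorch sketch of the multitask architecture described above (a shared trunk with one output head per QSAR task) is given below; layer sizes, dropout, and the masking of unmeasured activities are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class MultitaskQSARNet(nn.Module):
    """Shared trunk with one regression head per QSAR task (illustrative sizes)."""
    def __init__(self, n_descriptors=1024, n_tasks=4, hidden=512):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_descriptors, hidden), nn.ReLU(), nn.Dropout(0.25),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(n_tasks))

    def forward(self, x):
        shared = self.trunk(x)                       # representation shared across tasks
        return torch.cat([h(shared) for h in self.heads], dim=1)

# Usage: per-task losses are masked where a molecule has no measurement for a task.
model = MultitaskQSARNet()
x = torch.randn(8, 1024)                             # 8 molecules, 1024 descriptors
y = torch.randn(8, 4)                                # activities for 4 tasks
mask = torch.rand(8, 4) > 0.3                        # which activities are observed
pred = model(x)
loss = ((pred - y)[mask] ** 2).mean()
loss.backward()
```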
A scientific workflow framework for ¹³C metabolic flux analysis.
Dalman, Tolga; Wiechert, Wolfgang; Nöh, Katharina
2016-08-20
Metabolic flux analysis (MFA) with ¹³C labeling data is a high-precision technique to quantify intracellular reaction rates (fluxes). One of the major challenges of ¹³C MFA is the interactivity of the computational workflow according to which the fluxes are determined from the input data (metabolic network model, labeling data, and physiological rates). Here, the workflow assembly is inevitably determined by the scientist, who has to consider interacting biological, experimental, and computational aspects. Decision-making is context dependent and requires expertise, rendering an automated evaluation process hardly possible. Here, we present a scientific workflow framework (SWF) for creating, executing, and controlling on-demand ¹³C MFA workflows. ¹³C-MFA-specific tools and libraries, such as the high-performance simulation toolbox 13CFLUX2, are wrapped as web services and thereby integrated into a service-oriented architecture. Besides workflow steering, the SWF features transparent provenance collection and enables full flexibility for ad hoc scripting solutions. To handle compute-intensive tasks, cloud computing is supported. We demonstrate how the challenges posed by ¹³C MFA workflows can be solved with our approach on the basis of two proof-of-concept use cases. Copyright © 2015 Elsevier B.V. All rights reserved.
Simulation: Moving from Technology Challenge to Human Factors Success
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gould, Derek A., E-mail: dgould@liv.ac.uk; Chalmers, Nicholas; Johnson, Sheena J.
2012-06-15
Recognition of the many limitations of traditional apprenticeship training is driving new approaches to learning medical procedural skills. Among simulation technologies and methods available today, computer-based systems are topical and bring the benefits of automated, repeatable, and reliable performance assessments. Human factors research is central to simulator model development that is relevant to real-world imaging-guided interventional tasks and to the credentialing programs in which it would be used.
ERIC Educational Resources Information Center
Besner, Derek; O'Malley, Shannon
2009-01-01
J. C. Ziegler, C. Perry, and M. Zorzi (2009) have claimed that their connectionist dual process model (CDP+) can simulate the data reported by S. O'Malley and D. Besner. Most centrally, they have claimed that the model simulates additive effects of stimulus quality and word frequency on the time to read aloud when words and nonwords are randomly…
Cormode, Graham; Dasgupta, Anirban; Goyal, Amit; Lee, Chi Hoon
2018-01-01
Many modern applications of AI, such as web search, mobile browsing, image processing, and natural language processing, rely on finding similar items from a large database of complex objects. Due to the very large scale of the data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations that improve performance and are suitable for deployment in very large scale settings. The experimental results demonstrate that our LSH variants achieve robust performance with better recall compared with "vanilla" LSH, even when using the same amount of space.
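As an illustration of the general LSH idea (not the specific variants evaluated in the paper), the following sketch uses random-hyperplane hashing for cosine similarity and banded signatures to form candidate buckets; the bit and band counts are assumptions.

```python
import numpy as np

class CosineLSH:
    """Random-hyperplane (SimHash-style) LSH for cosine similarity.

    Items whose signatures agree in a band become candidate near-neighbors;
    the signature length and band size here are illustrative, not the
    settings used in the paper.
    """
    def __init__(self, dim, n_bits=64, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(n_bits, dim))

    def signature(self, x):
        return (self.planes @ x > 0).astype(np.uint8)      # one bit per hyperplane

    def bucket_keys(self, x, band_bits=8):
        sig = self.signature(x)
        bands = sig.reshape(-1, band_bits)
        return [tuple([i] + band.tolist()) for i, band in enumerate(bands)]

# Usage: index items by their bucket keys; only items sharing a key are compared.
lsh = CosineLSH(dim=100)
a = np.random.default_rng(1).normal(size=100)
b = a + 0.01 * np.random.default_rng(2).normal(size=100)   # a near-duplicate of a
shared = set(lsh.bucket_keys(a)) & set(lsh.bucket_keys(b))
print(len(shared), "of 8 bands collide")                    # near-duplicates collide often
```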
NASA Technical Reports Server (NTRS)
Saini, Subhash; Frumkin, Michael; Hribar, Michelle; Jin, Hao-Qiang; Waheed, Abdul; Yan, Jerry
1998-01-01
Porting applications to new high-performance parallel and distributed computing platforms is a challenging task. Since writing parallel code by hand is extremely time consuming and costly, porting codes would ideally be automated by using parallelization tools and compilers. In this paper, we compare the performance of the hand-written NAS Parallel Benchmarks against three parallel versions generated with the help of tools and compilers: 1) CAPTools, an interactive computer-aided parallelization tool that generates message-passing code, 2) the Portland Group's HPF compiler, and 3) compiler directives with the native FORTRAN77 compiler on the SGI Origin2000.
Wynden, Rob; Anderson, Nick; Casale, Marco; Lakshminarayanan, Prakash; Anderson, Kent; Prosser, Justin; Errecart, Larry; Livshits, Alice; Thimman, Tim; Weiner, Mark
2011-01-01
Within the CTSA (Clinical Translational Sciences Awards) program, academic medical centers are tasked with the storage of clinical formulary data within an Integrated Data Repository (IDR) and the subsequent exposure of that data over grid computing environments for hypothesis generation and cohort selection. Formulary data collected over long periods of time across multiple institutions requires normalization of terms before those data sets can be aggregated and compared. This paper sets forth a solution to the challenge of generating derived aggregated normalized views from large, distributed data sets of clinical formulary data intended for re-use within clinical translational research.
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...
2017-09-29
Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.
NASA Astrophysics Data System (ADS)
Santagati, C.; Inzerillo, L.; Di Paola, F.
2013-07-01
3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collections to rapidly build detailed 3D models. The simultaneous application of different multi-view stereo (MVS) algorithms and of different techniques for image matching, feature extraction, and mesh optimization is an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among the available options we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing to carry out semi-automatic data processing, allowing the user to perform other tasks on their computer, whereas desktop systems require long processing times and heavyweight approaches. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches for verifying metric accuracy are few, and none has examined Autodesk 123D Catch applied to architectural heritage documentation. Our approach to this challenging problem is to compare 3D models produced by Autodesk 123D Catch with 3D models produced by terrestrial LiDAR, considering objects of different sizes, from details (capitals, moldings, bases) to large-scale buildings, for practitioner purposes.
Understanding Shale Gas: Recent Progress and Remaining Challenges
Striolo, Alberto; Cole, David R.
2017-08-27
Because of a number of technological advancements, unconventional hydrocarbons, and in particular shale gas, have transformed the US economy. Much is being learned, as demonstrated by the reduced cost of extracting shale gas in the US over the past five years. However, a number of challenges still need to be addressed. Many of these challenges represent grand scientific and technological tasks, overcoming which will have a number of positive impacts, ranging from the reduction of the environmental footprint of shale gas production to improvements and leaps forward in diverse sectors, including chemical manufacturing and catalytic transformations. This review addresses recent advancements in computational and experimental approaches, which led to improved understanding of, in particular, structure and transport of fluids, including hydrocarbons, electrolytes, water, and CO2 in heterogeneous subsurface rocks such as those typically found in shale formations. Finally, the narrative is concluded with a suggestion of a few research directions that, by synergistically combining computational and experimental advances, could allow us to overcome some of the hurdles that currently hinder the production of hydrocarbons from shale formations.
Petascale Many Body Methods for Complex Correlated Systems
NASA Astrophysics Data System (ADS)
Pruschke, Thomas
2012-02-01
Correlated systems constitute an important class of materials in modern condensed matter physics. Correlations among electrons are at the heart of all ordering phenomena and of many intriguing novel aspects, such as quantum phase transitions or topological insulators, observed in a variety of compounds. Yet, theoretically describing these phenomena is still a formidable task, even if one restricts the models used to the smallest possible set of degrees of freedom. Here, modern computer architectures play an essential role, and the joint effort to devise efficient algorithms and implement them on state-of-the-art hardware has become an extremely active field in condensed-matter research. To tackle this task single-handed is quite obviously not possible. The NSF-OISE funded PIRE collaboration "Graduate Education and Research in Petascale Many Body Methods for Complex Correlated Systems" is a successful initiative to bring together leading experts around the world to form a virtual international organization for addressing these emerging challenges and educating the next generation of computational condensed matter physicists. The collaboration includes research groups developing novel theoretical tools to reliably and systematically study correlated solids, experts in the efficient computational algorithms needed to solve the emerging equations, and those able to use modern heterogeneous computer architectures to turn these methods into working tools for the growing community.
Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid
2016-01-01
Cloud computing systems are huge clusters of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produces a remarkable performance improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution, suitable for scientific application task execution in the cloud computing environment, than the MinMin, MaxMin, Genetic Algorithm (GA), and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239
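For readers unfamiliar with the scheduling objective, the sketch below shows how a task-to-VM assignment's makespan can be computed and how a simple greedy baseline builds such an assignment; it is only a baseline illustration, not the GBLCA metaheuristic, and the task lengths and VM speeds are made up.

```python
def makespan(assignment, task_lengths, vm_speeds):
    """Makespan of a task-to-VM assignment: the finish time of the busiest VM.

    task_lengths are in (assumed) millions of instructions, vm_speeds in MIPS.
    """
    finish = [0.0] * len(vm_speeds)
    for task, vm in enumerate(assignment):
        finish[vm] += task_lengths[task] / vm_speeds[vm]
    return max(finish)

def greedy_schedule(task_lengths, vm_speeds):
    """Baseline: place each task on the VM that finishes it earliest."""
    finish = [0.0] * len(vm_speeds)
    assignment = []
    for length in task_lengths:
        vm = min(range(len(vm_speeds)),
                 key=lambda v: finish[v] + length / vm_speeds[v])
        finish[vm] += length / vm_speeds[vm]
        assignment.append(vm)
    return assignment

tasks = [400, 250, 900, 120, 600]        # illustrative task lengths
vms = [100, 250]                          # illustrative VM speeds
plan = greedy_schedule(tasks, vms)
print(plan, makespan(plan, tasks, vms))
```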
Moutsatsos, Ioannis K; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J; Jenkins, Jeremy L; Holway, Nicholas; Tallarico, John; Parker, Christian N
2017-03-01
High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an "off-the-shelf," open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community.
Kellmeyer, Philipp; Cochrane, Thomas; Müller, Oliver; Mitchell, Christine; Ball, Tonio; Fins, Joseph J; Biller-Andorno, Nikola
2016-10-01
Closed-loop medical devices such as brain-computer interfaces are an emerging and rapidly advancing neurotechnology. The target patients for brain-computer interfaces (BCIs) are often severely paralyzed, and thus particularly vulnerable in terms of personal autonomy, decision-making capacity, and agency. Here we analyze the effects of closed-loop medical devices on the autonomy and accountability of both persons (as patients or research participants) and neurotechnological closed-loop medical systems. We show that although BCIs can strengthen patient autonomy by preserving or restoring communicative abilities and/or motor control, closed-loop devices may also create challenges for moral and legal accountability. We advocate the development of a comprehensive ethical and legal framework to address the challenges of emerging closed-loop neurotechnologies like BCIs and stress the centrality of informed consent and refusal as a means to foster accountability. We propose the creation of an international neuroethics task force with members from medical neuroscience, neuroengineering, computer science, medical law, and medical ethics, as well as representatives of patient advocacy groups and the public.
Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew
2015-01-01
Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the task is challenging for geoscientists because processing the massive amounts of data is both computing- and data-intensive: the analytics require complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed that leverage cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data, and a service-oriented workflow architecture is built to support on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing data processing time as well as simplifying analytical procedures for geoscientists. PMID:25742012
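A toy MapReduce-style example of the kind of per-cell aggregation such a framework parallelizes is sketched below; the 1-degree grid cells and averaging operation are illustrative assumptions.

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit (grid_cell, value) pairs from raw observation records."""
    for lat, lon, value in records:
        cell = (round(lat), round(lon))      # assumed 1-degree cells for illustration
        yield cell, value

def reduce_phase(pairs):
    """Reduce: average all values that share a grid cell key."""
    sums, counts = defaultdict(float), defaultdict(int)
    for cell, value in pairs:
        sums[cell] += value
        counts[cell] += 1
    return {cell: sums[cell] / counts[cell] for cell in sums}

records = [(40.2, -105.1, 3.0), (40.4, -104.9, 5.0), (12.0, 8.0, 1.0)]
print(reduce_phase(map_phase(records)))
# {(40, -105): 4.0, (12, 8): 1.0}
```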
Falls Risk and Simulated Driving Performance in Older Adults
Gaspar, John G.; Neider, Mark B.; Kramer, Arthur F.
2013-01-01
Declines in executive function and dual-task performance have been related to falls in older adults, and recent research suggests that older adults at risk for falls also show impairments on real-world tasks, such as crossing a street. The present study examined whether falls risk was associated with driving performance in a high-fidelity simulator. Participants were classified as high or low falls risk using the Physiological Profile Assessment and completed a number of challenging simulated driving assessments in which they responded quickly to unexpected events. High falls risk drivers had slower response times (~2.1 seconds) to unexpected events compared to low falls risk drivers (~1.7 seconds). Furthermore, when asked to perform a concurrent cognitive task while driving, high falls risk drivers showed greater costs to secondary task performance than did low falls risk drivers, and low falls risk older adults also outperformed high falls risk older adults on a computer-based measure of dual-task performance. Our results suggest that attentional differences between high and low falls risk older adults extend to simulated driving performance. PMID:23509627
Higher Intelligence Is Associated with Less Task-Related Brain Network Reconfiguration
Cole, Michael W.
2016-01-01
The human brain is able to exceed modern computers on multiple computational demands (e.g., language, planning) using a small fraction of the energy. The mystery of how the brain can be so efficient is compounded by recent evidence that all brain regions are constantly active as they interact in so-called resting-state networks (RSNs). To investigate the brain's ability to process complex cognitive demands efficiently, we compared functional connectivity (FC) during rest and multiple highly distinct tasks. We found previously that RSNs are present during a wide variety of tasks and that tasks only minimally modify FC patterns throughout the brain. Here, we tested the hypothesis that, although subtle, these task-evoked FC updates from rest nonetheless contribute strongly to behavioral performance. One might expect that larger changes in FC reflect optimization of networks for the task at hand, improving behavioral performance. Alternatively, smaller changes in FC could reflect optimization for efficient (i.e., small) network updates, reducing processing demands to improve behavioral performance. We found across three task domains that high-performing individuals exhibited more efficient brain connectivity updates in the form of smaller changes in functional network architecture between rest and task. These smaller changes suggest that individuals with an optimized intrinsic network configuration for domain-general task performance experience more efficient network updates generally. Confirming this, network update efficiency correlated with general intelligence. The brain's reconfiguration efficiency therefore appears to be a key feature contributing to both its network dynamics and general cognitive ability. SIGNIFICANCE STATEMENT The brain's network configuration varies based on current task demands. For example, functional brain connections are organized in one way when one is resting quietly but in another way if one is asked to make a decision. We found that the efficiency of these updates in brain network organization is positively related to general intelligence, the ability to perform a wide variety of cognitively challenging tasks well. Specifically, we found that brain network configuration at rest was already closer to a wide variety of task configurations in intelligent individuals. This suggests that the ability to modify network connectivity efficiently when task demands change is a hallmark of high intelligence. PMID:27535904
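One way to quantify the rest-to-task network update described above is to correlate the upper triangles of the two functional-connectivity matrices; the sketch below (on synthetic data) illustrates that metric only, not the study's preprocessing or statistics.

```python
import numpy as np

def fc_update_similarity(fc_rest, fc_task):
    """Correlate the upper triangles of two functional-connectivity matrices.

    A higher correlation means a smaller rest-to-task network reconfiguration.
    """
    iu = np.triu_indices_from(fc_rest, k=1)
    return np.corrcoef(fc_rest[iu], fc_task[iu])[0, 1]

rng = np.random.default_rng(0)
rest = np.corrcoef(rng.normal(size=(50, 200)))   # 50 regions x 200 time points, synthetic
task = np.corrcoef(rng.normal(size=(50, 200)))
print(round(fc_update_similarity(rest, rest), 2), round(fc_update_similarity(rest, task), 2))
# identical matrices give 1.0; unrelated synthetic data gives a value near 0
```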
Cloud computing task scheduling strategy based on improved differential evolution algorithm
NASA Astrophysics Data System (ADS)
Ge, Junwei; He, Qian; Fang, Yiqiu
2017-04-01
In order to optimize the cloud computing task scheduling scheme, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model is established and a fitness function is defined for it; the improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy and a dynamic mutation strategy to ensure both global and local search ability. Performance tests were carried out on the CloudSim simulation platform. The experimental results show that the improved differential evolution algorithm can reduce cloud computing task execution time and save user cost, effectively realizing optimal scheduling of cloud computing tasks.
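A bare-bones differential evolution loop for task-to-VM assignment is sketched below to make the optimization setup concrete; it uses fixed F and CR and a simple makespan fitness, whereas the paper's improved algorithm adds dynamic mutation and selection strategies. All parameter values are illustrative.

```python
import random

def de_schedule(task_lengths, vm_speeds, pop_size=30, gens=200, F=0.5, CR=0.9):
    """Minimal differential evolution over continuous vectors rounded to VM
    indices; the fitness is the schedule makespan (an illustration only)."""
    n, m = len(task_lengths), len(vm_speeds)

    def fitness(vec):
        finish = [0.0] * m
        for t, x in enumerate(vec):
            vm = min(m - 1, max(0, int(round(x))))
            finish[vm] += task_lengths[t] / vm_speeds[vm]
        return max(finish)

    pop = [[random.uniform(0, m - 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            # DE/rand/1/bin: mutate with fixed F, crossover with fixed CR.
            trial = [a[k] + F * (b[k] - c[k]) if random.random() < CR else pop[i][k]
                     for k in range(n)]
            if fitness(trial) <= fitness(pop[i]):
                pop[i] = trial
        # (the paper instead adapts the mutation/selection strategy per generation)
    best = min(pop, key=fitness)
    return [min(m - 1, max(0, int(round(x)))) for x in best], fitness(best)

print(de_schedule([400, 250, 900, 120, 600], [100, 250]))
```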
Parallelization of NAS Benchmarks for Shared Memory Multiprocessors
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)
1998-01-01
This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.
High-Performance Signal Detection for Adverse Drug Events using MapReduce Paradigm.
Fan, Kai; Sun, Xingzhi; Tao, Ying; Xu, Linhao; Wang, Chen; Mao, Xianling; Peng, Bo; Pan, Yue
2010-11-13
Post-marketing pharmacovigilance is important for public health, as many Adverse Drug Events (ADEs) are unknown when drugs are approved for marketing. However, due to the large number of reported drugs and drug combinations, detecting ADE signals by mining these reports is becoming a challenging task in terms of computational complexity. Recently, a parallel programming model, MapReduce, was introduced by Google to support large-scale data-intensive applications. In this study, we propose a MapReduce-based algorithm for a common ADE detection approach, the Proportional Reporting Ratio (PRR), and test it by mining spontaneous ADE reports from the FDA. The purpose is to investigate the possibility of using the MapReduce principle to speed up biomedical data mining tasks, with this pharmacovigilance case as one specific example. The results demonstrate that the MapReduce programming model can improve the performance of a common signal detection algorithm for pharmacovigilance in a distributed computation environment at approximately linear speedup rates.
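The PRR itself is a simple ratio computed from a 2x2 contingency table; the sketch below shows that formula together with a hypothetical map step emitting the counts a MapReduce job would aggregate (the key layout is an assumption, not the paper's implementation).

```python
def proportional_reporting_ratio(a, b, c, d):
    """PRR for one (drug, event) pair from a 2x2 contingency table.

    a: reports with the drug and the event      b: with the drug, other events
    c: other drugs with the event               d: other drugs, other events
    """
    return (a / (a + b)) / (c / (c + d))

def prr_map(report):
    """Hypothetical map step: emit pair and marginal keys for one report."""
    drugs, events = report
    for drug in drugs:
        for event in events:
            yield ("pair", drug, event), 1
        yield ("drug", drug), 1
    for event in events:
        yield ("event", event), 1
    yield ("total",), 1

# The reduce step (summing counts per key) runs on the cluster; a, b, c, d for
# each (drug, event) pair are then derived from the pair, drug, event, and total counts.
print(proportional_reporting_ratio(20, 180, 50, 4750))       # about 9.6
print(list(prr_map((["drugA"], ["nausea"])))[:2])
```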
Introducing Challenging Tasks: Inviting and Clarifying without Explaining and Demonstrating
ERIC Educational Resources Information Center
Cheeseman, Jill; Clarke, Doug; Roche, Anne; Walker, Nadia
2016-01-01
Introducing challenging tasks in such a way that makes them accessible, rather than daunting, to students is a challenge for teachers. Solving challenging tasks involves students having to grapple with the problem. The role of the teacher is to motivate and clarify the problem rather than showing students how to solve the problem.
Myden, C A; Anglin, C; Kopp, G D; Hutchison, C R
2012-01-01
Orthopaedic residents typically learn to perform total knee arthroplasty (TKA) through an apprenticeship-type model, which is a necessarily slow process. Surgical skills courses, using artificial bones, have been shown to improve technical and cognitive skills significantly within a couple of days. The addition of computer-assisted surgery (CAS) simulations challenges the participants to consider the same task in a different context, promoting cognitive flexibility. We designed a hands-on educational intervention for junior residents with a conventional tibiofemoral TKA station, two different tibiofemoral CAS stations, and a CAS and conventional patellar resection station, including both qualitative and quantitative analyses. Qualitatively, structured interviews before and after the course were analyzed for recurring themes. Quantitatively, subjects were evaluated on their technical skills before and after the course, and on a multiple-choice knowledge test and error detection test after the course, in comparison to senior residents who performed only the testing. Four themes emerged: confidence, awareness, deepening knowledge and changed perspectives. The residents' attitudes to CAS changed from negative before the course to neutral or positive afterwards. The junior resident group completed 23% of tasks in the pre-course skills test and 75% of tasks on the post-test (p<0.01), compared to 45% of tasks completed by the senior resident group. High-impact educational interventions, promoting cognitive flexibility, would benefit trainees, attending surgeons, the healthcare system and patients.
Influence of computer work under time pressure on cardiac activity.
Shi, Ping; Hu, Sijung; Yu, Hongliu
2015-03-01
Computer users are often under stress when required to complete computer work within a required time. Work stress has repeatedly been associated with an increased risk for cardiovascular disease. The present study examined the effects of time pressure workload during computer tasks on cardiac activity in 20 healthy subjects. Heart rate, time domain and frequency domain indices of heart rate variability (HRV) and Poincaré plot parameters were compared among five computer tasks and two rest periods. Faster heart rate and decreased standard deviation of R-R interval were noted in response to computer tasks under time pressure. The Poincaré plot parameters showed significant differences between different levels of time pressure workload during computer tasks, and between computer tasks and the rest periods. In contrast, no significant differences were identified for the frequency domain indices of HRV. The results suggest that the quantitative Poincaré plot analysis used in this study was able to reveal the intrinsic nonlinear nature of the autonomically regulated cardiac rhythm. Specifically, heightened vagal tone occurred during the relaxation computer tasks without time pressure. In contrast, the stressful computer tasks with added time pressure stimulated cardiac sympathetic activity. Copyright © 2015 Elsevier Ltd. All rights reserved.
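For readers unfamiliar with the measures named in this abstract, the sketch below (not the study's code) shows how standard time-domain and Poincaré descriptors are commonly computed from a series of R-R intervals; the interval values are invented.

    # Time-domain HRV and Poincare descriptors SD1/SD2 from R-R intervals (ms).
    import numpy as np

    rr = np.array([812, 800, 795, 830, 845, 810, 790, 805, 820, 815], dtype=float)

    sdnn = rr.std(ddof=1)                     # standard deviation of R-R intervals
    diff_rr = np.diff(rr)
    rmssd = np.sqrt(np.mean(diff_rr ** 2))    # root mean square of successive differences

    # Poincare plot: each interval plotted against the next one.
    sd1 = np.sqrt(0.5) * diff_rr.std(ddof=1)  # short-term (beat-to-beat) variability
    sd2 = np.sqrt(2 * sdnn ** 2 - sd1 ** 2)   # long-term variability
    mean_hr = 60000.0 / rr.mean()             # heart rate in beats per minute

    print(f"SDNN={sdnn:.1f} ms  RMSSD={rmssd:.1f} ms  SD1={sd1:.1f} ms  "
          f"SD2={sd2:.1f} ms  HR={mean_hr:.1f} bpm")

A lower SD1 under time pressure would be consistent with the pattern the abstract describes (heightened vagal tone only in the relaxed condition), but the exact parameters the authors derived from their Poincaré plots are not reproduced here.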
NASA Astrophysics Data System (ADS)
Chlebana, Frank; CMS Collaboration
2017-11-01
The challenges of the High-Luminosity LHC (HL-LHC) are driven by the large number of overlapping proton-proton collisions (pileup) in each bunch-crossing and the extreme radiation dose to detectors at high pseudorapidity. To overcome this challenge, CMS is developing an endcap electromagnetic+hadronic sampling calorimeter employing silicon sensors in the electromagnetic and front hadronic sections, comprising over 6 million channels, and highly-segmented plastic scintillators in the rear part of the hadronic section. This High-Granularity Calorimeter (HGCAL) will be the first of its kind used in a colliding beam experiment. Clustering deposits of energy over many cells and layers is a complex and challenging computational task, particularly in the high-pileup environment of the HL-LHC. Baseline detector performance results are presented for electromagnetic and hadronic objects, and studies demonstrating the advantages of fine longitudinal and transverse segmentation are explored.
Challenges facing the development of the Arabic chatbot
NASA Astrophysics Data System (ADS)
AlHagbani, Eman Saad; Khan, Muhammad Badruddin
2016-07-01
Future information systems are expected to be more intelligent and will take human queries in natural language as input and answer them promptly. Developing a chatbot, i.e. a computer program that can chat with humans realistically enough that they get the impression of talking with another human, is a challenging task. To build such chatbots, different technologies must work together, ranging from artificial intelligence to the development of semantic resources. Sophisticated chatbots have been developed to carry out conversations in a number of languages. Arabic chatbots could help automate many operations and serve people who know only the Arabic language. However, the technology for the Arabic language is still at an infancy stage due to some challenges surrounding the language. This paper offers an overview of chatbot applications and the obstacles and challenges that need to be resolved to develop an effective Arabic chatbot.
Visser, Marco D.; McMahon, Sean M.; Merow, Cory; Dixon, Philip M.; Record, Sydne; Jongejans, Eelke
2015-01-01
Computation has become a critical component of research in biology. A risk has emerged that computational and programming challenges may limit research scope, depth, and quality. We review various solutions to common computational efficiency problems in ecological and evolutionary research. Our review pulls together material that is currently scattered across many sources and emphasizes those techniques that are especially effective for typical ecological and environmental problems. We demonstrate how straightforward it can be to write efficient code and implement techniques such as profiling or parallel computing. We supply a newly developed R package (aprof) that helps to identify computational bottlenecks in R code and determine whether optimization can be effective. Our review is complemented by a practical set of examples and detailed Supporting Information material (S1–S3 Texts) that demonstrate large improvements in computational speed (ranging from 10.5 times to 14,000 times faster). By improving computational efficiency, biologists can feasibly solve more complex tasks, ask more ambitious questions, and include more sophisticated analyses in their research. PMID:25811842
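The paper's examples are written in R and use the authors' aprof package; as a language-neutral illustration only, the same workflow (profile first, then parallelize the genuine bottleneck) can be sketched with the Python standard library. The toy model and output file name below are hypothetical stand-ins, not material from the paper.

    # Profile a serial run to locate the bottleneck, then parallelize that step.
    import cProfile
    import pstats
    from multiprocessing import Pool

    def simulate_population(seed):
        # stand-in for an individual-based model run
        state = seed
        for _ in range(200_000):
            state = (1103515245 * state + 12345) % 2**31
        return state

    def run_serial(seeds):
        return [simulate_population(s) for s in seeds]

    if __name__ == "__main__":
        seeds = list(range(8))

        # Step 1: profile to confirm where the time is actually spent.
        cProfile.run("run_serial(seeds)", "profile.out")
        pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)

        # Step 2: run the embarrassingly parallel loop across worker processes.
        with Pool() as pool:
            results = pool.map(simulate_population, seeds)
        print(results[:3])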
Efficient Memory Access with NumPy Global Arrays using Local Memory Access
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daily, Jeffrey A.; Berghofer, Dan C.
This paper discusses work on Global Arrays of data on distributed multi-computer systems and on improving their performance. The tasks were completed at Pacific Northwest National Laboratory in the Science Undergraduate Laboratory Internship program in the summer of 2013, for the Data Intensive Computing Group in the Fundamental and Computational Sciences Directorate. The work was done on the Global Arrays Toolkit developed by this group. This toolkit is an interface that lets programmers more easily create arrays of data on networks of computers. This is useful because scientific computation is often done on large amounts of data, sometimes so large that individual computers cannot hold all of it. The data are held in array form and are best processed on supercomputers, which often consist of a network of individual computers doing their computation in parallel. One major challenge for this sort of programming is that operations on arrays spread over multiple computers are very complex, so an interface is needed that makes these arrays appear to reside on a single computer. This is what Global Arrays does. The work described here uses more efficient operations on that data, operations that require less copying to complete. This saves a lot of time because copying data across many different computers is time-intensive. The approach is as follows: when the operands of a binary operation are on the same computer, they are not copied when accessed; when they are on separate computers, only one operand set is copied. This saves time through reduced copying, although more data-access operations are performed.
Challenges in Securing the Interface Between the Cloud and Pervasive Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lagesse, Brent J
2011-01-01
Cloud computing presents an opportunity for pervasive systems to leverage computational and storage resources to accomplish tasks that would not normally be possible on such resource-constrained devices. Cloud computing can enable hardware designers to build lighter systems that last longer and are more mobile. Despite the advantages cloud computing offers to the designers of pervasive systems, some limitations of leveraging cloud computing must be addressed. We take the position that cloud-based pervasive systems must be secured holistically and discuss ways this might be accomplished. In this paper, we discuss a pervasive system utilizing cloud computing resources and the issues that must be addressed in such a system. In this system, the user's mobile device cannot always have network access to leverage resources from the cloud, so it must make intelligent decisions about what data should be stored locally and what processes should be run locally. As a result of these decisions, the user becomes vulnerable to attacks while interfacing with the pervasive system.
The future of scientific workflows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deelman, Ewa; Peterka, Tom; Altintas, Ilkay
Today’s computational, experimental, and observational sciences rely on computations that involve many related tasks. The success of a scientific mission often hinges on the computer automation of these workflows. In April 2015, the US Department of Energy (DOE) invited a diverse group of domain and computer scientists from national laboratories supported by the Office of Science, the National Nuclear Security Administration, from industry, and from academia to review the workflow requirements of DOE’s science and national security missions, to assess the current state of the art in science workflows, to understand the impact of emerging extreme-scale computing systems on those workflows, and to develop requirements for automated workflow management in future and existing environments. This article is a summary of the opinions of over 50 leading researchers attending this workshop. We highlight use cases, computing systems, and workflow needs, and conclude by summarizing the remaining challenges this community sees that inhibit large-scale scientific workflows from becoming a mainstream tool for extreme-scale science.
Biomedical image analysis and processing in clouds
NASA Astrophysics Data System (ADS)
Bednarz, Tomasz; Szul, Piotr; Arzhaeva, Yulia; Wang, Dadong; Burdett, Neil; Khassapov, Alex; Chen, Shiping; Vallotton, Pascal; Lagerstrom, Ryan; Gureyev, Tim; Taylor, John
2013-10-01
Cloud-based Image Analysis and Processing Toolbox project runs on the Australian National eResearch Collaboration Tools and Resources (NeCTAR) cloud infrastructure and allows access to biomedical image processing and analysis services to researchers via remotely accessible user interfaces. By providing user-friendly access to cloud computing resources and new workflow-based interfaces, our solution enables researchers to carry out various challenging image analysis and reconstruction tasks. Several case studies will be presented during the conference.
ERIC Educational Resources Information Center
Brown, Lori Jill
2012-01-01
For nurses or physicians practicing in any healthcare setting today, nothing seems as unsettling as the change associated with the introduction of new information technology. The need for information technology has created a new host of challenges that do not easily align with clinical practice. In this study, perceptions of usefulness, ease of…
Automated essay scoring and the future of educational assessment in medical education.
Gierl, Mark J; Latifi, Syed; Lai, Hollis; Boulais, André-Philippe; De Champlain, André
2014-10-01
Constructed-response tasks, which range from short-answer tests to essay questions, are included in assessments of medical knowledge because they allow educators to measure students' ability to think, reason, solve complex problems, communicate and collaborate through their use of writing. However, constructed-response tasks are also costly to administer and challenging to score because they rely on human raters. One alternative to the manual scoring process is to integrate computer technology with writing assessment. The process of scoring written responses using computer programs is known as 'automated essay scoring' (AES). An AES system uses a computer program that builds a scoring model by extracting linguistic features from a constructed-response prompt that has been pre-scored by human raters and then, using machine learning algorithms, maps the linguistic features to the human scores so that the computer can be used to classify (i.e. score or grade) the responses of a new group of students. The accuracy of the score classification can be evaluated using different measures of agreement. Automated essay scoring provides a method for scoring constructed-response tests that complements the current use of selected-response testing in medical education. The method can serve medical educators by providing the summative scores required for high-stakes testing. It can also serve medical students by providing them with detailed feedback as part of a formative assessment process. Automated essay scoring systems yield scores that consistently agree with those of human raters at a level as high as, if not higher than, the level of agreement among human raters themselves. The system offers medical educators many benefits for scoring constructed-response tasks, such as improving the consistency of scoring, reducing the time required for scoring and reporting, minimising the costs of scoring, and providing students with immediate feedback on constructed-response tasks. © 2014 John Wiley & Sons Ltd.
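A minimal sketch of the AES pipeline the abstract describes, built with scikit-learn on hypothetical pre-scored responses: linguistic features are extracted, a model maps them to the human scores, and agreement is checked with a weighted kappa. This is a generic illustration, not the scoring engine discussed above.

    # Toy automated essay scoring: features -> model -> agreement with human raters.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.metrics import cohen_kappa_score
    from sklearn.model_selection import train_test_split

    essays = [
        "The patient presents with chest pain radiating to the left arm ...",
        "Chest pain. Maybe heart attack.",
        "A thorough history and focused examination suggest acute coronary syndrome ...",
        "I am not sure what is wrong with the patient.",
    ] * 10                             # repeated so the toy model has something to fit
    human_scores = [4, 2, 5, 1] * 10   # hypothetical scores assigned by human raters

    train_text, test_text, y_train, y_test = train_test_split(
        essays, human_scores, test_size=0.25, random_state=0)

    vectorizer = TfidfVectorizer(ngram_range=(1, 2))
    X_train = vectorizer.fit_transform(train_text)
    X_test = vectorizer.transform(test_text)

    model = Ridge(alpha=1.0).fit(X_train, y_train)   # maps features to scores
    predicted = np.clip(np.rint(model.predict(X_test)), 1, 5).astype(int)

    # Quadratic-weighted kappa is a common agreement measure for essay scoring.
    print(cohen_kappa_score(y_test, predicted, weights="quadratic"))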
On the convergence of nanotechnology and Big Data analysis for computer-aided diagnosis.
Rodrigues, Jose F; Paulovich, Fernando V; de Oliveira, Maria Cf; de Oliveira, Osvaldo N
2016-04-01
An overview is provided of the challenges involved in building computer-aided diagnosis systems capable of precise medical diagnostics based on integration and interpretation of data from different sources and formats. The availability of massive amounts of data and computational methods associated with the Big Data paradigm has brought hope that such systems may soon be available in routine clinical practices, which is not the case today. We focus on visual and machine learning analysis of medical data acquired with varied nanotech-based techniques and on methods for Big Data infrastructure. Because diagnosis is essentially a classification task, we address the machine learning techniques with supervised and unsupervised classification, making a critical assessment of the progress already made in the medical field and the prospects for the near future. We also advocate that successful computer-aided diagnosis requires a merge of methods and concepts from nanotechnology and Big Data analysis.
Requirements for fault-tolerant factoring on an atom-optics quantum computer.
Devitt, Simon J; Stephens, Ashley M; Munro, William J; Nemoto, Kae
2013-01-01
Quantum information processing and its associated technologies have reached a pivotal stage in their development, with many experiments having established the basic building blocks. Moving forward, the challenge is to scale up to larger machines capable of performing computational tasks not possible today. This raises questions that need to be urgently addressed, such as what resources these machines will consume and how large will they be. Here we estimate the resources required to execute Shor's factoring algorithm on an atom-optics quantum computer architecture. We determine the runtime and size of the computer as a function of the problem size and physical error rate. Our results suggest that once the physical error rate is low enough to allow quantum error correction, optimization to reduce resources and increase performance will come mostly from integrating algorithms and circuits within the error correction environment, rather than from improving the physical hardware.
Template Interfaces for Agile Parallel Data-Intensive Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramakrishnan, Lavanya; Gunter, Daniel; Pastorello, Gilberto Z.
Tigres provides a programming library to compose and execute large-scale data-intensive scientific workflows from desktops to supercomputers. DOE User Facilities and large science collaborations are increasingly generating large enough data sets that it is no longer practical to download them to a desktop to operate on them. They are instead stored at centralized compute and storage resources such as high performance computing (HPC) centers. Analysis of this data requires an ability to run on these facilities, but with current technologies, scaling an analysis to an HPC center and to a large data set is difficult even for experts. Tigres is addressing the challenge of enabling collaborative analysis of DOE Science data through a new concept of reusable "templates" that enable scientists to easily compose, run and manage collaborative computational tasks. These templates define common computation patterns used in analyzing a data set.
Real-time depth processing for embedded platforms
NASA Astrophysics Data System (ADS)
Rahnama, Oscar; Makarov, Aleksej; Torr, Philip
2017-05-01
Obtaining depth information of a scene is an important requirement in many computer-vision and robotics applications. For embedded platforms, passive stereo systems have many advantages over their active counterparts (e.g. LiDAR, infrared). They are power efficient, cheap, robust to lighting conditions and inherently synchronized to the RGB images of the scene. However, stereo depth estimation is a computationally expensive task that operates over large amounts of data. For embedded applications, which are often constrained by power consumption, obtaining accurate results in real-time is a challenge. We demonstrate a computationally and memory efficient implementation of a stereo block-matching algorithm in FPGA. The computational core achieves a throughput of 577 fps at standard VGA resolution whilst consuming less than 3 Watts of power. The data is processed using an in-stream approach that minimizes memory-access bottlenecks and best matches the raster scan readout of modern digital image sensors.
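The FPGA datapath itself cannot be shown here, but the block-matching idea it accelerates can be illustrated with a naive CPU sketch of sum-of-absolute-differences (SAD) matching; the window size, disparity range, and synthetic images below are arbitrary choices, not the paper's configuration.

    # Naive SAD block matching on rectified grayscale images (reference sketch only).
    import numpy as np

    def block_match(left, right, block=7, max_disp=16):
        """Return a disparity map for rectified grayscale images (H x W)."""
        half = block // 2
        h, w = left.shape
        disparity = np.zeros((h, w), dtype=np.float32)
        for y in range(half, h - half):
            for x in range(half, w - half):
                ref = left[y - half:y + half + 1, x - half:x + half + 1]
                best_cost, best_d = np.inf, 0
                for d in range(0, min(max_disp, x - half) + 1):
                    cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                    cost = np.abs(ref - cand).sum()      # SAD cost for this disparity
                    if cost < best_cost:
                        best_cost, best_d = cost, d
                disparity[y, x] = best_d
        return disparity

    left = np.random.rand(48, 64).astype(np.float32)
    right = np.roll(left, -3, axis=1)                    # synthetic 3-pixel shift
    print(block_match(left, right)[24, 32])              # about 3 for interior pixels

The in-stream FPGA implementation described in the abstract restructures this computation to follow the sensor's raster readout instead of the nested loops above, which is what makes real-time throughput at a small power budget feasible.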
2018-01-01
Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of the data involved (e.g., users’ queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance and are suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with “vanilla” LSH, even when using the same amount of space. PMID:29346410
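The abstract does not name the specific LSH variants evaluated; as one common example, the sketch below implements single-table random-hyperplane LSH on a single machine, leaving out the Hadoop distribution and the paper's optimizations.

    # Random-hyperplane LSH: items whose signatures collide become candidate neighbours.
    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(0)
    dim, n_items, n_bits = 64, 10_000, 16

    items = rng.normal(size=(n_items, dim))
    planes = rng.normal(size=(n_bits, dim))          # one random hyperplane per signature bit

    def signature(v):
        bits = (planes @ v) > 0                      # which side of each hyperplane
        return bits.tobytes()                        # hashable bucket key

    buckets = defaultdict(list)
    for idx, v in enumerate(items):
        buckets[signature(v)].append(idx)

    query = items[42] + 0.01 * rng.normal(size=dim)  # a slightly perturbed known item
    candidates = buckets[signature(query)]           # only items in the same bucket
    best = max(candidates,
               key=lambda i: items[i] @ query
               / (np.linalg.norm(items[i]) * np.linalg.norm(query)),
               default=None)
    print(best)  # usually 42: the perturbed item hashes to the same bucket

In practice several hash tables are combined to trade memory for recall, and in a Hadoop setting the bucket key can serve as the shuffle key that routes candidate items to the same reducer.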
Evans, Nina M; Orpwood, Roger D; Harris, Nigel D
2015-01-01
Objectives: To investigate the relative effectiveness of different prompts for people with dementia during multistep tasks in the home, to inform prompting technology design. Methods: Nine pairs of participants (one with dementia and a partner or relative) participated at home. The participants with mild to moderate dementia (5M/4F, aged 73–86 years) functioned at the Planned or Exploratory levels of the Pool Activity Level instrument. A touchscreen computer displayed different prompts during two set tasks: “card-and-envelope” and “CD player.” The trials were scored to establish the relative effectiveness of the prompts. Individual tasks were also explored. Results: Text and audio prompts were each more effective than video or picture prompts for a card-and-envelope task, but this was not seen in a CD player task. The differences may be related to the type of actions within the tasks; the card-and-envelope actions were easier to convey verbally; the CD player actions lent themselves to visual prompts. Conclusions: Designers of technology-based prompts for people with dementia should consider that the effectiveness of different prompts is likely to be task dependent. Familiar, unambiguous language can increase the success of tailored prompts. There are significant practical challenges associated with choosing and deconstructing everyday tasks at home. PMID:26428634
Top-down modulation of ventral occipito-temporal responses during visual word recognition.
Twomey, Tae; Kawabata Duncan, Keith J; Price, Cathy J; Devlin, Joseph T
2011-04-01
Although interactivity is considered a fundamental principle of cognitive (and computational) models of reading, it has received far less attention in neural models of reading that instead focus on serial stages of feed-forward processing from visual input to orthographic processing to accessing the corresponding phonological and semantic information. In particular, the left ventral occipito-temporal (vOT) cortex is proposed to be the first stage where visual word recognition occurs prior to accessing nonvisual information such as semantics and phonology. We used functional magnetic resonance imaging (fMRI) to investigate whether there is evidence that activation in vOT is influenced top-down by the interaction of visual and nonvisual properties of the stimuli during visual word recognition tasks. Participants performed two different types of lexical decision tasks that focused on either visual or nonvisual properties of the word or word-like stimuli. The design allowed us to investigate how vOT activation during visual word recognition was influenced by a task change to the same stimuli and by a stimulus change during the same task. We found both stimulus- and task-driven modulation of vOT activation that can only be explained by top-down processing of nonvisual aspects of the task and stimuli. Our results are consistent with the hypothesis that vOT acts as an interface linking visual form with nonvisual processing in both bottom up and top down directions. Such interactive processing at the neural level is in agreement with cognitive and computational models of reading but challenges some of the assumptions made by current neuro-anatomical models of reading. Copyright © 2011 Elsevier Inc. All rights reserved.
Task-Driven Dictionary Learning Based on Mutual Information for Medical Image Classification.
Diamant, Idit; Klang, Eyal; Amitai, Michal; Konen, Eli; Goldberger, Jacob; Greenspan, Hayit
2017-06-01
We present a novel variant of the bag-of-visual-words (BoVW) method for automated medical image classification. Our approach improves the BoVW model by learning a task-driven dictionary of the most relevant visual words per task using a mutual information-based criterion. Additionally, we generate relevance maps to visualize and localize the decision of the automatic classification algorithm. These maps demonstrate how the algorithm works and show the spatial layout of the most relevant words. We applied our algorithm to three different tasks: chest x-ray pathology identification (of four pathologies: cardiomegaly, enlarged mediastinum, right consolidation, and left consolidation), liver lesion classification into four categories in computed tomography (CT) images, and classification of benign/malignant clusters of microcalcifications (MCs) in breast mammograms. Validation was conducted on three datasets: 443 chest x-rays, 118 portal phase CT images of liver lesions, and 260 mammography MCs. The proposed method improves the classical BoVW method for all tested applications. For chest x-ray, an area under the curve of 0.876 was obtained for enlarged mediastinum identification, compared to 0.855 using classical BoVW (with p-value 0.01). For MC classification, a significant improvement of 4% was achieved using our new approach (with p-value = 0.03). For liver lesion classification, improvements of 6% in sensitivity and 2% in specificity were obtained (with p-value 0.001). We demonstrated that classification based on an informative, selected set of words results in significant improvement. Our new BoVW approach shows promising results in clinically important domains. Additionally, it can discover relevant parts of images for the task at hand without explicit annotations for training data. This can provide computer-aided support for medical experts in challenging image analysis tasks.
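A rough sketch of the dictionary-selection idea on synthetic data: visual-word histograms are scored by mutual information with the class labels, and only the top-k words are kept before classification. The imaging pipeline, feature extraction, and clinical datasets of the paper are not reproduced.

    # Task-driven sub-dictionary: keep the k visual words most informative about the labels.
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n_images, vocab_size, k = 200, 300, 50

    # Rows are per-image histograms of visual-word counts (the BoVW representation).
    histograms = rng.poisson(lam=2.0, size=(n_images, vocab_size)).astype(float)
    labels = rng.integers(0, 2, size=n_images)
    histograms[labels == 1, :10] += 3.0        # make the first 10 words class-informative

    mi = mutual_info_classif(histograms, labels, random_state=0)
    top_words = np.argsort(mi)[::-1][:k]       # the task-driven sub-dictionary
    reduced = histograms[:, top_words]

    print(cross_val_score(SVC(kernel="linear"), reduced, labels, cv=5).mean())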
Experience in using commercial clouds in CMS
NASA Astrophysics Data System (ADS)
Bauerdick, L.; Bockelman, B.; Dykstra, D.; Fuess, S.; Garzoglio, G.; Girone, M.; Gutsche, O.; Holzman, B.; Hufnagel, D.; Kim, H.; Kennedy, R.; Mason, D.; Spentzouris, P.; Timm, S.; Tiradani, A.; Vaandering, E.; CMS Collaboration
2017-10-01
Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.
Protecting genomic data analytics in the cloud: state of the art and opportunities.
Tang, Haixu; Jiang, Xiaoqian; Wang, Xiaofeng; Wang, Shuang; Sofia, Heidi; Fox, Dov; Lauter, Kristin; Malin, Bradley; Telenti, Amalio; Xiong, Li; Ohno-Machado, Lucila
2016-10-13
The outsourcing of genomic data into public cloud computing settings raises concerns over privacy and security. Significant advancements in secure computation methods have emerged over the past several years, but such techniques need to be rigorously evaluated for their ability to support the analysis of human genomic data in an efficient and cost-effective manner. With respect to public cloud environments, there are concerns about the inadvertent exposure of human genomic data to unauthorized users. In analyses involving multiple institutions, there is additional concern about data being used beyond the agreed research scope and being processed in untrusted computational environments, which may not satisfy institutional policies. To systematically investigate these issues, the NIH-funded National Center for Biomedical Computing iDASH (integrating Data for Analysis, 'anonymization' and SHaring) hosted the second Critical Assessment of Data Privacy and Protection competition to assess the capacity of cryptographic technologies for protecting computation over human genomes in the cloud and promoting cross-institutional collaboration. Data scientists were challenged to design and engineer practical algorithms for secure outsourcing of genome computation tasks in working software, whereby analyses are performed only on encrypted data. They were also challenged to develop approaches to enable secure collaboration on data from genomic studies generated by multiple organizations (e.g., medical centers) to jointly compute aggregate statistics without sharing individual-level records. The results of the competition indicated that secure computation techniques can enable comparative analysis of human genomes, but greater efficiency (in terms of compute time and memory utilization) is needed before they are sufficiently practical for real-world environments.
Scalable cluster administration - Chiba City I approach and lessons learned.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Navarro, J. P.; Evard, R.; Nurmi, D.
2002-07-01
Systems administrators of large clusters often need to perform the same administrative activity hundreds or thousands of times. Often such activities are time-consuming, especially the tasks of installing and maintaining software. By combining network services such as DHCP, TFTP, FTP, HTTP, and NFS with remote hardware control, cluster administrators can automate all administrative tasks. Scalable cluster administration addresses the following challenge: What systems design techniques can cluster builders use to automate cluster administration on very large clusters? We describe the approach used in the Mathematics and Computer Science Division of Argonne National Laboratory on Chiba City I, a 314-node Linux cluster; and we analyze the scalability, flexibility, and reliability benefits and limitations from that approach.
NASA Astrophysics Data System (ADS)
Chaurasia, Shilpi; Pieraccini, Stefano; De Gonda, Riccardo; Conti, Simone; Sironi, Maurizio
2013-11-01
Targeting protein-protein interactions is a challenging task in the drug discovery process. Despite the challenges, several studies have provided evidence for the development of small molecules modulating protein-protein interactions. Here we consider a typical case of protein-protein interaction stabilization: the complex between FKBP12 and FRB with rapamycin. We have analyzed the stability of the complex and characterized its interactions at the atomic level by performing free energy calculations and computational alanine scanning. It is shown that rapamycin stabilizes the complex by acting as a bridge between the two proteins, and that the complex is stable only in the presence of rapamycin.
Children's Task Engagement during Challenging Puzzle Tasks
ERIC Educational Resources Information Center
Wang, Feihong; Algina, James; Snyder, Patricia; Cox, Martha
2017-01-01
We examined children's task engagement during a challenging puzzle task in the presence of their primary caregivers by using a representative sample of rural children from six high-poverty counties across two states. Weighted longitudinal confirmatory factor analysis and structural equation modeling were used to identify a task engagement factor…
Computer usage and task-switching during resident's working day: Disruptive or not?
Méan, Marie; Garnier, Antoine; Wenger, Nathalie; Castioni, Julien; Waeber, Gérard; Marques-Vidal, Pedro
2017-01-01
Recent implementation of electronic health records (EHR) has dramatically changed medical ward organization. While residents in general internal medicine use EHR systems half of their working time, whether computer usage impacts residents' workflow remains uncertain. We aimed to observe the frequency of task-switches occurring during residents' work and to assess whether computer usage was associated with task-switching. In a large Swiss academic university hospital, we conducted, between May 26 and July 24, 2015, a time-motion study to assess how residents in general internal medicine organize their working day. We observed 49 day and 17 evening shifts of 36 residents, amounting to 697 working hours. During day shifts, residents spent 5.4 hours using a computer (mean total working time: 11.6 hours per day). On average, residents switched 15 times per hour from one task to another. Task-switching peaked between 8:00-9:00 and 16:00-17:00. Task-switching was not associated with residents' characteristics, and no association was found between task-switching and extra hours (Spearman r = 0.220, p = 0.137 for day and r = 0.483, p = 0.058 for evening shifts). Computer usage occurred more frequently at the beginning or end of day shifts and was associated with decreased overall task-switching. Task-switching occurs very frequently during a resident's working day. Despite the fact that residents used a computer for half of their working time, computer usage was associated with decreased task-switching. Whether frequent task-switches and computer usage impact the quality of patient care and residents' work must be evaluated in further studies.
NASA Technical Reports Server (NTRS)
Wakim, Nagi T.; Srivastava, Sadanand; Bousaidi, Mehdi; Goh, Gin-Hua
1995-01-01
Agent-based technologies answer to several challenges posed by additional information processing requirements in today's computing environments. In particular, (1) users desire interaction with computing devices in a mode which is similar to that used between people, (2) the efficiency and successful completion of information processing tasks often require a high-level of expertise in complex and multiple domains, (3) information processing tasks often require handling of large volumes of data and, therefore, continuous and endless processing activities. The concept of an agent is an attempt to address these new challenges by introducing information processing environments in which (1) users can communicate with a system in a natural way, (2) an agent is a specialist and a self-learner and, therefore, it qualifies to be trusted to perform tasks independent of the human user, and (3) an agent is an entity that is continuously active performing tasks that are either delegated to it or self-imposed. The work described in this paper focuses on the development of an interface agent for users of a complex information processing environment (IPE). This activity is part of an on-going effort to build a model for developing agent-based information systems. Such systems will be highly applicable to environments which require a high degree of automation, such as, flight control operations and/or processing of large volumes of data in complex domains, such as the EOSDIS environment and other multidisciplinary, scientific data systems. The concept of an agent as an information processing entity is fully described with emphasis on characteristics of special interest to the User-System Interface Agent (USIA). Issues such as agent 'existence' and 'qualification' are discussed in this paper. Based on a definition of an agent and its main characteristics, we propose an architecture for the development of interface agents for users of an IPE that is agent-oriented and whose resources are likely to be distributed and heterogeneous in nature. The architecture of USIA is outlined in two main components: (1) the user interface which is concerned with issues as user dialog and interaction, user modeling, and adaptation to user profile, and (2) the system interface part which deals with identification of IPE capabilities, task understanding and feasibility assessment, and task delegation and coordination of assistant agents.
A Scheduling Algorithm for Cloud Computing System Based on the Driver of Dynamic Essential Path.
Xie, Zhiqiang; Shao, Xia; Xin, Yu
2016-01-01
To solve the problem of task scheduling in the cloud computing system, this paper proposes a scheduling algorithm for cloud computing based on the driver of dynamic essential path (DDEP). This algorithm applies a predecessor-task layer priority strategy to solve the problem of constraint relations among task nodes. The strategy assigns different priority values to every task node based on the scheduling order of the task nodes as affected by the constraint relations among them, and the task node list is generated according to these priority values. To address the scheduling-order problem in which task nodes have the same priority value, the dynamic essential long path strategy is proposed. This strategy computes the dynamic essential path of the pre-scheduling task nodes based on the actual computation cost and communication cost of each task node in the scheduling process. The task node that has the longest dynamic essential path is scheduled first, as the completion time of the task graph is indirectly influenced by the finishing time of the task nodes in the longest dynamic essential path. Finally, we demonstrate the proposed algorithm via simulation experiments using Matlab tools. The experimental results indicate that the proposed algorithm can effectively reduce the task makespan in most cases and meet a high-quality performance objective.
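The sketch below is an illustrative reading of the two ideas in this abstract, predecessor-layer priorities plus a longest-remaining-path tie-breaker, applied to a toy task graph; it is not the paper's DDEP implementation, the costs are invented, and communication costs are ignored for brevity.

    # Order DAG tasks by predecessor layer, breaking ties by longest remaining path.
    from functools import lru_cache

    # task -> (compute cost, successor tasks); a hypothetical four-node workflow
    tasks = {
        "A": (2, ["B", "C"]),
        "B": (3, ["D"]),
        "C": (1, ["D"]),
        "D": (4, []),
    }

    preds = {t: [] for t in tasks}
    for t, (_, succs) in tasks.items():
        for s in succs:
            preds[s].append(t)

    def layer(t):                  # predecessor-task layer (depth in the DAG)
        return 0 if not preds[t] else 1 + max(layer(p) for p in preds[t])

    @lru_cache(maxsize=None)
    def essential_path(t):         # longest cost-weighted path still ahead of t
        cost, succs = tasks[t]
        return cost + (max(essential_path(s) for s in succs) if succs else 0)

    # Lower layer first; within a layer, the longer remaining path goes first.
    order = sorted(tasks, key=lambda t: (layer(t), -essential_path(t)))
    print(order)                   # ['A', 'B', 'C', 'D'] for this toy graph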
Teaching Mathematics in Primary Schools with Challenging Tasks: The Big (Not So) Friendly Giant
ERIC Educational Resources Information Center
Russo, James
2016-01-01
The use of enabling and extending prompts allows tasks to be both accessible and challenging within a classroom. This article provides an example of how to use enabling and extending prompts effectively when employing a challenging task in Year 2.
Student Use of Physics to Make Sense of Incomplete but Functional VPython Programs in a Lab Setting
NASA Astrophysics Data System (ADS)
Weatherford, Shawn A.
2011-12-01
Computational activities in Matter & Interactions, an introductory calculus-based physics course, have the instructional goal of providing students with the experience of applying the same set of a small number of fundamental principles to model a wide range of physical systems. However, there are significant instructional challenges for students to build computer programs under limited time constraints, especially for students who are unfamiliar with programming languages and concepts. Prior attempts at designing effective computational activities were successful at having students ultimately build working VPython programs under the tutelage of experienced teaching assistants in a studio lab setting. A pilot study revealed that students who completed these computational activities had significant difficulty repeating the exact same tasks and, further, had difficulty predicting the animation that would be produced by the example program after interpreting the program code. This study explores the interpretation and prediction tasks as part of an instructional sequence where students are asked to read and comprehend a functional, but incomplete, program. Rather than asking students to begin their computational tasks with modifying program code, we explicitly ask students to interpret an existing program that is missing key lines of code. The missing lines of code correspond to the algebraic form of fundamental physics principles or the calculation of forces which would exist between analogous physical objects in the natural world. Students are then asked to draw a prediction of what they would see in the simulation produced by the VPython program and ultimately run the program to evaluate their prediction. This study specifically looks at how the participants use physics while interpreting the program code and creating a whiteboard prediction. This study also examines how students evaluate their understanding of the program and modification goals at the beginning of the modification task. While working in groups over the course of a semester, study participants were recorded while they completed three activities using these incomplete programs. Analysis of the video data showed that study participants had little difficulty interpreting physics quantities, generating a prediction, or determining how to modify the incomplete program. Participants did not base their prediction solely on the information in the incomplete program. When participants tried to predict the motion of the objects in the simulation, many turned to their knowledge of how the system would evolve if it represented an analogous real-world physical system. For example, participants attributed the real-world behavior of springs to helix objects even though the program did not include calculations for the spring to exert a force when stretched. Participants rarely interpreted lines of code in the computational loop during the first computational activity, but this changed during later computational activities, with most participants using their physics knowledge to interpret the computational loop. Computational activities in the Matter & Interactions curriculum were revised in light of these findings to include an instructional sequence of tasks to build a comprehension of the example program. The modified activities also ask students to create an additional whiteboard prediction for the time-evolution of the real-world phenomenon which the example program will eventually model.
This thesis shows how comprehension tasks identified by Palincsar and Brown (1984) as effective in improving reading comprehension are also effective in helping students apply their physics knowledge to interpret a computer program that attempts to model a real-world phenomenon and to identify errors in their understanding of the use, or omission, of fundamental physics principles in a computational model.
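For readers outside the course context, a minimal VPython program of the kind discussed above might look like the sketch below; the two commented physics lines inside the loop (the spring force and the momentum-principle update) are exactly the sort of lines the incomplete programs omit. All parameters are illustrative, not taken from the study, and running it requires the vpython package and a display.

    # Ball hanging from a spring, updated with the momentum principle each timestep.
    from vpython import sphere, helix, vector, rate, color

    ceiling = vector(0, 0.3, 0)
    L0, ks, m, dt = 0.2, 10.0, 0.05, 0.001        # rest length, stiffness, mass, timestep

    ball = sphere(pos=vector(0, 0, 0), radius=0.02, color=color.red)
    spring = helix(pos=ceiling, axis=ball.pos - ceiling, radius=0.01)
    p = vector(0, 0, 0)                           # initial momentum

    for _ in range(5000):
        rate(1000)
        stretch = ball.pos - ceiling
        F_spring = -ks * (stretch.mag - L0) * stretch.hat   # spring force on the ball
        F_net = F_spring + vector(0, -9.8 * m, 0)           # add gravity
        p = p + F_net * dt                                  # momentum principle
        ball.pos = ball.pos + (p / m) * dt                  # position update
        spring.axis = ball.pos - ceiling                    # keep the helix attached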
Impact of topographic mask models on scanner matching solutions
NASA Astrophysics Data System (ADS)
Tyminski, Jacek K.; Pomplun, Jan; Renwick, Stephen P.
2014-03-01
Of keen interest to the IC industry are advanced computational lithography applications such as Optical Proximity Correction of IC layouts (OPC), scanner matching by optical proximity effect matching (OPEM), and Source Optimization (SO) and Source-Mask Optimization (SMO) used as advanced reticle enhancement techniques. The success of these tasks is strongly dependent on the integrity of the lithographic simulators used in computational lithography (CL) optimizers. Lithographic mask models used by these simulators are key drivers impacting the accuracy of the image predictions and, as a consequence, determine the validity of these CL solutions. Much of the CL work involves Kirchhoff mask models, a.k.a. the thin-mask approximation, simplifying the treatment of the mask near-field images. On the other hand, imaging models for hyper-NA scanners require that the interactions of the illumination fields with the mask topography be rigorously accounted for by numerically solving Maxwell's Equations. The simulators used to predict the image formation in hyper-NA scanners must rigorously treat the mask topography and its interaction with the scanner illuminators. Such imaging models come at a high computational cost and pose challenging accuracy vs. compute time tradeoffs. An additional complication comes from the fact that the performance metrics used in computational lithography tasks show a highly non-linear response to the optimization parameters. Finally, the number of patterns used for tasks such as OPC, OPEM, SO, or SMO ranges from tens to hundreds. These requirements determine the complexity and the workload of the lithography optimization tasks. The tools to build rigorous imaging optimizers based on the first principles governing imaging in scanners are available, but the quantifiable benefits they might provide are not very well understood. To quantify the performance of OPE matching solutions, we have compared the results of various imaging optimization trials obtained with Kirchhoff mask models to those obtained with rigorous models involving solutions of Maxwell's Equations. In both sets of trials, we used sets of large numbers of patterns, with specifications representative of CL tasks commonly encountered in hyper-NA imaging. In this report we present OPEM solutions based on various mask models and discuss the models' impact on hyper-NA scanner matching accuracy. We draw conclusions on the accuracy of results obtained with thin-mask models vs. the topographic OPEM solutions. We present various examples representative of scanner image matching for patterns representative of the current generation of IC designs.
Lasko, Thomas A; Denny, Joshua C; Levy, Mia A
2013-01-01
Inferring precise phenotypic patterns from population-scale clinical data is a core computational task in the development of precision, personalized medicine. The traditional approach uses supervised learning, in which an expert designates which patterns to look for (by specifying the learning task and the class labels), and where to look for them (by specifying the input variables). While appropriate for individual tasks, this approach scales poorly and misses the patterns that we don't think to look for. Unsupervised feature learning overcomes these limitations by identifying patterns (or features) that collectively form a compact and expressive representation of the source data, with no need for expert input or labeled examples. Its rising popularity is driven by new deep learning methods, which have produced high-profile successes on difficult standardized problems of object recognition in images. Here we introduce its use for phenotype discovery in clinical data. This use is challenging because the largest source of clinical data - Electronic Medical Records - typically contains noisy, sparse, and irregularly timed observations, rendering them poor substrates for deep learning methods. Our approach couples dirty clinical data to deep learning architecture via longitudinal probability densities inferred using Gaussian process regression. From episodic, longitudinal sequences of serum uric acid measurements in 4368 individuals we produced continuous phenotypic features that suggest multiple population subtypes, and that accurately distinguished (0.97 AUC) the uric-acid signatures of gout vs. acute leukemia despite not being optimized for the task. The unsupervised features were as accurate as gold-standard features engineered by an expert with complete knowledge of the domain, the classification task, and the class labels. Our findings demonstrate the potential for achieving computational phenotype discovery at population scale. We expect such data-driven phenotypes to expose unknown disease variants and subtypes and to provide rich targets for genetic association studies.
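The sketch below illustrates only the data-preparation step described above, namely smoothing an irregularly timed longitudinal series with Gaussian process regression so it can be resampled on a regular grid; the measurements are synthetic and the subsequent deep-learning stage is omitted.

    # Smooth an irregularly sampled lab series with a GP, then resample on a grid.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Irregularly timed serum measurements for one (synthetic) individual.
    t = np.array([0.0, 0.4, 0.5, 2.1, 3.0, 3.2, 5.7, 8.0])[:, None]   # years
    y = np.array([5.1, 5.4, 5.3, 6.8, 7.2, 7.0, 6.1, 5.8])            # mg/dL

    kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)

    t_grid = np.linspace(0, 8, 33)[:, None]          # regular grid for downstream models
    mean, std = gp.predict(t_grid, return_std=True)  # posterior mean and uncertainty
    print(mean[:5].round(2), std[:5].round(2))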
Human Factors in Streaming Data Analysis: Challenges and Opportunities for Information Visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dasgupta, Aritra; Arendt, Dustin L.; Franklin, Lyndsey
State-of-the-art visual analytics models and frameworks mostly assume a static snapshot of the data, while in many cases it is a stream with constant updates and changes. Exploration of streaming data poses unique challenges, as machine-level computations and abstractions need to be synchronized with the visual representation of the data and the temporally evolving human insights. In the visual analytics literature, we lack a thorough characterization of streaming data and analysis of the challenges associated with task abstraction, visualization design, and adaptation of the role of the human-in-the-loop for exploration of data streams. We aim to fill this gap by conducting a survey of the state-of-the-art in visual analytics of streaming data, systematically describing the contributions and shortcomings of current techniques and analyzing the research gaps that need to be addressed in the future. Our contributions are: i) problem characterization for identifying challenges that are unique to streaming data analysis tasks, ii) a survey and analysis of the state-of-the-art in streaming data visualization research with a focus on the visualization design space for dynamic data and the role of the human-in-the-loop, and iii) reflections on the design trade-offs for streaming visual analytics techniques and their practical applicability in real-world application scenarios.
Mohammed, Emad A; Far, Behrouz H; Naugler, Christopher
2014-01-01
The emergence of massive datasets in a clinical setting presents both challenges and opportunities in data storage and analysis. This so-called "big data" challenges traditional analytic tools and will increasingly require novel solutions adapted from other fields. Advances in information and communication technology present the most viable solutions to big data analysis in terms of efficiency and scalability. It is vital that those big data solutions are multithreaded and that data access approaches be precisely tailored to large volumes of semi-structured/unstructured data. The MapReduce programming framework uses two tasks common in functional programming: Map and Reduce. MapReduce is a new parallel processing framework and Hadoop is its open-source implementation on a single computing node or on clusters. Compared with existing parallel processing paradigms (e.g. grid computing and graphical processing unit (GPU)), MapReduce and Hadoop have two advantages: 1) fault-tolerant storage resulting in reliable data processing by replicating the computing tasks, and cloning the data chunks on different computing nodes across the computing cluster; 2) high-throughput data processing via a batch processing framework and the Hadoop distributed file system (HDFS). Data are stored in the HDFS and made available to the slave nodes for computation. In this paper, we review the existing applications of the MapReduce programming framework and its implementation platform Hadoop in clinical big data and related medical health informatics fields. The usage of MapReduce and Hadoop on a distributed system represents a significant advance in clinical big data processing and utilization, and opens up new opportunities in the emerging era of big data analytics. The objective of this paper is to summarize the state-of-the-art efforts in clinical big data analytics and highlight what might be needed to enhance the outcomes of clinical big data analytics tools. This paper is concluded by summarizing the potential usage of the MapReduce programming framework and Hadoop platform to process huge volumes of clinical data in medical health informatics related fields.
Model-based analyses: Promises, pitfalls, and example applications to the study of cognitive control
Mars, Rogier B.; Shea, Nicholas J.; Kolling, Nils; Rushworth, Matthew F. S.
2011-01-01
We discuss a recent approach to investigating cognitive control, which has the potential to deal with some of the challenges inherent in this endeavour. In a model-based approach, the researcher defines a formal, computational model that performs the task at hand and whose performance matches that of a research participant. The internal variables in such a model might then be taken as proxies for latent variables computed in the brain. We discuss the potential advantages of such an approach for the study of the neural underpinnings of cognitive control and its pitfalls, and we make explicit the assumptions underlying the interpretation of data obtained using this approach. PMID:20437297
How Does Lesson Structure Shape Teacher Perceptions of Teaching with Challenging Tasks?
ERIC Educational Resources Information Center
Russo, James; Hopkins, Sarah
2017-01-01
Despite reforms in mathematics education, many teachers remain reluctant to incorporate challenging (i.e., more cognitively demanding) tasks into their mathematics instruction. The current study examines how lesson structure shapes teacher perceptions of teaching with challenging tasks. Participants included three Year 1/2 classroom teachers who…
Task Selection, Task Switching and Multitasking during Computer-Based Independent Study
ERIC Educational Resources Information Center
Judd, Terry
2015-01-01
Detailed logs of students' computer use, during independent study sessions, were captured in an open-access computer laboratory. Each log consisted of a chronological sequence of tasks representing either the application or the Internet domain displayed in the workstation's active window. Each task was classified using a three-tier schema…
NASA Technical Reports Server (NTRS)
Smith, M. E.; Gevins, A.; Brown, H.; Karnik, A.; Du, R.
2001-01-01
Electroencephalographic (EEG) recordings were made while 16 participants performed versions of a personal-computer-based flight simulation task of low, moderate, or high difficulty. As task difficulty increased, frontal midline theta EEG activity increased and alpha band activity decreased. A participant-specific function that combined multiple EEG features to create a single load index was derived from a sample of each participant's data and then applied to new test data from that participant. Index values were computed for every 4 s of task data. Across participants, mean task load index values increased systematically with increasing task difficulty and differed significantly between the different task versions. Actual or potential applications of this research include the use of multivariate EEG-based methods to monitor task loading during naturalistic computer-based work.
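The abstract describes combining multiple EEG features (frontal midline theta rising and alpha falling with load) into a single index computed over 4-second epochs. The sketch below shows one plausible form such a participant-specific function could take, a weighted combination of log band powers; the sampling rate, weights, and simulated epoch are assumptions, not the study's actual derivation.

```python
import numpy as np
from numpy.fft import rfft, rfftfreq

FS = 256  # sampling rate in Hz (assumed)

def band_power(window, fs, lo, hi):
    """Mean spectral power of one EEG channel window in a frequency band."""
    spec = np.abs(rfft(window)) ** 2
    freqs = rfftfreq(len(window), 1.0 / fs)
    mask = (freqs >= lo) & (freqs < hi)
    return spec[mask].mean()

def load_index(window, fs=FS, w_theta=1.0, w_alpha=-1.0):
    """Illustrative participant-specific index: theta power weighted up, alpha
    power weighted down. The weights would be fitted per participant."""
    theta = band_power(window, fs, 4.0, 8.0)
    alpha = band_power(window, fs, 8.0, 12.0)
    return w_theta * np.log(theta) + w_alpha * np.log(alpha)

# One simulated 4-second epoch of a frontal-midline channel.
t = np.arange(0, 4.0, 1.0 / FS)
epoch = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))
print(round(load_index(epoch), 3))
```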
Community challenges in biomedical text mining over 10 years: success, failure and the future
Huang, Chung-Chi
2016-01-01
One effective way to improve the state of the art is through competitions. Following the success of the Critical Assessment of protein Structure Prediction (CASP) in bioinformatics research, a number of challenge evaluations have been organized by the text-mining research community to assess and advance natural language processing (NLP) research for biomedicine. In this article, we review the different community challenge evaluations held from 2002 to 2014 and their respective tasks. Furthermore, we examine these challenge tasks through their targeted problems in NLP research and biomedical applications, respectively. Next, we describe the general workflow of organizing a Biomedical NLP (BioNLP) challenge and involved stakeholders (task organizers, task data producers, task participants and end users). Finally, we summarize the impact and contributions by taking into account different BioNLP challenges as a whole, followed by a discussion of their limitations and difficulties. We conclude with future trends in BioNLP challenge evaluations. PMID:25935162
Task allocation in a distributed computing system
NASA Technical Reports Server (NTRS)
Seward, Walter D.
1987-01-01
A conceptual framework is examined for task allocation in distributed systems. Application and computing system parameters critical to task allocation decision processes are discussed. Task allocation techniques are addressed which focus on achieving a balance in the load distribution among the system's processors. Equalization of computing load among the processing elements is the goal. Examples of system performance are presented for specific applications. Both static and dynamic allocation of tasks are considered and system performance is evaluated using different task allocation methodologies.
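As a minimal illustration of the load-balancing goal described above, the following sketch assigns each task to the currently least-loaded processor (a greedy longest-processing-time heuristic); the task costs and two-processor setup are invented, and this is not one of the specific methodologies evaluated in the report.

```python
import heapq

def allocate(tasks, n_processors):
    """Greedy static allocation: give each task to the least-loaded processor.
    tasks: dict of task name -> estimated computing cost."""
    # Min-heap of (current load, processor id, assigned task list).
    heap = [(0.0, p, []) for p in range(n_processors)]
    heapq.heapify(heap)
    # Sorting by decreasing cost (LPT rule) usually tightens the balance.
    for name, cost in sorted(tasks.items(), key=lambda kv: -kv[1]):
        load, proc, assigned = heapq.heappop(heap)
        assigned.append(name)
        heapq.heappush(heap, (load + cost, proc, assigned))
    return sorted(heap, key=lambda entry: entry[1])

tasks = {"fft": 7, "io": 2, "filter": 4, "log": 1, "solve": 6}
for load, proc, assigned in allocate(tasks, 2):
    print(f"processor {proc}: load={load} tasks={assigned}")
```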
Continuous robust sound event classification using time-frequency features and deep learning
Song, Yan; Xiao, Wei; Phan, Huy
2017-01-01
The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human-computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high-performing isolated sound classifiers to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition, it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification. PMID:28892478
Continuous robust sound event classification using time-frequency features and deep learning.
McLoughlin, Ian; Zhang, Haomin; Xie, Zhipeng; Song, Yan; Xiao, Wei; Phan, Huy
2017-01-01
The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human-computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high-performing isolated sound classifiers to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition, it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification.
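A rough sketch of the kind of energy-based event detection front end mentioned above: frame the continuous signal, compute short-time energy, and flag frames above an adaptive threshold as candidate events to pass to a classifier. The frame sizes, threshold rule, and synthetic signal are assumptions, not the authors' implementation.

```python
import numpy as np

def detect_events(signal, fs, frame_ms=25.0, hop_ms=10.0, k=2.0):
    """Return (start_s, end_s) spans whose short-time energy exceeds an
    adaptive threshold: median energy + k * median absolute deviation."""
    frame = int(fs * frame_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    energies, starts = [], []
    for i in range(0, len(signal) - frame + 1, hop):
        chunk = signal[i:i + frame]
        energies.append(float(np.sum(chunk ** 2)))
        starts.append(i)
    energies = np.array(energies)
    thresh = np.median(energies) + k * np.median(np.abs(energies - np.median(energies)))
    active = energies > thresh
    events, start = [], None
    for idx, flag in enumerate(active):
        if flag and start is None:
            start = starts[idx]
        elif not flag and start is not None:
            events.append((start / fs, (starts[idx] + frame) / fs))
            start = None
    if start is not None:
        events.append((start / fs, len(signal) / fs))
    return events

# A quiet background with one louder burst in the middle.
fs = 16000
noise = 0.01 * np.random.randn(fs * 2)
noise[fs // 2: fs // 2 + fs // 4] += 0.5 * np.sin(2 * np.pi * 440 * np.arange(fs // 4) / fs)
print(detect_events(noise, fs))
```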
Computational Aspects of Data Assimilation and the ESMF
NASA Technical Reports Server (NTRS)
daSilva, A.
2003-01-01
The scientific challenge of developing advanced data assimilation applications is a daunting task. Independently developed components may have incompatible interfaces or may be written in different computer languages. The high-performance computer (HPC) platforms required by numerically intensive Earth system applications are complex, varied, rapidly evolving and multi-part systems themselves. Since the market for high-end platforms is relatively small, there is little robust middleware available to buffer the modeler from the difficulties of HPC programming. To complicate matters further, the collaborations required to develop large Earth system applications often span initiatives, institutions and agencies, involve geoscience, software engineering, and computer science communities, and cross national borders. The Earth System Modeling Framework (ESMF) project is a concerted response to these challenges. Its goal is to increase software reuse, interoperability, ease of use and performance in Earth system models through the use of a common software framework, developed in an open manner by leaders in the modeling community. The ESMF addresses the technical, and to some extent the cultural, aspects of Earth system modeling, laying the groundwork for addressing the more difficult scientific aspects, such as the physical compatibility of components, in the future. In this talk we will discuss the general philosophy and architecture of the ESMF, focussing on those capabilities useful for developing advanced data assimilation applications.
Clark, David J; Chatterjee, Sudeshna A; McGuirk, Theresa E; Porges, Eric C; Fox, Emily J; Balasubramanian, Chitralakshmi K
2018-02-01
Walking adaptability tasks are challenging for people with motor impairments. The construct of perceived challenge is typically measured by self-report assessments, which are susceptible to subjective measurement error. The development of an objective physiologically-based measure of challenge may help to improve the ability to assess this important aspect of mobility function. The objective of this study was to investigate the use of sympathetic nervous system (SNS) activity measured by skin conductance to gauge the physiological stress response to challenging walking adaptability tasks in people post-stroke. Thirty adults with chronic post-stroke hemiparesis performed a battery of seventeen walking adaptability tasks. SNS activity was measured by skin conductance from the palmar surface of each hand. The primary outcome variable was the percent change in skin conductance level (ΔSCL) between the baseline resting and walking phases of each task. Task difficulty was measured by performance speed and by physical therapist scoring of performance. Walking function and balance confidence were measured by preferred walking speed and the Activities-specific Balance Confidence Scale, respectively. There was a statistically significant negative association between ΔSCL and task performance speed and between ΔSCL and clinical score, indicating that tasks with greater SNS activity had slower performance speed and poorer clinical scores. ΔSCL was significantly greater for low-functioning participants versus high-functioning participants, particularly during the most challenging walking adaptability tasks. This study supports the use of SNS activity measured by skin conductance as a valuable approach for objectively quantifying the perceived challenge of walking adaptability tasks in people post-stroke. Published by Elsevier B.V.
Clark, David J.; Chatterjee, Sudeshna A.; McGuirk, Theresa E.; Porges, Eric C.; Fox, Emily J.; Balasubramanian, Chitralakshmi K.
2018-01-01
Background Walking adaptability tasks are challenging for people with motor impairments. The construct of perceived challenge is typically measured by self-report assessments, which are susceptible to subjective measurement error. The development of an objective physiologically-based measure of challenge may help to improve the ability to assess this important aspect of mobility function. The objective of this study was to investigate the use of sympathetic nervous system (SNS) activity measured by skin conductance to gauge the physiological stress response to challenging walking adaptability tasks in people post-stroke. Methods Thirty adults with chronic post-stroke hemiparesis performed a battery of seventeen walking adaptability tasks. SNS activity was measured by skin conductance from the palmar surface of each hand. The primary outcome variable was the percent change in skin conductance level (ΔSCL) between the baseline resting and walking phases of each task. Task difficulty was measured by performance speed and by physical therapist grading of performance. Walking function and balance confidence were measured by preferred walking speed and the Activities-specific Balance Confidence Scale, respectively. Results There was a statistically significant negative association between ΔSCL and task performance speed and between ΔSCL and clinical score, indicating that tasks with greater SNS activity had slower performance speed and poorer clinical scores. ΔSCL was significantly greater for low-functioning participants versus high-functioning participants, particularly during the most challenging walking adaptability tasks. Conclusion This study supports the use of SNS activity measured by skin conductance as a valuable approach for objectively quantifying the perceived challenge of walking adaptability tasks in people post-stroke. PMID:29216598
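The primary outcome ΔSCL is a percent change between phases; a minimal sketch is shown below, assuming each phase is summarized by its mean skin conductance level (the exact aggregation used in the study is not specified here) and using invented values in microsiemens.

```python
def delta_scl(baseline_scl, walking_scl):
    """Percent change in skin conductance level between resting baseline and
    the walking phase of a task: 100 * (walking - baseline) / baseline."""
    baseline_mean = sum(baseline_scl) / len(baseline_scl)
    walking_mean = sum(walking_scl) / len(walking_scl)
    return 100.0 * (walking_mean - baseline_mean) / baseline_mean

# Illustrative values in microsiemens for one walking adaptability task.
print(round(delta_scl([4.1, 4.0, 4.2], [5.0, 5.3, 5.1]), 1))  # ~25.2
```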
Architectural design and support for knowledge sharing across heterogeneous MAST systems
NASA Astrophysics Data System (ADS)
Arkin, Ronald C.; Garcia-Vergara, Sergio; Lee, Sung G.
2012-06-01
A novel approach for the sharing of knowledge between widely heterogeneous robotic agents is presented, drawing upon Gardenfors' Conceptual Spaces approach [4]. The target microrobotic platforms considered are impoverished in computation, power, sensing, and communications compared to more traditional robotics platforms due to their small size. This produces novel challenges for the system to converge on an interpretation of events within the world, in this case specifically focusing on the task of recognizing the concept of a biohazard in an indoor setting.
Playing Tic-Tac-Toe with a Sugar-Based Molecular Computer.
Elstner, M; Schiller, A
2015-08-24
Today, molecules can perform Boolean operations and implement logic circuits of increasing complexity. However, concatenating logic gates and handling inhomogeneous inputs and outputs are still challenging tasks. Novel approaches for logic gate integration are possible when chemical programming and software programming are combined. Here it is shown that a molecular finite automaton, based on the concatenated implication function (IMP) of a fluorescent two-component sugar probe via a wiring algorithm, is able to play tic-tac-toe.
AnchorDock: Blind and Flexible Anchor-Driven Peptide Docking.
Ben-Shimon, Avraham; Niv, Masha Y
2015-05-05
The huge conformational space stemming from the inherent flexibility of peptides is among the main obstacles to successful and efficient computational modeling of protein-peptide interactions. Current peptide docking methods typically overcome this challenge using prior knowledge from the structure of the complex. Here we introduce AnchorDock, a peptide docking approach, which automatically targets the docking search to the most relevant parts of the conformational space. This is done by precomputing the free peptide's structure and by computationally identifying anchoring spots on the protein surface. Next, a free peptide conformation undergoes anchor-driven simulated annealing molecular dynamics simulations around the predicted anchoring spots. In the challenging task of a completely blind docking test, AnchorDock produced exceptionally good results (backbone root-mean-square deviation ≤ 2.2Å, rank ≤15) for 10 of 13 unbound cases tested. The impressive performance of AnchorDock supports a molecular recognition pathway that is driven via pre-existing local structural elements. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Striolo, Alberto; Cole, David R.
Because of a number of technological advancements, unconventional hydrocarbons, and in particular shale gas, have transformed the US economy. Much is being learned, as demonstrated by the reduced cost of extracting shale gas in the US over the past five years. However, a number of challenges still need to be addressed. Many of these challenges represent grand scientific and technological tasks, overcoming which will have a number of positive impacts, ranging from the reduction of the environmental footprint of shale gas production to improvements and leaps forward in diverse sectors, including chemical manufacturing and catalytic transformations. This review addresses recent advancements in computational and experimental approaches, which led to improved understanding of, in particular, structure and transport of fluids, including hydrocarbons, electrolytes, water, and CO2 in heterogeneous subsurface rocks such as those typically found in shale formations. Finally, the narrative is concluded with a suggestion of a few research directions that, by synergistically combining computational and experimental advances, could allow us to overcome some of the hurdles that currently hinder the production of hydrocarbons from shale formations.
WE-D-303-00: Computational Phantoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, John; Brigham and Women’s Hospital and Dana-Farber Cancer Institute, Boston, MA
2015-06-15
Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters, and anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate and compare various new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within simulation, the computerized phantom provides a virtual model of the patient’s anatomy and physiology. Imaging data can be generated from it as if it was a live patient using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise in solving some of the most complex problems in modern medical physics. In this proposed symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: 1) Understand the need and requirements of computational phantoms in medical physics research; 2) Discuss the developments and applications of computational phantoms; 3) Know the promises and limitations of computational phantoms in solving complex problems.
Primary School Children's Collaboration: Task Presentation and Gender Issues.
ERIC Educational Resources Information Center
Fitzpatrick, Helen; Hardman, Margaret
2000-01-01
Explores the characteristics of social interaction during an English language based task in the primary classroom, and the role of the computer in structuring collaboration when compared to a non-computer mode. Explains that seven and nine year old boys and girls (n=120) completed a computer and non-computer task. (CMK)
Computational dynamic approaches for temporal omics data with applications to systems medicine.
Liang, Yulan; Kelemen, Arpad
2017-01-01
Modeling and predicting biological dynamic systems and simultaneously estimating the kinetic structural and functional parameters are extremely important in systems and computational biology. This is key for understanding the complexity of human health, drug response, disease susceptibility and pathogenesis in systems medicine. Temporal omics data used to measure dynamic biological systems are essential for discovering complex biological interactions and clinical mechanisms and causation. However, the delineation of the possible associations and causalities of genes, proteins, metabolites, cells and other biological entities from high-throughput time course omics data is challenging, and conventional experimental techniques are not suited to it in the big omics era. In this paper, we present various recently developed dynamic trajectory and causal network approaches for temporal omics data, which are extremely useful for those researchers who want to start working in this challenging research area. Moreover, applications to various biological systems, health conditions and disease status, and examples that summarize the state-of-the-art performances depending on different specific mining tasks are presented. We critically discuss the merits, drawbacks and limitations of the approaches, and the associated main challenges for the years ahead. The most recent computing tools and software to analyze specific problem types, associated platform resources, and other potentials for the dynamic trajectory and interaction methods are also presented and discussed in detail.
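As a small example of the dynamic models this review covers, the sketch below simulates a two-gene regulatory system as ordinary differential equations and samples it at time points such as a temporal omics experiment might collect; the equations and rate constants are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def two_gene_system(t, x, k1=1.0, k2=0.8, d1=0.3, d2=0.4):
    """Gene A is produced at a constant rate and activates gene B;
    both decay first-order. x = [A, B]."""
    a, b = x
    da = k1 - d1 * a
    db = k2 * a / (1.0 + a) - d2 * b   # saturating activation of B by A
    return [da, db]

t_points = np.linspace(0, 24, 9)          # e.g. a 24-hour time course, 9 samples
sol = solve_ivp(two_gene_system, (0, 24), [0.1, 0.0], t_eval=t_points)
for t, a, b in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t={t:5.1f}h  A={a:.2f}  B={b:.2f}")
```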
A survey of GPU-based medical image computing techniques
Shi, Lin; Liu, Wen; Zhang, Heye; Xie, Yongming
2012-01-01
Medical imaging currently plays a crucial role throughout clinical applications, from medical scientific research to diagnostics and treatment planning. However, medical imaging procedures are often computationally demanding due to the large three-dimensional (3D) medical datasets to process in practical clinical applications. With the rapidly improving performance of graphics processors, improved programming support, and an excellent price-to-performance ratio, the graphics processing unit (GPU) has emerged as a competitive parallel computing platform for computationally expensive and demanding tasks in a wide range of medical image applications. The major purpose of this survey is to provide a comprehensive reference source for newcomers and researchers involved in GPU-based medical image processing. Within this survey, the continuous advancement of GPU computing is reviewed and the existing traditional applications in three areas of medical image processing, namely, segmentation, registration and visualization, are surveyed. The potential advantages and associated challenges of current GPU-based medical imaging are also discussed to inspire future applications in medicine. PMID:23256080
BCM: toolkit for Bayesian analysis of Computational Models using samplers.
Thijssen, Bram; Dijkstra, Tjeerd M H; Heskes, Tom; Wessels, Lodewyk F A
2016-10-21
Computational models in biology are characterized by a large degree of uncertainty. This uncertainty can be analyzed with Bayesian statistics, however, the sampling algorithms that are frequently used for calculating Bayesian statistical estimates are computationally demanding, and each algorithm has unique advantages and disadvantages. It is typically unclear, before starting an analysis, which algorithm will perform well on a given computational model. We present BCM, a toolkit for the Bayesian analysis of Computational Models using samplers. It provides efficient, multithreaded implementations of eleven algorithms for sampling from posterior probability distributions and for calculating marginal likelihoods. BCM includes tools to simplify the process of model specification and scripts for visualizing the results. The flexible architecture allows it to be used on diverse types of biological computational models. In an example inference task using a model of the cell cycle based on ordinary differential equations, BCM is significantly more efficient than existing software packages, allowing more challenging inference problems to be solved. BCM represents an efficient one-stop-shop for computational modelers wishing to use sampler-based Bayesian statistics.
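For readers unfamiliar with sampler-based Bayesian inference, the following is a generic random-walk Metropolis sketch for drawing from a posterior distribution; it illustrates the idea only and is not BCM's API or a representation of its implementations.

```python
import numpy as np

def metropolis(log_post, x0, n_samples=5000, step=0.5, seed=0):
    """Random-walk Metropolis: propose x' ~ N(x, step^2) and accept with
    probability min(1, post(x')/post(x)), computed in log space."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = []
    for _ in range(n_samples):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)

# Toy posterior: standard bivariate normal.
log_post = lambda x: -0.5 * np.dot(x, x)
draws = metropolis(log_post, x0=[2.0, -2.0])
print(draws.mean(axis=0), draws.std(axis=0))  # roughly [0, 0] and [1, 1]
```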
e-Collaboration for Earth observation (E-CEO): the Cloud4SAR interferometry data challenge
NASA Astrophysics Data System (ADS)
Casu, Francesco; Manunta, Michele; Boissier, Enguerran; Brito, Fabrice; Aas, Christina; Lavender, Samantha; Ribeiro, Rita; Farres, Jordi
2014-05-01
The e-Collaboration for Earth Observation (E-CEO) project addresses the technologies and architectures needed to provide a collaborative research Platform for automating data mining and processing, and information extraction experiments. The Platform serves for the implementation of Data Challenge Contests focusing on Information Extraction for Earth Observation (EO) applications. The possibility of implementing multiple processors within a Common Software Environment facilitates the validation, evaluation and transparent peer comparison among different methodologies, which is one of the main requirements raised by scientists who develop algorithms in the EO field. In this scenario, we set up a Data Challenge, referred to as Cloud4SAR (http://wiki.services.eoportal.org/tiki-index.php?page=ECEO), to foster the deployment of Interferometric SAR (InSAR) processing chains within a Cloud Computing platform. While a large variety of InSAR processing software tools are available, they require a high level of expertise and complex user interaction to be effectively run. Computing a co-seismic interferogram or a 20-year deformation time series over a volcanic area is not an easy task to perform in a fully unsupervised way and/or in a very short time (hours or less). Benefiting from ESA's E-CEO platform, participants can optimise algorithms in a Virtual Sandbox environment without being expert programmers, and compute results on high-performing Cloud platforms. Cloud4SAR requires solving a relatively easy InSAR problem by trying to maximize the exploitation of the processing capabilities provided by a Cloud Computing infrastructure. The proposed challenge offers two different frameworks, each dedicated to participants with different skills, identified as Beginners and Experts. For both of them, the contest mainly resides in the degree of automation of the deployed algorithms, no matter which one is used, as well as in the capability of taking effective advantage of a parallel computing environment.
2013-01-01
Background Significant restriction in the ability to participate in home, work and community life results from pain, fatigue, joint damage, stiffness and reduced joint range of motion and muscle strength in people with rheumatoid arthritis or osteoarthritis of the hand. With modest evidence on the therapeutic effectiveness of conventional hand exercises, a task-oriented training program via real life object manipulations has been developed for people with arthritis. An innovative, computer-based gaming platform that allows a broad range of common objects to be seamlessly transformed into therapeutic input devices through instrumentation with a motion-sense mouse has also been designed. Personalized objects are selected to target specific training goals such as graded finger mobility, strength, endurance or fine/gross dexterous functions. The movements and object manipulation tasks that replicate common situations in everyday living will then be used to control and play any computer game, making practice challenging and engaging. Methods/Design The ongoing study is a 6-week, single-center, parallel-group, equally allocated and assessor-blinded pilot randomized controlled trial. Thirty people with rheumatoid arthritis or osteoarthritis affecting the hand will be randomized to receive either conventional hand exercises or the task-oriented training. The purpose is to determine a preliminary estimation of therapeutic effectiveness and feasibility of the task-oriented training program. Performance based and self-reported hand function, and exercise compliance are the study outcomes. Changes in outcomes (pre to post intervention) within each group will be assessed by paired Student t test or Wilcoxon signed-rank test and between groups (control versus experimental) post intervention using unpaired Student t test or Mann–Whitney U test. Discussion The study findings will inform decisions on the feasibility, safety and completion rate and will also provide preliminary data on the treatment effects of the task-oriented training compared with conventional hand exercises in people with rheumatoid arthritis or osteoarthritis of the hand. Trial registration ClinicalTrials.gov: NCT01635582 PMID:23497529
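The analysis plan above maps directly onto standard SciPy routines; the sketch below shows that mapping on invented scores, assuming scipy.stats is available. It is not the trial's actual analysis code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Invented hand-function scores (higher = better), pre and post, per group.
task_pre, task_post = rng.normal(40, 5, 15), rng.normal(45, 5, 15)
conv_pre, conv_post = rng.normal(40, 5, 15), rng.normal(42, 5, 15)

# Within-group change: paired t test (or Wilcoxon signed-rank if non-normal).
print("task-oriented pre vs post:", stats.ttest_rel(task_pre, task_post))
print("conventional  pre vs post:", stats.wilcoxon(conv_pre, conv_post))

# Between-group comparison post intervention: unpaired t test (or Mann-Whitney U).
print("post, task vs conventional:", stats.ttest_ind(task_post, conv_post))
print("post, task vs conventional:", stats.mannwhitneyu(task_post, conv_post))
```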
Computer system evolution requirements for autonomous checkout of exploration vehicles
NASA Technical Reports Server (NTRS)
Davis, Tom; Sklar, Mike
1991-01-01
This study, now in its third year, has had the overall objective and challenge of determining the needed hooks and scars in the initial Space Station Freedom (SSF) system to assure that on-orbit assembly and refurbishment of lunar and Mars spacecraft can be accomplished with the maximum use of automation. In this study, automation is all-encompassing and includes physical tasks such as parts mating, tool operation, and human visual inspection, as well as non-physical tasks such as monitoring and diagnosis, planning and scheduling, and autonomous visual inspection. Potential tasks for automation include both extravehicular activity (EVA) and intravehicular activity (IVA) events. A number of specific techniques and tools have been developed to determine the ideal tasks to be automated, and the resulting timelines, changes in labor requirements and resources required. The Mars/Phobos exploratory mission developed in FY89, and the Lunar Assembly/Refurbishment mission developed in FY90 and depicted in the 90 Day Study as Option 5, have been analyzed in detail in recent years. The complete methodology and results are presented in the FY89 and FY90 final reports.
DOE Office of Scientific and Technical Information (OSTI.GOV)
GEL, Aytekin; Jiao, Yang; Emady, Heather
Two major challenges hinder the effective use and adoption of multiphase computational fluid dynamics tools by industry. The first is the need for significant computational resources, which is inversely proportional to the accuracy of solutions due to the computational intensity of the algorithms. The second barrier is assessing the prediction credibility and confidence in the simulation results. In this project, a multi-tiered approach has been proposed under four broad activities to overcome these challenges while addressing all of the objectives outlined in FOA-0001238 through Phases 1 and 2 of the project. The present report consists of the results for only Phase 1, which was the funded performance period. From the start of the project, all of the objectives outlined in the FOA were addressed through four major activity tasks in an integrated and balanced fashion to improve adoption of the MFIX suite of solvers for industrial use. The first task aimed to improve the performance of MFIX-DEM, specifically targeting peak performance on Intel Xeon and Xeon Phi based systems, which are expected to be one of the primary high-performance computing platforms both affordable and available to industrial users in the next two to five years. However, due to a number of changes in the course of the project, the scope of the performance-improvement task was significantly reduced to avoid duplicate work. Hence, more emphasis was placed on the other three tasks as discussed below. The second task aimed at physical modeling enhancements through implementation of a polydispersity capability and validation of heat transfer models in MFIX. An extended verification and validation (V&V) study was performed for the new polydispersity feature implemented in MFIX-DEM both for granular and coupled gas-solid flows. The features of the polydispersity capability and results for an industrially relevant problem were disseminated through journal papers (one published and one under review at the time of writing of the final technical report). As part of the validation efforts, another industrially relevant problem of interest based on rotary drums was studied for several modes of heat transfer and results were presented in conferences. The third task was aimed at an important and unique contribution of the project, which was to develop a unified uncertainty quantification framework by integrating MFIX-DEM with a graphical user interface (GUI)-driven uncertainty quantification (UQ) engine, i.e., MFIX-GUI and PSUADE. The goal was to enable a user with only modest knowledge of statistics to effectively utilize the UQ framework offered with MFIX-DEM Phi to perform UQ analysis routinely. For Phase 1, a proof-of-concept demonstration of the proposed framework was completed and shared. Direct industry involvement was one of the key virtues of this project and was pursued through the fourth task. For this purpose, even at the proposal stage, the project team received strong interest in the proposed capabilities from two major corporations; this engagement was further expanded throughout Phase 1, and a new collaboration with another major corporation from the chemical industry was also initiated. The level of interest received and the continued collaboration during Phase 1 clearly show the relevance and potential impact of the project for industrial users.
NASA Astrophysics Data System (ADS)
Kergosien, Yannick L.; Racoceanu, Daniel
2017-11-01
This article presents our vision about the next generation of challenges in computational/digital pathology. The key role of the domain ontology, developed in a sustainable manner (i.e. using reference checklists and protocols as the living semantic repositories), opens the way to effective/sustainable traceability and relevance feedback concerning the use of existing machine learning algorithms, proven to be very performant in the latest digital pathology challenges (i.e. convolutional neural networks). Being able to work in an accessible web-service environment, with strictly controlled issues regarding intellectual property (image and data processing/analysis algorithms) and medical data/image confidentiality, is essential for the future. Among the web services involved in the proposed approach, the living yellow pages in the area of computational pathology seem to be very important in order to reach operational awareness, validation, and feasibility. This represents a very promising way to go to the next generation of tools, able to bring more guidance to computer scientists and confidence to pathologists, towards effective/efficient daily use. Moreover, consistent feedback and insights are likely to emerge in the near future from these sophisticated machine learning tools back to the pathologists, thereby strengthening the interaction between the different actors of a sustainable biomedical ecosystem (patients, clinicians, biologists, engineers, scientists, etc.). Besides going digital/computational, with virtual slide technology demanding new workflows, pathology must prepare for another coming revolution: semantic web technologies now enable the knowledge of experts to be stored in databases, shared through the Internet, and accessible by machines. Traceability, disambiguation of reports, quality monitoring, and interoperability between health centers are some of the associated benefits that pathologists were seeking. However, major changes are also to be expected in the relation of human diagnosis to machine-based procedures. Improving on a former imaging platform which used a local knowledge base and a reasoning engine to combine image processing modules into higher-level tasks, we propose a framework where different actors of the histopathology imaging world can cooperate using web services, exchanging knowledge as well as imaging services, and where the results of such collaborations on diagnosis-related tasks can be evaluated in international challenges such as those recently organized for mitosis detection, nuclear atypia, or tissue architecture in the context of cancer grading. This framework is likely to offer effective context guidance and traceability to Deep Learning approaches, with an interesting and promising perspective given by the multi-task learning (MTL) paradigm, distinguished by its applicability to several different learning algorithms, its non-reliance on specialized architectures, and the promising results demonstrated, in particular on the problem of weak supervision, an issue that arises when direct links from pathology terms in reports to corresponding regions within images are missing.
Practical considerations in experimental computational sensing
NASA Astrophysics Data System (ADS)
Poon, Phillip K.
Computational sensing has demonstrated the ability to ameliorate or eliminate many trade-offs in traditional sensors. Rather than attempting to form a perfect image, then sampling at the Nyquist rate, and reconstructing the signal of interest prior to post-processing, the computational sensor attempts to utilize a priori knowledge and active or passive coding of the signal of interest, combined with a variety of algorithms, to overcome the trade-offs or to improve various task-specific metrics. While it is a powerful approach to radically new sensor architectures, published research tends to focus on architecture concepts and positive results. Little attention is given to the practical issues faced when implementing computational sensing prototypes. I will discuss the various practical challenges that I encountered while developing three separate applications of computational sensors. The first is a compressive-sensing-based object-tracking camera, the SCOUT, which exploits the sparsity of motion between consecutive frames while using no moving parts to create a pseudo-random shift-variant point-spread function. The second is a spectral imaging camera, the AFSSI-C, which uses a modified version of Principal Component Analysis with a Bayesian strategy to adaptively design spectral filters for direct spectral classification using a digital micro-mirror device (DMD) based architecture. The third demonstrates two separate architectures to perform spectral unmixing, using either an adaptive algorithm or a hybrid technique combining Maximum Noise Fraction and random filter selection, on a liquid-crystal-on-silicon based computational spectral imager, the LCSI. All of these applications demonstrate a variety of challenges that have been addressed or continue to challenge the computational sensing community. One issue is calibration, since many computational sensors require an inversion step and, in the case of compressive sensing, lack redundancy in the measurement data. Another issue is over-multiplexing: as more light is collected per sample, the finite dynamic range and quantization resolution can begin to degrade the recovery of the relevant information. A priori knowledge of the sparsity and/or other statistics of the signal or noise is often used by computational sensors to outperform their isomorphic counterparts. This is demonstrated in all three of the sensors I have developed. These challenges and others will be discussed using a case-study approach through these three applications.
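As background for the compressive-sensing camera mentioned above, the sketch below recovers a sparse signal from underdetermined random measurements with iterative soft-thresholding (ISTA); the sensing matrix, signal, and regularization weight are invented, and this is not the SCOUT's actual reconstruction pipeline.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                        # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1.0 / np.sqrt(m), (m, n)) # random sensing matrix
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```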
A Highly Capable Year 6 Student's Response to a Challenging Mathematical Task
ERIC Educational Resources Information Center
Livy, Sharyn; Holmes, Marilyn; Ingram, Naomi; Linsell, Chris; Sullivan, Peter
2016-01-01
Highly capable mathematics students are not usually considered strugglers. This paper reports on a case study of a Year 6 student, Debbie, her response to a lesson, and her learning involving a challenging mathematical task. Debbie, usually a highly capable student, struggled to complete a challenging mathematical task by herself, but as the…
Task allocation model for minimization of completion time in distributed computer systems
NASA Astrophysics Data System (ADS)
Wang, Jai-Ping; Steidley, Carl W.
1993-08-01
A task in a distributed computing system consists of a set of related modules. Each of the modules will execute on one of the processors of the system and communicate with some other modules. In addition, precedence relationships may exist among the modules. Task allocation is an essential activity in distributed-software design. This activity is of importance to all phases of the development of a distributed system. This paper establishes task completion-time models and task allocation models for minimizing task completion time. Current work in this area is either at the experimental level or without the consideration of precedence relationships among modules. The development of mathematical models for the computation of task completion time and task allocation will benefit many real-time computer applications such as radar systems, navigation systems, industrial process control systems, image processing systems, and artificial intelligence oriented systems.
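A minimal sketch of a completion-time computation in the spirit described above: modules with execution costs and precedence constraints are assigned to processors, and each module starts once its predecessors finish and its processor is free. Communication delays are omitted, and the module graph, costs, and allocation are invented; this is not the paper's model.

```python
from collections import defaultdict

def completion_time(exec_cost, preds, allocation):
    """Earliest-finish schedule: a module starts once all of its predecessor
    modules have finished and its assigned processor is free.
    exec_cost: module -> run time; preds: module -> list of predecessors;
    allocation: module -> processor id. Modules must be listed in a valid
    topological order of the precedence graph."""
    finish = {}
    proc_free = defaultdict(float)
    for m in exec_cost:
        ready = max((finish[p] for p in preds.get(m, [])), default=0.0)
        start = max(ready, proc_free[allocation[m]])
        finish[m] = start + exec_cost[m]
        proc_free[allocation[m]] = finish[m]
    return max(finish.values()), finish

exec_cost = {"read": 2, "filter": 3, "fft": 4, "report": 1}
preds = {"filter": ["read"], "fft": ["read"], "report": ["filter", "fft"]}
allocation = {"read": 0, "filter": 0, "fft": 1, "report": 0}
total, per_module = completion_time(exec_cost, preds, allocation)
print(total, per_module)  # total completion time is 7 for this allocation
```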
Community challenges in biomedical text mining over 10 years: success, failure and the future.
Huang, Chung-Chi; Lu, Zhiyong
2016-01-01
One effective way to improve the state of the art is through competitions. Following the success of the Critical Assessment of protein Structure Prediction (CASP) in bioinformatics research, a number of challenge evaluations have been organized by the text-mining research community to assess and advance natural language processing (NLP) research for biomedicine. In this article, we review the different community challenge evaluations held from 2002 to 2014 and their respective tasks. Furthermore, we examine these challenge tasks through their targeted problems in NLP research and biomedical applications, respectively. Next, we describe the general workflow of organizing a Biomedical NLP (BioNLP) challenge and involved stakeholders (task organizers, task data producers, task participants and end users). Finally, we summarize the impact and contributions by taking into account different BioNLP challenges as a whole, followed by a discussion of their limitations and difficulties. We conclude with future trends in BioNLP challenge evaluations. Published by Oxford University Press 2015. This work is written by US Government employees and is in the public domain in the US.
Learning the Task Management Space of an Aircraft Approach Model
NASA Technical Reports Server (NTRS)
Krall, Joseph; Menzies, Tim; Davies, Misty
2014-01-01
Validating models of airspace operations is a particular challenge. These models are often aimed at finding and exploring safety violations, and aim to be accurate representations of real-world behavior. However, the rules governing the behavior are quite complex: nonlinear physics, operational modes, human behavior, and stochastic environmental concerns all determine the responses of the system. In this paper, we present a study on aircraft runway approaches as modeled in Georgia Tech's Work Models that Compute (WMC) simulation. We use a new learner, Genetic-Active Learning for Search-Based Software Engineering (GALE) to discover the Pareto frontiers defined by cognitive structures. These cognitive structures organize the prioritization and assignment of tasks of each pilot during approaches. We discuss the benefits of our approach, and also discuss future work necessary to enable uncertainty quantification.
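To make "discovering the Pareto frontier" concrete, the small utility below filters a set of candidate solutions down to the non-dominated ones when every objective is minimized; the candidate (workload, error) pairs are invented, and this is not GALE itself.

```python
def pareto_front(points):
    """Return the non-dominated subset of `points`, where each point is a tuple
    of objective values and every objective is to be minimized."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Candidate (workload, error) trade-offs for hypothetical task assignments.
candidates = [(3, 9), (4, 7), (5, 5), (6, 6), (7, 4), (8, 8)]
print(pareto_front(candidates))  # [(3, 9), (4, 7), (5, 5), (7, 4)]
```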
Monte Carlo Planning Method Estimates Planning Horizons during Interactive Social Exchange.
Hula, Andreas; Montague, P Read; Dayan, Peter
2015-06-01
Reciprocating interactions represent a central feature of all human exchanges. They have been the target of various recent experiments, with healthy participants and psychiatric populations engaging as dyads in multi-round exchanges such as a repeated trust task. Behaviour in such exchanges involves complexities related to each agent's preference for equity with their partner, beliefs about the partner's appetite for equity, beliefs about the partner's model of their partner, and so on. Agents may also plan different numbers of steps into the future. Providing a computationally precise account of the behaviour is an essential step towards understanding what underlies choices. A natural framework for this is that of an interactive partially observable Markov decision process (IPOMDP). However, the various complexities make IPOMDPs inordinately computationally challenging. Here, we show how to approximate the solution for the multi-round trust task using a variant of the Monte-Carlo tree search algorithm. We demonstrate that the algorithm is efficient and effective, and therefore can be used to invert observations of behavioural choices. We use generated behaviour to elucidate the richness and sophistication of interactive inference.
Monte Carlo Planning Method Estimates Planning Horizons during Interactive Social Exchange
Hula, Andreas; Montague, P. Read; Dayan, Peter
2015-01-01
Reciprocating interactions represent a central feature of all human exchanges. They have been the target of various recent experiments, with healthy participants and psychiatric populations engaging as dyads in multi-round exchanges such as a repeated trust task. Behaviour in such exchanges involves complexities related to each agent’s preference for equity with their partner, beliefs about the partner’s appetite for equity, beliefs about the partner’s model of their partner, and so on. Agents may also plan different numbers of steps into the future. Providing a computationally precise account of the behaviour is an essential step towards understanding what underlies choices. A natural framework for this is that of an interactive partially observable Markov decision process (IPOMDP). However, the various complexities make IPOMDPs inordinately computationally challenging. Here, we show how to approximate the solution for the multi-round trust task using a variant of the Monte-Carlo tree search algorithm. We demonstrate that the algorithm is efficient and effective, and therefore can be used to invert observations of behavioural choices. We use generated behaviour to elucidate the richness and sophistication of interactive inference. PMID:26053429
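As a much-simplified illustration of Monte Carlo planning with a finite horizon, the sketch below scores each action by random rollouts of a toy sequential game; the game, horizon, and rollout counts are invented, and this is far simpler than the authors' MCTS variant for the IPOMDP trust task.

```python
import random

def rollout_value(state, step_fn, actions, horizon, rng):
    """Return the total reward of one random rollout of length `horizon`."""
    total = 0.0
    for _ in range(horizon):
        action = rng.choice(actions)
        state, reward = step_fn(state, action)
        total += reward
    return total

def plan(state, step_fn, actions, horizon=4, n_rollouts=200, seed=0):
    """Pick the action whose simulated return over `horizon` future steps,
    estimated by random rollouts, is highest."""
    rng = random.Random(seed)
    best_action, best_value = None, float("-inf")
    for a in actions:
        value = 0.0
        for _ in range(n_rollouts):
            next_state, reward = step_fn(state, a)
            value += reward + rollout_value(next_state, step_fn, actions, horizon - 1, rng)
        value /= n_rollouts
        if value > best_value:
            best_action, best_value = a, value
    return best_action, best_value

# Toy "investment" game: action 1 invests (costs now, pays off later via state).
def step(state, action):
    if action == 1:
        return state + 1, -1.0          # invest: pay 1 now, grow the stake
    return state, 0.5 * state           # cash in: reward proportional to stake

print(plan(state=1, step_fn=step, actions=[0, 1], horizon=4))
```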
Contributions of TetrUSS to Project Orion
NASA Technical Reports Server (NTRS)
Mcmillin, Susan N.; Frink, Neal T.; Kerimo, Johannes; Ding, Djiang; Nayani, Sudheer; Parlette, Edward B.
2011-01-01
The NASA Constellation program has relied heavily on Computational Fluid Dynamics simulations for generating aerodynamic databases and design loads. The Orion Project focuses on the Orion Crew Module and the Orion Launch Abort Vehicle. NASA TetrUSS codes (GridTool/VGRID/USM3D) have been applied in a supporting role to the Crew Exploration Vehicle Aerosciences Project for investigating various aerodynamic sensitivities and supplementing the aerodynamic database. This paper provides an overview of the contributions from the TetrUSS team to the Project Orion Crew Module and Launch Abort Vehicle aerodynamics, along with selected examples to highlight the challenges encountered along the way. A brief description of geometries and tasks will be discussed followed by a description of the flow solution process that produced production level computational solutions. Four tasks conducted by the USM3D team will be discussed to show how USM3D provided aerodynamic data for inclusion in the Orion aero-database, contributed data for the build-up of aerodynamic uncertainties for the aero-database, and provided insight into the flow features about the Crew Module and the Launch Abort Vehicle.
Durkin, Kevin; Conti-Ramsden, Gina
2012-01-01
Computer use draws on linguistic abilities. Using this medium thus presents challenges for young people with Specific Language Impairment (SLI) and raises questions of whether computer-based tasks are appropriate for them. We consider theoretical arguments predicting impaired performance and negative outcomes relative to peers without SLI versus the possibility of positive gains. We examine the relationship between frequency of computer use (for leisure and educational purposes) and educational achievement; in particular examination performance at the end of compulsory education and level of educational progress two years later. Participants were 49 young people with SLI and 56 typically developing (TD) young people. At around age 17, the two groups did not differ in frequency of educational computer use or leisure computer use. There were no associations between computer use and educational outcomes in the TD group. In the SLI group, after PIQ was controlled for, educational computer use at around 17 years of age contributed substantially to the prediction of educational progress at 19 years. The findings suggest that educational uses of computers are conducive to educational progress in young people with SLI. PMID:23300610
Development of an information retrieval tool for biomedical patents.
Alves, Tiago; Rodrigues, Rúben; Costa, Hugo; Rocha, Miguel
2018-06-01
The volume of biomedical literature has been increasing in recent years. Patent documents have also followed this trend, being important sources of biomedical knowledge, technical details and curated data, which are assembled during the granting process. The field of Biomedical text mining (BioTM) has been creating solutions for the problems posed by the unstructured nature of natural language, which makes the search for information a challenging task. Several BioTM techniques can be applied to patents. Among these, Information Retrieval (IR) includes processes where relevant data are obtained from collections of documents. In this work, the main goal was to build a patent pipeline addressing IR tasks over patent repositories to make these documents amenable to BioTM tasks. The pipeline was developed within @Note2, an open-source computational framework for BioTM, adding a number of modules to the core libraries, including patent metadata and full-text retrieval, PDF-to-text conversion and optical character recognition. Also, user interfaces were developed for the main operations, materialized in a new @Note2 plug-in. The integration of these tools in @Note2 opens opportunities to run BioTM tools over patent texts, including tasks from Information Extraction, such as Named Entity Recognition or Relation Extraction. We demonstrated the pipeline's main functions with a case study, using an available benchmark dataset from BioCreative challenges. We also show the use of the plug-in with a user query related to the production of vanillin. This work makes all the relevant content from patents available to the scientific community, drastically decreasing the time required for this task, and provides graphical interfaces to ease the use of these tools. Copyright © 2018 Elsevier B.V. All rights reserved.
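As a toy illustration of the Information Retrieval step such a pipeline performs after PDF-to-text conversion, the sketch below builds an inverted index over invented patent abstracts and answers a Boolean query; it does not use or represent @Note2's actual modules, and the vanillin query simply echoes the case study mentioned above.

```python
import re
from collections import defaultdict

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(docs):
    """Inverted index: term -> set of patent ids containing that term."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in tokenize(text):
            index[term].add(doc_id)
    return index

def search(index, query):
    """Boolean AND retrieval: patents containing every query term."""
    term_sets = [index.get(term, set()) for term in tokenize(query)]
    return set.intersection(*term_sets) if term_sets else set()

# Toy patent abstracts standing in for converted full texts.
patents = {
    "EP001": "process for microbial production of vanillin from ferulic acid",
    "US002": "device for optical character recognition of scanned documents",
    "WO003": "vanillin flavour composition and production method",
}
index = build_index(patents)
print(sorted(search(index, "vanillin production")))  # ['EP001', 'WO003']
```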
WLCG Monitoring Consolidation and further evolution
NASA Astrophysics Data System (ADS)
Saiz, P.; Aimar, A.; Andreeva, J.; Babik, M.; Cons, L.; Dzhunov, I.; Forti, A.; di Girolamo, A.; Karavakis, E.; Litmaath, M.; Magini, N.; Magnoni, L.; de los Rios, H. Martin; Roiser, S.; Sciaba, A.; Schulz, M.; Tarragon, J.; Tuckett, D.
2015-12-01
The WLCG monitoring system solves a challenging task of keeping track of the LHC computing activities on the WLCG infrastructure, ensuring health and performance of the distributed services at more than 170 sites. The challenge consists of decreasing the effort needed to operate the monitoring service and to satisfy the constantly growing requirements for its scalability and performance. This contribution describes the recent consolidation work aimed to reduce the complexity of the system, and to ensure more effective operations, support and service management. This was done by unifying where possible the implementation of the monitoring components. The contribution also covers further steps like the evaluation of the new technologies for data storage, processing and visualization and migration to a new technology stack.
Evaluation of Ground Vibrations Induced by Military Noise Sources
2006-08-01
Task 2—Determine the acoustic-to-seismic coupling coefficients C1 and C2. Task 3—Computational modeling of acoustically induced ground motion.
Deep learning for computational chemistry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goh, Garrett B.; Hodas, Nathan O.; Vishnu, Abhinav
The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on “deep” neural networks. Within the last few years, we have seen the transformative impact of deep learning in the computer science domain, notably in speech recognition and computer vision, to the extent that the majority of practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview into the theory of deep neural networks and their unique properties as compared to traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight their ubiquity and broad applicability to a wide range of challenges in the field, including QSAR, virtual screening, protein structure modeling, QM calculations, materials synthesis and property prediction. In reviewing the performance of deep neural networks, we observed consistent outperformance of state-of-the-art non-neural-network models across disparate research topics, and deep neural network-based models often exceeded the “glass ceiling” expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a useful tool and may grow into a pivotal role for various challenges in the computational chemistry field.
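A minimal sketch of the kind of deep-learning model discussed above, applied to a QSAR-style property-regression task: a small fully connected network trained on random descriptor vectors with PyTorch. The architecture, data, and hyperparameters are invented and are not from the review.

```python
import torch
from torch import nn

torch.manual_seed(0)
# Invented data: 256 molecules, 32 descriptors each, one continuous property.
X = torch.randn(256, 32)
y = X[:, :4].sum(dim=1, keepdim=True) + 0.1 * torch.randn(256, 1)

model = nn.Sequential(            # a small fully connected "deep" network
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
print(f"final training MSE: {loss.item():.3f}")
```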
Computer task performance by subjects with Duchenne muscular dystrophy.
Malheiros, Silvia Regina Pinheiro; da Silva, Talita Dias; Favero, Francis Meire; de Abreu, Luiz Carlos; Fregni, Felipe; Ribeiro, Denise Cardoso; de Mello Monteiro, Carlos Bandeira
2016-01-01
Two specific objectives were established to quantify computer task performance among people with Duchenne muscular dystrophy (DMD). First, we compared simple computational task performance between subjects with DMD and age-matched typically developing (TD) subjects. Second, we examined correlations between the ability of subjects with DMD to learn the computational task and their motor functionality, age, and initial task performance. The study included 84 individuals (42 with DMD, mean age of 18±5.5 years, and 42 age-matched controls). They executed a computer maze task; all participants performed the acquisition (20 attempts) and retention (five attempts) phases, repeating the same maze. A different maze was used to verify transfer performance (five attempts). The Motor Function Measure Scale was applied, and the results were compared with maze task performance. In the acquisition phase, a significant decrease was found in movement time (MT) between the first and last acquisition block, but only for the DMD group. For the DMD group, MT during transfer was shorter than during the first acquisition block, indicating improvement from the first acquisition block to transfer. In addition, the TD group showed shorter MT than the DMD group across the study. DMD participants improved their performance after practicing a computational task; however, the difference in MT was present in all attempts among DMD and control subjects. Computational task improvement was positively influenced by the initial performance of individuals with DMD. In turn, the initial performance was influenced by their distal functionality but not their age or overall functionality.
Characterization of task-free and task-performance brain states via functional connectome patterns.
Zhang, Xin; Guo, Lei; Li, Xiang; Zhang, Tuo; Zhu, Dajiang; Li, Kaiming; Chen, Hanbo; Lv, Jinglei; Jin, Changfeng; Zhao, Qun; Li, Lingjiang; Liu, Tianming
2013-12-01
Both resting state fMRI (R-fMRI) and task-based fMRI (T-fMRI) have been widely used to study the functional activities of the human brain during task-free and task-performance periods, respectively. However, due to the difficulty in strictly controlling the participating subject's mental status and their cognitive behaviors during R-fMRI/T-fMRI scans, it has been challenging to ascertain whether or not an R-fMRI/T-fMRI scan truly reflects the participant's functional brain states during task-free/task-performance periods. This paper presents a novel computational approach to characterizing and differentiating the brain's functional status into task-free or task-performance states, by which the functional brain activities can be effectively understood and differentiated. Briefly, the brain's functional state is represented by a whole-brain quasi-stable connectome pattern (WQCP) of R-fMRI or T-fMRI data based on 358 consistent cortical landmarks across individuals, and then an effective sparse representation method was applied to learn the atomic connectome patterns (ACPs) of both task-free and task-performance states. Experimental results demonstrated that the learned ACPs for R-fMRI and T-fMRI datasets are substantially different, as expected. A certain portion of ACPs from R-fMRI and T-fMRI data were overlapped, suggesting some subjects with overlapping ACPs were not in the expected task-free/task-performance brain states. Besides, potential outliers in the T-fMRI dataset were further investigated via functional activation detections in different groups, and our results revealed unexpected task-performances of some subjects. This work offers novel insights into the functional architectures of the brain. Copyright © 2013 Elsevier B.V. All rights reserved.
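The abstract above describes learning "atomic connectome patterns" from whole-brain connectivity vectors via sparse representation. The snippet below is a minimal sketch of that general idea using off-the-shelf sparse dictionary learning; the data matrix, dictionary size, and sparsity settings are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch (not the authors' pipeline): learn candidate "atomic"
# connectome patterns from per-scan connectivity vectors with sparse
# dictionary learning. Real WQCP vectors from 358 landmarks would be far
# longer; a small synthetic matrix keeps the example fast.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n_scans, n_edges = 60, 500                      # placeholder dimensions
X = rng.standard_normal((n_scans, n_edges))     # stand-in connectome vectors

dl = DictionaryLearning(n_components=20, alpha=1.0, max_iter=200,
                        transform_algorithm="lasso_lars", random_state=0)
codes = dl.fit_transform(X)     # sparse coefficients, one row per scan
atoms = dl.components_          # candidate atomic connectome patterns (ACPs)
print(codes.shape, atoms.shape)
```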
Characterization of Task-free and Task-performance Brain States via Functional Connectome Patterns
Zhang, Xin; Guo, Lei; Li, Xiang; Zhang, Tuo; Zhu, Dajiang; Li, Kaiming; Chen, Hanbo; Lv, Jinglei; Jin, Changfeng; Zhao, Qun; Li, Lingjiang; Liu, Tianming
2014-01-01
Both resting state fMRI (R-fMRI) and task-based fMRI (T-fMRI) have been widely used to study the functional activities of the human brain during task-free and task-performance periods, respectively. However, due to the difficulty in strictly controlling the participating subject's mental status and their cognitive behaviors during R-fMRI/T-fMRI scans, it has been challenging to ascertain whether or not an R-fMRI/T-fMRI scan truly reflects the participant's functional brain states during task-free/task-performance periods. This paper presents a novel computational approach to characterizing and differentiating the brain's functional status into task-free or task-performance states, by which the functional brain activities can be effectively understood and differentiated. Briefly, the brain's functional state is represented by a whole-brain quasi-stable connectome pattern (WQCP) of R-fMRI or T-fMRI data based on 358 consistent cortical landmarks across individuals, and then an effective sparse representation method was applied to learn the atomic connectome patterns (ACP) of both task-free and task-performance states. Experimental results demonstrated that the learned ACPs for R-fMRI and T-fMRI datasets are substantially different, as expected. A certain portion of ACPs from R-fMRI and T-fMRI data were overlapped, suggesting some subjects with overlapping ACPs were not in the expected task-free/task-performance brain states. Besides, potential outliers in the T-fMRI dataset were further investigated via functional activation detections in different groups, and our results revealed unexpected task-performances of some subjects. This work offers novel insights into the functional architectures of the brain. PMID:23938590
Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation.
Kamnitsas, Konstantinos; Ledig, Christian; Newcombe, Virginia F J; Simpson, Joanna P; Kane, Andrew D; Menon, David K; Rueckert, Daniel; Glocker, Ben
2017-02-01
We propose a dual pathway, 11-layers deep, three-dimensional Convolutional Neural Network for the challenging task of brain lesion segmentation. The devised architecture is the result of an in-depth analysis of the limitations of current networks proposed for similar applications. To overcome the computational burden of processing 3D medical scans, we have devised an efficient and effective dense training scheme which joins the processing of adjacent image patches into one pass through the network while automatically adapting to the inherent class imbalance present in the data. Further, we analyze the development of deeper, thus more discriminative 3D CNNs. In order to incorporate both local and larger contextual information, we employ a dual pathway architecture that processes the input images at multiple scales simultaneously. For post-processing of the network's soft segmentation, we use a 3D fully connected Conditional Random Field which effectively removes false positives. Our pipeline is extensively evaluated on three challenging tasks of lesion segmentation in multi-channel MRI patient data with traumatic brain injuries, brain tumours, and ischemic stroke. We improve on the state-of-the-art for all three applications, with top ranking performance on the public benchmarks BRATS 2015 and ISLES 2015. Our method is computationally efficient, which allows its adoption in a variety of research and clinical settings. The source code of our implementation is made publicly available. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
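As a rough illustration of the dual-pathway idea described above (one pathway on the full-resolution patch, one on a downsampled wider-context patch, fused before classification), here is a minimal PyTorch sketch. The channel counts, depths, and patch sizes are illustrative and much smaller than the authors' 11-layer architecture.

```python
# Minimal sketch of a dual-pathway 3D CNN for patch-wise segmentation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualPathway3DCNN(nn.Module):
    def __init__(self, in_ch=4, n_classes=2):
        super().__init__()
        def pathway():
            return nn.Sequential(
                nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
                nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            )
        self.high_res = pathway()      # normal-resolution pathway
        self.low_res = pathway()       # subsampled, larger-context pathway
        self.classify = nn.Conv3d(64, n_classes, 1)  # fuse, then label voxels

    def forward(self, patch, context):
        h = self.high_res(patch)
        l = self.low_res(context)
        # bring context features back to the patch grid before fusing
        l = F.interpolate(l, size=h.shape[2:], mode="trilinear",
                          align_corners=False)
        return self.classify(torch.cat([h, l], dim=1))

x = torch.randn(1, 4, 25, 25, 25)                      # multi-channel MRI patch
ctx = F.avg_pool3d(torch.randn(1, 4, 51, 51, 51), 3)   # coarser context patch
print(DualPathway3DCNN()(x, ctx).shape)                # (1, 2, 25, 25, 25)
```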
Reinforcement learning and episodic memory in humans and animals: an integrative framework
Gershman, Samuel J.; Daw, Nathaniel D.
2018-01-01
We review the psychology and neuroscience of reinforcement learning (RL), which has witnessed significant progress in the last two decades, enabled by the comprehensive experimental study of simple learning and decision-making tasks. However, the simplicity of these tasks misses important aspects of reinforcement learning in the real world: (i) State spaces are high-dimensional, continuous, and partially observable; this implies that (ii) data are relatively sparse: indeed precisely the same situation may never be encountered twice; and also that (iii) rewards depend on long-term consequences of actions in ways that violate the classical assumptions that make RL tractable. A seemingly distinct challenge is that, cognitively, these theories have largely connected with procedural and semantic memory: how knowledge about action values or world models extracted gradually from many experiences can drive choice. This misses many aspects of memory related to traces of individual events, such as episodic memory. We suggest that these two gaps are related. In particular, the computational challenges can be dealt with, in part, by endowing RL systems with episodic memory, allowing them to (i) efficiently approximate value functions over complex state spaces, (ii) learn with very little data, and (iii) bridge long-term dependencies between actions and rewards. We review the computational theory underlying this proposal and the empirical evidence to support it. Our proposal suggests that the ubiquitous and diverse roles of memory in RL may function as part of an integrated learning system. PMID:27618944
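One concrete instance of "endowing RL systems with episodic memory" is episodic control, where values are estimated from stored traces of individual episodes rather than from incrementally learned averages. The sketch below illustrates that idea only; the state dimensionality, k, and toy returns are assumptions, not the authors' model.

```python
# Minimal sketch: store (state, return) traces and estimate a new state's
# value as the mean return of its k nearest stored neighbours.
import numpy as np

class EpisodicValueMemory:
    def __init__(self, k=5):
        self.k = k
        self.states, self.returns = [], []

    def store(self, state, ret):
        self.states.append(np.asarray(state, dtype=float))
        self.returns.append(float(ret))

    def value(self, state):
        if not self.states:
            return 0.0
        S = np.vstack(self.states)
        d = np.linalg.norm(S - np.asarray(state, dtype=float), axis=1)
        nearest = np.argsort(d)[: self.k]
        return float(np.mean(np.asarray(self.returns)[nearest]))

mem = EpisodicValueMemory(k=3)
rng = np.random.default_rng(0)
for _ in range(50):
    s = rng.random(4)
    mem.store(s, ret=s.sum())          # toy returns correlated with the state
print(mem.value(np.array([0.9, 0.9, 0.9, 0.9])))   # high estimated value
```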
Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda
2017-01-01
Cloud computing infrastructure is suitable for meeting the computational needs of large task sizes. Optimal scheduling of tasks in a cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving a task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
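For reference, here is a minimal sketch of one of the listed heuristics, Min-min: repeatedly assign the task whose minimum completion time over all machines is smallest to the machine achieving it. The execution-time matrix is an illustrative toy example, not data from the study.

```python
# Min-min scheduling sketch: exec_time[t][m] = time of task t on machine m.
def min_min(exec_time):
    n_tasks, n_machines = len(exec_time), len(exec_time[0])
    ready = [0.0] * n_machines           # machine availability times
    unassigned = set(range(n_tasks))
    schedule = {}
    while unassigned:
        best = None                      # (completion_time, task, machine)
        for t in unassigned:
            for m in range(n_machines):
                ct = ready[m] + exec_time[t][m]
                if best is None or ct < best[0]:
                    best = (ct, t, m)
        ct, t, m = best
        schedule[t] = m
        ready[m] = ct
        unassigned.remove(t)
    return schedule, max(ready)          # task->machine map and makespan

times = [[14, 16, 9], [13, 19, 18], [11, 13, 19], [7, 8, 17]]
print(min_min(times))
```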
Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Usman, Mohammed Joda
2017-01-01
Cloud computing infrastructure is suitable for meeting the computational needs of large task sizes. Optimal scheduling of tasks in a cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving a task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing. PMID:28467505
ERIC Educational Resources Information Center
Inoue, Noriyuki
2007-01-01
In a task choice situation, why do some students spontaneously choose challenging tasks while others do not? In the study, 114 undergraduate students were first asked of their perceived competence and interest in solving number puzzles at both individual and situational levels, and then asked to choose one puzzle from four difficulty levels. They…
Attentional Focus Effects as a Function of Task Difficulty
ERIC Educational Resources Information Center
Wulf, Gabriele; Tollner, Thomas; Shea, Charles H.
2007-01-01
The purpose of the present study was to examine whether the advantages of adopting an external focus would be seen primarily for relatively challenging (postural stability) tasks but not less demanding tasks. To examine this, the authors used balance tasks that imposed increased challenges to maintaining stability. The present results support the…
Accomplishments and challenges of surgical simulation.
Satava, R M
2001-03-01
For nearly a decade, advanced computer technologies have created extraordinary educational tools using three-dimensional (3D) visualization and virtual reality. Pioneering efforts in surgical simulation with these tools have resulted in a first generation of simulators for surgical technical skills. Accomplishments include simulations with 3D models of anatomy for practice of surgical tasks, initial assessment of student performance in technical skills, and awareness by professional societies of potential in surgical education and certification. However, enormous challenges remain, which include improvement of technical fidelity, standardization of accurate metrics for performance evaluation, integration of simulators into a robust educational curriculum, stringent evaluation of simulators for effectiveness and value added to surgical training, determination of simulation application to certification of surgical technical skills, and a business model to implement and disseminate simulation successfully throughout the medical education community. This review looks at the historical progress of surgical simulators, their accomplishments, and the challenges that remain.
Dettmer, Marius; Pourmoghaddam, Amir; Lee, Beom-Chan; Layne, Charles S
2015-01-01
Postural control in certain situations depends on functioning of tactile or proprioceptive receptors and their respective dynamic integration. Loss of sensory functioning can lead to increased risk of falls in challenging postural tasks, especially in older adults. Stochastic resonance, a concept describing better function of systems with addition of optimal levels of noise, has been shown to be beneficial for balance performance in certain populations and simple postural tasks. In this study, we tested the effects of aging and a tactile stochastic resonance stimulus (TSRS) on balance of adults in a sensory conflict task. Nineteen older (71-84 years of age) and younger participants (22-29 years of age) stood on a force plate for repeated trials of 20 s duration, while foot sole stimulation was either turned on or off, and the visual surrounding was sway-referenced. Balance performance was evaluated by computing an Equilibrium Score (ES) and anterior-posterior sway path length (APPlength). For postural control evaluation, strategy scores and approximate entropy (ApEn) were computed. Repeated-measures ANOVA, Wilcoxon signed-rank tests, and Mann-Whitney U-tests were conducted for statistical analysis. Our results showed that balance performance differed between older and younger adults as indicated by ES (p = 0.01) and APPlength (0.01), and addition of vibration only improved performance in the older group significantly (p = 0.012). Strategy scores differed between both age groups, whereas vibration only affected the older group (p = 0.025). Our results indicate that aging affects specific postural outcomes and that TSRS is beneficial for older adults in a visual sensory conflict task, but more research is needed to investigate the effectiveness in individuals with more severe balance problems, for example, due to neuropathy.
Three-dimensional rendering of segmented object using matlab - biomed 2010.
Anderson, Jeffrey R; Barrett, Steven F
2010-01-01
The three-dimensional rendering of microscopic objects is a difficult and challenging task that often requires specialized image processing techniques. Previous work has been described of a semi-automatic segmentation process of fluorescently stained neurons collected as a sequence of slice images with a confocal laser scanning microscope. Once properly segmented, each individual object can be rendered and studied as a three-dimensional virtual object. This paper describes the work associated with the design and development of Matlab files to create three-dimensional images from the segmented object data previously mentioned. Part of the motivation for this work is to integrate both the segmentation and rendering processes into one software application, providing a seamless transition from the segmentation tasks to the rendering and visualization tasks. Previously these tasks were accomplished on two different computer systems, Windows and Linux. This transition basically limits the usefulness of the segmentation and rendering applications to those who have both computer systems readily available. The focus of this work is to create custom Matlab image processing algorithms for object rendering and visualization, and merge these capabilities to the Matlab files that were developed especially for the image segmentation task. The completed Matlab application will contain both the segmentation and rendering processes in a single graphical user interface, or GUI. This process for rendering three-dimensional images in Matlab requires that a sequence of two-dimensional binary images, representing a cross-sectional slice of the object, be reassembled in a 3D space, and covered with a surface. Additional segmented objects can be rendered in the same 3D space. The surface properties of each object can be varied by the user to aid in the study and analysis of the objects. This interactive process becomes a powerful visual tool to study and understand microscopic objects.
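The same slice-stacking-and-surfacing idea can be sketched in a few lines of Python (rather than the MATLAB GUI described above): stack segmented 2-D masks into a volume and extract a triangulated surface with marching cubes. The synthetic sphere stands in for the segmented neuron slices.

```python
# Minimal sketch: reassemble binary 2-D slices into a 3-D volume and cover
# the object with a surface mesh via marching cubes.
import numpy as np
from skimage import measure

z, y, x = np.mgrid[-20:21, -20:21, -20:21]
volume = (x**2 + y**2 + z**2 <= 15**2).astype(np.uint8)   # stand-in slice stack

verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(f"{len(verts)} vertices, {len(faces)} triangular faces")
# verts/faces can be handed to any 3-D viewer (e.g. matplotlib's plot_trisurf).
```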
CombiMotif: A new algorithm for network motifs discovery in protein-protein interaction networks
NASA Astrophysics Data System (ADS)
Luo, Jiawei; Li, Guanghui; Song, Dan; Liang, Cheng
2014-12-01
Discovering motifs in protein-protein interaction networks is becoming a current major challenge in computational biology, since the distribution of the number of network motifs can reveal significant systemic differences among species. However, this task can be computationally expensive because of the involvement of graph isomorphism detection. In this paper, we present a new algorithm (CombiMotif) that incorporates combinatorial techniques to count non-induced occurrences of subgraph topologies in the form of trees. The efficiency of our algorithm is demonstrated by comparing the obtained results with the current state-of-the-art subgraph counting algorithms. We also show major differences between unicellular and multicellular organisms. The datasets and source code of CombiMotif are freely available upon request.
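To give a flavour of combinatorial (rather than isomorphism-testing) motif counting, the sketch below counts non-induced occurrences of the simplest tree motif, the 3-node path, directly from node degrees. The edge list is an illustrative toy network, not a real PPI dataset, and the formula shown is a textbook identity rather than the CombiMotif algorithm itself.

```python
# Non-induced 3-node paths (a tiny tree motif) = sum over nodes of C(deg(v), 2),
# computed with no subgraph isomorphism testing.
from math import comb
from collections import defaultdict

edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("C", "E")]
deg = defaultdict(int)
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

paths_len2 = sum(comb(d, 2) for d in deg.values())
print(paths_len2)   # every non-induced occurrence of the 3-node path
```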
Economic development evaluation based on science and patents
NASA Astrophysics Data System (ADS)
Jokanović, Bojana; Lalic, Bojan; Milovančević, Miloš; Simeunović, Nenad; Marković, Dusan
2017-09-01
Economic development can be achieved through many factors, and science and technology factors can influence it drastically. Since economic analysis can be a very challenging task because of high nonlinearity, the main aim of this study was to apply a computational intelligence methodology, the artificial neural network approach, to estimate economic development based on different science and technology factors. Gross domestic product (GDP) was used as the measure of economic development, and patents in different fields were used as the science and technology factors. It was found that patents in the electrical engineering field have the highest influence on economic development, i.e., on GDP.
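A minimal sketch of this kind of estimation is shown below: a small feed-forward neural network regressing GDP on patent counts per technology field. The numbers are synthetic and the field list is an assumption; this is not the study's model or data.

```python
# Sketch: estimate GDP from patent counts with a small MLP regressor.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# columns: patents in electrical eng., mechanical eng., chemistry, instruments
patents = rng.integers(100, 10_000, size=(80, 4)).astype(float)
gdp = 3.0 * patents[:, 0] + 1.2 * patents[:, 1] + rng.normal(0, 500, 80)

X = StandardScaler().fit_transform(patents)
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=1)
model.fit(X, gdp)
print(round(model.score(X, gdp), 3))    # in-sample fit of the toy model
```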
Energy Efficiency in Public Buildings through Context-Aware Social Computing.
García, Óscar; Alonso, Ricardo S; Prieto, Javier; Corchado, Juan M
2017-04-11
The challenge of promoting behavioral changes in users that lead to energy savings in public buildings has become a complex task requiring the involvement of multiple technologies. Wireless sensor networks have a great potential for the development of tools, such as serious games, that encourage acquiring good energy and healthy habits among users in the workplace. This paper presents the development of a serious game using CAFCLA, a framework that allows for integrating multiple technologies, which provide both context-awareness and social computing. Game development has shown that the data provided by sensor networks encourage users to reduce energy consumption in their workplace and that social interactions and competitiveness allow for accelerating the achievement of good results and behavioral changes that favor energy savings.
MPI, HPF or OpenMP: A Study with the NAS Benchmarks
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Frumkin, Michael; Hribar, Michelle; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1999-01-01
Porting applications to new high performance parallel and distributed platforms is a challenging task. Writing parallel code by hand is time consuming and costly, but the task can be simplified by high level languages and would even better be automated by parallelizing tools and compilers. The definition of HPF (High Performance Fortran, based on data parallel model) and OpenMP (based on shared memory parallel model) standards has offered great opportunity in this respect. Both provide simple and clear interfaces to language like FORTRAN and simplify many tedious tasks encountered in writing message passing programs. In our study we implemented the parallel versions of the NAS Benchmarks with HPF and OpenMP directives. Comparison of their performance with the MPI implementation and pros and cons of different approaches will be discussed along with experience of using computer-aided tools to help parallelize these benchmarks. Based on the study, potentials of applying some of the techniques to realistic aerospace applications will be presented.
MPI, HPF or OpenMP: A Study with the NAS Benchmarks
NASA Technical Reports Server (NTRS)
Jin, H.; Frumkin, M.; Hribar, M.; Waheed, A.; Yan, J.; Saini, Subhash (Technical Monitor)
1999-01-01
Porting applications to new high performance parallel and distributed platforms is a challenging task. Writing parallel code by hand is time consuming and costly, but this task can be simplified by high level languages and would even better be automated by parallelizing tools and compilers. The definition of HPF (High Performance Fortran, based on data parallel model) and OpenMP (based on shared memory parallel model) standards has offered great opportunity in this respect. Both provide simple and clear interfaces to language like FORTRAN and simplify many tedious tasks encountered in writing message passing programs. In our study, we implemented the parallel versions of the NAS Benchmarks with HPF and OpenMP directives. Comparison of their performance with the MPI implementation and pros and cons of different approaches will be discussed along with experience of using computer-aided tools to help parallelize these benchmarks. Based on the study, potentials of applying some of the techniques to realistic aerospace applications will be presented.
Asah, Flora
2013-04-01
This study discusses factors inhibiting computer usage for work-related tasks among computer-literate professional nurses within rural healthcare facilities in South Africa. In the past two decades computer literacy courses have not been part of the nursing curricula. Computer courses are offered by the State Information Technology Agency. Despite this, there seems to be limited use of computers by professional nurses in the rural context. Focus group interviews were held with 40 professional nurses from three government hospitals in northern KwaZulu-Natal. Contributing factors were found to be lack of information technology infrastructure, restricted access to computers and deficits in regard to the technical and nursing management support. The physical location of computers within the health-care facilities and lack of relevant software emerged as specific obstacles to usage. Provision of continuous and active support from nursing management could positively influence computer usage among professional nurses. A closer integration of information technology and computer literacy skills into existing nursing curricula would foster a positive attitude towards computer usage through early exposure. Responses indicated that change of mindset may be needed on the part of nursing management so that they begin to actively promote ready access to computers as a means of creating greater professionalism and collegiality. © 2011 Blackwell Publishing Ltd.
A characterization of workflow management systems for extreme-scale applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia
The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over the last years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today’s computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed as extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.
A characterization of workflow management systems for extreme-scale applications
Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia; ...
2017-02-16
The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over the last years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today’s computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed as extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.
ERIC Educational Resources Information Center
Goldhammer, Frank; Naumann, Johannes; Stelter, Annette; Tóth, Krisztina; Rölke, Heiko; Klieme, Eckhard
2014-01-01
Computer-based assessment can provide new insights into behavioral processes of task completion that cannot be uncovered by paper-based instruments. Time presents a major characteristic of the task completion process. Psychologically, time on task has 2 different interpretations, suggesting opposing associations with task outcome: Spending more…
NASA Technical Reports Server (NTRS)
Karpoukhin, Mikhii G.; Kogan, Boris Y.; Karplus, Walter J.
1995-01-01
The simulation of heart arrhythmia and fibrillation is a very important and challenging task. The solution of these problems using sophisticated mathematical models is beyond the capabilities of modern supercomputers. To overcome these difficulties it is proposed to break the whole simulation problem into two tightly coupled stages: generation of the action potential using sophisticated models, and propagation of the action potential using simplified models. The well known simplified models are compared and modified to bring the rate of depolarization and action potential duration restitution closer to reality. The modified method of lines is used to parallelize the computational process. The conditions for the appearance of 2D spiral waves after the application of a premature beat and the subsequent traveling of the spiral wave inside the simulated tissue are studied.
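As a rough illustration of the "simplified model plus method of lines" stage, the sketch below propagates an action potential along a 1-D cable using generic FitzHugh-Nagumo kinetics with space discretized and each node stepped explicitly in time. The parameters are textbook values chosen for stability, not those of the cited study.

```python
# Method-of-lines sketch: 1-D cable with FitzHugh-Nagumo kinetics,
# explicit Euler in time, finite differences in space (no-flux ends).
import numpy as np

n, dx, dt, steps = 200, 0.5, 0.01, 10000
D, a, eps, b = 1.0, 0.1, 0.01, 0.5
v = np.zeros(n)          # membrane potential (fast variable)
w = np.zeros(n)          # recovery (slow) variable
v[:5] = 1.0              # premature stimulus at the left end

for _ in range(steps):
    lap = np.zeros(n)
    lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    lap[0] = (v[1] - v[0]) / dx**2          # no-flux boundaries
    lap[-1] = (v[-2] - v[-1]) / dx**2
    dv = D * lap + v * (1 - v) * (v - a) - w
    dw = eps * (v - b * w)
    v += dt * dv
    w += dt * dw

excited = np.flatnonzero(v > 0.5)
print("leading edge near node", excited.max() if excited.size else None)
```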
Information Architecture for Quality Management Support in Hospitals.
Rocha, Álvaro; Freixo, Jorge
2015-10-01
Quality Management occupies a strategic role in organizations, and the adoption of computer tools within an aligned information architecture facilitates the challenge of making more with less, promoting the development of a competitive edge and sustainability. A formal Information Architecture (IA) lends organizations an enhanced knowledge but, above all, favours management. This simplifies the reinvention of processes, the reformulation of procedures, bridging and the cooperation amongst the multiple actors of an organization. In the present investigation work we planned the IA for the Quality Management System (QMS) of a Hospital, which allowed us to develop and implement the QUALITUS (QUALITUS, name of the computer application developed to support Quality Management in a Hospital Unit) computer application. This solution translated itself in significant gains for the Hospital Unit under study, accelerating the quality management process and reducing the tasks, the number of documents, the information to be filled in and information errors, amongst others.
An interactive parallel programming environment applied in atmospheric science
NASA Technical Reports Server (NTRS)
vonLaszewski, G.
1996-01-01
This article introduces an interactive parallel programming environment (IPPE) that simplifies the generation and execution of parallel programs. One of the tasks of the environment is to generate message-passing parallel programs for homogeneous and heterogeneous computing platforms. The parallel programs are represented by using visual objects. This is accomplished with the help of a graphical programming editor that is implemented in Java and enables portability to a wide variety of computer platforms. In contrast to other graphical programming systems, reusable parts of the programs can be stored in a program library to support rapid prototyping. In addition, runtime performance data on different computing platforms is collected in a database. A selection process determines dynamically the software and the hardware platform to be used to solve the problem in minimal wall-clock time. The environment is currently being tested on a Grand Challenge problem, the NASA four-dimensional data assimilation system.
A knowledge-based approach to automated flow-field zoning for computational fluid dynamics
NASA Technical Reports Server (NTRS)
Vogel, Alison Andrews
1989-01-01
An automated three-dimensional zonal grid generation capability for computational fluid dynamics is shown through the development of a demonstration computer program capable of automatically zoning the flow field of representative two-dimensional (2-D) aerodynamic configurations. The applicability of a knowledge-based programming approach to the domain of flow-field zoning is examined. Several aspects of flow-field zoning make the application of knowledge-based techniques challenging: the need for perceptual information, the role of individual bias in the design and evaluation of zonings, and the fact that the zoning process is modeled as a constructive, design-type task (for which there are relatively few examples of successful knowledge-based systems in any domain). Engineering solutions to the problems arising from these aspects are developed, and a demonstration system is implemented which can design, generate, and output flow-field zonings for representative 2-D aerodynamic configurations.
Design issues for grid-connected photovoltaic systems
NASA Astrophysics Data System (ADS)
Ropp, Michael Eugene
1998-08-01
Photovoltaics (PV) is the direct conversion of sunlight to electrical energy. In areas without centralized utility grids, the benefits of PV easily overshadow the present shortcomings of the technology. However, in locations with centralized utility systems, significant technical challenges remain before utility-interactive PV (UIPV) systems can be integrated into the mix of electricity sources. One challenge is that the needed computer design tools for optimal design of PV systems with curved PV arrays are not available, and even those that are available do not facilitate monitoring of the system once it is built. Another arises from the issue of islanding. Islanding occurs when a UIPV system continues to energize a section of a utility system after that section has been isolated from the utility voltage source. Islanding, which is potentially dangerous to both personnel and equipment, is difficult to prevent completely. The work contained within this thesis targets both of these technical challenges. In Task 1, a method for modeling a PV system with a curved PV array using only existing computer software is developed. This methodology also facilitates comparison of measured and modeled data for use in system monitoring. The procedure is applied to the Georgia Tech Aquatic Center (GTAC) PV system. In the work contained under Task 2, islanding prevention is considered. The existing state-of-the-art is thoroughly reviewed. In Subtask 2.1, an analysis is performed which suggests that standard protective relays are in fact insufficient to guarantee protection against islanding. In Subtask 2.2, several existing islanding prevention methods are compared in a novel way. The superiority of this new comparison over those used previously is demonstrated. A new islanding prevention method is the subject under Subtask 2.3. It is shown that it does not compare favorably with other existing techniques. However, in Subtask 2.4, a novel method for dramatically improving this new islanding prevention method is described. It is shown, both by computer modeling and experiment, that this new method is one of the most effective available today. Finally, under Subtask 2.5, the effects of certain types of loads on the effectiveness of islanding prevention methods are discussed.
Customer and household matching: resolving entity identity in data warehouses
NASA Astrophysics Data System (ADS)
Berndt, Donald J.; Satterfield, Ronald K.
2000-04-01
The data preparation and cleansing tasks necessary to ensure high quality data are among the most difficult challenges faced in data warehousing and data mining projects. The extraction of source data, transformation into new forms, and loading into a data warehouse environment are all time consuming tasks that can be supported by methodologies and tools. This paper focuses on the problem of record linkage or entity matching, tasks that can be very important in providing high quality data. Merging two or more large databases into a single integrated system is a difficult problem in many industries, especially in the wake of acquisitions. For example, managing customer lists can be challenging when duplicate entries, data entry problems, and changing information conspire to make data quality an elusive target. Common tasks with regard to customer lists include customer matching to reduce duplicate entries and household matching to group customers. These often O(n²) problems can consume significant resources, both in computing infrastructure and human oversight, and the goal of high accuracy in the final integrated database can be difficult to assure. This paper distinguishes between attribute corruption and entity corruption, discussing the various impacts on quality. A metajoin operator is proposed and used to organize past and current entity matching techniques. Finally, a logistic regression approach to implementing the metajoin operator is discussed and illustrated with an example. The metajoin can be used to determine whether two records match, don't match, or require further evaluation by human experts. Properly implemented, the metajoin operator could allow the integration of individual databases with greater accuracy and lower cost.
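The sketch below illustrates the logistic-regression view of such a metajoin: pairwise similarity features feed a classifier whose probability is thresholded into match, no-match, or "send to a human expert". The features, thresholds, and the tiny training set are illustrative assumptions, not the paper's implementation.

```python
# Record-linkage sketch: classify record pairs with logistic regression,
# routing uncertain pairs to human review.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [name similarity, address similarity, phone match] for a record pair
X = np.array([[0.95, 0.90, 1], [0.20, 0.10, 0], [0.88, 0.40, 1],
              [0.15, 0.80, 0], [0.99, 0.97, 1], [0.30, 0.25, 0]])
y = np.array([1, 0, 1, 0, 1, 0])             # 1 = same customer

clf = LogisticRegression().fit(X, y)

def metajoin(pair_features, lo=0.3, hi=0.7):
    p = clf.predict_proba([pair_features])[0, 1]
    if p >= hi:
        return "match"
    if p <= lo:
        return "no match"
    return "review"                          # route to a human expert

print(metajoin([0.9, 0.85, 1]), metajoin([0.5, 0.5, 0]))
```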
The Effects of Study Tasks in a Computer-Based Chemistry Learning Environment
ERIC Educational Resources Information Center
Urhahne, Detlef; Nick, Sabine; Poepping, Anna Christin; Schulz, Sarah Jayne
2013-01-01
The present study examines the effects of different study tasks on the acquisition of knowledge about acids and bases in a computer-based learning environment. Three different task formats were selected to create three treatment conditions: learning with gap-fill and matching tasks, learning with multiple-choice tasks, and learning only from text…
Camerlengo, Terry; Ozer, Hatice Gulcin; Onti-Srinivasan, Raghuram; Yan, Pearlly; Huang, Tim; Parvin, Jeffrey; Huang, Kun
2012-01-01
Next Generation Sequencing is highly resource intensive. NGS tasks related to data processing, management and analysis require high-end computing servers or even clusters. Additionally, processing NGS experiments requires suitable storage space and significant manual interaction. At The Ohio State University's Biomedical Informatics Shared Resource, we designed and implemented a scalable architecture to address the challenges associated with the resource-intensive nature of NGS secondary analysis built around Illumina Genome Analyzer II sequencers and Illumina's Gerald data processing pipeline. The software infrastructure includes a distributed computing platform consisting of a LIMS called QUEST (http://bisr.osumc.edu), an Automation Server, a computer cluster for processing NGS pipelines, and a network attached storage device expandable up to 40TB. The system has been architected to scale to multiple sequencers without requiring additional computing or labor resources. This platform demonstrates how to manage and automate NGS experiments in an institutional or core facility setting.
Face Recognition in Humans and Machines
NASA Astrophysics Data System (ADS)
O'Toole, Alice; Tistarelli, Massimo
The study of human face recognition by psychologists and neuroscientists has run parallel to the development of automatic face recognition technologies by computer scientists and engineers. In both cases, there are analogous steps of data acquisition, image processing, and the formation of representations that can support the complex and diverse tasks we accomplish with faces. These processes can be understood and compared in the context of their neural and computational implementations. In this chapter, we present the essential elements of face recognition by humans and machines, taking a perspective that spans psychological, neural, and computational approaches. From the human side, we overview the methods and techniques used in the neurobiology of face recognition, the underlying neural architecture of the system, the role of visual attention, and the nature of the representations that emerges. From the computational side, we discuss face recognition technologies and the strategies they use to overcome challenges to robust operation over viewing parameters. Finally, we conclude the chapter with a look at some recent studies that compare human and machine performances at face recognition.
A Conceptual Model for Analysing Collaborative Work and Products in Groupware Systems
NASA Astrophysics Data System (ADS)
Duque, Rafael; Bravo, Crescencio; Ortega, Manuel
Collaborative work using groupware systems is a dynamic process in which many tasks, in different application domains, are carried out. Currently, one of the biggest challenges in the field of CSCW (Computer-Supported Cooperative Work) research is to establish conceptual models which allow for the analysis of collaborative activities and their resulting products. In this article, we propose an ontology that conceptualizes the required elements which enable an analysis to infer a set of analysis indicators, thus evaluating both the individual and group work and the artefacts which are produced.
Method and system for benchmarking computers
Gustafson, John L.
1993-09-14
A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
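The idea described in this record, a fixed benchmarking interval over a scalable set of tasks that refine a solution, rating each machine by how far it gets, can be sketched as follows. The "task" here (adding terms of a series that converges to pi) is only an illustrative stand-in for the patented task store.

```python
# Fixed-interval benchmarking sketch: work through scalable tasks until the
# allotted wall-clock interval expires, then report the degree of progress.
import time

def benchmark(interval_s=1.0):
    deadline = time.perf_counter() + interval_s
    estimate, k = 0.0, 0
    while time.perf_counter() < deadline:          # benchmarking interval
        estimate += 4.0 * (-1) ** k / (2 * k + 1)  # one more refinement "task"
        k += 1
    return k, estimate            # tasks completed = benchmarking rating

tasks_done, value = benchmark()
print(f"completed {tasks_done} tasks; solution so far: {value:.6f}")
```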
Checkpointing for a hybrid computing node
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cher, Chen-Yong
2016-03-08
According to an aspect, a method for checkpointing in a hybrid computing node includes executing a task in a processing accelerator of the hybrid computing node. A checkpoint is created in a local memory of the processing accelerator. The checkpoint includes state data to restart execution of the task in the processing accelerator upon a restart operation. Execution of the task is resumed in the processing accelerator after creating the checkpoint. The state data of the checkpoint are transferred from the processing accelerator to a main processor of the hybrid computing node while the processing accelerator is executing the task.
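The checkpoint/restart pattern described above can be sketched in a few lines: a long-running task periodically saves its state so execution can resume after a failure. The file name, the toy workload, and the checkpoint cadence are illustrative assumptions, not the patented accelerator mechanism.

```python
# Checkpointing sketch: save state periodically, reload it on restart.
import json
import os

CKPT = "task_checkpoint.json"      # hypothetical checkpoint location

def run_task(n_steps=1_000_000, ckpt_every=100_000):
    state = {"step": 0, "partial_sum": 0.0}
    if os.path.exists(CKPT):                       # restart operation
        with open(CKPT) as f:
            state = json.load(f)
    while state["step"] < n_steps:
        state["partial_sum"] += state["step"] ** 0.5   # the task's work
        state["step"] += 1
        if state["step"] % ckpt_every == 0:            # create a checkpoint
            with open(CKPT, "w") as f:
                json.dump(state, f)
    return state["partial_sum"]

print(run_task())
```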
Lunkenheimer, Erika; Kemp, Christine J.; Lucas-Thompson, Rachel G.; Cole, Pamela M.; Albrecht, Erin C.
2016-01-01
Researchers have argued for more dynamic and contextually relevant measures of regulatory processes in interpersonal interactions. In response, we introduce and examine the effectiveness of a new task, the Parent-Child Challenge Task, designed to assess the self-regulation and coregulation of affect, goal-directed behavior, and physiology in parents and their preschoolers in response to an experimental perturbation. Concurrent and predictive validity was examined via relations with children’s externalizing behaviors. Mothers used only their words to guide their 3-year-old children to complete increasingly difficult puzzles in order to win a prize (N = 96). A challenge condition was initiated mid-way through the task with a newly introduced time limit. The challenge produced decreases in parental teaching and dyadic behavioral variability and increases in child negative affect and dyadic affective variability, measured by dynamic systems-based methods. Children rated lower on externalizing showed respiratory sinus arrhythmia (RSA) suppression in response to challenge, whereas those rated higher on externalizing showed RSA augmentation. Additionally, select task changes in affect, behavior, and physiology predicted teacher-rated externalizing behaviors four months later. Findings indicate the Parent-Child Challenge Task was effective in producing regulatory changes and suggest its utility in assessing biobehavioral self-regulation and coregulation in parents and their preschoolers. PMID:28458616
Lunkenheimer, Erika; Kemp, Christine J; Lucas-Thompson, Rachel G; Cole, Pamela M; Albrecht, Erin C
2017-01-01
Researchers have argued for more dynamic and contextually relevant measures of regulatory processes in interpersonal interactions. In response, we introduce and examine the effectiveness of a new task, the Parent-Child Challenge Task, designed to assess the self-regulation and coregulation of affect, goal-directed behavior, and physiology in parents and their preschoolers in response to an experimental perturbation. Concurrent and predictive validity was examined via relations with children's externalizing behaviors. Mothers used only their words to guide their 3-year-old children to complete increasingly difficult puzzles in order to win a prize ( N = 96). A challenge condition was initiated mid-way through the task with a newly introduced time limit. The challenge produced decreases in parental teaching and dyadic behavioral variability and increases in child negative affect and dyadic affective variability, measured by dynamic systems-based methods. Children rated lower on externalizing showed respiratory sinus arrhythmia (RSA) suppression in response to challenge, whereas those rated higher on externalizing showed RSA augmentation. Additionally, select task changes in affect, behavior, and physiology predicted teacher-rated externalizing behaviors four months later. Findings indicate the Parent-Child Challenge Task was effective in producing regulatory changes and suggest its utility in assessing biobehavioral self-regulation and coregulation in parents and their preschoolers.
Reactive transport modeling in the subsurface environment with OGS-IPhreeqc
NASA Astrophysics Data System (ADS)
He, Wenkui; Beyer, Christof; Fleckenstein, Jan; Jang, Eunseon; Kalbacher, Thomas; Naumov, Dimitri; Shao, Haibing; Wang, Wenqing; Kolditz, Olaf
2015-04-01
Worldwide, sustainable water resource management becomes an increasingly challenging task due to the growth of population and extensive applications of fertilizer in agriculture. Moreover, climate change causes further stresses to both water quantity and quality. Reactive transport modeling in the coupled soil-aquifer system is a viable approach to assess the impacts of different land use and groundwater exploitation scenarios on the water resources. However, the application of this approach is usually limited in spatial scale and to simplified geochemical systems due to the huge computational expense involved. Such computational expense is not only caused by solving the high non-linearity of the initial boundary value problems of water flow in the unsaturated zone numerically with rather fine spatial and temporal discretization for the correct mass balance and numerical stability, but also by the intensive computational task of quantifying geochemical reactions. In the present study, a flexible and efficient tool for large scale reactive transport modeling in variably saturated porous media and its applications are presented. The open source scientific software OpenGeoSys (OGS) is coupled with the IPhreeqc module of the geochemical solver PHREEQC. The new coupling approach makes full use of advantages from both codes: OGS provides a flexible choice of different numerical approaches for simulation of water flow in the vadose zone such as the pressure-based or mixed forms of Richards equation; whereas the IPhreeqc module leads to a simplification of data storage and its communication with OGS, which greatly facilitates the coupling and code updating. Moreover, a parallelization scheme with MPI (Message Passing Interface) is applied, in which the computational task of water flow and mass transport is partitioned through domain decomposition, whereas the efficient parallelization of geochemical reactions is achieved by smart allocation of computational workload over multiple compute nodes. The plausibility of the new coupling is verified by several benchmark tests. In addition, the efficiency of the new coupling approach is demonstrated by its application in a large scale scenario, in which the environmental fate of pesticides in a complex soil-aquifer system is studied.
Reactive transport modeling in variably saturated porous media with OGS-IPhreeqc
NASA Astrophysics Data System (ADS)
He, W.; Beyer, C.; Fleckenstein, J. H.; Jang, E.; Kalbacher, T.; Shao, H.; Wang, W.; Kolditz, O.
2014-12-01
Worldwide, sustainable water resource management becomes an increasingly challenging task due to the growth of population and extensive applications of fertilizer in agriculture. Moreover, climate change causes further stresses to both water quantity and quality. Reactive transport modeling in the coupled soil-aquifer system is a viable approach to assess the impacts of different land use and groundwater exploitation scenarios on the water resources. However, the application of this approach is usually limited in spatial scale and to simplified geochemical systems due to the huge computational expense involved. Such computational expense is not only caused by solving the high non-linearity of the initial boundary value problems of water flow in the unsaturated zone numerically with rather fine spatial and temporal discretization for the correct mass balance and numerical stability, but also by the intensive computational task of quantifying geochemical reactions. In the present study, a flexible and efficient tool for large scale reactive transport modeling in variably saturated porous media and its applications are presented. The open source scientific software OpenGeoSys (OGS) is coupled with the IPhreeqc module of the geochemical solver PHREEQC. The new coupling approach makes full use of advantages from both codes: OGS provides a flexible choice of different numerical approaches for simulation of water flow in the vadose zone such as the pressure-based or mixed forms of Richards equation; whereas the IPhreeqc module leads to a simplification of data storage and its communication with OGS, which greatly facilitates the coupling and code updating. Moreover, a parallelization scheme with MPI (Message Passing Interface) is applied, in which the computational task of water flow and mass transport is partitioned through domain decomposition, whereas the efficient parallelization of geochemical reactions is achieved by smart allocation of computational workload over multiple compute nodes. The plausibility of the new coupling is verified by several benchmark tests. In addition, the efficiency of the new coupling approach is demonstrated by its application in a large scale scenario, in which the environmental fate of pesticides in a complex soil-aquifer system is studied.
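The MPI parallelization pattern described in these two records (domain decomposition for transport, per-cell chemistry solved locally, results gathered globally) can be sketched with mpi4py as below. The cell count, the placeholder "chemistry" function, and the script name are illustrative; this is not the OGS or PHREEQC code.

```python
# Domain-decomposition sketch with mpi4py.
# Run with something like: mpiexec -n 4 python reactive_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_cells = 1000                                            # total grid cells
local = np.array_split(np.arange(n_cells), size)[rank]    # this rank's cells

def solve_chemistry(cell_ids):
    # placeholder for the per-cell geochemical reaction step
    return np.sin(cell_ids * 0.01) ** 2

local_mass = solve_chemistry(local).sum()
total_mass = comm.allreduce(local_mass, op=MPI.SUM)       # global summary
if rank == 0:
    print(f"{size} ranks, total reacted mass ~ {total_mass:.3f}")
```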
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasenkamp, Daren; Sim, Alexander; Wehner, Michael
Extensive computing power has been used to tackle issues such as climate change, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many of the data analysis programs currently run on only a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, while we provide the coordination and data transfer services to ensure the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of the cloud computing systems and has revealed a number of weaknesses in the current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup that is comparable to running the same analysis task using MPI. However, compared to MPI-based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure; as long as a single VM is running, it can make progress, whereas the MPI analysis job fails as soon as one node fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.
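The resilience argument above, that an embarrassingly parallel analysis can simply retry a failed worker rather than abort the whole job, is illustrated by the local sketch below using process workers in place of VMs. The per-chunk analysis and failure rate are illustrative assumptions.

```python
# Task-farm sketch with per-task retry instead of whole-job failure.
from concurrent.futures import ProcessPoolExecutor, as_completed
import random

def analyze(chunk_id):
    if random.random() < 0.1:                 # simulate a flaky worker/VM
        raise RuntimeError(f"chunk {chunk_id} failed")
    return chunk_id, sum(i * i for i in range(10_000))

def run(chunks, retries=2):
    results, attempts = {}, {c: 0 for c in chunks}
    with ProcessPoolExecutor(max_workers=4) as pool:
        pending = set(chunks)
        while pending:
            futures = {pool.submit(analyze, c): c for c in pending}
            pending = set()
            for fut in as_completed(futures):
                c = futures[fut]
                try:
                    results[c] = fut.result()[1]
                except RuntimeError:
                    attempts[c] += 1
                    if attempts[c] <= retries:
                        pending.add(c)        # retry; the job keeps going
    return results

if __name__ == "__main__":
    print(len(run(range(20))), "chunks analyzed")
```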
Na, Y; Suh, T; Xing, L
2012-06-01
Multi-objective (MO) plan optimization entails generation of an enormous number of IMRT or VMAT plans constituting the Pareto surface, which presents a computationally challenging task. The purpose of this work is to overcome the hurdle by developing an efficient MO method using the emerging cloud computing platform. As the backbone of cloud computing for optimizing inverse treatment planning, Amazon Elastic Compute Cloud with a master node (17.1 GB memory, 2 virtual cores, 420 GB instance storage, 64-bit platform) is used. The master node is able to seamlessly scale a number of working group instances, called workers, based on user-defined settings that account for MO functions in the clinical setting. Each worker solved the objective function with an efficient sparse decomposition method. The workers are automatically terminated when their tasks are finished. The optimized plans are archived to the master node to generate the Pareto solution set. Three clinical cases have been planned using the developed MO IMRT and VMAT planning tools to demonstrate the advantages of the proposed method. The target dose coverage and critical structure sparing of plans obtained using the cloud computing platform are comparable to those obtained using a desktop PC (Intel Xeon® CPU 2.33GHz, 8GB memory). It is found that MO planning substantially speeds up the process of obtaining the Pareto set for both types of plans. The speedup scales approximately linearly with the number of nodes used for computing. With the use of N nodes, the computational time is reduced according to the fitted model 0.2 + 2.3/N, with r² > 0.99 on average across the cases, making real-time MO planning possible. A cloud computing infrastructure is developed for MO optimization. The algorithm substantially improves the speed of inverse plan optimization. The platform is valuable for both MO planning and future off- or on-line adaptive re-planning. © 2012 American Association of Physicists in Medicine.
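The final master-node step described above, collecting candidate plans from the workers and keeping only the non-dominated ones that form the Pareto set, can be sketched as follows. The two objectives (coverage to maximize, organ-at-risk dose to minimize) and the synthetic scores are illustrative assumptions.

```python
# Pareto-set extraction sketch over candidate plan scores.
import numpy as np

rng = np.random.default_rng(2)
plans = rng.random((50, 2))            # col 0: target coverage, col 1: OAR dose

def pareto_front(scores):
    keep = []
    for i, (cov_i, dose_i) in enumerate(scores):
        dominated = any(
            (cov_j >= cov_i and dose_j <= dose_i) and
            (cov_j > cov_i or dose_j < dose_i)
            for j, (cov_j, dose_j) in enumerate(scores) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

front = pareto_front(plans)
print(f"{len(front)} of {len(plans)} candidate plans are Pareto-optimal")
```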
ALMA Correlator Real-Time Data Processor
NASA Astrophysics Data System (ADS)
Pisano, J.; Amestica, R.; Perez, J.
2005-10-01
The design of a real-time Linux application utilizing Real-Time Application Interface (RTAI) to process real-time data from the radio astronomy correlator for the Atacama Large Millimeter Array (ALMA) is described. The correlator is a custom-built digital signal processor which computes the cross-correlation function of two digitized signal streams. ALMA will have 64 antennas with 2080 signal streams each with a sample rate of 4 giga-samples per second. The correlator's aggregate data output will be 1 gigabyte per second. The software is defined by hard deadlines with high input and processing data rates, while requiring interfaces to non real-time external computers. The designed computer system - the Correlator Data Processor or CDP, consists of a cluster of 17 SMP computers, 16 of which are compute nodes plus a master controller node all running real-time Linux kernels. Each compute node uses an RTAI kernel module to interface to a 32-bit parallel interface which accepts raw data at 64 megabytes per second in 1 megabyte chunks every 16 milliseconds. These data are transferred to tasks running on multiple CPUs in hard real-time using RTAI's LXRT facility to perform quantization corrections, data windowing, FFTs, and phase corrections for a processing rate of approximately 1 GFLOPS. Highly accurate timing signals are distributed to all seventeen computer nodes in order to synchronize them to other time-dependent devices in the observatory array. RTAI kernel tasks interface to the timing signals providing sub-millisecond timing resolution. The CDP interfaces, via the master node, to other computer systems on an external intra-net for command and control, data storage, and further data (image) processing. The master node accesses these external systems utilizing ALMA Common Software (ACS), a CORBA-based client-server software infrastructure providing logging, monitoring, data delivery, and intra-computer function invocation. The software is being developed in tandem with the correlator hardware which presents software engineering challenges as the hardware evolves. The current status of this project and future goals are also presented.
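The core computation a correlator performs, the cross-correlation of two digitized signal streams, can be sketched with FFTs as below. The sample count and injected delay are illustrative, and the real system's quantization and phase corrections are not shown.

```python
# FFT-based circular cross-correlation of two digitized streams.
import numpy as np

rng = np.random.default_rng(3)
n, true_lag = 4096, 37
x = rng.standard_normal(n)
y = np.roll(x, true_lag) + 0.1 * rng.standard_normal(n)   # delayed copy + noise

X = np.fft.rfft(x)
Y = np.fft.rfft(y)
xcorr = np.fft.irfft(X.conj() * Y, n)       # cross-correlation via FFTs
print("estimated lag:", int(np.argmax(xcorr)))            # ~ 37
```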
Staying Mindful in Action: The Challenge of "Double Awareness" on Task and Process in an Action Lab
ERIC Educational Resources Information Center
Svalgaard, Lotte
2016-01-01
Action Learning is a well-proven method to integrate "task" and "process", as learning about team and self (process) takes place while delivering on a task or business challenge of real importance (task). An Action Lab® is an intensive Action Learning programme lasting for 5 days, which aims at balancing and integrating…
Howard, Charla L; Perry, Bonnie; Chow, John W; Wallace, Chris; Stokic, Dobrivoje S
2017-11-01
Sensorimotor impairments after limb amputation impose a threat to stability. Commonly described strategies for maintaining stability are the posture first strategy (prioritization of balance) and posture second strategy (prioritization of concurrent tasks). The existence of these strategies was examined in 13 below-knee prosthesis users and 15 controls during dual-task standing under increasing postural and cognitive challenge by evaluating path length, 95% sway area, and anterior-posterior and medial-lateral amplitudes of the center of pressure. The subjects stood on two force platforms under usual (hard surface/eyes open) and difficult (soft surface/eyes closed) conditions, first alone and while performing a cognitive task without and then with instruction on cognitive prioritization. During standing alone, sway was not significantly different between groups. After adding the cognitive task without prioritization instruction, prosthesis users increased sway more under the dual-task than single-task standing (p ≤ 0.028) during both usual and difficult conditions, favoring the posture second strategy. Controls, however, reduced dual-task sway under a greater postural challenge (p ≤ 0.017), suggesting the posture first strategy. With prioritization of the cognitive task, sway was unchanged or reduced in prosthesis users, suggesting departure from the posture second strategy, whereas controls maintained the posture first strategy. Individual analysis of dual tasking revealed that greater postural demand in controls and greater cognitive challenge in prosthesis users led to both reduced sway and improved cognitive performance, suggesting cognitive-motor facilitation. Thus, activation of additional resources through increased alertness, rather than posture prioritization, may explain dual-task performance in both prosthesis users and controls under increasing postural and cognitive challenge.
Medial prefrontal cortex and the adaptive regulation of reinforcement learning parameters.
Khamassi, Mehdi; Enel, Pierre; Dominey, Peter Ford; Procyk, Emmanuel
2013-01-01
Converging evidence suggests that the medial prefrontal cortex (MPFC) is involved in feedback categorization, performance monitoring, and task monitoring, and may contribute to the online regulation of reinforcement learning (RL) parameters that would affect decision-making processes in the lateral prefrontal cortex (LPFC). Previous neurophysiological experiments have shown MPFC activity encoding error likelihood, uncertainty, and reward volatility, as well as neural responses categorizing different types of feedback, for instance, distinguishing between choice errors and execution errors. Rushworth and colleagues have proposed that the involvement of MPFC in tracking the volatility of the task could contribute to the regulation of one of the RL parameters, the learning rate. We extend this hypothesis by proposing that MPFC could contribute to the regulation of other RL parameters such as the exploration rate and default action values in the case of task shifts. Here, we analyze the sensitivity of behavioral performance to RL parameters in two monkey decision-making tasks, one with a deterministic reward schedule and the other with a stochastic one. We show that there exist optimal parameter values specific to each of these tasks, which need to be found for optimal performance and which are usually hand-tuned in computational models. In contrast, automatic online regulation of these parameters using some heuristics can help produce a good, although non-optimal, behavioral performance in each task. We finally describe our computational model of MPFC-LPFC interaction used for online regulation of the exploration rate and its application to a human-robot interaction scenario. There, unexpected uncertainties are produced by the human introducing cued task changes or by cheating. The model enables the robot to autonomously learn to reset exploration in response to such uncertain cues and events. The combined results provide concrete evidence specifying how prefrontal cortical subregions may cooperate to regulate RL parameters. They also show how such neurophysiologically inspired mechanisms can control advanced robots in the real world. Finally, the model's learning mechanisms that were challenged in the last robotic scenario provide testable predictions on the way monkeys may learn the structure of the task during the pretraining phase of the previous laboratory experiments. Copyright © 2013 Elsevier B.V. All rights reserved.
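To make the role of these parameters concrete, here is a minimal Python sketch of softmax action selection in which the exploration rate (inverse temperature) is adjusted online from a running estimate of feedback volatility; the update heuristic and reward schedule are illustrative assumptions, not the authors' model.

```python
import numpy as np

def softmax_choice(q_values, beta, rng):
    """Pick an action with softmax exploration; beta is the inverse temperature (exploration rate)."""
    p = np.exp(beta * (q_values - q_values.max()))
    p /= p.sum()
    return rng.choice(len(q_values), p=p)

def run(n_trials=500, alpha=0.2, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros(2)
    beta, volatility = 3.0, 0.0
    for t in range(n_trials):
        good = 0 if t < n_trials // 2 else 1          # task shift halfway through
        a = softmax_choice(q, beta, rng)
        reward = float(rng.random() < (0.8 if a == good else 0.2))
        rpe = reward - q[a]                           # reward prediction error
        q[a] += alpha * rpe                           # standard RL value update
        volatility += 0.1 * (abs(rpe) - volatility)   # running estimate of surprise
        beta = 3.0 / (1.0 + volatility)               # heuristic: high volatility -> more exploration
    return q, beta

print(run())
```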
29 CFR 778.313 - Computing overtime pay under the Act for employees compensated on task basis.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Computing overtime pay under the Act for employees compensated on task basis. 778.313 Section 778.313 Labor Regulations Relating to Labor (Continued) WAGE AND... TO REGULATIONS OVERTIME COMPENSATION Special Problems "Task" Basis of Payment § 778.313 Computing...
A Mathematical Model and Algorithm for Routing Air Traffic Under Weather Uncertainty
NASA Technical Reports Server (NTRS)
Sadovsky, Alexander V.
2016-01-01
A central challenge in managing today's commercial en route air traffic is the task of routing the aircraft in the presence of adverse weather. Such weather can make regions of the airspace unusable, so all affected flights must be re-routed. Today this task is carried out by conference and negotiation between human air traffic controllers (ATC) responsible for the involved sectors of the airspace. One can argue that, in so doing, ATC try to solve an optimization problem without giving it a precise quantitative formulation. Such a formulation gives the mathematical machinery for constructing and verifying algorithms that are aimed at solving the problem. This paper contributes one such formulation and a corresponding algorithm. The algorithm addresses weather uncertainty and has closed form, which allows transparent analysis of correctness, realism, and computational costs.
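The paper's own closed-form formulation is not reproduced here; purely as a generic illustration of routing around unusable airspace, the following Python sketch computes a shortest path on a grid whose weather-blocked cells are removed. The grid model is an assumption for illustration and differs from the paper's algorithm.

```python
import heapq

def route_around_weather(grid, start, goal):
    """Shortest path on a 4-connected grid; cells marked 1 are weather-blocked."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal
    while node in prev or node == start:   # walk back from the goal to the start
        path.append(node)
        if node == start:
            break
        node = prev[node]
    return path[::-1] if path and path[-1] == start else []

weather = [[0, 0, 0, 0],
           [0, 1, 1, 0],
           [0, 0, 1, 0],
           [0, 0, 0, 0]]
print(route_around_weather(weather, (0, 0), (3, 3)))
```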
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollman, David; Lifflander, Jonathon; Wilke, Jeremiah
2017-03-14
DARMA is a portability layer for asynchronous many-task (AMT) runtime systems. AMT runtime systems show promise to mitigate challenges imposed by next generation high performance computing architectures. However, current runtime system technologies are not production-ready. DARMA is a portability layer that seeks to insulate application developers from idiosyncrasies of individual runtime systems, thereby facilitating application-developer use of these technologies. DARMA comprises a frontend application programming interface (API) for application developers, a backend API for runtime system developers, and a translation layer that translates frontend API calls into backend API calls. Application developers use C++ abstractions to annotate both data and tasks in their code. The DARMA translation layer uses C++ template metaprogramming to capture data-task dependencies, and provides this information to a potential backend runtime system via a series of backend API calls.
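DARMA itself is C++, and its actual API is not reproduced here; purely as an illustration of the underlying idea, the following Python sketch shows how annotating tasks with the data handles they read and write lets a runtime derive an execution order automatically. All names are hypothetical and the scheduler is deliberately naive.

```python
class MiniRuntime:
    """Toy AMT-style runtime: tasks declare read/write handles; execution order is derived from them."""
    def __init__(self):
        self.tasks = []                 # list of (function, dependency-set)
        self.last_writer = {}           # handle -> index of last task that wrote it

    def create_task(self, fn, reads=(), writes=()):
        deps = {self.last_writer[h] for h in (*reads, *writes) if h in self.last_writer}
        idx = len(self.tasks)
        self.tasks.append((fn, deps))
        for h in writes:
            self.last_writer[h] = idx
        return idx

    def run(self):
        done = set()
        pending = list(range(len(self.tasks)))
        while pending:                  # naive scheduler: run any task whose dependencies are done
            for idx in list(pending):
                fn, deps = self.tasks[idx]
                if deps <= done:
                    fn()
                    done.add(idx)
                    pending.remove(idx)

rt = MiniRuntime()
data = {}
rt.create_task(lambda: data.update(a=1), writes=("a",))
rt.create_task(lambda: data.update(b=data["a"] + 1), reads=("a",), writes=("b",))
rt.run()
print(data)
```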
Digital Model-Based Engineering: Expectations, Prerequisites, and Challenges of Infusion
NASA Technical Reports Server (NTRS)
Hale, J. P.; Zimmerman, P.; Kukkala, G.; Guerrero, J.; Kobryn, P.; Puchek, B.; Bisconti, M.; Baldwin, C.; Mulpuri, M.
2017-01-01
Digital model-based engineering (DMbE) is the use of digital artifacts, digital environments, and digital tools in the performance of engineering functions. DMbE is intended to allow an organization to progress from documentation-based engineering methods to digital methods that may provide greater flexibility, agility, and efficiency. The term 'DMbE' was developed as part of an effort by the Model-Based Systems Engineering (MBSE) Infusion Task team to identify what government organizations might expect in the course of moving to or infusing MBSE into their organizations. The Task team was established by the Interagency Working Group on Engineering Complex Systems, an informal collaboration among government systems engineering organizations. This Technical Memorandum (TM) discusses the work of the MBSE Infusion Task team to date. The Task team identified prerequisites, expectations, initial challenges, and recommendations for areas of study to pursue, as well as examples of efforts already in progress. The team identified the following five expectations associated with DMbE infusion, discussed further in this TM: (1) Informed decision making through increased transparency and greater insight. (2) Enhanced communication. (3) Increased understanding for greater flexibility/adaptability in design. (4) Increased confidence that the capability will perform as expected. (5) Increased efficiency. The team identified the following seven challenges an organization might encounter when looking to infuse DMbE: (1) Assessing value added to the organization. Not all DMbE practices will be applicable to every situation in every organization, and not all implementations will have positive results. (2) Overcoming organizational and cultural hurdles. (3) Adopting contractual practices and technical data management. (4) Redefining configuration management. The DMbE environment changes the range of configuration information to be managed to include performance and design models, database objects, as well as more traditional book-form objects and formats. (5) Developing information technology (IT) infrastructure. Approaches to implementing critical, enabling IT infrastructure capabilities must be flexible, reconfigurable, and updatable. (6) Ensuring security of the single source of truth. (7) Potential overreliance on quantitative data over qualitative data. Executable/computational models and simulations generally incorporate and generate quantitative rather than qualitative data. The Task team also developed several recommendations for government, academia, and industry, as discussed in this TM. The Task team recommends continuing beyond this initial work to further develop the means of implementing DMbE and to look for opportunities to collaborate and share best practices.
Reinforcement learning in computer vision
NASA Astrophysics Data System (ADS)
Bernstein, A. V.; Burnaev, E. V.
2018-04-01
Nowadays, machine learning has become one of the basic technologies used in solving various computer vision tasks such as feature detection, image segmentation, object recognition and tracking. In many applications, various complex systems such as robots are equipped with visual sensors from which they learn state of surrounding environment by solving corresponding computer vision tasks. Solutions of these tasks are used for making decisions about possible future actions. It is not surprising that when solving computer vision tasks we should take into account special aspects of their subsequent application in model-based predictive control. Reinforcement learning is one of modern machine learning technologies in which learning is carried out through interaction with the environment. In recent years, Reinforcement learning has been used both for solving such applied tasks as processing and analysis of visual information, and for solving specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper describes shortly the Reinforcement learning technology and its use for solving computer vision problems.
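As a minimal, generic illustration of the reinforcement learning loop mentioned above (not tied to any particular vision task), the following Python sketch implements tabular Q-learning on a toy chain environment; the environment and hyperparameters are assumptions for illustration.

```python
import random

def q_learning(n_states=5, n_actions=2, episodes=200, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a toy chain: action 1 moves right, action 0 moves left."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:                       # terminal state at the right end
            a = random.randrange(n_actions) if random.random() < eps else max(range(n_actions), key=lambda x: Q[s][x])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0     # reward only on reaching the goal
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

print(q_learning())
```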
Artificial intelligence and design: Opportunities, research problems and directions
NASA Technical Reports Server (NTRS)
Amarel, Saul
1990-01-01
The issues of industrial productivity and economic competitiveness are of major significance in the U.S. at present. By advancing the science of design, and by creating a broad computer-based methodology for automating the design of artifacts and of industrial processes, we can attain dramatic improvements in productivity. It is our thesis that developments in computer science, especially in Artificial Intelligence (AI) and in related areas of advanced computing, provide us with a unique opportunity to push beyond the present level of computer aided automation technology and to attain substantial advances in the understanding and mechanization of design processes. To attain these goals, we need to build on top of the present state of AI, and to accelerate research and development in areas that are especially relevant to design problems of realistic complexity. We propose an approach to the special challenges in this area, which combines 'core work' in AI with the development of systems for handling significant design tasks. We discuss the general nature of design problems, the scientific issues involved in studying them with the help of AI approaches, and the methodological/technical issues that one must face in developing AI systems for handling advanced design tasks. Looking at basic work in AI from the perspective of design automation, we identify a number of research problems that need special attention. These include finding solution methods for handling multiple interacting goals, formation problems, problem decompositions, and redesign problems; choosing representations for design problems with emphasis on the concept of a design record; and developing approaches for the acquisition and structuring of domain knowledge with emphasis on finding useful approximations to domain theories. Progress in handling these research problems will have major impact both on our understanding of design processes and their automation, and also on several fundamental questions that are of intrinsic concern to AI. We present examples of current AI work on specific design tasks, and discuss new directions of research, both as extensions of current work and in the context of new design tasks where domain knowledge is either intractable or incomplete. The domains discussed include Digital Circuit Design, Mechanical Design of Rotational Transmissions, Design of Computer Architectures, Marine Design, Aircraft Design, and Design of Chemical Processes and Materials. Work in these domains is significant on technical grounds, and it is also important for economic and policy reasons.
Amplify scientific discovery with artificial intelligence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gil, Yolanda; Greaves, Mark T.; Hendler, James
Computing innovations have fundamentally changed many aspects of scientific inquiry. For example, advances in robotics, high-end computing, networking, and databases now underlie much of what we do in science such as gene sequencing, general number crunching, sharing information between scientists, and analyzing large amounts of data. As computing has evolved at a rapid pace, so too has its impact in science, with the most recent computing innovations repeatedly being brought to bear to facilitate new forms of inquiry. Recently, advances in Artificial Intelligence (AI) have deeply penetrated many consumer sectors, including for example Apple’s Siri™ speech recognition system, real-time automated language translation services, and a new generation of self-driving cars and self-navigating drones. However, AI has yet to achieve comparable levels of penetration in scientific inquiry, despite its tremendous potential in aiding computers to help scientists tackle tasks that require scientific reasoning. We contend that advances in AI will transform the practice of science as we are increasingly able to effectively and jointly harness human and machine intelligence in the pursuit of major scientific challenges.
Multi-Attribute Task Battery - Applications in pilot workload and strategic behavior research
NASA Technical Reports Server (NTRS)
Arnegard, Ruth J.; Comstock, J. R., Jr.
1991-01-01
The Multi-Attribute Task (MAT) Battery provides a benchmark set of tasks for use in a wide range of lab studies of operator performance and workload. The battery incorporates tasks analogous to activities that aircraft crewmembers perform in flight, while providing a high degree of experimenter control, performance data on each subtask, and freedom to nonpilot test subjects. Features not found in existing computer based tasks include an auditory communication task (to simulate Air Traffic Control communication), a resource management task permitting many avenues or strategies of maintaining target performance, a scheduling window which gives the operator information about future task demands, and the option of manual or automated control of tasks. Performance data are generated for each subtask. In addition, the task battery may be paused and onscreen workload rating scales presented to the subject. The MAT Battery requires a desktop computer with color graphics. The communication task requires a serial link to a second desktop computer with a voice synthesizer or digitizer card.
The multi-attribute task battery for human operator workload and strategic behavior research
NASA Technical Reports Server (NTRS)
Comstock, J. Raymond, Jr.; Arnegard, Ruth J.
1992-01-01
The Multi-Attribute Task (MAT) Battery provides a benchmark set of tasks for use in a wide range of lab studies of operator performance and workload. The battery incorporates tasks analogous to activities that aircraft crewmembers perform in flight, while providing a high degree of experimenter control, performance data on each subtask, and freedom to use nonpilot test subjects. Features not found in existing computer based tasks include an auditory communication task (to simulate Air Traffic Control communication), a resource management task permitting many avenues or strategies of maintaining target performance, a scheduling window which gives the operator information about future task demands, and the option of manual or automated control of tasks. Performance data are generated for each subtask. In addition, the task battery may be paused and onscreen workload rating scales presented to the subject. The MAT Battery requires a desktop computer with color graphics. The communication task requires a serial link to a second desktop computer with a voice synthesizer or digitizer card.
Ergonomic assessment for the task of repairing computers in a manufacturing company: A case study.
Maldonado-Macías, Aidé; Realyvásquez, Arturo; Hernández, Juan Luis; García-Alcaraz, Jorge
2015-01-01
Manufacturing industry workers who repair computers may be exposed to ergonomic risk factors. This project analyzes the tasks involved in the computer repair process to (1) find the risk level for musculoskeletal disorders (MSDs) and (2) propose ergonomic interventions to address any ergonomic issues. Work procedures and main body postures were video-recorded and analyzed using task analysis, the Rapid Entire Body Assessment (REBA) postural method, and biomechanical analysis. High risk for MSDs was found for every subtask using REBA. Although biomechanical analysis found an acceptable mass center displacement during tasks, a hazardous level of compression on the lower back during transportation of the computer was detected. This assessment found ergonomic risks mainly in the trunk, arm/forearm, and legs; the neck and hand/wrist were also compromised. Opportunities for ergonomic analyses and interventions in the design and execution of computer repair tasks are discussed.
Scoring functions for protein-protein interactions.
Moal, Iain H; Moretti, Rocco; Baker, David; Fernández-Recio, Juan
2013-12-01
The computational evaluation of protein-protein interactions will play an important role in organising the wealth of data being generated by high-throughput initiatives. Here we discuss future applications, report recent developments and identify areas requiring further investigation. Many functions have been developed to quantify the structural and energetic properties of interacting proteins, finding use in interrelated challenges revolving around the relationship between sequence, structure and binding free energy. These include loop modelling, side-chain refinement, docking, multimer assembly, affinity prediction, affinity change upon mutation, hotspots location and interface design. Information derived from models optimised for one of these challenges can be used to benefit the others, and can be unified within the theoretical frameworks of multi-task learning and Pareto-optimal multi-objective learning. Copyright © 2013 Elsevier Ltd. All rights reserved.
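The Pareto-optimal multi-objective framing mentioned at the end can be made concrete with a small sketch: given candidate scoring functions evaluated on several objectives, the Pareto-optimal set contains those not dominated on all objectives. The objective values below are made up for illustration.

```python
def pareto_front(points):
    """Return the subset of points (lower is better on every objective) not dominated by any other point."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (docking_error, affinity_error) scores for five candidate scoring functions
candidates = [(0.9, 0.2), (0.5, 0.5), (0.2, 0.9), (0.6, 0.6), (0.95, 0.3)]
print(pareto_front(candidates))   # front: (0.9, 0.2), (0.5, 0.5), (0.2, 0.9); the other two are dominated
```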
WE-D-303-01: Development and Application of Digital Human Phantoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Segars, P.
2015-06-15
Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters, and anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate and compare various new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within simulation, the computerized phantom provides a virtual model of the patient’s anatomy and physiology. Imaging data can be generated from it as if it was a live patient using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise in solving some of the most complex problems in modern medical physics. In this proposed symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: (1) Understand the need and requirements of computational phantoms in medical physics research; (2) Discuss the developments and applications of computational phantoms; (3) Know the promises and limitations of computational phantoms in solving complex problems.
Porter, Stephen C; Guo, Chao-Yu; Bacic, Janine; Chan, Eugenia
2011-01-26
Health care systems increasingly rely on patients' data entry efforts to organize and assist in care delivery through health information exchange. We sought to determine (1) the variation in burden imposed on parents by data entry efforts across paper-based and computer-based environments, and (2) the impact, if any, of parents' health literacy on the task burden. We completed a randomized controlled trial of parent-completed data entry tasks. Parents of children with attention deficit hyperactivity disorder (ADHD) were randomized based on the Test of Functional Health Literacy in Adults (TOFHLA) to either a paper-based or computer-based environment for entry of health information on their children. The primary outcome was the National Aeronautics and Space Administration Task Load Index (TLX) total weighted score. We screened 271 parents: 194 (71.6%) were eligible, and 180 of these (92.8%) constituted the study cohort. We analyzed 90 participants from each arm. Parents who completed information tasks on paper reported a higher task burden than those who worked in the computer environment: mean (SD) TLX scores were 22.8 (20.6) for paper and 16.3 (16.1) for computer. Assignment to the paper environment conferred a significant risk of higher task burden (F(1,178) = 4.05, P = .046). Adequate literacy was associated with lower task burden (decrease in burden score of 1.15 SD, P = .003). After adjusting for relevant child and parent factors, parents' TOFHLA score (beta = -.02, P = .02) and task environment (beta = .31, P = .03) remained significantly associated with task burden. A tailored computer-based environment provided an improved task experience for data entry compared to the same tasks completed on paper. Health literacy was inversely related to task burden.
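For readers unfamiliar with the outcome measure, the NASA-TLX total weighted score is conventionally computed by rating six subscales (0 to 100) and weighting each by the number of times it is chosen in 15 pairwise comparisons. The sketch below uses made-up ratings and weights and is not data from this study.

```python
def tlx_weighted_score(ratings, weights):
    """NASA-TLX total weighted score: sum(rating * weight) / 15, with weights summing to 15."""
    assert len(ratings) == len(weights) == 6
    assert sum(weights) == 15, "weights come from 15 pairwise comparisons"
    return sum(r * w for r, w in zip(ratings, weights)) / 15.0

# Hypothetical ratings for: mental, physical, temporal demand, performance, effort, frustration
ratings = [60, 20, 45, 30, 55, 40]
weights = [4, 1, 3, 2, 3, 2]          # tallies from the pairwise comparisons
print(tlx_weighted_score(ratings, weights))   # ~46.7
```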
Barbieri, Dechristian França; Srinivasan, Divya; Mathiassen, Svend Erik; Nogueira, Helen Cristina; Oliveira, Ana Beatriz
2015-01-01
Postures and muscle activity in the upper body were recorded from 50 academic office workers during 2 hours of normal work, categorised by observation into computer work (CW) and three non-computer (NC) tasks (NC seated work, NC standing/walking work and breaks). NC tasks differed significantly in exposures from CW, with standing/walking NC tasks representing the largest contrasts for most of the exposure variables. For the majority of workers, exposure variability was larger in their present job than in CW alone, as measured by the job variance ratio (JVR), i.e. the ratio between min-min variabilities in the job and in CW. Calculations of JVRs for simulated jobs containing different proportions of CW showed that variability could, indeed, be increased by redistributing available tasks, but that substantial increases could only be achieved by introducing more vigorous tasks in the job, in this case illustrated by cleaning.
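A hedged sketch of the job variance ratio as defined above (between-minute variability of an exposure in the whole job divided by that in computer work alone); the exposure values below are synthetic and the minute-level aggregation is an assumption for illustration.

```python
import numpy as np

def job_variance_ratio(exposure, is_cw, minutes):
    """JVR: minute-to-minute variability in the whole job divided by that in computer work (CW) alone."""
    exposure, is_cw, minutes = map(np.asarray, (exposure, is_cw, minutes))
    def between_minute_sd(mask):
        means = [exposure[(minutes == m) & mask].mean() for m in np.unique(minutes[mask])]
        return np.std(means, ddof=1)
    return between_minute_sd(np.ones_like(is_cw, dtype=bool)) / between_minute_sd(is_cw)

# Synthetic example: 120 one-minute samples; the last 30 minutes are non-computer tasks with higher exposure
rng = np.random.default_rng(1)
minutes = np.arange(120)
is_cw = minutes < 90
exposure = np.where(is_cw, rng.normal(10, 1, 120), rng.normal(25, 5, 120))
print(job_variance_ratio(exposure, is_cw, minutes))   # > 1 means the job is more variable than CW alone
```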
Tommasino, Paolo; Campolo, Domenico
2017-01-01
A major challenge in robotics and computational neuroscience concerns the posture/movement problem in the presence of kinematic redundancy. We recently addressed this issue using a principled approach which, in conjunction with nonlinear inverse optimization, allowed us to capture postural strategies such as Donders' law. In this work, after presenting this general model and specifying it as an extension of the Passive Motion Paradigm, we show how, once fitted to capture experimental postural strategies, the model is actually able to also predict movements. More specifically, the passive motion paradigm embeds two main intrinsic components: joint damping and joint stiffness. In previous work we showed that joint stiffness is responsible for static postures and, in this sense, its parameters are regressed to fit to experimental postural strategies. Here, we show how joint damping, in particular its anisotropy, directly affects task-space movements. Rather than using damping parameters to fit a posteriori task-space motions, we make the a priori hypothesis that damping is proportional to stiffness. This remarkably allows a postural-fitted model to also capture dynamic performance such as curvature and hysteresis of task-space trajectories during wrist pointing tasks, confirming and extending previous findings in the literature. PMID:29249954
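The damping-proportional-to-stiffness hypothesis can be sketched in a few lines: with joint stiffness K and damping D = cK, a first-order passive-motion-style update drives the arm toward a target through the Jacobian transpose. This toy 2-link example is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def wrist_like_reach(q0, target, steps=200, dt=0.01, c=0.5):
    """Toy 2-link chain: joint velocities follow D^-1 * J^T * K_x * (x_target - x), with D = c * K."""
    K = np.diag([30.0, 10.0])           # anisotropic joint stiffness (arbitrary values)
    D = c * K                           # a priori hypothesis: damping proportional to stiffness
    Kx = 50.0                           # task-space attraction gain
    q = np.array(q0, dtype=float)
    L1 = L2 = 1.0
    traj = []
    for _ in range(steps):
        x = np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                      L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])
        J = np.array([[-L1 * np.sin(q[0]) - L2 * np.sin(q[0] + q[1]), -L2 * np.sin(q[0] + q[1])],
                      [ L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),  L2 * np.cos(q[0] + q[1])]])
        qdot = np.linalg.solve(D, J.T @ (Kx * (np.asarray(target) - x)))
        q += dt * qdot
        traj.append(x)
    return np.array(traj)

path = wrist_like_reach(q0=[0.3, 0.8], target=[1.2, 1.0])
print(path[-1])   # end-effector settles near the target; anisotropic D curves the path
```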
Multi-Source Multi-Target Dictionary Learning for Prediction of Cognitive Decline.
Zhang, Jie; Li, Qingyang; Caselli, Richard J; Thompson, Paul M; Ye, Jieping; Wang, Yalin
2017-06-01
Alzheimer's Disease (AD) is the most common type of dementia. Identifying correct biomarkers may determine pre-symptomatic AD subjects and enable early intervention. Recently, Multi-task sparse feature learning has been successfully applied to many computer vision and biomedical informatics researches. It aims to improve the generalization performance by exploiting the shared features among different tasks. However, most of the existing algorithms are formulated as a supervised learning scheme. Its drawback is with either insufficient feature numbers or missing label information. To address these challenges, we formulate an unsupervised framework for multi-task sparse feature learning based on a novel dictionary learning algorithm. To solve the unsupervised learning problem, we propose a two-stage Multi-Source Multi-Target Dictionary Learning (MMDL) algorithm. In stage 1, we propose a multi-source dictionary learning method to utilize the common and individual sparse features in different time slots. In stage 2, supported by a rigorous theoretical analysis, we develop a multi-task learning method to solve the missing label problem. Empirical studies on an N = 3970 longitudinal brain image data set, which involves 2 sources and 5 targets, demonstrate the improved prediction accuracy and speed efficiency of MMDL in comparison with other state-of-the-art algorithms.
Devi, D Chitra; Uthariaraj, V Rhymend
2016-01-01
Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of the nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint and hence it has to be assigned to the most appropriate VMs at the initial placement itself. Practically, the arrived jobs consist of multiple interdependent tasks and they may execute the independent tasks in multiple VMs or in the same VM's multiple cores. Also, the jobs arrive during the run time of the server in varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make the cloud computing more efficient and thus it improves the user satisfaction. Objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. Performance of the proposed algorithm is studied by comparing with the existing methods.
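As a hedged illustration of the initial-placement idea described above (not the authors' algorithm), the following Python sketch greedily assigns non-preemptive tasks to the VM that yields the earliest estimated completion time, taking per-VM capability and task length into account; interdependency handling is omitted for brevity, and all names and numbers are hypothetical.

```python
def schedule(tasks, vm_speeds):
    """Greedy min-completion-time placement: tasks = [(name, length_mi)], vm_speeds = [MIPS per VM]."""
    finish = [0.0] * len(vm_speeds)                             # current finish time of each VM
    placement = {}
    for name, length in sorted(tasks, key=lambda t: -t[1]):     # place the longest tasks first
        best = min(range(len(vm_speeds)),
                   key=lambda v: finish[v] + length / vm_speeds[v])
        finish[best] += length / vm_speeds[best]
        placement[name] = best
    return placement, finish

tasks = [("t1", 4000), ("t2", 1000), ("t3", 2500), ("t4", 3000)]   # task lengths in million instructions
vm_speeds = [1000, 500]                                            # VM capabilities in MIPS
print(schedule(tasks, vm_speeds))
```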
What Machines Need to Learn to Support Human Problem-Solving
NASA Technical Reports Server (NTRS)
Vera, Alonso
2017-01-01
In the development of intelligent systems that interact with humans, there is often confusion between how the system functions with respect to the humans it interacts with and how it interfaces with those humans. The former is a much deeper challenge than the latter: it requires a system-level understanding of evolving human roles as well as an understanding of what humans need to know (and when) in order to perform their tasks. This talk will focus on some of the challenges in getting this right as well as on the type of research and development that results in successful human-autonomy teaming. Brief Bio: Dr. Alonso Vera is Chief of the Human Systems Integration Division at NASA Ames Research Center. His expertise is in human-computer interaction, information systems, artificial intelligence, and computational human performance modeling. He has led the design, development and deployment of mission software systems across NASA robotic and human space flight missions, including Mars Exploration Rovers, Phoenix Mars Lander, ISS, Constellation, and Exploration Systems. Dr. Vera received a Bachelor of Science with First Class Honors from McGill University in 1985 and a Ph.D. from Cornell University in 1991. He went on to a Post-Doctoral Fellowship in the School of Computer Science at Carnegie Mellon University from 1990-93.
Women and Computers: Effects of Stereotype Threat on Attribution of Failure
ERIC Educational Resources Information Center
Koch, Sabine C.; Muller, Stephanie M.; Sieverding, Monika
2008-01-01
This study investigated whether stereotype threat can influence women's attributions of failure in a computer task. Male and female college-age students (n = 86, 16-21 years old) from Germany were asked to work on a computer task and were hinted beforehand that in this task, either (a) men usually perform better than women do (negative threat…
ERIC Educational Resources Information Center
Judd, Terry; Kennedy, Gregor
2011-01-01
Logs of on-campus computer and Internet usage were used to conduct a study of computer-based task switching and multitasking by undergraduate medical students. A detailed analysis of over 6000 individual sessions revealed that while a majority of students engaged in both task switching and multitasking behaviours, they did so less frequently than…
ERIC Educational Resources Information Center
Sesen, Burcin Acar
2013-01-01
The purpose of this study was to investigate pre-service science teachers' understanding of surface tension, cohesion and adhesion forces by using computer-mediated predict-observe-explain tasks. 22 third-year pre-service science teachers participated in this study. Three computer-mediated predict-observe-explain tasks were developed and applied…
ERIC Educational Resources Information Center
Roche, Anne; Clarke, Doug; Sullivan, Peter; Cheeseman, Jill
2013-01-01
This article promotes the use of mathematically appropriate, engaging and challenging tasks to support learning that is worthwhile. The authors share insights from a three-lesson design experiment and the three tasks along with the results from their implementation are explored.
Conceptual Transformation and Cognitive Processes in Origami Paper Folding
ERIC Educational Resources Information Center
Tenbrink, Thora; Taylor, Holly A.
2015-01-01
Research on problem solving typically does not address tasks that involve following detailed and/or illustrated step-by-step instructions. Such tasks are not seen as cognitively challenging problems to be solved. In this paper, we challenge this assumption by analyzing verbal protocols collected during an Origami folding task. Participants…
Report of the Task Force on Computer Charging.
ERIC Educational Resources Information Center
Computer Co-ordination Group, Ottawa (Ontario).
The objectives of the Task Force on Computer Charging as approved by the Committee of Presidents of Universities of Ontario were: (1) to identify alternative methods of costing computing services; (2) to identify alternative methods of pricing computing services; (3) to develop guidelines for the pricing of computing services; (4) to identify…
Energy Efficiency in Public Buildings through Context-Aware Social Computing
García, Óscar; Alonso, Ricardo S.; Prieto, Javier; Corchado, Juan M.
2017-01-01
The challenge of promoting behavioral changes in users that leads to energy savings in public buildings has become a complex task requiring the involvement of multiple technologies. Wireless sensor networks have a great potential for the development of tools, such as serious games, that encourage acquiring good energy and healthy habits among users in the workplace. This paper presents the development of a serious game using CAFCLA, a framework that allows for integrating multiple technologies, which provide both context-awareness and social computing. Game development has shown that the data provided by sensor networks encourage users to reduce energy consumption in their workplace and that social interactions and competitiveness allow for accelerating the achievement of good results and behavioral changes that favor energy savings. PMID:28398237
Data Reprocessing on Worldwide Distributed Systems
NASA Astrophysics Data System (ADS)
Wicke, Daniel
The DØ experiment faces many challenges in terms of enabling access to large datasets for physicists on four continents. The strategy for solving these problems on worldwide distributed computing clusters is presented. Since the beginning of Run II of the Tevatron (March 2001) all Monte-Carlo simulations for the experiment have been produced at remote systems. For data analysis, a system of regional analysis centers (RACs) was established which supply the associated institutes with the data. This structure, which is similar to the tiered structure foreseen for the LHC was used in Fall 2003 to reprocess all DØ data with a much improved version of the reconstruction software. This makes DØ the first running experiment that has implemented and operated all important computing tasks of a high energy physics experiment on systems distributed worldwide.
Parallel Distributed Processing Theory in the Age of Deep Networks.
Bowers, Jeffrey S
2017-12-01
Parallel distributed processing (PDP) models in psychology are the precursors of deep networks used in computer science. However, only PDP models are associated with two core psychological claims, namely that all knowledge is coded in a distributed format and cognition is mediated by non-symbolic computations. These claims have long been debated in cognitive science, and recent work with deep networks speaks to this debate. Specifically, single-unit recordings show that deep networks learn units that respond selectively to meaningful categories, and researchers are finding that deep networks need to be supplemented with symbolic systems to perform some tasks. Given the close links between PDP and deep networks, it is surprising that research with deep networks is challenging PDP theory. Copyright © 2017. Published by Elsevier Ltd.
Mining Software Usage with the Automatic Library Tracking Database (ALTD)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadri, Bilel; Fahey, Mark R
2013-01-01
Tracking software usage is important for HPC centers, computer vendors, code developers and funding agencies to provide more efficient and targeted software support, and to forecast needs and guide HPC software effort towards the Exascale era. However, accurately tracking software usage on HPC systems has been a challenging task. In this paper, we present a tool called Automatic Library Tracking Database (ALTD) that has been developed and put in production on several Cray systems. The ALTD infrastructure prototype automatically and transparently stores information about libraries linked into an application at compilation time and also the executables launched in a batch job. We will illustrate the usage of libraries, compilers and third party software applications on a system managed by the National Institute for Computational Sciences.
Comparison between genetic algorithm and self organizing map to detect botnet network traffic
NASA Astrophysics Data System (ADS)
Yugandhara Prabhakar, Shinde; Parganiha, Pratishtha; Madhu Viswanatham, V.; Nirmala, M.
2017-11-01
In the cyber security world, botnet attacks are increasing, and detecting botnets is a challenging task. A botnet is a group of computers connected in a coordinated fashion to carry out malicious activities. Many techniques have been developed and used to detect and prevent botnet traffic and attacks. In this paper, a comparative study is done on the Genetic Algorithm (GA) and the Self Organizing Map (SOM) to detect botnet network traffic. Both are soft computing techniques and are used in this paper as data analytics systems. GA is based on the natural evolution process, and SOM is a type of Artificial Neural Network that uses unsupervised learning techniques. SOM uses neurons and classifies the data according to the neurons. A sample of the KDD99 dataset is used as input to GA and SOM.
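A minimal self-organizing map sketch in Python (numpy only), trained on synthetic two-feature traffic records; this illustrates the SOM side of the comparison only, and the data, map size, and learning schedule are assumptions, not the study's setup.

```python
import numpy as np

def train_som(data, grid=(5, 5), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small SOM: each grid node's weight vector is pulled toward samples near its best-matching unit."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.random((rows, cols, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (rows, cols))
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]      # neighbourhood function around the BMU
        weights += lr * h * (x - weights)
    return weights

# Synthetic "normal" vs "botnet-like" records with two features (e.g., packet rate, connection count)
rng = np.random.default_rng(1)
normal = rng.normal([0.2, 0.2], 0.05, (200, 2))
botnet = rng.normal([0.8, 0.8], 0.05, (200, 2))
som = train_som(np.vstack([normal, botnet]))
print(som.shape)   # (5, 5, 2): the two traffic clusters map to different regions of the grid
```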
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCarthy, J.M.
The theory and methodology of design of general-purpose machines that may be controlled by a computer to perform all the tasks of a set of special-purpose machines is the focus of modern machine design research. These seventeen contributions chronicle recent activity in the analysis and design of robot manipulators that are the prototype of these general-purpose machines. They focus particularly on kinematics, the geometry of rigid-body motion, which is an integral part of machine design theory. The challenges to kinematics researchers presented by general-purpose machines such as the manipulator are leading to new perspectives in the design and control of simpler machines with two, three, and more degrees of freedom. Researchers are rethinking the uses of gear trains, planar mechanisms, adjustable mechanisms, and computer controlled actuators in the design of modern machines.
Ogawa, K
1992-01-01
This paper proposes a new evaluation and prediction method for computer usability. This method is based on our two previously proposed information transmission measures created from a human-to-computer information transmission model. The model has three information transmission levels: the device, software, and task content levels. Two measures, called the device independent information measure (DI) and the computer independent information measure (CI), defined on the software and task content levels respectively, are given as the amount of information transmitted. Two information transmission rates are defined as DI/T and CI/T, where T is the task completion time: the device independent information transmission rate (RDI), and the computer independent information transmission rate (RCI). The method utilizes the RDI and RCI rates to evaluate relatively the usability of software and device operations on different computer systems. Experiments using three different systems, in this case a graphical information input task, confirm that the method offers an efficient way of determining computer usability.
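Since RDI and RCI are defined above simply as transmitted information divided by task completion time, a short sketch makes the comparison concrete; the bit counts and times below are invented for illustration.

```python
def usability_rates(di_bits, ci_bits, completion_time_s):
    """Return (RDI, RCI): device- and computer-independent information transmission rates in bits/s."""
    return di_bits / completion_time_s, ci_bits / completion_time_s

# Hypothetical comparison of the same graphical input task on two systems
for system, di, ci, t in [("system A", 120.0, 80.0, 40.0), ("system B", 120.0, 80.0, 55.0)]:
    rdi, rci = usability_rates(di, ci, t)
    print(f"{system}: RDI = {rdi:.2f} bit/s, RCI = {rci:.2f} bit/s")
```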
Real-time yield estimation based on deep learning
NASA Astrophysics Data System (ADS)
Rahnemoonfar, Maryam; Sheppard, Clay
2017-05-01
Crop yield estimation is an important task in product management and marketing. Accurate yield prediction helps farmers make better decisions on cultivation practices, plant disease prevention, and the size of the harvest labor force. The current practice of yield estimation based on manual counting of fruits is a very time-consuming and expensive process, and it is not practical for big fields. Robotic systems, including Unmanned Aerial Vehicles (UAV) and Unmanned Ground Vehicles (UGV), provide an efficient, cost-effective, flexible, and scalable solution for product management and yield prediction. Recently, huge amounts of data have been gathered from agricultural fields; however, efficient analysis of those data is still a challenging task. Computer vision approaches currently face different challenges in automatic counting of fruits or flowers, including occlusion caused by leaves, branches or other fruits, variance in natural illumination, and scale. In this paper, a novel deep convolutional network algorithm was developed to facilitate accurate yield prediction and automatic counting of fruits and vegetables in images. Our method is robust to occlusion, shadow, uneven illumination and scale. Experimental results in comparison to the state of the art show the effectiveness of our algorithm.
Machine Learning Approaches in Cardiovascular Imaging.
Henglin, Mir; Stein, Gillian; Hushcha, Pavel V; Snoek, Jasper; Wiltschko, Alexander B; Cheng, Susan
2017-10-01
Cardiovascular imaging technologies continue to increase in their capacity to capture and store large quantities of data. Modern computational methods, developed in the field of machine learning, offer new approaches to leveraging the growing volume of imaging data available for analyses. Machine learning methods can now address data-related problems ranging from simple analytic queries of existing measurement data to the more complex challenges involved in analyzing raw images. To date, machine learning has been used in 2 broad and highly interconnected areas: automation of tasks that might otherwise be performed by a human and generation of clinically important new knowledge. Most cardiovascular imaging studies have focused on task-oriented problems, but more studies involving algorithms aimed at generating new clinical insights are emerging. Continued expansion in the size and dimensionality of cardiovascular imaging databases is driving strong interest in applying powerful deep learning methods, in particular, to analyze these data. Overall, the most effective approaches will require an investment in the resources needed to appropriately prepare such large data sets for analyses. Notwithstanding current technical and logistical challenges, machine learning and especially deep learning methods have much to offer and will substantially impact the future practice and science of cardiovascular imaging. © 2017 American Heart Association, Inc.
Advances in the production of freeform optical surfaces
NASA Astrophysics Data System (ADS)
Tohme, Yazid E.; Luniya, Suneet S.
2007-05-01
Recent market demands for free-form optics have challenged the industry to find new methods and techniques to manufacture free-form optical surfaces with a high level of accuracy and reliability. Production techniques are becoming a mix of multi-axis single-point diamond machining centers and deterministic ultra-precision grinding centers coupled with capable measurement systems to accomplish the task. It has been determined that a complex software tool is required to seamlessly integrate all aspects of the manufacturing process chain. Advances in computational power and improved performance of computer-controlled precision machinery have driven the use of such software programs to measure, visualize, analyze, produce and re-validate the 3D free-form design, thus making the process of manufacturing such complex surfaces a viable task. Consolidation of the entire production cycle in a comprehensive software tool that can interact with all systems in the design, production and measurement phases will enable manufacturers to solve these complex challenges, providing improved product quality, simplified processes, and enhanced performance. The work being presented describes the latest advancements in developing such a software package for the entire fabrication process chain for aspheric and free-form shapes. It applies a rational B-spline based kernel to transform an optical design in the form of a parametric definition (optical equation), standard CAD format, or a cloud of points to a central format that drives the simulation. This software tool creates a closed loop for the fabrication process chain. It integrates surface analysis and compensation, tool path generation, and measurement analysis in one package.
Job Management and Task Bundling
NASA Astrophysics Data System (ADS)
Berkowitz, Evan; Jansen, Gustav R.; McElvain, Kenneth; Walker-Loud, André
2018-03-01
High Performance Computing is often performed on scarce and shared computing resources. To ensure computers are used to their full capacity, administrators often incentivize large workloads that are not possible on smaller systems. Measurements in Lattice QCD frequently do not scale to machine-size workloads. By bundling tasks together we can create large jobs suitable for gigantic partitions. We discuss METAQ and mpi_jm, software developed to dynamically group computational tasks together, that can intelligently backfill to consume idle time without substantial changes to users' current workflows or executables.
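As a hedged illustration of the bundling idea (not the actual METAQ or mpi_jm logic), the following Python sketch greedily packs small tasks, each needing a given node count and wall time, into a fixed-size partition so that otherwise idle nodes are backfilled; the task names and sizes are made up.

```python
def bundle_tasks(tasks, partition_nodes, walltime_limit):
    """Greedily pack (name, nodes, hours) tasks into bundles that fit a partition and a walltime limit."""
    bundles = []
    for name, nodes, hours in sorted(tasks, key=lambda t: -t[1]):   # place node-hungry tasks first
        for bundle in bundles:
            if bundle["nodes"] + nodes <= partition_nodes and max(bundle["hours"], hours) <= walltime_limit:
                bundle["tasks"].append(name)
                bundle["nodes"] += nodes
                bundle["hours"] = max(bundle["hours"], hours)
                break
        else:
            bundles.append({"tasks": [name], "nodes": nodes, "hours": hours})
    return bundles

jobs = [("propagator_a", 64, 2.0), ("propagator_b", 64, 2.0), ("contraction", 16, 1.0), ("smearing", 8, 0.5)]
print(bundle_tasks(jobs, partition_nodes=128, walltime_limit=2.0))
```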
Radiological protection in computed tomography and cone beam computed tomography.
Rehani, M M
2015-06-01
The International Commission on Radiological Protection (ICRP) has sustained interest in radiological protection in computed tomography (CT), and ICRP Publications 87 and 102 focused on the management of patient doses in CT and multi-detector CT (MDCT) respectively. ICRP forecasted and 'sounded the alarm' on increasing patient doses in CT, and recommended actions for manufacturers and users. One of the approaches was that safety is best achieved when it is built into the machine, rather than left as a matter of choice for users. In view of upcoming challenges posed by newer systems that use cone beam geometry for CT (CBCT), and their widened usage, often by untrained users, a new ICRP task group has been working on radiological protection issues in CBCT. Some of the issues identified by the task group are: lack of standardisation of dosimetry in CBCT; the false belief within the medical and dental community that CBCT is a 'light', low-dose CT whereas mobile CBCT units and newer applications, particularly C-arm CT in interventional procedures, involve higher doses; lack of training in radiological protection among clinical users; and lack of dose information and tracking in many applications. This paper provides a summary of approaches used in CT and MDCT, and preliminary information regarding work just published for radiological protection in CBCT. © The International Society for Prosthetics and Orthotics Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Human Motion Tracking and Glove-Based User Interfaces for Virtual Environments in ANVIL
NASA Technical Reports Server (NTRS)
Dumas, Joseph D., II
2002-01-01
The Army/NASA Virtual Innovations Laboratory (ANVIL) at Marshall Space Flight Center (MSFC) provides an environment where engineers and other personnel can investigate novel applications of computer simulation and Virtual Reality (VR) technologies. Among the many hardware and software resources in ANVIL are several high-performance Silicon Graphics computer systems and a number of commercial software packages, such as Division MockUp by Parametric Technology Corporation (PTC) and Jack by Unigraphics Solutions, Inc. These hardware and software platforms are used in conjunction with various VR peripheral I/O (input / output) devices, CAD (computer aided design) models, etc. to support the objectives of the MSFC Engineering Systems Department/Systems Engineering Support Group (ED42) by studying engineering designs, chiefly from the standpoint of human factors and ergonomics. One of the more time-consuming tasks facing ANVIL personnel involves the testing and evaluation of peripheral I/O devices and the integration of new devices with existing hardware and software platforms. Another important challenge is the development of innovative user interfaces to allow efficient, intuitive interaction between simulation users and the virtual environments they are investigating. As part of his Summer Faculty Fellowship, the author was tasked with verifying the operation of some recently acquired peripheral interface devices and developing new, easy-to-use interfaces that could be used with existing VR hardware and software to better support ANVIL projects.
Developing a Science Commons for Geosciences
NASA Astrophysics Data System (ADS)
Lenhardt, W. C.; Lander, H.
2016-12-01
Many scientific communities, recognizing the research possibilities inherent in data sets, have created domain specific archives such as the Incorporated Research Institutions for Seismology (iris.edu) and ClinicalTrials.gov. Though this is an important step forward, most scientists, including geoscientists, also use a variety of software tools and at least some amount of computation to conduct their research. While the archives make it simpler for scientists to locate the required data, provisioning disk space, compute resources, and network bandwidth can still require significant efforts. This challenge exists despite the wealth of resources available to researchers, namely lab IT resources, institutional IT resources, national compute resources (XSEDE, OSG), private clouds, public clouds, and the development of cyberinfrastructure technologies meant to facilitate use of those resources. Further tasks include obtaining and installing required tools for analysis and visualization. If the research effort is a collaboration or involves certain types of data, then the partners may well have additional non-scientific tasks such as securing the data and developing secure sharing methods for the data. These requirements motivate our investigations into the "Science Commons". This paper will present a working definition of a science commons, compare and contrast examples of existing science commons, and describe a project based at RENCI to implement a science commons for risk analytics. We will then explore what a similar tool might look like for the geosciences.
Prisman, Eitan; Daly, Michael J; Chan, Harley; Siewerdsen, Jeffrey H; Vescan, Allan; Irish, Jonathan C
2011-01-01
Custom software was developed to integrate intraoperative cone-beam computed tomography (CBCT) images with endoscopic video for surgical navigation and guidance. A cadaveric head was used to assess the accuracy and potential clinical utility of the following functionality: (1) real-time tracking of the endoscope in intraoperative 3-dimensional (3D) CBCT; (2) projecting an orthogonal reconstructed CBCT image, at or beyond the endoscope, which is parallel to the tip of the endoscope corresponding to the surgical plane; (3) virtual reality fusion of endoscopic video and 3D CBCT surface rendering; and (4) overlay of preoperatively defined contours of anatomical structures of interest. Anatomical landmarks were contoured in CBCT of a cadaveric head. An experienced endoscopic surgeon was oriented to the software and asked to rate the utility of the navigation software in carrying out predefined surgical tasks. Utility was evaluated using a rating scale for: (1) safely completing the task; and (2) potential for surgical training. Surgical tasks included: (1) uncinectomy; (2) ethmoidectomy; (3) sphenoidectomy/pituitary resection; and (4) clival resection. CBCT images were updated following each ablative task. As a teaching tool, the software was evaluated as "very useful" for all surgical tasks. Regarding safety and task completion, the software was evaluated as "no advantage" for task (1), "minimal" for task (2), and "very useful" for tasks (3) and (4). Landmark identification for structures behind bone was "very useful" for both categories. The software increased surgical confidence in safely completing challenging ablative tasks by presenting real-time image guidance for highly complex ablative procedures. In addition, such technology offers a valuable teaching aid to surgeons in training. Copyright © 2011 American Rhinologic Society-American Academy of Otolaryngic Allergy, LLC.
Straus, S G; McGrath, J E
1994-02-01
The authors investigated the hypothesis that as group tasks pose greater requirements for member interdependence, communication media that transmit more social context cues will foster group performance and satisfaction. Seventy-two 3-person groups of undergraduate students worked in either computer-mediated or face-to-face meetings on 3 tasks with increasing levels of interdependence: an idea-generation task, an intellective task, and a judgment task. Results showed few differences between computer-mediated and face-to-face groups in the quality of the work completed but large differences in productivity favoring face-to-face groups. Analysis of productivity and of members' reactions supported the predicted interaction of tasks and media, with greater discrepancies between media conditions for tasks requiring higher levels of coordination. Results are discussed in terms of the implications of using computer-mediated communications systems for group work.
Taib, Mohd Firdaus Mohd; Bahn, Sangwoo; Yun, Myung Hwan
2016-06-27
The popularity of mobile computing products is well known. Thus, it is crucial to evaluate their contribution to musculoskeletal disorders during computer usage under both comfortable and stressful environments. This study explores the effect of different computer products' usages with different tasks used to induce psychosocial stress on muscle activity. Fourteen male subjects performed computer tasks: sixteen combinations of four different computer products with four different tasks used to induce stress. Electromyography for four muscles on the forearm, shoulder and neck regions and task performances were recorded. The increment of trapezius muscle activity was dependent on the task used to induce the stress where a higher level of stress made a greater increment. However, this relationship was not found in the other three muscles. Besides that, compared to desktop and laptop use, the lowest activity for all muscles was obtained during the use of a tablet or smart phone. The best net performance was obtained in a comfortable environment. However, during stressful conditions, the best performance can be obtained using the device that a user is most comfortable with or has the most experience with. Different computer products and different levels of stress play a big role in muscle activity during computer work. Both of these factors must be taken into account in order to reduce the occurrence of musculoskeletal disorders or problems.
The Brain as an Efficient and Robust Adaptive Learner.
Denève, Sophie; Alemi, Alireza; Bourdoukan, Ralph
2017-06-07
Understanding how the brain learns to compute functions reliably, efficiently, and robustly with noisy spiking activity is a fundamental challenge in neuroscience. Most sensory and motor tasks can be described as dynamical systems and could presumably be learned by adjusting connection weights in a recurrent biological neural network. However, this is greatly complicated by the credit assignment problem for learning in recurrent networks, e.g., the contribution of each connection to the global output error cannot be determined based only on locally accessible quantities to the synapse. Combining tools from adaptive control theory and efficient coding theories, we propose that neural circuits can indeed learn complex dynamic tasks with local synaptic plasticity rules as long as they associate two experimentally established neural mechanisms. First, they should receive top-down feedbacks driving both their activity and their synaptic plasticity. Second, inhibitory interneurons should maintain a tight balance between excitation and inhibition in the circuit. The resulting networks could learn arbitrary dynamical systems and produce irregular spike trains as variable as those observed experimentally. Yet, this variability in single neurons may hide an extremely efficient and robust computation at the population level. Copyright © 2017 Elsevier Inc. All rights reserved.
Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images †
Ran, Lingyan; Zhang, Yanning; Zhang, Qilin; Yang, Tao
2017-01-01
Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the “navigation via classification” task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications. PMID:28604624
Al-Sahaf, Harith; Zhang, Mengjie; Johnston, Mark
2016-01-01
In the computer vision and pattern recognition fields, image classification represents an important yet difficult task. It is a challenge to build effective computer models to replicate the remarkable ability of the human visual system, which relies on only one or a few instances to learn a completely new class or an object of a class. Recently we proposed two genetic programming (GP) methods, one-shot GP and compound-GP, that aim to evolve a program for the task of binary classification in images. The two methods are designed to use only one or a few instances per class to evolve the model. In this study, we investigate these two methods in terms of performance, robustness, and complexity of the evolved programs. We use ten data sets that vary in difficulty to evaluate these two methods. We also compare them with two other GP and six non-GP methods. The results show that one-shot GP and compound-GP outperform or achieve results comparable to competitor methods. Moreover, the features extracted by these two methods improve the performance of other classifiers with handcrafted features and those extracted by a recently developed GP-based method in most cases.
Categories of Computer Use and Their Relationships with Attitudes toward Computers.
ERIC Educational Resources Information Center
Mitra, Anandra
1998-01-01
Analysis of attitude and use questionnaires completed by undergraduates (n = 1,444) at Wake Forest University determined that computers were used most frequently for word processing. Other uses were e-mail for task and non-task activities and mathematical and statistical computation. Results suggest that the level of computer use was related to…
An efficient dynamic load balancing algorithm
NASA Astrophysics Data System (ADS)
Lagaros, Nikos D.
2014-01-01
In engineering problems, randomness and uncertainties are inherent. Robust design procedures, formulated in the framework of multi-objective optimization, have been proposed in order to take these sources of randomness and uncertainty into account. Such design procedures require orders of magnitude more computational effort than conventional analysis or optimum design processes, since a very large number of finite element analyses must be performed. It is therefore imperative to exploit the capabilities of available computing resources in order to deal with problems of this kind. In particular, parallel computing can be implemented at the level of metaheuristic optimization, by exploiting the inherent parallelism of the nondominated sorting evolution strategies method, as well as at the level of the repeated structural analyses required for assessing the behavioural constraints and for calculating the objective functions. In this study an efficient dynamic load balancing algorithm for optimum exploitation of available computing resources is proposed and, without loss of generality, is applied to computing the desired Pareto front. In such problems, computing the complete Pareto front with feasible designs only constitutes a very challenging task. The proposed algorithm achieves nearly linear speedup, approaching 100% parallel efficiency relative to the sequential procedure.
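The core idea of dynamic load balancing — handing each pending structural analysis to whichever worker becomes idle — can be illustrated with a minimal Python sketch. This is an assumption-level illustration, not the authors' implementation; `evaluate_design` is a hypothetical stand-in for a single finite element analysis and the pool size is arbitrary.

```python
# Minimal sketch of dynamic load balancing across worker processes.
from multiprocessing import Pool

def evaluate_design(design):
    # Hypothetical stand-in for one finite element analysis of a candidate design.
    return sum(x * x for x in design)

if __name__ == "__main__":
    designs = [[i, i + 1, i + 2] for i in range(100)]  # candidate designs
    with Pool(processes=4) as pool:
        # imap_unordered hands each analysis to the next idle worker, so fast
        # analyses do not wait for slow ones (dynamic, not static, balancing).
        objective_values = list(pool.imap_unordered(evaluate_design, designs))
    print(len(objective_values), "analyses completed")
```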
Weyand, Sabine; Takehara-Nishiuchi, Kaori; Chau, Tom
2015-10-30
Near-infrared spectroscopy (NIRS) brain-computer interfaces (BCIs) enable users to interact with their environment using only cognitive activities. This paper presents the results of a comparison of four methodological frameworks used to select a pair of tasks to control a binary NIRS-BCI; specifically, three novel personalized task paradigms and the state-of-the-art prescribed task framework were explored. Three types of personalized task selection approaches were compared: user-selected mental tasks using weighted slope scores (WS-scores), user-selected mental tasks using pair-wise accuracy rankings (PWAR), and researcher-selected mental tasks using PWAR. These paradigms, along with the state-of-the-art prescribed mental task framework, where mental tasks are selected based on the most commonly used tasks in the literature, were tested by ten able-bodied participants who took part in five NIRS-BCI sessions. The frameworks were compared in terms of their accuracy, perceived ease-of-use, computational time, user preference, and length of training. Most notably, researcher-selected personalized tasks resulted in significantly higher accuracies, while user-selected personalized tasks resulted in significantly higher perceived ease-of-use. It was also concluded that PWAR minimized the amount of data that needed to be collected, while WS-scores maximized user satisfaction and minimized computational time. In comparison to the state-of-the-art prescribed mental tasks, our findings show that, overall, personalized tasks appear to be superior to prescribed tasks with respect to accuracy and perceived ease-of-use. The deployment of personalized rather than prescribed mental tasks ought to be considered and further investigated in future NIRS-BCI studies. Copyright © 2015 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Hong, Dae S.; Choi, Kyong Mi; Hwang, Jihyun; Runnalls, Cristina
2017-01-01
In this study, we examined 10 integral lessons to understand students' opportunities to learn cognitively challenging tasks and maintain cognitive demand during integral lessons. Our findings reveal issues with implemented tasks as well as the way these tasks were presented to students. We also examined mathematicians' reasons behind their…
Surgical simulation tasks challenge visual working memory and visual-spatial ability differently.
Schlickum, Marcus; Hedman, Leif; Enochsson, Lars; Henningsohn, Lars; Kjellin, Ann; Felländer-Tsai, Li
2011-04-01
New strategies for the selection and training of physicians are emerging. Previous studies have demonstrated a correlation of visual-spatial ability and visual working memory with surgical simulator performance. The aim of this study was to perform a detailed analysis of how these abilities are associated with simulator performance metrics across different task content. The hypothesis is that the importance of visual-spatial ability and visual working memory varies with task content. Twenty-five medical students participated in the study, which involved testing visual-spatial ability using the MRT-A test and visual working memory using the RoboMemo computer program. Subjects were also trained and tested for performance in three different surgical simulators. The scores from the psychometric tests and the performance metrics were then correlated using multivariate analysis. MRT-A score correlated significantly with the performance metrics Efficiency of screening (p = 0.006) and Total time (p = 0.01) in the GI Mentor II task and with Total score (p = 0.02) in the MIST-VR simulator task. In the Uro Mentor task, both the MRT-A score and the visual working memory 3-D cube test score as presented in the RoboMemo program (p = 0.02) correlated with Total score (p = 0.004). In this study we have shown that differences exist regarding the impact of visual abilities and task content on simulator performance. When designing future cognitive training programs and testing regimes, the design may need to be adjusted to the specific surgical task to be trained.
Koperwhats, Martha A; Chang, Wei-Chih; Xiao, Jianguo
2002-01-01
Digital imaging technology promises efficient, economical, and fast service for patient care, but the challenges are great in the transition from film to a filmless (digital) environment. This change has a significant impact on the film library's personnel (film librarians), who play a leading role in the storage, classification, and retrieval of images. The objectives of this project were to study film library errors and the usability of a physical computerized system that could not be changed, while developing an intervention to reduce errors and test the usability of the intervention. Cognitive and human factors analyses were used to evaluate human-computer interaction. A workflow analysis was performed to understand the film and digital imaging processes. User and task analyses were applied to account for all behaviors involved in interaction with the system. A heuristic evaluation was used to probe the usability issues in the picture archiving and communication systems (PACS) modules. Simplified paper-based instructions were designed to familiarize the film librarians with the digital system. A usability survey evaluated the effectiveness of the instruction. The user and task analyses indicated that different users faced challenges based on their computer literacy, education, roles, and frequency of use of diagnostic imaging. The workflow analysis showed that the approaches to using the digital library differ among the various departments. The heuristic evaluation of the PACS modules showed the human-computer interface to have usability issues that prevented easy operation. Simplified instructions were designed for operation of the modules. Usability surveys conducted before and after revision of the instructions showed that performance improved. Cognitive and human factors analysis can help film librarians and other users adapt to the filmless system. Use of cognitive science tools will aid in the successful transition of the film library from a film environment to a digital environment.
Grantcharov, T P; Bardram, L; Funch-Jensen, P; Rosenberg, J
2003-07-01
The impact of gender and hand dominance on operative performance may be a subject of prejudice among surgeons, reportedly leading to discrimination and lack of professional promotion. However, very little objective evidence is available yet on the matter. This study was conducted to identify factors that influence surgeons' performance, as measured by a virtual reality computer simulator for laparoscopic surgery. This study included 25 surgical residents who had limited experience with laparoscopic surgery, having performed fewer than 10 laparoscopic cholecystectomies. The participants were registered according to their gender, hand dominance, and experience with computer games. All of the participants performed 10 repetitions of the six tasks on the Minimally Invasive Surgical Trainer-Virtual Reality (MIST-VR) within 1 month. Assessment of laparoscopic skills was based on three parameters measured by the simulator: time, errors, and economy of hand movement. Differences in performance existed between the compared groups. Men completed the tasks in less time than women (p = 0.01, Mann-Whitney test), but there was no statistical difference between the genders in the number of errors and unnecessary movements. Individuals with right hand dominance performed fewer unnecessary movements (p = 0.045, Mann-Whitney test), and there was a trend toward better results in terms of time and errors among the residents with right hand dominance than among those with left hand dominance. Users of computer games made fewer errors than nonusers (p = 0.035, Mann-Whitney test). The study provides objective evidence of a difference in laparoscopic skills between surgeons differing in gender, hand dominance, and computer experience. These results may influence the future development of training programs for laparoscopic surgery. They also pose a challenge to individuals responsible for the selection and training of residents.
Farahani, Navid; Liu, Zheng; Jutt, Dylan; Fine, Jeffrey L
2017-10-01
Context: Pathologists' computer-assisted diagnosis (pCAD) is a proposed framework for alleviating challenges through the automation of their routine sign-out work. Currently, hypothetical pCAD is based on a triad of advanced image analysis, deep integration with heterogeneous information systems, and a concrete understanding of traditional pathology workflow. Prototyping is an established method for designing complex new computer systems such as pCAD. Objective: To describe, in detail, a prototype of pCAD for the sign-out of a breast cancer specimen. Design: Deidentified glass slides and data from breast cancer specimens were used. Slides were digitized into whole-slide images with an Aperio ScanScope XT, and screen captures were created by using vendor-provided software. The advanced workflow prototype was constructed by using PowerPoint software. Results: We modeled an interactive, computer-assisted workflow: pCAD previews whole-slide images in the context of integrated, disparate data and predefined diagnostic tasks and subtasks. Relevant regions of interest (ROIs) would be automatically identified and triaged by the computer. A pathologist's sign-out work would consist of an interactive review of important ROIs, driven by required diagnostic tasks. The interactive session would generate a pathology report automatically. Conclusions: Using animations and real ROIs, the pCAD prototype demonstrates the hypothetical sign-out in a stepwise fashion, illustrating various interactions and explaining how steps can be automated. The file is publicly available and should be widely compatible. This mock-up is intended to spur discussion and to help usher in the next era of digitization for pathologists by providing desperately needed and long-awaited automation.
ERIC Educational Resources Information Center
Paisley, William; Butler, Matilda
This study of the computer/user interface investigated the role of the computer in performing information tasks that users now perform without computer assistance. Users' perceptual/cognitive processes are to be accelerated or augmented by the computer; a long term goal is to delegate information tasks entirely to the computer. Cybernetic and…
ERIC Educational Resources Information Center
Ihme, Jan Marten; Senkbeil, Martin; Goldhammer, Frank; Gerick, Julia
2017-01-01
The combination of different item formats is found quite often in large scale assessments, and analyses on the dimensionality often indicate multi-dimensionality of tests regarding the task format. In ICILS 2013, three different item types (information-based response tasks, simulation tasks, and authoring tasks) were used to measure computer and…
EEG and Eye Tracking Demonstrate Vigilance Enhancement with Challenge Integration
Bodala, Indu P.; Li, Junhua; Thakor, Nitish V.; Al-Nashash, Hasan
2016-01-01
Maintaining vigilance is possibly the first requirement for surveillance tasks where personnel are faced with monotonous yet intensive monitoring tasks. Decrement in vigilance in such situations could result in dangerous consequences such as accidents, loss of life and system failure. In this paper, we investigate the possibility of enhancing vigilance or sustained attention using “challenge integration,” a strategy that integrates a primary task with challenging stimuli. A primary surveillance task (identifying an intruder in a simulated factory environment) and a challenge stimulus (periods of rain obscuring the surveillance scene) were employed to test the changes in vigilance levels. The effect of integrating challenging events (resulting from artificially simulated rain) into the task was compared with the initial monotonous phase. EEG and eye tracking data were collected and analyzed for n = 12 subjects. Frontal midline theta power and the frontal theta to parietal alpha power ratio, which are used as measures of engagement and attention allocation, show an increase due to challenge integration (p < 0.05 in each case). Relative delta band power of EEG also shows statistically significant suppression on the frontoparietal and occipital cortices due to challenge integration (p < 0.05). Saccade amplitude, saccade velocity and blink rate obtained from eye tracking data exhibit statistically significant changes during the challenge phase of the experiment (p < 0.05 in each case). From the correlation analysis between the statistically significant measures of eye tracking and EEG, we infer that saccade amplitude and saccade velocity decrease with vigilance decrement along with frontal midline theta and the frontal theta to parietal alpha ratio. Conversely, blink rate and relative delta power increase with vigilance decrement. However, these measures exhibit a reverse trend when the challenge stimulus appears in the task, suggesting vigilance enhancement. Moreover, the mean reaction time is lower for the challenge-integrated phase (RTmean = 3.65 ± 1.4s) compared with the initial monotonous phase without challenge (RTmean = 4.6 ± 2.7s). Our work shows that vigilance level, as assessed by the response of these vital signs, is enhanced by challenge integration. PMID:27375464
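As a rough illustration of the engagement indices mentioned above, the sketch below estimates a frontal-theta to parietal-alpha power ratio from two EEG channels with Welch's method. The sampling rate, channel names, and random stand-in signals are assumptions; this is not the authors' analysis pipeline.

```python
# Sketch: frontal theta / parietal alpha power ratio as an engagement index.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])  # approximate band integral

fs = 256                               # assumed sampling rate (Hz)
frontal = np.random.randn(fs * 60)     # stand-in for a frontal midline channel (Fz)
parietal = np.random.randn(fs * 60)    # stand-in for a parietal channel (Pz)

theta = band_power(frontal, fs, 4, 8)      # frontal midline theta
alpha = band_power(parietal, fs, 8, 13)    # parietal alpha
print("theta/alpha engagement index:", theta / alpha)
```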
The Effects of Respiratory Sinus Arrhythmia on Anger Reactivity and Persistence in Major Depression
Ellis, Alissa J.; Shumake, Jason; Beevers, Christopher G.
2016-01-01
The experience of anger during a depressive episode has recently been identified as a poor prognostic indicator of illness course. Given the clinical implications of anger in major depressive disorder (MDD), understanding the mechanisms involved in anger reactivity and persistence is critical for improved intervention. Biological processes involved in emotion regulation during stress, such as respiratory sinus arrhythmia (RSA), may play a role in maintaining negative moods. Clinically depressed (MDD) (n=49) and non-depressed (non-MDD) (n=50) individuals were challenged with a stressful computer task shown to increase anger, while RSA (high frequency range 0.15–0.4 Hz) was collected. RSA predicted future anger, but was unrelated to current anger. That is, across participants, low baseline RSA predicted anger reactivity during the task, and in depressed individuals, those with low RSA during the task had a greater likelihood of anger persistence during a recovery period. These results suggest that low RSA may be a psychophysiological process involved in anger regulation in depression. Low RSA may contribute to sustained illness course by diminishing the repair of angry moods. PMID:27401801
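For readers unfamiliar with the measure, high-frequency (0.15–0.4 Hz) RSA is typically estimated from the R-R interval series. The sketch below, an assumption rather than the authors' procedure, resamples a synthetic R-R series evenly in time and integrates the HF band of its Welch spectrum.

```python
# Sketch: high-frequency (0.15-0.4 Hz) heart rate variability as an RSA estimate.
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

rr = np.random.normal(0.8, 0.05, 300)          # stand-in R-R intervals (seconds)
beat_times = np.cumsum(rr)                     # time of each beat
fs = 4.0                                       # even resampling rate (Hz)
t_even = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
rr_even = interp1d(beat_times, rr, kind="cubic")(t_even)

freqs, psd = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)
hf = (freqs >= 0.15) & (freqs <= 0.40)
rsa_hf_power = psd[hf].sum() * (freqs[1] - freqs[0])
print("HF (RSA) power:", rsa_hf_power)
```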
Sensor Data Distribution With Robustness and Reliability: Toward Distributed Components Model
NASA Technical Reports Server (NTRS)
Alena, Richard L.; Lee, Charles
2005-01-01
In planetary surface exploration missions, sensor data distribution is required in many areas, for example navigation, scheduling, planning, monitoring, diagnostics, and automation of field tasks. The challenge is to distribute such data in a robust and reliable way so as to minimize errors caused by miscalculations and misjudgments based on erroneous data input during the mission. The ad-hoc wireless network on a planetary surface is not constantly connected because of the rough terrain and the lack of permanent installations on the surface. There are disconnected periods during which the computation nodes re-associate with different repeaters or access points until connections are reestablished. This requires the sensor data distribution software to be robust and reliable, with the ability to tolerate disconnected periods. This paper presents a distributed components model as a framework to accomplish such tasks. The software is written in Java and utilizes the available Java Message Service schema and the Boss implementation. The results of field experimentation show that the model is very effective in completing the tasks.
Human visual system-based smoking event detection
NASA Astrophysics Data System (ADS)
Odetallah, Amjad D.; Agaian, Sos S.
2012-06-01
Human action analysis (e.g. smoking, eating, and phoning) is an important task in various application domains such as video surveillance, video retrieval, and human-computer interaction systems. Smoke detection is a crucial task in many video surveillance applications and could greatly raise the level of safety of urban areas, public parks, airplanes, hospitals, schools and other settings. The detection task is challenging since there is no prior knowledge about the object's shape, texture and color; in addition, its visual features change under different lighting and weather conditions. This paper presents a new system for detecting human smoking events, i.e., small amounts of smoke, in a sequence of images. In the developed system, motion detection and background subtraction are combined with motion-region saving, skin-based image segmentation, and smoke-based image segmentation to capture potential smoke regions, which are further analyzed to decide on the occurrence of smoking events. Experimental results show the effectiveness of the proposed approach. Moreover, the developed method is capable of detecting small smoking events involving uncertain actions and cigarettes of various sizes, colors, and shapes.
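A hedged sketch of the motion-detection and background-subtraction front end described above, using OpenCV's MOG2 subtractor; the video filename and parameter values are hypothetical, and the downstream skin- and smoke-based segmentation stages are only indicated in comments.

```python
# Sketch: candidate moving-region extraction for smoking-event detection.
import cv2

cap = cv2.VideoCapture("surveillance.mp4")   # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)           # foreground (moving-region) mask
    mask = cv2.medianBlur(mask, 5)           # suppress speckle noise
    # The skin-based and smoke-based segmentation stages would analyse the
    # regions kept in `mask` before deciding on a smoking event.
cap.release()
```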
Belger, A; Banich, M T
1998-07-01
Because interaction of the cerebral hemispheres has been found to aid task performance under demanding conditions, the present study examined how this effect is moderated by computational complexity, the degree of lateralization for a task, and individual differences in asymmetric hemispheric activation (AHA). Computational complexity was manipulated across tasks either by increasing the number of inputs to be processed or by increasing the number of steps to a decision. Comparison of within- and across-hemisphere trials indicated that the size of the between-hemisphere advantage increased as a function of task complexity, except for a highly lateralized rhyme decision task that can only be performed by the left hemisphere. Measures of individual differences in AHA revealed that when task demands and an individual's AHA both load on the same hemisphere, the ability to divide the processing between the hemispheres is limited. Thus, interhemispheric division of processing improves performance at higher levels of computational complexity only when the required operations can be divided between the hemispheres.
WELDSMART: A vision-based expert system for quality control
NASA Technical Reports Server (NTRS)
Andersen, Kristinn; Barnett, Robert Joel; Springfield, James F.; Cook, George E.
1992-01-01
This work was aimed at exploring means for utilizing computer technology in quality inspection and evaluation. Inspection of metallic welds was selected as the main application for this development and primary emphasis was placed on visual inspection, as opposed to other inspection methods, such as radiographic techniques. Emphasis was placed on methodologies with the potential for use in real-time quality control systems. Because quality evaluation is somewhat subjective, despite various efforts to classify discontinuities and standardize inspection methods, the task of using a computer for both inspection and evaluation was not trivial. The work started out with a review of the various inspection techniques that are used for quality control in welding. Among other observations from this review was the finding that most weld defects result in abnormalities that may be seen by visual inspection. This supports the approach of emphasizing visual inspection for this work. Quality control consists of two phases: (1) identification of weld discontinuities (some of which may be severe enough to be classified as defects), and (2) assessment or evaluation of the weld based on the observed discontinuities. Usually the latter phase results in a pass/fail judgement for the inspected piece. It is the conclusion of this work that the first of the above tasks, identification of discontinuities, is the most challenging one. It calls for sophisticated image processing and image analysis techniques, and frequently ad hoc methods have to be developed to identify specific features in the weld image. The difficulty of this task is generally not due to limited computing power. In most cases it was found that a modest personal computer or workstation could carry out most computations in a reasonably short time period. Rather, the algorithms and methods necessary for identifying weld discontinuities were in some cases limited. The fact that specific techniques were finally developed and successfully demonstrated to work illustrates that the general approach taken here appears to be promising for commercial development of computerized quality inspection systems. Inspection based on these techniques may be used to supplement or substitute for more elaborate inspection methods, such as x-ray inspection.
Retrosynthetic Reaction Prediction Using Neural Sequence-to-Sequence Models
2017-01-01
We describe a fully data driven model that learns to perform a retrosynthetic reaction prediction task, which is treated as a sequence-to-sequence mapping problem. The end-to-end trained model has an encoder–decoder architecture that consists of two recurrent neural networks, which has previously shown great success in solving other sequence-to-sequence prediction tasks such as machine translation. The model is trained on 50,000 experimental reaction examples from the United States patent literature, which span 10 broad reaction types that are commonly used by medicinal chemists. We find that our model performs comparably with a rule-based expert system baseline model, and also overcomes certain limitations associated with rule-based expert systems and with any machine learning approach that contains a rule-based expert system component. Our model provides an important first step toward solving the challenging problem of computational retrosynthetic analysis. PMID:29104927
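The encoder-decoder arrangement described above can be sketched in a few lines. The following PyTorch illustration assumes arbitrary vocabulary and hidden sizes and teacher forcing; it is a schematic of the two-RNN architecture, not the authors' trained model.

```python
# Sketch: a two-GRU encoder-decoder for sequence-to-sequence prediction.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt):
        _, state = self.encoder(self.embed(src))            # encode product tokens
        dec_out, _ = self.decoder(self.embed(tgt), state)   # decode with teacher forcing
        return self.out(dec_out)                            # logits over the vocabulary

model = Seq2Seq(vocab_size=100)
src = torch.randint(0, 100, (8, 40))   # batch of tokenized products (stand-in)
tgt = torch.randint(0, 100, (8, 45))   # batch of tokenized reactants (stand-in)
print(model(src, tgt).shape)           # torch.Size([8, 45, 100])
```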
Hypercalculia in savant syndrome: central executive failure?
González-Garrido, Andrés Antonio; Ruiz-Sandoval, José Luis; Gómez-Velázquez, Fabiola R; de Alba, José Luis Oropeza; Villaseñor-Cabrera, Teresa
2002-01-01
The existence of outstanding cognitive talent in mentally retarded subjects persists as a challenge to present knowledge. We report the case of a 16-year-old male patient with exceptional mental calculation abilities and moderate mental retardation. The patient was clinically evaluated. Data from standard magnetic resonance imaging (MRI) and two 99mTc-ethyl cysteine dimer (ECD) single photon emission computed tomography (SPECT) studies (one at rest and one while performing a mental calculation task) were analyzed. The main neurologic findings were brachycephaly, right-side neurologic soft signs, an obsessive personality profile, a low color-word interference effect on the Stroop test, and diffusely increased cerebral blood flow during the calculation task on 99mTc-ECD SPECT. MRI showed inverse asymmetry of the temporal plane. The evidence appears to support the hypothesis that savant skill is related to excessive and erroneous use of cognitive processing resources instigated by a probable failure in central executive control mechanisms.
Empowering Older Patients to Engage in Self Care: Designing an Interactive Robotic Device
Tiwari, Priyadarshi; Warren, Jim; Day, Karen
2011-01-01
Objectives: To develop and test an interactive robot mounted computing device to support medication management as an example of a complex self-care task in older adults. Method: A Grounded Theory (GT), Participatory Design (PD) approach was used within three Action Research (AR) cycles to understand design requirements and test the design configuration addressing the unique task requirements. Results: At the end of the first cycle a conceptual framework was evolved. The second cycle informed architecture and interface design. By the end of third cycle residents successfully interacted with the dialogue system and were generally satisfied with the robot. The results informed further refinement of the prototype. Conclusion: An interactive, touch screen based, robot-mounted information tool can be developed to support healthcare needs of older people. Qualitative methods such as the hybrid GT-PD-AR approach may be particularly helpful for innovating and articulating design requirements in challenging situations. PMID:22195203
Using Betweenness Centrality to Identify Manifold Shortcuts
Cukierski, William J.; Foran, David J.
2010-01-01
High-dimensional data presents a challenge to tasks of pattern recognition and machine learning. Dimensionality reduction (DR) methods remove the unwanted variance and make these tasks tractable. Several nonlinear DR methods, such as the well-known ISOMAP algorithm, rely on a neighborhood graph to compute geodesic distances between data points. These graphs can contain unwanted edges that connect disparate regions of one or more manifolds. This topological sensitivity is well known [1], [2], [3], yet handling high-dimensional, noisy data in the absence of a priori manifold knowledge remains an open and difficult problem. This work introduces a divisive edge-removal method based on graph betweenness centrality that can robustly identify manifold-shorting edges. The problem of graph construction in high dimensions is discussed and the proposed algorithm is fitted into the ISOMAP workflow. ROC analysis is performed and the performance is tested on synthetic and real datasets. PMID:20607142
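A minimal sketch of the divisive idea — repeatedly deleting the neighborhood-graph edge with the highest betweenness centrality — using NetworkX. The toy graph and the fixed removal count are assumptions for illustration, not the paper's ROC-calibrated stopping criterion.

```python
# Sketch: remove likely manifold-shorting edges by edge betweenness centrality.
import networkx as nx

def remove_shortcut_edges(graph, n_remove):
    g = graph.copy()
    for _ in range(n_remove):
        centrality = nx.edge_betweenness_centrality(g)
        worst = max(centrality, key=centrality.get)   # most "trafficked" edge
        g.remove_edge(*worst)
    return g

# Toy example: two clusters joined by a single spurious shortcut edge (2, 3).
g = nx.Graph()
g.add_edges_from([(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)])
cleaned = remove_shortcut_edges(g, n_remove=1)
print(sorted(cleaned.edges()))   # the bridge (2, 3) is removed first
```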
Do Athletes Excel at Everyday Tasks?
CHADDOCK, LAURA; NEIDER, MARK B.; VOSS, MICHELLE W.; GASPAR, JOHN G.; KRAMER, ARTHUR F.
2014-01-01
Purpose: Cognitive enhancements are associated with sport training. We extended the sport-cognition literature by using a realistic street crossing task to examine the multitasking and processing speed abilities of collegiate athletes and nonathletes. Methods: Pedestrians navigated trafficked roads by walking on a treadmill in a virtual world, a challenge that requires the quick and simultaneous processing of multiple streams of information. Results: Athletes had higher street crossing success rates than nonathletes, as reflected by fewer collisions with moving vehicles. Athletes also showed faster processing speed on a computer-based test of simple reaction time, and shorter reaction times were associated with higher street crossing success rates. Conclusions: The results suggest that participation in athletics relates to superior street crossing multitasking abilities and that athlete and nonathlete differences in processing speed may underlie this difference. We suggest that cognitive skills trained in sport may transfer to performance on everyday fast-paced multitasking abilities. PMID:21407125
Panda, Rashmi; Puhan, N B; Panda, Ganapati
2018-02-01
Accurate optic disc (OD) segmentation is an important step in obtaining cup-to-disc ratio-based glaucoma screening using fundus imaging. It is a challenging task because of the subtle OD boundary, blood vessel occlusion and intensity inhomogeneity. In this Letter, the authors propose an improved version of the random walk algorithm for OD segmentation to tackle such challenges. The algorithm incorporates the mean curvature and Gabor texture energy features to define the new composite weight function to compute the edge weights. Unlike the deformable model-based OD segmentation techniques, the proposed algorithm remains unaffected by curve initialisation and local energy minima problem. The effectiveness of the proposed method is verified with DRIVE, DIARETDB1, DRISHTI-GS and MESSIDOR database images using the performance measures such as mean absolute distance, overlapping ratio, dice coefficient, sensitivity, specificity and precision. The obtained OD segmentation results and quantitative performance measures show robustness and superiority of the proposed algorithm in handling the complex challenges in OD segmentation.
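Schematically, a composite edge weight of the kind described above can combine per-pixel mean curvature and Gabor texture energy differences. The Gaussian form and the alpha/beta coefficients below are assumptions for illustration, not the authors' exact weight function.

```python
# Sketch: a composite edge weight for random-walk segmentation.
import numpy as np

def edge_weight(curv_i, curv_j, gabor_i, gabor_j, alpha=10.0, beta=10.0):
    # Neighbouring pixels with similar curvature and texture get weights near 1,
    # so the random walker crosses such edges easily; dissimilar pixels get
    # weights near 0, which the walker treats as a likely boundary.
    return np.exp(-(alpha * (curv_i - curv_j) ** 2 + beta * (gabor_i - gabor_j) ** 2))

print(edge_weight(0.10, 0.11, 0.50, 0.52))   # similar pixels   -> weight close to 1
print(edge_weight(0.10, 0.90, 0.50, 0.95))   # dissimilar pixels -> weight close to 0
```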
NASA Astrophysics Data System (ADS)
Réau, Manon; Langenfeld, Florent; Zagury, Jean-François; Montes, Matthieu
2018-01-01
The Drug Design Data Resource (D3R) Grand Challenges are blind contests organized to assess the accuracy of state-of-the-art methods in predicting binding modes and relative binding free energies of experimentally validated ligands for a given target. The second stage of the D3R Grand Challenge 2 (GC2) was focused on ranking 102 compounds according to their predicted affinity for Farnesoid X Receptor. In this task, our workflow was ranked 5th out of the 77 submissions in the structure-based category. Our strategy consisted of (1) a combination of molecular docking using AutoDock 4.2 and manual editing of available structures for binding pose generation using SeeSAR, (2) the use of HYDE scoring for pose selection, and (3) a hierarchical ranking using HYDE and MM/GBSA. In this report, we detail our pose generation and ligand ranking protocols and provide guidelines to be used in a prospective computer-aided drug design program.
Modelling Glacial Lake Outburst Floods: Key Considerations and Challenges Posed By Climatic Change
NASA Astrophysics Data System (ADS)
Westoby, M.
2014-12-01
The number and size of moraine-dammed supraglacial and proglacial lakes are increasing as a result of contemporary climatic change. Moraine-dammed lakes are capable of impounding volumes of water in excess of 10⁷ m³, and often represent a very real threat to downstream communities and infrastructure, should the bounding moraine fail and produce a catastrophic Glacial Lake Outburst Flood (GLOF). Modelling the individual components of a GLOF, including a triggering event, the complex dam-breaching process and downstream propagation of the flood, is incredibly challenging, not least because direct observation and instrumentation of such high-magnitude flows is virtually impossible. We briefly review the current state-of-the-art in numerical GLOF modelling, with a focus on the theoretical and computational challenges associated with reconstructing or predicting GLOF dynamics in the face of rates of cryospheric change that have no historical precedent, as well as various implications for researchers and professionals tasked with the production of hazard maps and disaster mitigation strategies.
Forecasting the Occurrence of Severe Haze Events in Asia using Machine Learning Algorithms
NASA Astrophysics Data System (ADS)
Walton, A. L.
2016-12-01
Particulate pollution has become a serious environmental issue in many Asian countries in recent decades, threatening human health and frequently causing low-visibility or haze days that disrupt everything from work, outdoor, and school activities to air, road, and sea transportation. Ultimately preventing such severe haze requires many difficult tasks to be accomplished, involving trade and negotiation, emission control, energy consumption, transportation, and land and plantation management, among others, across all involved countries or parties. Before these difficult measures can take place, it would be more practical to reduce the economic loss by developing the skill to predict the occurrence of such events with reasonable accuracy, so that effective mitigation or adaptation measures can be implemented ahead of time. The "traditional" numerical models developed from fluid dynamics and explicit or parameterized representations of physicochemical processes can certainly be used for this task. However, the significant and sophisticated spatiotemporal variability associated with these events, the propagation of numerical and parameterization errors through model integration, and the computational demand all pose serious challenges to using these models for this interdisciplinary task. On the other hand, large quantities of meteorological, hydrological, atmospheric aerosol and composition, and surface visibility data from in-situ observation, reanalysis, and satellite retrievals have become available to the community. These data may still not be sufficient for evaluating and improving certain important aspects of the "traditional" models. Nevertheless, it is likely that they can already support the effort to develop an alternative, "task-oriented" and computationally efficient forecasting skill using deep machine learning techniques, avoiding the need to deal directly with the sophisticated interplay across multiple process layers. I will present a case study of applying machine learning techniques to predict the occurrence of severe haze events in Asia.
ERIC Educational Resources Information Center
Tang, Thomas Li-Ping
Goal-setting literature has suggested that specific, difficult goals will produce higher performance levels than easy goals. A difficult task or one with negative performance feedback may increase an individual's perceived challenge of the task which may in turn enhance his motivation. Effects of the Protestant work ethic and perceived challenge…
Big Data, Deep Learning and Tianhe-2 at Sun Yat-Sen University, Guangzhou
NASA Astrophysics Data System (ADS)
Yuen, D. A.; Dzwinel, W.; Liu, J.; Zhang, K.
2014-12-01
In this decade the big data revolution has permeated many fields, ranging from financial transactions and medical surveys to scientific endeavors, because of the big opportunities people see ahead. What to do with all this data remains an intriguing question. This is where computer scientists, together with applied mathematicians, have made significant inroads in developing deep learning techniques for unraveling new relationships among different variables by means of correlation analysis and data-assimilation methods. Deep learning and big data taken together constitute a grand challenge in high-performance computing, demanding both ultrafast speed and large memory. The Tianhe-2, recently installed at Sun Yat-Sen University in Guangzhou, is well positioned to take up this challenge because it is currently the world's fastest computer at 34 petaflops. Each compute node of Tianhe-2 has two Intel Xeon E5-2600 CPUs and three Xeon Phi accelerators. Tianhe-2 has a very large fast RAM of 88 gigabytes on each node, and the system has a total memory of 1,375 terabytes. All of these technical features will allow very high-dimensional (more than 10) deep learning problems to be explored carefully on the Tianhe-2. Problems in seismology that can be solved include three-dimensional seismic wave simulations of the whole Earth at a few kilometers' resolution and the recognition of new phases in seismic waveforms from assemblages of large data sets.
Wang, Zhaocai; Ji, Zuwen; Wang, Xiaoming; Wu, Tunhua; Huang, Wei
2017-12-01
As a promising approach to solving computationally intractable problems, DNA computing is an emerging research area spanning mathematics, computer science and molecular biology. The task scheduling problem, a well-known NP-complete problem, assigns n jobs to m individuals and seeks the minimum completion time of the last finished individual. In this paper, we use a biologically inspired computational model and describe a new parallel algorithm to solve the task scheduling problem by basic DNA molecular operations. We design flexible-length DNA strands to represent elements of the allocation matrix, apply appropriate biological experimental operations, and obtain solutions of the task scheduling problem within the proper length range with less than O(n²) time complexity. Copyright © 2017. Published by Elsevier B.V.
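For reference, the underlying optimization problem (independent of the DNA encoding) is the classical makespan minimization: assign n jobs to m individuals so that the latest finish time is as small as possible. The sketch below is a simple greedy heuristic offered as an illustration of the problem, not the paper's molecular algorithm.

```python
# Sketch: greedy longest-processing-time heuristic for makespan minimization.
def greedy_makespan(jobs, m):
    loads = [0] * m
    for job in sorted(jobs, reverse=True):   # longest jobs first
        k = loads.index(min(loads))          # give the job to the least-loaded worker
        loads[k] += job
    return max(loads)                        # completion time of the last worker

print(greedy_makespan([7, 5, 4, 3, 3, 2], m=2))   # -> 12, which is optimal here
```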
Improving Cognitive Skills of the Industrial Robot
NASA Astrophysics Data System (ADS)
Bezák, Pavol
2015-08-01
At present, many industrial robots are programmed to perform the same repetitive task all the time. Industrial robots doing such jobs are not able to judge whether an action is correct, effective or good. Object detection, manipulation and grasping are challenging because of uncertainties in hand and object modeling, unknown contact types and object stiffness properties. In this paper, a model for intelligent humanoid-hand object detection and grasping is proposed, assuming that the object properties are known. The control is simulated in Matlab Simulink/SimMechanics, the Neural Network Toolbox and the Computer Vision System Toolbox.
Dynamic belief state representations.
Lee, Daniel D; Ortega, Pedro A; Stocker, Alan A
2014-04-01
Perceptual and control systems are tasked with the challenge of accurately and efficiently estimating the dynamic states of objects in the environment. To properly account for uncertainty, it is necessary to maintain a dynamical belief state representation rather than a single state vector. In this review, canonical algorithms for computing and updating belief states in robotic applications are delineated, and connections to biological systems are highlighted. A navigation example is used to illustrate the importance of properly accounting for correlations between belief state components, and to motivate the need for further investigations in psychophysics and neurobiology. Copyright © 2014 Elsevier Ltd. All rights reserved.
Stress Reactivity of Six-Year-Old Children Involved in Challenging Tasks
ERIC Educational Resources Information Center
Sajaniemi, Nina; Suhonen, Eira; Kontu, Elina; Lindholm, Harri; Hirvonen, Ari
2012-01-01
The aim of this study was to investigate whether the preschool activities challenge the stress regulative system in children. We used a multi-system approach to evaluate the underlying processes of stress responses and measured both cortisol and [alpha]-amylase responses after emotionally and cognitively challenging tasks followed by a recovery…
Information Management For Tactical Reconnaissance
NASA Astrophysics Data System (ADS)
White, James P.
1984-12-01
The expected battlefield tactics of the 1980's and 1990's will be fluid and dynamic. If tactical reconnaissance is to meet this challenge, it must explore all ways of accelerating the flow of information through the reconnaissance cycle, from the moment a tasking request is received to the time the mission results are delivered to the requestor. In addition to near real-time dissemination of reconnaissance information, the mission planning phase needs to be more responsive to the rapidly changing battlefield scenario. By introducing Artificial Intelligence (AI) via an expert system to the mission planning phase, repetitive and computational tasks can be more readily performed by the ground-based mission planning system, thereby permitting the aircrew to devote more of their time to target study. Transporting the flight plan, plus other mission data, to the aircraft is simple with the Fairchild Data Transfer Equipment (DTE). Aircrews are relieved of the tedious, error-prone, and time-consuming task of manually keying-in avionics initialization data. Post-flight retrieval of mission data via the DTE will permit follow-on aircrews, just starting their mission planning phase, to capitalize on current threat data collected by the returning aircrew. Maintenance data retrieved from the recently flown mission will speed-up the aircraft turn-around by providing near-real time fault detection/isolation. As future avionics systems demand more information, a need for a computer-controlled, smart data base or expert system on-board the aircraft will emerge.
NASA Astrophysics Data System (ADS)
Dutta, Sandeep; Gros, Eric
2018-03-01
Deep Learning (DL) has been successfully applied in numerous fields fueled by increasing computational power and access to data. However, for medical imaging tasks, limited training set size is a common challenge when applying DL. This paper explores the applicability of DL to the task of classifying a single axial slice from a CT exam into one of six anatomy regions. A total of 29000 images selected from 223 CT exams were manually labeled for ground truth. An additional 54 exams were labeled and used as an independent test set. The network architecture developed for this application is composed of 6 convolutional layers and 2 fully connected layers with RELU non-linear activations between each layer. Max-pooling was used after every second convolutional layer, and a softmax layer was used at the end. Given this base architecture, the effect of inclusion of network architecture components such as Dropout and Batch Normalization on network performance and training is explored. The network performance as a function of training and validation set size is characterized by training each network architecture variation using 5,10,20,40,50 and 100% of the available training data. The performance comparison of the various network architectures was done for anatomy classification as well as two computer vision datasets. The anatomy classifier accuracy varied from 74.1% to 92.3% in this study depending on the training size and network layout used. Dropout layers improved the model accuracy for all training sizes.
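The layer layout described above can be sketched as follows. This PyTorch illustration assumes arbitrary channel widths and a 128 x 128 single-channel input, and it omits the Dropout and Batch Normalization variants, so it is a hedged approximation rather than the authors' exact network.

```python
# Sketch: six conv layers with ReLU, pooling after every second conv,
# two fully connected layers, and a softmax over six anatomy classes.
import torch
import torch.nn as nn

class SliceClassifier(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        layers, in_ch = [], 1
        for i, out_ch in enumerate([16, 16, 32, 32, 64, 64]):   # assumed widths
            layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU()]
            if i % 2 == 1:                   # max-pool after every second conv layer
                layers.append(nn.MaxPool2d(2))
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, n_classes), nn.Softmax(dim=1),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SliceClassifier()
print(model(torch.randn(2, 1, 128, 128)).shape)   # torch.Size([2, 6])
```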
Guo, Chao-Yu; Bacic, Janine; Chan, Eugenia
2011-01-01
Background: Health care systems increasingly rely on patients’ data entry efforts to organize and assist in care delivery through health information exchange. Objectives: We sought to determine (1) the variation in burden imposed on parents by data entry efforts across paper-based and computer-based environments, and (2) the impact, if any, of parents’ health literacy on the task burden. Methods: We completed a randomized controlled trial of parent-completed data entry tasks. Parents of children with attention deficit hyperactivity disorder (ADHD) were randomized based on the Test of Functional Health Literacy in Adults (TOFHLA) to either a paper-based or computer-based environment for entry of health information on their children. The primary outcome was the National Aeronautics and Space Administration Task Load Index (TLX) total weighted score. Results: We screened 271 parents: 194 (71.6%) were eligible, and 180 of these (92.8%) constituted the study cohort. We analyzed 90 participants from each arm. Parents who completed information tasks on paper reported a higher task burden than those who worked in the computer environment: mean (SD) TLX scores were 22.8 (20.6) for paper and 16.3 (16.1) for computer. Assignment to the paper environment conferred a significant risk of higher task burden (F(1,178) = 4.05, P = .046). Adequate literacy was associated with lower task burden (decrease in burden score of 1.15 SD, P = .003). After adjusting for relevant child and parent factors, parents’ TOFHLA score (beta = -.02, P = .02) and task environment (beta = .31, P = .03) remained significantly associated with task burden. Conclusions: A tailored computer-based environment provided an improved task experience for data entry compared to the same tasks completed on paper. Health literacy was inversely related to task burden. Trial registration: Clinicaltrials.gov NCT00543257; http://www.clinicaltrials.gov/ct2/show/NCT00543257 (Archived by WebCite at http://www.webcitation.org/5vUVH2DYR) PMID:21269990
Learning-based stochastic object models for characterizing anatomical variations
NASA Astrophysics Data System (ADS)
Dolly, Steven R.; Lou, Yang; Anastasio, Mark A.; Li, Hua
2018-03-01
It is widely known that the optimization of imaging systems based on objective, task-based measures of image quality via computer simulation requires the use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in human anatomy within a specified ensemble of patients remains a challenging task. Previously reported numerical anatomic models lack the ability to accurately model inter-patient and inter-organ variations in human anatomy among a broad patient population, mainly because they are built from image data corresponding to only a few patients and individual anatomic organs. This may introduce phantom-specific bias into computer-simulation studies, where the study result is heavily dependent on which phantom is used. In certain applications, however, databases of high-quality volumetric images and organ contours are available that can facilitate this SOM development. In this work, a novel and tractable methodology for learning a SOM and generating numerical phantoms from a set of volumetric training images is developed. The proposed methodology learns geometric attribute distributions (GAD) of human anatomic organs from a broad patient population, which characterize both centroid relationships between neighboring organs and anatomic shape similarity of individual organs among patients. By randomly sampling the learned centroid and shape GADs with the constraints of the respective principal attribute variations learned from the training data, an ensemble of stochastic objects can be created. The randomness in organ shape and position reflects the learned variability of human anatomy. To demonstrate the methodology, a SOM of an adult male pelvis is computed and examples of corresponding numerical phantoms are created.
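In the same spirit, a per-organ shape distribution can be learned by principal component analysis over training shape vectors and then sampled to produce new plausible instances. The sketch below uses random stand-in data and is only one assumed way to realize the described geometric attribute distributions, not the authors' method.

```python
# Sketch: learn principal shape variation modes and sample a new shape instance.
import numpy as np

rng = np.random.default_rng(0)
shapes = rng.normal(size=(50, 30))            # 50 training shape descriptors (stand-in)

mean = shapes.mean(axis=0)
_, s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
n_modes = 5                                   # keep the principal variation modes
mode_std = s[:n_modes] / np.sqrt(len(shapes) - 1)

# Sample mode coefficients within the learned variation and reconstruct a shape.
coeffs = rng.normal(0.0, mode_std)
new_shape = mean + coeffs @ vt[:n_modes]
print(new_shape.shape)                        # (30,) one new stochastic shape
```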
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.
1995-01-01
This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.
Study to design and develop remote manipulator system. [computer simulation of human performance
NASA Technical Reports Server (NTRS)
Hill, J. W.; Mcgovern, D. E.; Sword, A. J.
1974-01-01
Modeling of human performance in remote manipulation tasks is reported, using automated procedures in which computers analyze and count motions during a manipulation task. Performance is monitored by an on-line computer capable of measuring the joint angles of both master and slave and, in some cases, the trajectory and velocity of the hand itself. In this way the operator's strategies with different transmission delays, displays, tasks, and manipulators can be analyzed in detail for comparison. Some progress is described in obtaining a set of standard tasks and difficulty measures for evaluating manipulator performance.
ERIC Educational Resources Information Center
Sins, Patrick H. M.; van Joolingen, Wouter R.; Savelsbergh, Elwin R.; van Hout-Wolters, Bernadette
2008-01-01
Purpose of the present study was to test a conceptual model of relations among achievement goal orientation, self-efficacy, cognitive processing, and achievement of students working within a particular collaborative task context. The task involved a collaborative computer-based modeling task. In order to test the model, group measures of…
Computational imaging of light in flight
NASA Astrophysics Data System (ADS)
Hullin, Matthias B.
2014-10-01
Many computer vision tasks are hindered by image formation itself, a process that is governed by the so-called plenoptic integral. By averaging light falling into the lens over space, angle, wavelength and time, a great deal of information is irreversibly lost. The emerging idea of transient imaging operates on a time resolution fast enough to resolve non-stationary light distributions in real-world scenes. It enables the discrimination of light contributions by the optical path length from light source to receiver, a dimension unavailable in mainstream imaging to date. Until recently, such measurements used to require high-end optical equipment and could only be acquired under extremely restricted lab conditions. To address this challenge, we introduced a family of computational imaging techniques operating on standard time-of-flight image sensors, for the first time allowing the user to "film" light in flight in an affordable, practical and portable way. Just as impulse responses have proven a valuable tool in almost every branch of science and engineering, we expect light-in-flight analysis to impact a wide variety of applications in computer vision and beyond.
Phan, Philippe; Mezghani, Neila; Aubin, Carl-Éric; de Guise, Jacques A; Labelle, Hubert
2011-07-01
Adolescent idiopathic scoliosis (AIS) is a complex spinal deformity whose assessment and treatment present many challenges. Computer applications have been developed to assist clinicians. A literature review on computer applications used in AIS evaluation and treatment has been undertaken. The algorithms used, their accuracy and clinical usability were analyzed. Computer applications have been used to create new classifications for AIS based on 2D and 3D features, assess scoliosis severity or risk of progression, and assist bracing and surgical treatment. It was found that classification accuracy could be improved using computer algorithms, that AIS patient follow-up and screening could be done using surface topography, thereby limiting radiation, and that bracing and surgical treatment could be optimized using simulations. Yet few computer applications are routinely used in clinics. With the development of 3D imaging and databases, huge amounts of clinical and geometrical data need to be taken into consideration when researching and managing AIS. Computer applications based on advanced algorithms will be able to handle tasks that could not otherwise be done, which can possibly improve the management of AIS patients. Clinically oriented applications and evidence that they can improve current care will be required for their integration into the clinical setting.
Modeling Cognitive Strategies during Complex Task Performing Process
ERIC Educational Resources Information Center
Mazman, Sacide Guzin; Altun, Arif
2012-01-01
The purpose of this study is to examine individuals' computer-based complex task performance processes and strategies in order to determine the reasons for failure, using the cognitive task analysis method and cued retrospective think-aloud with eye movement data. The study group was five senior students from Computer Education and Instructional Technologies…
Automated Instructional Monitors for Complex Operational Tasks. Final Report.
ERIC Educational Resources Information Center
Feurzeig, Wallace
A computer-based instructional system is described which incorporates diagnosis of students' difficulties in acquiring complex concepts and skills. A computer automatically generated a simulated display. It then monitored and analyzed a student's work in the performance of assigned training tasks. Two major tasks were studied. The first,…
An Unified Multiscale Framework for Planar, Surface, and Curve Skeletonization.
Jalba, Andrei C; Sobiecki, Andre; Telea, Alexandru C
2016-01-01
Computing skeletons of 2D shapes, and medial surface and curve skeletons of 3D shapes, is a challenging task. In particular, there is no unified framework that detects all types of skeletons using a single model, and also produces a multiscale representation which allows one to progressively simplify, or regularize, all skeleton types. In this paper, we present such a framework. We model skeleton detection and regularization by a conservative mass transport process from a shape's boundary to its surface skeleton, next to its curve skeleton, and finally to the shape center. The resulting density field can be thresholded to obtain a multiscale representation of progressively simplified surface, or curve, skeletons. We detail a numerical implementation of our framework which is demonstrably stable and has high computational efficiency. We demonstrate our framework on several complex 2D and 3D shapes.
Horizontal decomposition of data table for finding one reduct
NASA Astrophysics Data System (ADS)
Hońko, Piotr
2018-04-01
Attribute reduction, being one of the most essential tasks in rough set theory, is a challenge for data that does not fit in the available memory. This paper proposes new definitions of attribute reduction using horizontal data decomposition. Algorithms for computing a superreduct and subsequently exact reducts of a data table are developed and experimentally verified. In the proposed approach, the size of subtables obtained during the decomposition can be arbitrarily small. Reducts of the subtables are computed independently from one another using any heuristic method for finding one reduct. Compared with standard attribute reduction methods, the proposed approach can produce superreducts that usually differ only slightly from an exact reduct. The approach needs comparable time and much less memory to reduce the attribute set. The method proposed for removing unnecessary attributes from superreducts executes relatively fast for larger databases.
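The horizontal-decomposition idea above can be illustrated with a small sketch. It assumes a decision table stored as rows whose last element is the decision; the greedy single-reduct heuristic, the simple union-and-check merge step, and the example table are assumptions for illustration, not the paper's algorithms.

```python
def consistent(rows, attrs):
    """True if rows with equal values on `attrs` never have different decisions."""
    seen = {}
    for *conds, decision in rows:
        key = tuple(conds[a] for a in attrs)
        if seen.setdefault(key, decision) != decision:
            return False
    return True

def greedy_reduct(rows, attrs):
    """Drop attributes one by one as long as the table stays consistent."""
    reduct = list(attrs)
    for a in list(attrs):
        trial = [x for x in reduct if x != a]
        if trial and consistent(rows, trial):
            reduct = trial
    return reduct

def horizontal_reduct(rows, n_attrs, n_parts=4):
    chunk = max(1, len(rows) // n_parts)
    subtables = [rows[i:i + chunk] for i in range(0, len(rows), chunk)]
    # Union of per-subtable reducts; fall back to all attributes if the union is
    # not consistent for the whole table (the paper gives sharper merge results).
    union = sorted({a for t in subtables for a in greedy_reduct(t, list(range(n_attrs)))})
    if not consistent(rows, union):
        union = list(range(n_attrs))
    # Finally remove attributes that are unnecessary for the whole table.
    return greedy_reduct(rows, union)

# Example: 4 condition attributes, decision in the last position.
table = [(0, 1, 0, 1, "yes"), (1, 1, 0, 0, "no"), (0, 0, 1, 1, "yes"),
         (1, 0, 1, 0, "no"), (0, 1, 1, 0, "yes"), (1, 1, 1, 1, "no")]
print(horizontal_reduct(table, n_attrs=4))
```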
Conceptual design of distillation-based hybrid separation processes.
Skiborowski, Mirko; Harwardt, Andreas; Marquardt, Wolfgang
2013-01-01
Hybrid separation processes combine different separation principles and constitute a promising design option for the separation of complex mixtures. Particularly, the integration of distillation with other unit operations can significantly improve the separation of close-boiling or azeotropic mixtures. Although the design of single-unit operations is well understood and supported by computational methods, the optimal design of flowsheets of hybrid separation processes is still a challenging task. The large number of operational and design degrees of freedom requires a systematic and optimization-based design approach. To this end, a structured approach, the so-called process synthesis framework, is proposed. This article reviews available computational methods for the conceptual design of distillation-based hybrid processes for the separation of liquid mixtures. Open problems are identified that must be addressed to finally establish a structured process synthesis framework for such processes.
McFarland, Dennis J; Krusienski, Dean J; Wolpaw, Jonathan R
2006-01-01
The Wadsworth brain-computer interface (BCI), based on mu and beta sensorimotor rhythms, uses one- and two-dimensional cursor movement tasks and relies on user training. This is a real-time closed-loop system. Signal processing consists of channel selection, spatial filtering, and spectral analysis. Feature translation uses a regression approach and normalization. Adaptation occurs at several points in this process on the basis of different criteria and methods. It can use either feedforward (e.g., estimating the signal mean for normalization) or feedback control (e.g., estimating feature weights for the prediction equation). We view this process as the interaction between a dynamic user and a dynamic system that coadapt over time. Understanding the dynamics of this interaction and optimizing its performance represent a major challenge for BCI research.
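A rough sketch of the kind of pipeline the abstract describes (spatial filtering, spectral features, regression-based translation, normalization), assuming mu-band power features and a common-average reference; the channel indices, band edges, and weights are hypothetical placeholders, not the Wadsworth system's actual parameters.

```python
import numpy as np
from scipy.signal import welch

FS = 256                       # sampling rate in Hz (assumed)
MU_BAND = (8.0, 12.0)          # mu-rhythm band (assumed)

def spatial_filter(eeg):
    """Common-average reference; eeg has shape (channels, samples)."""
    return eeg - eeg.mean(axis=0, keepdims=True)

def band_power(signal, band, fs=FS):
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def cursor_velocity(eeg, weights, bias, mean, std):
    """Translate one EEG epoch into a normalized one-dimensional cursor velocity."""
    features = np.array([band_power(ch, MU_BAND) for ch in spatial_filter(eeg)])
    raw = weights @ features + bias          # regression-style feature translation
    return (raw - mean) / std                # normalization (running estimates in practice)

# Hypothetical example: 8 channels, a 1 s epoch, weights emphasizing one channel.
epoch = np.random.randn(8, FS)
weights = np.zeros(8)
weights[3] = -1.0
print(cursor_velocity(epoch, weights, bias=0.0, mean=0.0, std=1.0))
```

In a closed-loop system the mean, standard deviation, and regression weights would themselves be adapted over time, which is the coadaptation the abstract emphasizes.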
Whittaker, Rachel L; Park, Woojin; Dickerson, Clark R
2018-04-27
Efficient and holistic identification of fatigue-induced movement strategies can be limited by large between-subject variability in descriptors of joint angle data. One promising alternative to traditional or computationally intensive methods is the symbolic motion structure representation algorithm (SMSR), which identifies the basic spatial-temporal structure of joint angle data using string descriptors of temporal joint angle trajectories. This study attempted to use the SMSR to identify changes in upper extremity time series joint angle data during a repetitive goal-directed task causing muscle fatigue. Twenty-eight participants (15 M, 13 F) performed a seated repetitive task until fatigued. Upper extremity joint angles were extracted from motion capture for representative task cycles. SMSRs, averages and ranges of several joint angles were compared at the start and end of the repetitive task to identify kinematic changes with fatigue. At the group level, significant increases in the range of all joint angle data existed with large between-subject variability that posed a challenge to the interpretation of these fatigue-related changes. However, changes in the SMSRs across participants effectively summarized the adoption of adaptive movement strategies. This establishes SMSR as a viable, logical, and sensitive method of fatigue identification via kinematic changes, with novel application and pragmatism for visual assessment of fatigue development.
High Performance Descriptive Semantic Analysis of Semantic Graph Databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan
As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high performance computational resources to bear on this task. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths for the Billion Triple Challenge 2010 dataset.
Optimizing a mobile robot control system using GPU acceleration
NASA Astrophysics Data System (ADS)
Tuck, Nat; McGuinness, Michael; Martin, Fred
2012-01-01
This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.
Instrumentino: An Open-Source Software for Scientific Instruments.
Koenka, Israel Joel; Sáiz, Jorge; Hauser, Peter C
2015-01-01
Scientists often need to build dedicated computer-controlled experimental systems. For this purpose, it is becoming common to employ open-source microcontroller platforms, such as the Arduino. These boards and associated integrated software development environments provide affordable yet powerful solutions for the implementation of hardware control of transducers and acquisition of signals from detectors and sensors. It is, however, a challenge to write programs that allow interactive use of such arrangements from a personal computer. This task is particularly complex if some of the included hardware components are connected directly to the computer and not via the microcontroller. A graphical user interface framework, Instrumentino, was therefore developed to allow the creation of control programs for complex systems with minimal programming effort. By writing a single code file, a powerful custom user interface is generated, which enables the automatic running of elaborate operation sequences and observation of acquired experimental data in real time. The framework, which is written in Python, allows extension by users, and is made available as an open source project.
Efficient Parallelization of a Dynamic Unstructured Application on the Tera MTA
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak
1999-01-01
The success of parallel computing in solving real-life computationally-intensive problems relies on their efficient mapping and execution on large-scale multiprocessor architectures. Many important applications are both unstructured and dynamic in nature, making their efficient parallel implementation a daunting task. This paper presents the parallelization of a dynamic unstructured mesh adaptation algorithm using three popular programming paradigms on three leading supercomputers. We examine an MPI message-passing implementation on the Cray T3E and the SGI Origin2000, a shared-memory implementation using cache coherent nonuniform memory access (CC-NUMA) of the Origin2000, and a multi-threaded version on the newly-released Tera Multi-threaded Architecture (MTA). We compare several critical factors of this parallel code development, including runtime, scalability, programmability, and memory overhead. Our overall results demonstrate that multi-threaded systems offer tremendous potential for quickly and efficiently solving some of the most challenging real-life problems on parallel computers.
MIGS-GPU: Microarray Image Gridding and Segmentation on the GPU.
Katsigiannis, Stamos; Zacharia, Eleni; Maroulis, Dimitris
2017-05-01
Complementary DNA (cDNA) microarray is a powerful tool for simultaneously studying the expression level of thousands of genes. Nevertheless, the analysis of microarray images remains an arduous and challenging task due to the poor quality of the images that often suffer from noise, artifacts, and uneven background. In this study, the MIGS-GPU [Microarray Image Gridding and Segmentation on Graphics Processing Unit (GPU)] software for gridding and segmenting microarray images is presented. MIGS-GPU's computations are performed on the GPU by means of the compute unified device architecture (CUDA) in order to achieve fast performance and increase the utilization of available system resources. Evaluation on both real and synthetic cDNA microarray images showed that MIGS-GPU provides better performance than state-of-the-art alternatives, while the proposed GPU implementation achieves significantly lower computational times compared to the respective CPU approaches. Consequently, MIGS-GPU can be an advantageous and useful tool for biomedical laboratories, offering a user-friendly interface that requires minimum input in order to run.
Use of Parallel Micro-Platform for the Simulation the Space Exploration
NASA Astrophysics Data System (ADS)
Velasco Herrera, Victor Manuel; Velasco Herrera, Graciela; Rosano, Felipe Lara; Rodriguez Lozano, Salvador; Lucero Roldan Serrato, Karen
The purpose of this work is to create a parallel micro-platform that simulates the virtual movements of space exploration in 3D. One of the innovations of this design is the application of a lever mechanism for the transmission of the movement. The development of such a robot is a challenging task, very different from that of industrial manipulators because of a totally different set of target requirements. This work presents the computer-aided study and simulation of the movement of this parallel manipulator. The model was developed using the Unigraphics computer-aided design platform, in which the geometric modeling of each component and of the final assembly (CAD), the generation of files for the computer-aided manufacture (CAM) of each piece, and the kinematic simulation of the system under different driving schemes were carried out. We used the MATLAB aerospace toolbox and created an adaptive control module to simulate the system.
Scherer, Reinhold; Faller, Josef; Friedrich, Elisabeth V C; Opisso, Eloy; Costa, Ursula; Kübler, Andrea; Müller-Putz, Gernot R
2015-01-01
Brain-computer interfaces (BCIs) translate oscillatory electroencephalogram (EEG) patterns into action. Different mental activities modulate spontaneous EEG rhythms in various ways. Non-stationarity and inherent variability of EEG signals, however, make reliable recognition of modulated EEG patterns challenging. Able-bodied individuals who use a BCI for the first time achieve - on average - binary classification performance of about 75%. Performance in users with central nervous system (CNS) tissue damage is typically lower. User training generally enhances reliability of EEG pattern generation and thus also robustness of pattern recognition. In this study, we investigated the impact of mental tasks on binary classification performance in BCI users with central nervous system (CNS) tissue damage such as persons with stroke or spinal cord injury (SCI). Motor imagery (MI), that is, the kinesthetic imagination of movement (e.g. squeezing a rubber ball with the right hand), is the "gold standard" and is mainly used to modulate EEG patterns. Based on our recent results in able-bodied users, we hypothesized that pair-wise combination of "brain-teaser" (e.g. mental subtraction and mental word association) and "dynamic imagery" (e.g. hand and feet MI) tasks significantly increases classification performance of induced EEG patterns in the selected end-user group. Within-day (How stable is the classification within a day?) and between-day (How well does a model trained on day one perform on unseen data of day two?) analysis of variability of mental task pair classification in nine individuals confirmed the hypothesis. We found that the use of the classical MI task pair hand vs. feet leads to significantly lower classification accuracy - on average up to 15% less - in most users with stroke or SCI. User-specific selection of task pairs was again essential to enhance performance. We expect that the gained evidence will significantly contribute to making imagery-based BCI technology accessible to a larger population of users, including individuals with special needs due to CNS damage.
Simple, efficient allocation of modelling runs on heterogeneous clusters with MPI
Donato, David I.
2017-01-01
In scientific modelling and computation, the choice of an appropriate method for allocating tasks for parallel processing depends on the computational setting and on the nature of the computation. The allocation of independent but similar computational tasks, such as modelling runs or Monte Carlo trials, among the nodes of a heterogeneous computational cluster is a special case that has not been specifically evaluated previously. A simulation study shows that a method of on-demand (that is, worker-initiated) pulling from a bag of tasks in this case leads to reliably short makespans for computational jobs despite heterogeneity both within and between cluster nodes. A simple reference implementation in the C programming language with the Message Passing Interface (MPI) is provided.
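A minimal Python/mpi4py analogue of the worker-initiated "bag of tasks" pattern evaluated above (the paper's reference implementation is in C); the message tags and the stand-in workload are assumptions for illustration.

```python
from mpi4py import MPI

TAG_REQUEST, TAG_TASK, TAG_STOP = 1, 2, 3

def run_master(comm, tasks):
    """Hand out tasks from the bag whenever a worker asks for one."""
    status = MPI.Status()
    stopped, n_workers = 0, comm.Get_size() - 1
    while stopped < n_workers:
        comm.recv(source=MPI.ANY_SOURCE, tag=TAG_REQUEST, status=status)
        worker = status.Get_source()
        if tasks:
            comm.send(tasks.pop(), dest=worker, tag=TAG_TASK)
        else:
            comm.send(None, dest=worker, tag=TAG_STOP)   # bag is empty
            stopped += 1

def run_worker(comm):
    """Pull tasks on demand until told to stop (worker-initiated scheduling)."""
    status = MPI.Status()
    while True:
        comm.send(None, dest=0, tag=TAG_REQUEST)
        task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == TAG_STOP:
            break
        result = sum(range(task * 100_000))   # stand-in for one modelling run
        # A real worker would store `result` or send it back to the master.

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    if comm.Get_rank() == 0:
        run_master(comm, tasks=list(range(100)))
    else:
        run_worker(comm)
```

Launched with, for example, `mpiexec -n 4 python bag_of_tasks.py`, faster nodes simply return to the bag more often, which is why this pulling scheme copes well with heterogeneous clusters.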
MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, G; Pan, X; Stayman, J
2014-06-15
Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications. Learning Objectives: Learn the general methodologies associated with model-based 3D image reconstruction. Learn the potential advantages in image quality and dose associated with model-based image reconstruction. Learn the challenges associated with computational load and image quality assessment for such reconstruction methods. Learn how imaging task can be incorporated as a means to drive optimal image acquisition and reconstruction techniques. Learn how model-based reconstruction methods can incorporate prior information to improve image quality, ease sampling requirements, and reduce dose.
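One common model-based formulation of the kind discussed above is penalized weighted least squares, minimizing (Ax − y)ᵀW(Ax − y) + β‖x − x_prior‖², sketched here with a toy forward model and plain gradient descent; the matrices, weights, prior image, and step size are illustrative assumptions, not any speaker's method.

```python
import numpy as np

def mbir(A, y, W, beta, x_prior, step=1e-3, iters=500):
    """Gradient descent on the penalized weighted least-squares objective."""
    x = x_prior.copy()
    for _ in range(iters):
        grad = 2 * A.T @ (W * (A @ x - y)) + 2 * beta * (x - x_prior)
        x -= step * grad
    return x

rng = np.random.default_rng(0)
A = rng.random((64, 32))                            # toy forward-projection matrix
x_true = rng.random(32)
y = A @ x_true + 0.01 * rng.standard_normal(64)     # noisy measurements
W = np.ones(64)                                     # statistical weights (e.g., inverse variance)
x_prior = np.zeros(32)                              # prior image (all zeros here)
x_hat = mbir(A, y, W, beta=0.1, x_prior=x_prior)
```

Real reconstructions replace the dense matrix with an on-the-fly projector, use edge-preserving penalties, and choose the step via well-conditioned optimization methods, which is where the computational load mentioned above arises.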
Global models: Robot sensing, control, and sensory-motor skills
NASA Technical Reports Server (NTRS)
Schenker, Paul S.
1989-01-01
Robotics research has begun to address the modeling and implementation of a wide variety of unstructured tasks. Examples include automated navigation, platform servicing, custom fabrication and repair, deployment and recovery, and science exploration. Such tasks are poorly described at onset; the workspace layout is partially unfamiliar, and the task control sequence is only qualitatively characterized. The robot must model the workspace, plan detailed physical actions from qualitative goals, and adapt its instantaneous control regimes to unpredicted events. Developing robust representations and computational approaches for these sensing, planning, and control functions is a major challenge. The underlying domain constraints are very general, and seem to offer little guidance for well-bounded approximation of object shape and motion, manipulation postures and trajectories, and the like. This generalized modeling problem is discussed, with an emphasis on the role of sensing. It is also discussed that unstructured tasks often have, in fact, a high degree of underlying physical symmetry, and such implicit knowledge should be drawn on to model task performance strategies in a methodological fashion. A group-theoretic decomposition of the workspace organization, task goals, and their admissible interactions are proposed. This group-mechanical approach to task representation helps to clarify the functional interplay of perception and control, in essence, describing what perception is specifically for, versus how it is generically modeled. One also gains insight how perception might logically evolve in response to needs of more complex motor skills. It is discussed why, of the many solutions that are often mathematically admissible to a given sensory motor-coordination problem, one may be preferred over others.
Parallel processing using an optical delay-based reservoir computer
NASA Astrophysics Data System (ADS)
Van der Sande, Guy; Nguimdo, Romain Modeste; Verschaffelt, Guy
2016-04-01
Delay systems subject to delayed optical feedback have recently shown great potential in solving computationally hard tasks. By implementing a neuro-inspired computational scheme relying on the transient response to optical data injection, high processing speeds have been demonstrated. However, reservoir computing systems based on delay dynamics discussed in the literature are designed by coupling many different stand-alone components, which leads to bulky, non-monolithic systems that lack long-term stability. Here we numerically investigate the possibility of implementing reservoir computing schemes based on semiconductor ring lasers. Semiconductor ring lasers (SRLs) are semiconductor lasers in which the laser cavity consists of a ring-shaped waveguide. SRLs are highly integrable and scalable, making them ideal candidates for key components in photonic integrated circuits. SRLs can generate light in two counterpropagating directions between which bistability has been demonstrated. We demonstrate that two independent machine learning tasks, even with input data signals of a different nature, can be simultaneously computed using a single photonic nonlinear node, relying on the parallelism offered by photonics. We illustrate the performance on simultaneous chaotic time series prediction and a nonlinear channel equalization classification task. We take advantage of different directional modes to process individual tasks. Each directional mode processes one individual task to mitigate possible crosstalk between the tasks. Our results indicate that prediction/classification with errors comparable to the state-of-the-art performance can be obtained even with noise, despite the two tasks being computed simultaneously. We also find that good performance is obtained for both tasks over a broad range of parameters. The results are discussed in detail in [Nguimdo et al., IEEE Trans. Neural Netw. Learn. Syst. 26, pp. 3301-3307, 2015].
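A software analogue may help make the reservoir-computing idea concrete: a fixed random recurrent network driven by the input, with only a linear readout trained by ridge regression. The discrete-time network, its size, and the one-step-ahead prediction task below are assumptions for illustration; the paper's reservoir is a physical semiconductor ring laser, not this numerical model.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 200, 2000                                  # reservoir nodes, time steps
u = np.sin(0.2 * np.arange(T + 1)) + 0.1 * rng.standard_normal(T + 1)
target = u[1:]                                    # one-step-ahead prediction task

W_in = 0.5 * rng.standard_normal(N)
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale into a stable (echo-state) regime

states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])              # nonlinear transient response to the input
    states[t] = x

ridge = 1e-6                                      # only the linear readout is trained
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N), states.T @ target)
prediction = states @ W_out
print("training NMSE:", np.mean((prediction - target) ** 2) / np.var(target))
```

In the delay-based photonic version, the role of the N nodes is played by temporal "virtual nodes" along the feedback loop, and the two counterpropagating modes of the ring laser provide the task-level parallelism described above.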
Psychology of computer use: XXXII. Computer screen-savers as distractors.
Volk, F A; Halcomb, C G
1994-12-01
The differences in performance of 16 male and 16 female undergraduates on three cognitive tasks were investigated in the presence of visual distractors (computer-generated dynamic graphic images). These tasks included skilled and unskilled proofreading and listening comprehension. The visually demanding task of proofreading (skilled and unskilled) showed no significant decreases in performance in the distractor conditions. Results showed significant decrements, however, in performance on listening comprehension in at least one of the distractor conditions.
Interaction design challenges and solutions for ALMA operations monitoring and control
NASA Astrophysics Data System (ADS)
Pietriga, Emmanuel; Cubaud, Pierre; Schwarz, Joseph; Primet, Romain; Schilling, Marcus; Barkats, Denis; Barrios, Emilio; Vila Vilaro, Baltasar
2012-09-01
The ALMA radio-telescope, currently under construction in northern Chile, is a very advanced instrument that presents numerous challenges. From a software perspective, one critical issue is the design of graphical user interfaces for operations monitoring and control that scale to the complexity of the system and to the massive amounts of data users are faced with. Early experience operating the telescope with only a few antennas has shown that conventional user interface technologies are not adequate in this context. They consume too much screen real-estate, require many unnecessary interactions to access relevant information, and fail to provide operators and astronomers with a clear mental map of the instrument. They increase extraneous cognitive load, impeding tasks that call for quick diagnosis and action. To address this challenge, the ALMA software division adopted a user-centered design approach. For the last two years, astronomers, operators, software engineers and human-computer interaction researchers have been involved in participatory design workshops, with the aim of designing better user interfaces based on state-of-the-art visualization techniques. This paper describes the process that led to the development of those interface components and to a proposal for the science and operations console setup: brainstorming sessions, rapid prototyping, joint implementation work involving software engineers and human-computer interaction researchers, feedback collection from a broader range of users, further iterations and testing.
ClimateSpark: An in-memory distributed computing framework for big climate data analytics
NASA Astrophysics Data System (ADS)
Hu, Fei; Yang, Chaowei; Schnase, John L.; Duffy, Daniel Q.; Xu, Mengchao; Bowen, Michael K.; Lee, Tsengdar; Song, Weiwei
2018-06-01
The unprecedented growth of climate data creates new opportunities for climate studies, and yet big climate data pose a grand challenge to climatologists to efficiently manage and analyze big data. The complexity of climate data content and analytical algorithms increases the difficulty of implementing algorithms on high performance computing systems. This paper proposes an in-memory, distributed computing framework, ClimateSpark, to facilitate complex big data analytics and time-consuming computational tasks. Chunking data structure improves parallel I/O efficiency, while a spatiotemporal index is built for the chunks to avoid unnecessary data reading and preprocessing. An integrated, multi-dimensional, array-based data model (ClimateRDD) and ETL operations are developed to address big climate data variety by integrating the processing components of the climate data lifecycle. ClimateSpark utilizes Spark SQL and Apache Zeppelin to develop a web portal to facilitate the interaction among climatologists, climate data, analytic operations and computing resources (e.g., using SQL query and Scala/Python notebook). Experimental results show that ClimateSpark conducts different spatiotemporal data queries/analytics with high efficiency and data locality. ClimateSpark is easily adaptable to other big multiple-dimensional, array-based datasets in various geoscience domains.
An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.
Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei
2017-12-01
Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved a satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.
Real-Time Non-Intrusive Assessment of Viewing Distance during Computer Use.
Argilés, Marc; Cardona, Genís; Pérez-Cabré, Elisabet; Pérez-Magrané, Ramon; Morcego, Bernardo; Gispets, Joan
2016-12-01
To develop and test the sensitivity of an ultrasound-based sensor to assess the viewing distance of visual display terminals operators in real-time conditions. A modified ultrasound sensor was attached to a computer display to assess viewing distance in real time. Sensor functionality was tested on a sample of 20 healthy participants while they conducted four 10-minute randomly presented typical computer tasks (a match-three puzzle game, a video documentary, a task requiring participants to complete a series of sentences, and a predefined internet search). The ultrasound sensor offered good measurement repeatability. Game, text completion, and web search tasks were conducted at shorter viewing distances (54.4 cm [95% CI 51.3-57.5 cm], 54.5 cm [95% CI 51.1-58.0 cm], and 54.5 cm [95% CI 51.4-57.7 cm], respectively) than the video task (62.3 cm [95% CI 58.9-65.7 cm]). Statistically significant differences were found between the video task and the other three tasks (all p < 0.05). Range of viewing distances (from 22 to 27 cm) was similar for all tasks (F = 0.996; p = 0.413). Real-time assessment of the viewing distance of computer users with a non-intrusive ultrasonic device disclosed a task-dependent pattern.
Szucs, Kimberly A; Molnar, Megan
2017-04-01
The aim of this study was to provide a description of gender differences in the activation patterns of the four subdivisions of the trapezius (clavicular, upper, middle, lower) following a 60 min computer work task. Surface EMG was collected from these subdivisions from 21 healthy subjects during bilateral arm elevation pre-/post-task. Subjects completed a standardized 60 min computer work task at a standard, ergonomic workstation. Normalized activation and activation ratios of each trapezius subdivision were compared between genders and condition with repeated measures ANOVAs. The interaction effect of Gender×Condition for upper trapezius % activation approached significance at p=0.051, with males demonstrating greater activation post-task. The main effect of Condition was statistically significant for % activation of middle and lower trapezius (p<0.05), with both muscles demonstrating increased activation post-task. There was a statistically significant interaction effect of Gender×Condition for the Middle Trapezius/Upper Trapezius ratio and main effect of Condition for the Clavicular Trapezius/Upper Trapezius ratio, with a decreased ratio post-typing. Gender differences exist following 60 min of a low-force computer typing task. Imbalances in muscle activation and activation ratios following computer work may affect total shoulder kinematics and should be further explored.
Schmidhuber, Jürgen
2013-01-01
Most of computer science focuses on automatically solving given computational problems. I focus on automatically inventing or discovering problems in a way inspired by the playful behavior of animals and humans, to train a more and more general problem solver from scratch in an unsupervised fashion. Consider the infinite set of all computable descriptions of tasks with possibly computable solutions. Given a general problem-solving architecture, at any given time, the novel algorithmic framework PowerPlay (Schmidhuber, 2011) searches the space of possible pairs of new tasks and modifications of the current problem solver, until it finds a more powerful problem solver that provably solves all previously learned tasks plus the new one, while the unmodified predecessor does not. Newly invented tasks may require to achieve a wow-effect by making previously learned skills more efficient such that they require less time and space. New skills may (partially) re-use previously learned skills. The greedy search of typical PowerPlay variants uses time-optimal program search to order candidate pairs of tasks and solver modifications by their conditional computational (time and space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are those first found and validated. This biases the search toward pairs that can be described compactly and validated quickly. The computational costs of validating new tasks need not grow with task repertoire size. Standard problem solver architectures of personal computers or neural networks tend to generalize by solving numerous tasks outside the self-invented training set; PowerPlay’s ongoing search for novelty keeps breaking the generalization abilities of its present solver. This is related to Gödel’s sequence of increasingly powerful formal theories based on adding formerly unprovable statements to the axioms without affecting previously provable theorems. The continually increasing repertoire of problem-solving procedures can be exploited by a parallel search for solutions to additional externally posed tasks. PowerPlay may be viewed as a greedy but practical implementation of basic principles of creativity (Schmidhuber, 2006a, 2010). A first experimental analysis can be found in separate papers (Srivastava et al., 2012a,b, 2013). PMID:23761771
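A heavily simplified sketch of the search loop described above; `candidate_pairs`, `solves`, and `modify` are hypothetical placeholders standing in for PowerPlay's time-optimal program search, task validation, and solver modification, and the bounded number of rounds is only for demonstration.

```python
def powerplay(solver, solved_tasks, candidate_pairs, solves, modify, budget, rounds=10):
    """Greedy PowerPlay-style loop: accept the first (task, modification) pair that works."""
    for _ in range(rounds):                     # the original process runs open-endedly
        accepted = False
        # Candidates are assumed ordered by conditional (time and space) complexity.
        for task, modification in candidate_pairs(solver, solved_tasks, budget):
            new_solver = modify(solver, modification)
            is_novel = not solves(solver, task, budget)          # predecessor must fail
            keeps_old = all(solves(new_solver, t, budget) for t in solved_tasks)
            if is_novel and keeps_old and solves(new_solver, task, budget):
                solver, accepted = new_solver, True              # accept first valid pair
                solved_tasks.append(task)
                break
        if not accepted:
            budget *= 2                          # nothing found: search longer/larger
    return solver, solved_tasks
```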
Aviation Technician Training I and Task Analyses: Semester II. Field Review Copy.
ERIC Educational Resources Information Center
Upchurch, Richard
This guide for aviation technician training begins with a course description, resource information, and a course outline. Tasks/competencies are categorized into 16 concept/duty areas: understanding technical symbols and abbreviations; understanding mathematical terms, symbols, and formulas; computing decimals; computing fractions; computing ratio…
Fine-grained recognition of plants from images.
Šulc, Milan; Matas, Jiří
2017-01-01
Fine-grained recognition of plants from images is a challenging computer vision task, due to the diverse appearance and complex structure of plants, high intra-class variability and small inter-class differences. We review the state-of-the-art and discuss plant recognition tasks, from identification of plants from specific plant organs to general plant recognition "in the wild". We propose texture analysis and deep learning methods for different plant recognition tasks. The methods are evaluated and compared to the state-of-the-art. Texture analysis is only applied to images with unambiguous segmentation (bark and leaf recognition), whereas CNNs are only applied when sufficiently large datasets are available. The results provide insight into the complexity of different plant recognition tasks. The proposed methods outperform the state-of-the-art in leaf and bark classification and achieve very competitive results in plant recognition "in the wild". The results suggest that recognition of segmented leaves is practically a solved problem, when high volumes of training data are available. The generality and higher capacity of state-of-the-art CNNs make them suitable for plant recognition "in the wild" where the views on plant organs or plants vary significantly and the difficulty is increased by occlusions and background clutter.
Multi-Source Multi-Target Dictionary Learning for Prediction of Cognitive Decline
Zhang, Jie; Li, Qingyang; Caselli, Richard J.; Thompson, Paul M.; Ye, Jieping; Wang, Yalin
2017-01-01
Alzheimer’s Disease (AD) is the most common type of dementia. Identifying correct biomarkers may determine pre-symptomatic AD subjects and enable early intervention. Recently, multi-task sparse feature learning has been successfully applied to many computer vision and biomedical informatics problems. It aims to improve the generalization performance by exploiting the shared features among different tasks. However, most of the existing algorithms are formulated as a supervised learning scheme. Its drawback is either an insufficient number of features or missing label information. To address these challenges, we formulate an unsupervised framework for multi-task sparse feature learning based on a novel dictionary learning algorithm. To solve the unsupervised learning problem, we propose a two-stage Multi-Source Multi-Target Dictionary Learning (MMDL) algorithm. In stage 1, we propose a multi-source dictionary learning method to utilize the common and individual sparse features in different time slots. In stage 2, supported by a rigorous theoretical analysis, we develop a multi-task learning method to solve the missing label problem. Empirical studies on an N = 3970 longitudinal brain image data set, which involves 2 sources and 5 targets, demonstrate the improved prediction accuracy and speed efficiency of MMDL in comparison with other state-of-the-art algorithms. PMID:28943731
2013-01-01
Analyzing and storing data and results from next-generation sequencing (NGS) experiments is a challenging task, hampered by ever-increasing data volumes and frequent updates of analysis methods and tools. Storage and computation have grown beyond the capacity of personal computers and there is a need for suitable e-infrastructures for processing. Here we describe UPPNEX, an implementation of such an infrastructure, tailored to the needs of data storage and analysis of NGS data in Sweden serving various labs and multiple instruments from the major sequencing technology platforms. UPPNEX comprises resources for high-performance computing, large-scale and high-availability storage, an extensive bioinformatics software suite, up-to-date reference genomes and annotations, a support function with system and application experts as well as a web portal and support ticket system. UPPNEX applications are numerous and diverse, and include whole genome-, de novo- and exome sequencing, targeted resequencing, SNP discovery, RNASeq, and methylation analysis. There are over 300 projects that utilize UPPNEX and include large undertakings such as the sequencing of the flycatcher and Norwegian spruce. We describe the strategic decisions made when investing in hardware, setting up maintenance and support, allocating resources, and illustrate major challenges such as managing data growth. We conclude with summarizing our experiences and observations with UPPNEX to date, providing insights into the successful and less successful decisions made. PMID:23800020
A programmable two-qubit quantum processor in silicon
NASA Astrophysics Data System (ADS)
Watson, T. F.; Philips, S. G. J.; Kawakami, E.; Ward, D. R.; Scarlino, P.; Veldhorst, M.; Savage, D. E.; Lagally, M. G.; Friesen, Mark; Coppersmith, S. N.; Eriksson, M. A.; Vandersypen, L. M. K.
2018-03-01
Now that it is possible to achieve measurement and control fidelities for individual quantum bits (qubits) above the threshold for fault tolerance, attention is moving towards the difficult task of scaling up the number of physical qubits to the large numbers that are needed for fault-tolerant quantum computing. In this context, quantum-dot-based spin qubits could have substantial advantages over other types of qubit owing to their potential for all-electrical operation and ability to be integrated at high density onto an industrial platform. Initialization, readout and single- and two-qubit gates have been demonstrated in various quantum-dot-based qubit representations. However, as seen with small-scale demonstrations of quantum computers using other types of qubit, combining these elements leads to challenges related to qubit crosstalk, state leakage, calibration and control hardware. Here we overcome these challenges by using carefully designed control techniques to demonstrate a programmable two-qubit quantum processor in a silicon device that can perform the Deutsch–Jozsa algorithm and the Grover search algorithm—canonical examples of quantum algorithms that outperform their classical analogues. We characterize the entanglement in our processor by using quantum-state tomography of Bell states, measuring state fidelities of 85–89 per cent and concurrences of 73–82 per cent. These results pave the way for larger-scale quantum computers that use spins confined to quantum dots.
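For readers unfamiliar with the algorithms named above, here is a minimal NumPy state-vector simulation of the two-qubit Deutsch–Jozsa algorithm (one input qubit plus one ancilla); it illustrates the textbook circuit only and has nothing to do with the device-level control described in the paper.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)

def oracle(f):
    """U_f |x>|y> = |x>|y XOR f(x)> as a 4x4 matrix, basis ordered |x y>."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch_jozsa(f):
    state = np.kron([1, 0], [0, 1]).astype(float)   # start in |0>|1>
    state = np.kron(H, H) @ state                   # Hadamard both qubits
    state = oracle(f) @ state                       # single oracle query
    state = np.kron(H, I) @ state                   # Hadamard the input qubit
    p_input_is_1 = state[2] ** 2 + state[3] ** 2    # probability input qubit reads 1
    return "balanced" if p_input_is_1 > 0.5 else "constant"

print(deutsch_jozsa(lambda x: 0))   # constant function -> "constant"
print(deutsch_jozsa(lambda x: x))   # balanced function -> "balanced"
```

The quantum advantage is that one oracle query suffices, whereas a classical algorithm needs two evaluations of f to distinguish the two cases.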
A comparison of symptoms after viewing text on a computer screen and hardcopy.
Chu, Christina; Rosenfield, Mark; Portello, Joan K; Benzoni, Jaclyn A; Collier, Juanita D
2011-01-01
Computer vision syndrome (CVS) is a complex of eye and vision problems experienced during or related to computer use. Ocular symptoms may include asthenopia, accommodative and vergence difficulties and dry eye. CVS occurs in up to 90% of computer workers, and given the almost universal use of these devices, it is important to identify whether these symptoms are specific to computer operation, or are simply a manifestation of performing a sustained near-vision task. This study compared ocular symptoms immediately following a sustained near task. 30 young, visually-normal subjects read text aloud either from a desktop computer screen or a printed hardcopy page at a viewing distance of 50 cm for a continuous 20 min period. Identical text was used in the two sessions, which was matched for size and contrast. Target viewing angle and luminance were similar for the two conditions. Immediately following completion of the reading task, subjects completed a written questionnaire asking about their level of ocular discomfort during the task. When comparing the computer and hardcopy conditions, significant differences in median symptom scores were reported with regard to blurred vision during the task (t = 147.0; p = 0.03) and the mean symptom score (t = 102.5; p = 0.04). In both cases, symptoms were higher during computer use. Symptoms following sustained computer use were significantly worse than those reported after hard copy fixation under similar viewing conditions. A better understanding of the physiology underlying CVS is critical to allow more accurate diagnosis and treatment. This will allow practitioners to optimize visual comfort and efficiency during computer operation.
NASA Astrophysics Data System (ADS)
Kotulla, Ralf; Gopu, Arvind; Hayashi, Soichi
2016-08-01
Processing astronomical data to science readiness was and remains a challenge, in particular in the case of multi-detector instruments such as wide-field imagers. One such instrument, the WIYN One Degree Imager, is available to the astronomical community at large, and, in order to be scientifically useful to its varied user community on a short timescale, provides its users fully calibrated data in addition to the underlying raw data. However, time-efficient re-processing of the often large datasets with improved calibration data and/or software requires more than just a large number of CPU-cores and disk space. This is particularly relevant if all computing resources are general purpose and shared with a large number of users in a typical university setup. Our approach to address this challenge is a flexible framework, combining the best of both high performance (large number of nodes, internal communication) and high throughput (flexible/variable number of nodes, no dedicated hardware) computing. Based on the Advanced Message Queuing Protocol, we developed a Server-Manager-Worker framework. In addition to the server directing the workflow and the workers executing the actual work, the manager maintains a list of available workers, adds and/or removes individual workers from the worker pool, and re-assigns workers to different tasks. This provides the flexibility of optimizing the worker pool to the current task and workload, improves load balancing, and makes the most efficient use of the available resources. We present performance benchmarks and scaling tests, showing that, today and using existing, commodity shared-use hardware, we can process data with data throughputs (including data reduction and calibration) approaching those expected in the early 2020s for future observatories such as the Large Synoptic Survey Telescope.
Hanauer, David A; Wu, Danny T Y; Yang, Lei; Mei, Qiaozhu; Murkowski-Steffy, Katherine B; Vydiswaran, V G Vinod; Zheng, Kai
2017-03-01
The utility of biomedical information retrieval environments can be severely limited when users lack expertise in constructing effective search queries. To address this issue, we developed a computer-based query recommendation algorithm that suggests semantically interchangeable terms based on an initial user-entered query. In this study, we assessed the value of this approach, which has broad applicability in biomedical information retrieval, by demonstrating its application as part of a search engine that facilitates retrieval of information from electronic health records (EHRs). The query recommendation algorithm utilizes MetaMap to identify medical concepts from search queries and indexed EHR documents. Synonym variants from UMLS are used to expand the concepts along with a synonym set curated from historical EHR search logs. The empirical study involved 33 clinicians and staff who evaluated the system through a set of simulated EHR search tasks. User acceptance was assessed using the widely used technology acceptance model. The search engine's performance was rated consistently higher with the query recommendation feature turned on vs. off. The relevance of computer-recommended search terms was also rated high, and in most cases the participants had not thought of these terms on their own. The questions on perceived usefulness and perceived ease of use received overwhelmingly positive responses. A vast majority of the participants wanted the query recommendation feature to be available to assist in their day-to-day EHR search tasks. Challenges persist for users to construct effective search queries when retrieving information from biomedical documents including those from EHRs. This study demonstrates that semantically-based query recommendation is a viable solution to addressing this challenge.
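A minimal sketch of the synonym-based recommendation and expansion idea described above; the tiny synonym table stands in for the MetaMap/UMLS mappings and the log-derived synonym set, and the terms, groupings, and query syntax are illustrative assumptions only.

```python
# Hypothetical synonym table (a real system would derive this from UMLS and search logs).
SYNONYMS = {
    "mi": {"myocardial infarction", "heart attack"},
    "myocardial infarction": {"mi", "heart attack"},
    "heart attack": {"mi", "myocardial infarction"},
    "htn": {"hypertension", "high blood pressure"},
    "hypertension": {"htn", "high blood pressure"},
}

def recommend_terms(query):
    """Suggest semantically interchangeable terms for each recognized query term."""
    suggestions = {}
    for term in query.lower().split():
        if term in SYNONYMS:
            suggestions[term] = sorted(SYNONYMS[term])
    return suggestions

def expand_query(query):
    """OR together each term with its recommended variants for retrieval."""
    parts = []
    for term in query.lower().split():
        variants = SYNONYMS.get(term, set()) | {term}
        parts.append("(" + " OR ".join(sorted(f'"{v}"' for v in variants)) + ")")
    return " AND ".join(parts)

print(recommend_terms("mi history"))
print(expand_query("mi htn"))
```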
Skowron, Elizabeth A; Cipriano-Essel, Elizabeth; Gatzke-Kopp, Lisa M; Teti, Douglas M; Ammerman, Robert T
2014-07-01
This study examined parasympathetic physiology as a moderator of the effects of early adversity (i.e., child abuse and neglect) on children's inhibitory control. Children's respiratory sinus arrhythmia (RSA) was assessed during a resting baseline, two joint challenge tasks with mother, and an individual frustration task. RSA assessed during each of the joint parent-child challenge tasks moderated the effects of child maltreatment (CM) status on children's independently-assessed inhibitory control. No moderation effect was found for RSA assessed at baseline or in the child-alone challenge task. Among CM-exposed children, lower RSA levels during the joint task predicted the lowest inhibitory control, whereas higher joint task RSA was linked to higher inhibitory control scores that were indistinguishable from those of non-CM children. Results are discussed with regard to the importance of considering context specificity (i.e., individual and caregiver contexts) in how biomarkers inform our understanding of individual differences in vulnerability among at-risk children.
Pattern of Non-Task Interactions in Asynchronous Computer-Supported Collaborative Learning Courses
ERIC Educational Resources Information Center
Abedin, Babak; Daneshgar, Farhad; D'Ambra, John
2014-01-01
Despite the importance of the non-task interactions in computer-supported collaborative learning (CSCL) environments as emphasized in the literature, few studies have investigated online behavior of people in the CSCL environments. This paper studies the pattern of non-task interactions among postgraduate students in an Australian university. The…
Strategy Generalization across Orientation Tasks: Testing a Computational Cognitive Model
ERIC Educational Resources Information Center
Gunzelmann, Glenn
2008-01-01
Humans use their spatial information processing abilities flexibly to facilitate problem solving and decision making in a variety of tasks. This article explores the question of whether a general strategy can be adapted for performing two different spatial orientation tasks by testing the predictions of a computational cognitive model. Human…
ERIC Educational Resources Information Center
Collentine, Karina
2009-01-01
Second language acquisition (SLA) researchers strive to understand the language and exchanges that learners generate in synchronous computer-mediated communication (SCMC). Doughty and Long (2003) advocate replacing open-ended SCMC with task-based language teaching (TBLT) design principles. Since most task-based SCMC (TB-SCMC) research addresses an…
A Framework for Load Balancing of Tensor Contraction Expressions via Dynamic Task Partitioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Pai-Wei; Stock, Kevin; Rajbhandari, Samyam
In this paper, we introduce Dynamic Load-balanced Tensor Contractions (DLTC), a domain-specific library for efficient task-parallel execution of tensor contraction expressions, a class of computation encountered in quantum chemistry and physics. Our framework decomposes each contraction into smaller units of work, represented by an abstraction referred to as iterators. We exploit an extra level of parallelism by executing tasks from independent contractions concurrently through a dynamic load-balancing runtime. We demonstrate the improved performance, scalability, and flexibility of this approach for computing tensor contraction expressions on parallel computers, using examples from coupled cluster methods.
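The pattern described here can be pictured with a toy sketch (not the DLTC library itself): each contraction is tiled into independent block tasks, and a shared worker pool draws tasks from all contractions so idle workers stay busy.

```python
# Toy sketch of dynamic load-balanced tensor contraction tasks (not the DLTC library).
# Each contraction C = A @ B is tiled over blocks of its output rows; tasks from
# several independent contractions share one worker pool.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

BLOCK = 64

def make_tasks(name, A, B):
    """Yield (name, row_slice, A_block, B) units of work for C = A @ B."""
    for start in range(0, A.shape[0], BLOCK):
        rows = slice(start, min(start + BLOCK, A.shape[0]))
        yield (name, rows, A[rows], B)

def run_task(task):
    name, rows, A_block, B = task
    return name, rows, A_block @ B

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    contractions = {
        "c1": (rng.standard_normal((256, 128)), rng.standard_normal((128, 64))),
        "c2": (rng.standard_normal((512, 32)), rng.standard_normal((32, 16))),
    }
    results = {k: np.empty((a.shape[0], b.shape[1])) for k, (a, b) in contractions.items()}
    tasks = [t for k, (a, b) in contractions.items() for t in make_tasks(k, a, b)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        for name, rows, block in pool.map(run_task, tasks):
            results[name][rows] = block
    for k, (a, b) in contractions.items():
        assert np.allclose(results[k], a @ b)
```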
Secure Skyline Queries on Cloud Platform.
Liu, Jinfei; Yang, Juncheng; Xiong, Li; Pei, Jian
2017-04-01
Outsourcing data and computation to a cloud server provides a cost-effective way to support large-scale data storage and query processing. However, due to security and privacy concerns, sensitive data (e.g., medical records) need to be protected from the cloud server and other unauthorized users. One approach is to outsource encrypted data to the cloud server and have the cloud server perform query processing on the encrypted data only. It remains a challenging task to support various queries over encrypted data in a secure and efficient way such that the cloud server does not gain any knowledge about the data, the query, or the query result. In this paper, we study the problem of secure skyline queries over encrypted data. The skyline query is particularly important for multi-criteria decision making but also presents significant challenges due to its complex computations. We propose a fully secure skyline query protocol on data encrypted using semantically secure encryption. As a key subroutine, we present a new secure dominance protocol, which can also be used as a building block for other queries. Finally, we provide both serial and parallelized implementations and empirically study the protocols in terms of efficiency and scalability under different parameter settings, verifying the feasibility of our proposed solutions.
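For orientation, the plaintext dominance test and skyline computation underlying the protocol are simple; the paper's contribution is carrying them out over encrypted data without revealing anything to the server. The sketch below is only the unencrypted baseline.

```python
# Plaintext skyline baseline (the paper's protocols compute this over encrypted data).
# A point p dominates q if p is no worse than q in every dimension and strictly
# better in at least one (here "better" means smaller).

def dominates(p, q):
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

if __name__ == "__main__":
    data = [(1, 9), (2, 8), (4, 4), (5, 6), (9, 1), (7, 7)]
    print(skyline(data))   # [(1, 9), (2, 8), (4, 4), (9, 1)]
```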
NASA Technical Reports Server (NTRS)
Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn; Zukor, Dorothy (Technical Monitor)
2002-01-01
One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document surveys numerous software frameworks for potential use in Earth science modeling. Several frameworks are evaluated in depth, including Parallel Object-Oriented Methods and Applications (POOMA), Cactus (from the relativistic physics community), Overture, the Goddard Earth Modeling System (GEMS), the National Center for Atmospheric Research Flux Coupler, and the UCLA/UCB Distributed Data Broker (DDB). Frameworks evaluated in less detail include ROOT, Parallel Application Workspace (PAWS), and the Advanced Large-Scale Integrated Computational Environment (ALICE). A host of other frameworks and related tools are referenced in this context. The frameworks are evaluated individually and also compared with each other.
Challenging Postural Tasks Increase Asymmetry in Patients with Parkinson’s Disease
Beretta, Victor Spiandor; Gobbi, Lilian Teresa Bucken; Lirani-Silva, Ellen; Simieli, Lucas; Orcioli-Silva, Diego; Barbieri, Fabio Augusto
2015-01-01
The unilateral predominance of Parkinson’s disease (PD) symptoms suggests that balance control could be asymmetrical during static tasks. Although studies have shown that balance control asymmetries exist in patients with PD, these analyses were performed using only simple bipedal standing tasks. Challenging postural tasks, such as unipedal or tandem standing, could exacerbate balance control asymmetries. To address this, we studied the impact of challenging standing tasks on postural control asymmetry in patients with PD. Twenty patients with PD and twenty neurologically healthy individuals (control group) participated in this study. Participants performed three 30s trials for each postural task: bipedal, tandem adapted and unipedal standing. The center of pressure parameter was calculated for both limbs in each of these conditions, and the asymmetry between limbs was assessed using the symmetric index. A significant effect of condition was observed, with unipedal standing and tandem standing showing greater asymmetry than bipedal standing for the mediolateral root mean square (RMS) and area of sway parameters, respectively. In addition, a group*condition interaction indicated that, only for patients with PD, the unipedal condition showed greater asymmetry in the mediolateral RMS and area of sway than the bipedal condition and the tandem condition showed greater asymmetry in the area of sway than the bipedal condition. Patients with PD exhibited greater asymmetry while performing tasks requiring postural control when compared to neurologically healthy individuals, especially for challenging tasks such as tandem and unipedal standing. PMID:26367032
Fritz, Jonathan B; Elhilali, Mounya; David, Stephen V; Shamma, Shihab A
2007-07-01
Acoustic filter properties of A1 neurons can dynamically adapt to stimulus statistics, classical conditioning, instrumental learning and the changing auditory attentional focus. We have recently developed an experimental paradigm that allows us to view cortical receptive field plasticity on-line as the animal meets different behavioral challenges by attending to salient acoustic cues and changing its cortical filters to enhance performance. We propose that attention is the key trigger that initiates a cascade of events leading to the dynamic receptive field changes that we observe. In our paradigm, ferrets were initially trained, using conditioned avoidance training techniques, to discriminate between background noise stimuli (temporally orthogonal ripple combinations) and foreground tonal target stimuli. They learned to generalize the task for a wide variety of distinct background and foreground target stimuli. We recorded cortical activity in the awake behaving animal and computed on-line spectrotemporal receptive fields (STRFs) of single neurons in A1. We observed clear, predictable task-related changes in STRF shape while the animal performed spectral tasks (including single-tone and multi-tone detection, and two-tone discrimination) with different tonal targets. A different set of task-related changes occurred when the animal performed temporal tasks (including gap detection and click-rate discrimination). Distinctive cortical STRF changes may constitute a "task-specific signature". These spectral and temporal changes in cortical filters occur quite rapidly, within 2 min of task onset, and fade just as quickly after task completion or, in some cases, persist for hours. The same cell could multiplex by differentially changing its receptive field in different task conditions. On-line dynamic task-related changes, as well as persistent plastic changes, were observed at the single-unit, multi-unit and population levels. Auditory attention is likely to be pivotal in mediating these task-related changes, since the magnitude of STRF changes correlated with behavioral performance on tasks with novel targets. Overall, these results suggest the presence of an attention-triggered plasticity algorithm in A1 that can swiftly change STRF shape by transforming receptive fields to enhance figure/ground separation, using a contrast-matched filter to filter out the background while simultaneously enhancing the salient acoustic target in the foreground. These results favor the view of a nimble, dynamic, attentive and adaptive brain that can quickly reshape its sensory filter properties and sensori-motor links on a moment-to-moment basis, depending upon the current challenges the animal faces. In this review, we summarize our results in the context of a broader survey of the field of auditory attention, and then consider neuronal networks that could give rise to this phenomenon of attention-driven receptive field plasticity in A1.
Antonietti, Alberto; Casellato, Claudia; Garrido, Jesús A; Luque, Niceto R; Naveros, Francisco; Ros, Eduardo; D' Angelo, Egidio; Pedrocchi, Alessandra
2016-01-01
In this study, we defined a realistic cerebellar model using artificial spiking neural networks, testing it in computational simulations that reproduce associative motor tasks in multiple sessions of acquisition and extinction. Using evolutionary algorithms, we tuned the cerebellar microcircuit to find the near-optimal plasticity mechanism parameters that best reproduced human-like behavior in eye-blink classical conditioning, one of the most extensively studied paradigms related to the cerebellum. We used two models: one with only cortical plasticity and another including two additional plasticity sites at the nuclear level. First, both spiking cerebellar models reproduced real human behavior well, in terms of both "timing" and "amplitude", expressing rapid acquisition, stable late acquisition, rapid extinction, and faster reacquisition of an associative motor task. Even though the model with only the cortical plasticity site showed good learning capabilities, the model with distributed plasticity produced faster and more stable acquisition of conditioned responses in the reacquisition phase. This behavior is explained by the effect of the nuclear plasticities, which have slow dynamics and can express memory consolidation and saving. We showed how the spiking dynamics of multiple interactive neural mechanisms implicitly drive multiple essential components of complex learning processes. This study presents a very advanced computational model, developed jointly by biomedical engineers, computer scientists, and neuroscientists. Given its realistic features, the proposed model can provide confirmation of, and suggestions about, neurophysiological and pathological hypotheses and can be used in challenging clinical applications.
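The parameter-tuning step can be pictured with a minimal (1+λ) evolution strategy. The sketch below is purely illustrative: the "model" is a stand-in scalar function, and the parameter names and behavioural target are invented, not the authors' spiking cerebellar network or fitness measure.

```python
# Minimal (1+lambda) evolution strategy for tuning plasticity parameters against
# a behavioural target. The "model" here is a stand-in scalar function, not the
# spiking cerebellar network used in the study.
import random

TARGET_CR_RATE = 0.85          # hypothetical target rate of conditioned responses

def simulate(params):
    """Placeholder for running the cerebellar model and measuring behaviour."""
    ltp, ltd = params
    return max(0.0, min(1.0, 1.2 * ltp - 0.5 * ltd))

def fitness(params):
    return -abs(simulate(params) - TARGET_CR_RATE)   # higher is better

def evolve(generations=200, lam=8, sigma=0.05, seed=1):
    rng = random.Random(seed)
    best = (rng.random(), rng.random())
    for _ in range(generations):
        offspring = [tuple(max(0.0, p + rng.gauss(0, sigma)) for p in best) for _ in range(lam)]
        candidate = max(offspring, key=fitness)
        if fitness(candidate) >= fitness(best):
            best = candidate
    return best

if __name__ == "__main__":
    ltp, ltd = evolve()
    print(f"tuned params: LTP={ltp:.3f}, LTD={ltd:.3f}, CR rate={simulate((ltp, ltd)):.2f}")
```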
NASA Astrophysics Data System (ADS)
Xu, Kai; Wang, Yiwen; Wang, Yueming; Wang, Fang; Hao, Yaoyao; Zhang, Shaomin; Zhang, Qiaosheng; Chen, Weidong; Zheng, Xiaoxiang
2013-04-01
Objective. High-dimensional neural recordings bring computational challenges to movement decoding in motor brain-machine interfaces (mBMI), especially for portable applications. However, not all recorded neural activity relates to the execution of a given movement task. This paper proposes a local-learning-based method to perform neuron selection for gesture prediction in a reaching and grasping task. Approach. Nonlinear neural activities are decomposed into a set of linear ones in a weighted feature space. A margin is defined to measure the distance between inter-class and intra-class neural patterns. The weights, reflecting the importance of neurons, are obtained by minimizing a margin-based exponential error function. To find the most dominant neurons in the task, 1-norm regularization is introduced into the objective function to obtain sparse weights, where near-zero weights indicate irrelevant neurons. Main results. The signals of only 10 neurons out of 70, selected by the proposed method, could achieve over 95% of the full recording's gesture-decoding accuracy, regardless of which decoding method was used (support vector machine or K-nearest neighbor). The temporal activities of the selected neurons show visually distinguishable patterns associated with various hand states. Compared with other algorithms, the proposed method better eliminates irrelevant neurons with near-zero weights and, statistically, provides the important-neuron subset with the best decoding performance. The weights of important neurons usually converge within 10-20 iterations. In addition, we study the temporal and spatial variation of neuron importance over a period of one and a half months in the same task. A high decoding performance can be maintained by updating the neuron subset. Significance. The proposed algorithm effectively ascertains neuronal importance without assuming any coding model and provides high performance with different decoding models. It also identifies the important neurons more robustly in the presence of noisy signals. The low demand for computational resources, reflected in the fast convergence, indicates the feasibility of applying the method in portable BMI systems. Ascertaining the important neurons helps in visually inspecting the neural patterns associated with the movement task. The elimination of irrelevant neurons greatly reduces the computational burden of mBMI systems and maintains performance with better robustness.
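As a rough analogue of the sparsity-driven neuron selection described above, the sketch below uses L1-regularized logistic regression on synthetic firing rates; it is a stand-in for the paper's margin-based exponential loss, not a reimplementation of it.

```python
# Sketch of sparsity-driven neuron selection. L1-regularised logistic regression is
# used as a stand-in for the paper's margin-based exponential loss; the neural data
# are synthetic. Neurons with near-zero weights are treated as irrelevant and dropped.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_neurons, n_informative = 200, 70, 10

# Synthetic firing-rate features: only the first few neurons carry gesture information.
X = rng.poisson(5.0, size=(n_trials, n_neurons)).astype(float)
y = rng.integers(0, 2, size=n_trials)
X[:, :n_informative] += 4.0 * y[:, None]          # informative neurons

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(X, y)
weights = np.abs(clf.coef_).ravel()
selected = np.argsort(weights)[::-1][:10]          # keep the 10 most important neurons

print("selected neurons:", sorted(selected.tolist()))
print("training accuracy with selected subset:",
      LogisticRegression(max_iter=1000).fit(X[:, selected], y).score(X[:, selected], y))
```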
ERIC Educational Resources Information Center
Amiryousefi, Mohammad
2016-01-01
Previous task repetition studies have primarily focused on how task repetition characteristics affect the complexity, accuracy, and fluency in L2 oral production with little attention to L2 written production. The main purpose of the study reported in this paper was to examine the effects of task repetition versus procedural repetition on the…
Real-time imaging through strongly scattering media: seeing through turbid media, instantly
Sudarsanam, Sriram; Mathew, James; Panigrahi, Swapnesh; Fade, Julien; Alouini, Mehdi; Ramachandran, Hema
2016-01-01
Numerous everyday situations, such as navigation, medical imaging and rescue operations, require viewing through optically inhomogeneous media. This is a challenging task because photons propagate predominantly diffusively (rather than ballistically) due to random multiple scattering off the inhomogeneities. Real-time imaging with ballistic light under continuous-wave illumination is even more challenging due to the extremely weak signal, which necessitates voluminous data processing. Here we report imaging through strongly scattering media in real time and at rates several times the critical flicker frequency of the eye, so that motion is perceived as continuous. Two factors contributed to the speedup of more than three orders of magnitude over conventional techniques: the use of a simplified algorithm enabling processing of data on the fly, and the use of the task- and data-parallelization capabilities of typical desktop computers. The extreme simplicity of the technique, and its implementation with present-day low-cost technology, promises its utility in a variety of devices in maritime, aerospace, rail and road transport, in medical imaging and in defence. It is of equal interest to the general public and to adventure sports enthusiasts such as hikers, divers and mountaineers, who frequently encounter situations requiring real-time imaging through obscuring media. As a specific example, navigation under poor visibility is examined. PMID:27114106
Integration of active pauses and pattern of muscular activity during computer work.
St-Onge, Nancy; Samani, Afshin; Madeleine, Pascal
2017-09-01
Submaximal isometric muscle contractions have been reported to increase variability of muscle activation during computer work; however, other types of active contractions may be more beneficial. Our objective was to determine which type of active pause vs. rest is more efficient in changing muscle activity pattern during a computer task. Asymptomatic regular computer users performed a standardised 20-min computer task four times, integrating a different type of pause: sub-maximal isometric contraction, dynamic contraction, postural exercise and rest. Surface electromyographic (SEMG) activity was recorded bilaterally from five neck/shoulder muscles. Root-mean-square decreased with isometric pauses in the cervical paraspinals, upper trapezius and middle trapezius, whereas it increased with rest. Variability in the pattern of muscular activity was not affected by any type of pause. Overall, no detrimental effects on the level of SEMG during active pauses were found suggesting that they could be implemented without a cost on activation level or variability. Practitioner Summary: We aimed to determine which type of active pause vs. rest is best in changing muscle activity pattern during a computer task. Asymptomatic computer users performed a standardised computer task integrating different types of pauses. Muscle activation decreased with isometric pauses in neck/shoulder muscles, suggesting their implementation during computer work.
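For reference, the root-mean-square amplitude reported for the SEMG signals is a windowed quantity; a minimal sketch follows (the 250 ms window length is an arbitrary illustrative choice, not the study's setting).

```python
# Minimal sketch of windowed RMS amplitude of a surface EMG signal.
# The 250 ms window is an arbitrary illustrative choice, not the study's setting.
import numpy as np

def windowed_rms(signal, fs, window_s=0.25):
    """Return RMS amplitude of `signal` in non-overlapping windows of `window_s` seconds."""
    n = int(fs * window_s)
    trimmed = signal[: len(signal) // n * n].reshape(-1, n)
    return np.sqrt(np.mean(trimmed ** 2, axis=1))

if __name__ == "__main__":
    fs = 1000.0                                   # sampling rate, Hz
    t = np.arange(0, 5, 1 / fs)
    emg = 0.1 * np.random.default_rng(0).standard_normal(t.size) * (1 + np.sin(2 * np.pi * 0.5 * t))
    print(windowed_rms(emg, fs)[:5])
```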
Kim, Sun; Chatr-aryamontri, Andrew; Chang, Christie S.; Oughtred, Rose; Rust, Jennifer; Wilbur, W. John; Comeau, Donald C.; Dolinski, Kara; Tyers, Mike
2017-01-01
A great deal of information on the molecular genetics and biochemistry of model organisms has been reported in the scientific literature. However, this data is typically described in free text form and is not readily amenable to computational analyses. To this end, the BioGRID database systematically curates the biomedical literature for genetic and protein interaction data. This data is provided in a standardized computationally tractable format and includes structured annotation of experimental evidence. BioGRID curation necessarily involves substantial human effort by expert curators who must read each publication to extract the relevant information. Computational text-mining methods offer the potential to augment and accelerate manual curation. To facilitate the development of practical text-mining strategies, a new challenge was organized in BioCreative V for the BioC task, the collaborative Biocurator Assistant Task. This was a non-competitive, cooperative task in which the participants worked together to build BioC-compatible modules into an integrated pipeline to assist BioGRID curators. As an integral part of this task, a test collection of full text articles was developed that contained both biological entity annotations (gene/protein and organism/species) and molecular interaction annotations (protein–protein and genetic interactions (PPIs and GIs)). This collection, which we call the BioC-BioGRID corpus, was annotated by four BioGRID curators over three rounds of annotation and contains 120 full text articles curated in a dataset representing two major model organisms, namely budding yeast and human. The BioC-BioGRID corpus contains annotations for 6409 mentions of genes and their Entrez Gene IDs, 186 mentions of organism names and their NCBI Taxonomy IDs, 1867 mentions of PPIs and 701 annotations of PPI experimental evidence statements, 856 mentions of GIs and 399 annotations of GI evidence statements. The purpose, characteristics and possible future uses of the BioC-BioGRID corpus are detailed in this report. Database URL: http://bioc.sourceforge.net/BioC-BioGRID.html PMID:28077563
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swindeman, M. J.; Jetter, R. I.; Sham, T. -L.
One of the objectives of the high temperature design methodology activities is to develop and validate both improvements and the basic features of ASME Boiler and Pressure Vessel Code, Section III, Rules for Construction of Nuclear Facility Components, Division 5, High Temperature Reactors, Subsection HB, Subpart B (HBB). The overall scope of this task is to develop a computer program to aid assessment procedures of components under specified loading conditions in accordance with the elevated temperature design requirements for Division 5 Class A components. There are many features and alternative paths of varying complexity in HBB. The initial focus of this computer program is a basic path through the various options for a single reference material, 316H stainless steel. However, the computer program is being structured for eventual incorporation of all of the features and permitted materials of HBB. This report first provides a description of the overall computer program, particular challenges in developing numerical procedures for the assessment, and an overall approach to computer program development. This is followed by a more comprehensive appendix, which is the draft computer program manual for the program development. The strain limits rules have been implemented in the computer program. The evaluation of creep-fatigue damage will be implemented in future work.
Williams, Kent E; Voigt, Jeffrey R
2004-01-01
The research reported herein presents the results of an empirical evaluation that focused on the accuracy and reliability of cognitive models created using a computerized tool: the cognitive analysis tool for human-computer interaction (CAT-HCI). A sample of participants, expert in interacting with a newly developed tactical display for the U.S. Army's Bradley Fighting Vehicle, individually modeled their knowledge of 4 specific tasks employing the CAT-HCI tool. Measures of the accuracy and consistency of task models created by these task domain experts using the tool were compared with task models created by a double expert. The findings indicated a high degree of consistency and accuracy between the different "single experts" in the task domain in terms of the resultant models generated using the tool. Actual or potential applications of this research include assessing human-computer interaction complexity, determining the productivity of human-computer interfaces, and analyzing an interface design to determine whether methods can be automated.
NASA Astrophysics Data System (ADS)
Xue, ShiChuan; Wu, JunJie; Xu, Ping; Yang, XueJun
2018-02-01
Quantum computing offers capabilities superior to classical computing because of its superposition feature. Distinguishing several quantum states from quantum algorithm outputs is often a vital computational task. In most cases, the quantum states tend to be non-orthogonal due to superposition; quantum mechanics has shown that perfect discrimination cannot be achieved by measurement, forcing repeated measurements. Hence, it is important to determine the optimum measurement method, one requiring fewer repetitions and yielding a lower error rate. However, extending current measurement approaches, which mainly target quantum cryptography, to multi-qubit situations for quantum computing poses challenges, such as the need for global operations, which carry considerable experimental cost. Therefore, in this study, we propose an optimum subsystem method that avoids these difficulties. We provide a comparative analysis of the reduced subsystem method and the global minimum-error method for two-qubit problems; the conclusions have been verified experimentally. The results show that the subsystem method can effectively discriminate non-orthogonal two-qubit states, such as separable states, entangled pure states, and mixed states; the cost of the experimental process is significantly reduced in most circumstances, with an acceptable error rate. We believe the optimal subsystem method is the most valuable and promising approach for multi-qubit quantum computing applications.
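For the two-state case, the global minimum-error benchmark referred to here is the Helstrom bound; a small numerical sketch follows, using generic example states rather than the two-qubit states studied in the paper.

```python
# Helstrom (minimum-error) bound for discriminating two quantum states, used here
# only as the global benchmark; the density matrices below are generic examples,
# not the states studied in the paper.
import numpy as np

def helstrom_error(rho1, rho2, p1=0.5):
    """Minimum average error probability for distinguishing rho1 (prior p1) from rho2."""
    delta = p1 * rho1 - (1 - p1) * rho2
    trace_norm = np.sum(np.abs(np.linalg.eigvalsh(delta)))
    return 0.5 * (1.0 - trace_norm)

def pure(state):
    v = np.asarray(state, dtype=complex).reshape(-1, 1)
    v = v / np.linalg.norm(v)
    return v @ v.conj().T

if __name__ == "__main__":
    # Two non-orthogonal single-qubit states |0> and (|0>+|1>)/sqrt(2).
    rho_a = pure([1, 0])
    rho_b = pure([1, 1])
    print(f"minimum error probability: {helstrom_error(rho_a, rho_b):.4f}")  # ~0.1464
```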
Computational Biochemistry-Enzyme Mechanisms Explored.
Culka, Martin; Gisdon, Florian J; Ullmann, G Matthias
2017-01-01
Understanding enzyme mechanisms is a major task in comprehending how living cells work. Recent advances in biomolecular research provide huge amounts of data on enzyme kinetics and structure. The analysis of diverse experimental results and their combination into an overall picture is, however, often challenging. Microscopic details of enzymatic processes are often inferred from hints in macroscopic experimental data. Computational biochemistry aims at the creation of a computational model of an enzyme in order to explain microscopic details of the catalytic process and to reproduce or predict macroscopic experimental findings. Results of such computations are in part complementary to experimental data and provide an explanation of a biochemical process at the microscopic level. To evaluate the mechanism of an enzyme, a structural model is constructed that can be analyzed with several theoretical approaches. Several simulation methods can and should be combined to get a reliable picture of the process of interest. Furthermore, abstract models of biological systems can be constructed by combining computational and experimental data. In this review, we discuss structural computational models of enzymatic systems. We first discuss various models for simulating enzyme catalysis. We then review various approaches for characterizing the enzyme mechanism both qualitatively and quantitatively using different modeling approaches. © 2017 Elsevier Inc. All rights reserved.
Mode-sum regularization of ⟨ϕ2⟩ in the angular-splitting method
NASA Astrophysics Data System (ADS)
Levi, Adam; Ori, Amos
2016-08-01
The computation of the renormalized stress-energy tensor or ⟨ϕ2⟩ren in curved spacetime is a challenging task, at both the conceptual and technical levels. Recently we developed a new approach to compute such renormalized quantities in asymptotically flat curved spacetimes, based on the point-splitting procedure. Our approach requires the spacetime to admit some symmetry. We already implemented this approach to compute ⟨ϕ2⟩ren in a stationary spacetime using t splitting, namely splitting in the time-translation direction. Here we present the angular-splitting version of this approach, aimed for computing renormalized quantities in a general (possibly dynamical) spherically symmetric spacetime. To illustrate how the angular-splitting method works, we use it here to compute ⟨ϕ2⟩ren for a quantum massless scalar field in Schwarzschild background, in various quantum states (Boulware, Unruh, and Hartle-Hawking states). We find excellent agreement with the results obtained from the t -splitting variant and also with other methods. Our main goal in pursuing this new mode-sum approach was to enable the computation of the renormalized stress-energy tensor in a dynamical spherically symmetric background, e.g. an evaporating black hole. The angular-splitting variant presented here is most suitable to this purpose.
Gross, Colin A; Reddy, Chandan K; Dazzo, Frank B
2010-02-01
Quantitative microscopy and digital image analysis are underutilized in microbial ecology largely because of the laborious task of segmenting foreground object pixels from the background, especially in complex color micrographs of environmental samples. In this paper, we describe an improved computing technology developed to alleviate this limitation. The system's uniqueness is its ability to edit digital images accurately when presented with the difficult yet commonplace challenge of removing background pixels whose three-dimensional color space overlaps the range that defines foreground objects. Image segmentation is accomplished by utilizing algorithms that address color and spatial relationships of user-selected foreground object pixels. The color segmentation algorithm, evaluated on 26 complex micrographs at single-pixel resolution, achieved an overall pixel classification accuracy of over 99%. Several applications illustrate how this improved computing technology can successfully resolve numerous challenges of complex color segmentation in order to produce images from which quantitative information can be accurately extracted, thereby gaining new perspectives on the in situ ecology of microorganisms. Examples include improvements in the quantitative analysis of (1) microbial abundance and phylotype diversity of single cells classified by their discriminating color within heterogeneous communities, (2) cell viability, (3) spatial relationships and intensity of bacterial gene expression involved in cellular communication between individual cells within rhizoplane biofilms, and (4) biofilm ecophysiology based on ribotype-differentiated radioactive substrate utilization. The stand-alone executable file plus user manual and tutorial images for this color segmentation computing application are freely available at http://cme.msu.edu/cmeias/ . This improved computing technology opens new opportunities for imaging applications where discriminating colors really matter most, thereby strengthening quantitative microscopy-based approaches to advance microbial ecology in situ at individual single-cell resolution.
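A much-simplified sketch of color-plus-spatial segmentation is shown below; it is not the CMEIAS algorithm, only an illustration of classifying pixels by color distance to user-selected foreground samples and then discarding small connected specks.

```python
# Much-simplified sketch of color + spatial segmentation (not the CMEIAS algorithm):
# pixels close in RGB space to user-picked foreground samples are kept, then small
# connected specks are discarded as background noise.
import numpy as np
from scipy import ndimage

def segment(image, foreground_samples, color_tol=40.0, min_size=25):
    """image: HxWx3 uint8 array; foreground_samples: list of RGB triples picked by the user."""
    img = image.astype(float)
    samples = np.asarray(foreground_samples, dtype=float)
    # Distance of every pixel to its nearest foreground sample colour.
    dists = np.min(np.linalg.norm(img[:, :, None, :] - samples[None, None, :, :], axis=-1), axis=-1)
    mask = dists < color_tol
    # Spatial cleanup: drop connected components smaller than min_size pixels.
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_size))
    return keep

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 60, size=(64, 64, 3), dtype=np.uint8)   # dark background
    img[20:30, 20:30] = (200, 60, 60)                             # reddish "cell"
    print(segment(img, foreground_samples=[(200, 60, 60)]).sum()) # ~100 pixels kept
```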
Interactive Computer Based Assessment Tasks: How Problem-Solving Process Data Can Inform Instruction
ERIC Educational Resources Information Center
Zoanetti, Nathan
2010-01-01
This article presents key steps in the design and analysis of a computer based problem-solving assessment featuring interactive tasks. The purpose of the assessment is to support targeted instruction for students by diagnosing strengths and weaknesses at different stages of problem-solving. The first focus of this article is the task piloting…
Item Mass and Complexity and the Arithmetic Computation of Students with Learning Disabilities.
ERIC Educational Resources Information Center
Cawley, John F.; Shepard, Teri; Smith, Maureen; Parmar, Rene S.
1997-01-01
The performance of 76 students (ages 10 to 15) with learning disabilities on four tasks of arithmetic computation within each of the four basic operations was examined. Tasks varied in difficulty level and number of strokes needed to complete all items. Intercorrelations between task sets and operations were examined as was the use of…
Task Scheduling in Desktop Grids: Open Problems
NASA Astrophysics Data System (ADS)
Chernov, Ilya; Nikitina, Natalia; Ivashko, Evgeny
2017-12-01
We survey the areas of Desktop Grid task scheduling that seem to be insufficiently studied so far and are promising for efficiency, reliability, and quality of Desktop Grid computing. These topics include optimal task grouping, "needle in a haystack" paradigm, game-theoretical scheduling, domain-imposed approaches, special optimization of the final stage of the batch computation, and Enterprise Desktop Grids.
ERIC Educational Resources Information Center
Shamsudin, Sarimah; Nesi, Hilary
2006-01-01
This paper will describe an ESP approach to the design and implementation of computer-mediated communication (CMC) tasks for computer science students at Universiti Teknologi Malaysia, and discuss the effectiveness of the chat feature of Windows NetMeeting as a tool for developing specified language skills. CMC tasks were set within a programme of…
Simplified Distributed Computing
NASA Astrophysics Data System (ADS)
Li, G. G.
2006-05-01
Distributed computing ranges from high-performance parallel computing and GRID computing to environments where the idle CPU cycles and storage space of numerous networked systems are harnessed to work together through the Internet. In this work we focus on building an easy and affordable solution for computationally intensive problems in scientific applications, based on existing technology and hardware resources. The system consists of a series of controllers. When a job request is detected by a monitor or initiated by an end user, the job manager launches the specific job handler for that job. The job handler pre-processes the job, partitions it into relatively independent tasks, and distributes the tasks into the processing queue. The task handler picks up the related tasks, processes them, and puts the results back into the processing queue. The job handler also monitors and examines the tasks and the results, and assembles the task results into the overall solution for the job request once all tasks for the job are finished. A resource manager configures and monitors all participating nodes. A distributed agent is deployed on all participating nodes to manage software downloads and report status. The processing queue is the key to the success of this distributed system. We use BEA's WebLogic JMS queue in our implementation. It guarantees message delivery and has message-priority and retry features so that tasks never get lost. The entire system is built on J2EE technology and can be deployed on heterogeneous platforms. It can handle algorithms and applications developed in any language on any platform. J2EE adaptors are provided to connect existing applications to the system so that applications and algorithms running on Unix, Linux and Windows can all work together. The system is easy and fast to develop because it is based on the industry's well-adopted technology. It is highly scalable and heterogeneous. It is an open system, and any number and type of machines can join it to provide computational power. This asynchronous message-based system can achieve response times on the order of a second. For efficiency, communication between distributed tasks is often done at the start and end of the tasks, but intermediate task status can also be provided.
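A minimal thread-based analogue of the job-handler / processing-queue / task-handler pattern described above is sketched below, using Python's standard-library queue in place of a JMS message queue; all names are illustrative.

```python
# Minimal analogue of the job-handler / task-queue / task-handler pattern described
# above, using Python's standard-library queue in place of a JMS message queue.
import queue, threading

def job_handler(job, task_queue, n_tasks):
    """Pre-process the job and partition it into independent tasks."""
    data = list(range(job["n"]))
    chunk = max(1, len(data) // n_tasks)
    for i in range(0, len(data), chunk):
        task_queue.put(data[i:i + chunk])

def task_handler(task_queue, results):
    """Pick up tasks, process them, and put results back."""
    while True:
        try:
            chunk = task_queue.get_nowait()
        except queue.Empty:
            return
        results.put(sum(x * x for x in chunk))       # the "compute-intensive" work

if __name__ == "__main__":
    tasks, results = queue.Queue(), queue.Queue()
    job_handler({"n": 10_000}, tasks, n_tasks=8)
    workers = [threading.Thread(target=task_handler, args=(tasks, results)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    # Assemble task results into the overall solution for the job.
    total = sum(results.get() for _ in range(results.qsize()))
    print(total, "==", sum(x * x for x in range(10_000)))
```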
Karunaratne, Asuntha S; Korenman, Stanley G; Thomas, Samantha L; Myles, Paul S; Komesaroff, Paul A
2010-04-05
To assess the efficacy, with respect to participant understanding of information, of a computer-based approach to communication about complex, technical issues that commonly arise when seeking informed consent for clinical research trials. An open, randomised controlled study of 60 patients with diabetes mellitus, aged 27-70 years, recruited between August 2006 and October 2007 from the Department of Diabetes and Endocrinology at the Alfred Hospital and Baker IDI Heart and Diabetes Institute, Melbourne. Participants were asked to read information about a mock study via a computer-based presentation (n = 30) or a conventional paper-based information statement (n = 30). The computer-based presentation contained visual aids, including diagrams, video, hyperlinks and quiz pages. Understanding of information as assessed by quantitative and qualitative means. Assessment scores used to measure level of understanding were significantly higher in the group that completed the computer-based task than the group that completed the paper-based task (82% v 73%; P = 0.005). More participants in the group that completed the computer-based task expressed interest in taking part in the mock study (23 v 17 participants; P = 0.01). Most participants from both groups preferred the idea of a computer-based presentation to the paper-based statement (21 in the computer-based task group, 18 in the paper-based task group). A computer-based method of providing information may help overcome existing deficiencies in communication about clinical research, and may reduce costs and improve efficiency in recruiting participants for clinical trials.
Clothing Matching for Visually Impaired Persons
Yuan, Shuai; Tian, YingLi; Arditi, Aries
2012-01-01
Matching clothes is a challenging task for many blind people. In this paper, we present a proof of concept system to solve this problem. The system consists of 1) a camera connected to a computer to perform pattern and color matching process; 2) speech commands for system control and configuration; and 3) audio feedback to provide matching results for both color and patterns of clothes. This system can handle clothes in deficient color without any pattern, as well as clothing with multiple colors and complex patterns to aid both blind and color deficient people. Furthermore, our method is robust to variations of illumination, clothing rotation and wrinkling. To evaluate the proposed prototype, we collect two challenging databases including clothes without any pattern, or with multiple colors and different patterns under different conditions of lighting and rotation. Results reported here demonstrate the robustness and effectiveness of the proposed clothing matching system. PMID:22523465
NASA Astrophysics Data System (ADS)
Dyer, Mark; Grey, Thomas; Kinnane, Oliver
2017-11-01
It has become increasingly common for tasks traditionally carried out by engineers to be undertaken by technicians and technologists with access to sophisticated computers and software that can often perform complex calculations that were previously the responsibility of engineers. Not surprisingly, this development raises serious questions about the future role of engineers and the education needed to address these changes in technology, as well as emerging priorities ranging from societal to environmental challenges. In response to these challenges, a new design module was created for undergraduate engineering students to design and build temporary shelters for a wide variety of end users, from refugees to the homeless and children. Even though the module provided guidance on principles of design thinking and on methods for observing users' needs through field studies, the students found it difficult to respond to the needs of specific end users and instead focused more on purely technical issues.
Reconfigurable intelligent sensors for health monitoring: a case study of pulse oximeter sensor.
Jovanov, E; Milenkovic, A; Basham, S; Clark, D; Kelley, D
2004-01-01
Design of low-cost, miniature, lightweight, ultra low-power, intelligent sensors capable of customization and seamless integration into a body area network for health monitoring applications presents one of the most challenging tasks for system designers. To answer this challenge we propose a reconfigurable intelligent sensor platform featuring a low-power microcontroller, a low-power programmable logic device, a communication interface, and a signal conditioning circuit. The proposed solution promises a cost-effective, flexible platform that allows easy customization, run-time reconfiguration, and energy-efficient computation and communication. The development of a common platform for multiple physical sensors and a repository of both software procedures and soft intellectual property cores for hardware acceleration will increase reuse and alleviate costs of transition to a new generation of sensors. As a case study, we present an implementation of a reconfigurable pulse oximeter sensor.
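The core computation such a pulse oximeter platform performs is the classic ratio-of-ratios SpO2 estimate; the sketch below illustrates it with purely illustrative calibration constants (a real device relies on its own empirical calibration).

```python
# Rough sketch of the classic ratio-of-ratios SpO2 estimate a pulse oximeter computes
# from red and infrared photoplethysmograms. The linear calibration constants below
# are illustrative only; a real device uses its own empirically derived calibration.
import numpy as np

def spo2_estimate(red, infrared):
    """red, infrared: 1-D arrays of raw PPG samples over a few pulse cycles."""
    def ac_dc(x):
        return (np.max(x) - np.min(x)), np.mean(x)
    ac_r, dc_r = ac_dc(np.asarray(red, float))
    ac_ir, dc_ir = ac_dc(np.asarray(infrared, float))
    R = (ac_r / dc_r) / (ac_ir / dc_ir)          # ratio of ratios
    return np.clip(110.0 - 25.0 * R, 0.0, 100.0) # illustrative linear calibration

if __name__ == "__main__":
    t = np.linspace(0, 4, 400)                    # ~4 s of simulated data
    red = 1.00 + 0.012 * np.sin(2 * np.pi * 1.2 * t)
    ir = 1.00 + 0.020 * np.sin(2 * np.pi * 1.2 * t)
    print(f"SpO2 estimate: {spo2_estimate(red, ir):.1f}%")
```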
Intelligent computer-aided training and tutoring
NASA Technical Reports Server (NTRS)
Loftin, R. Bowen; Savely, Robert T.
1991-01-01
Specific autonomous training systems based on artificial intelligence technology for use by NASA astronauts, flight controllers, and ground-based support personnel that demonstrate an alternative to current training systems are described. In addition to these specific systems, the evolution of a general architecture for autonomous intelligent training systems that integrates many of the features of traditional training programs with artificial intelligence techniques is presented. These Intelligent Computer-Aided Training (ICAT) systems would provide, for the trainee, much of the same experience that could be gained from the best on-the-job training. By integrating domain expertise with a knowledge of appropriate training methods, an ICAT session should duplicate, as closely as possible, the trainee undergoing on-the-job training in the task environment, benefitting from the full attention of a task expert who is also an expert trainer. Thus, the philosophy of the ICAT system is to emulate the behavior of an experienced individual devoting his full time and attention to the training of a novice - proposing challenging training scenarios, monitoring and evaluating the actions of the trainee, providing meaningful comments in response to trainee errors, responding to trainee requests for information, giving hints (if appropriate), and remembering the strengths and weaknesses displayed by the trainee so that appropriate future exercises can be designed.
Enabling Efficient Climate Science Workflows in High Performance Computing Environments
NASA Astrophysics Data System (ADS)
Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.
2015-12-01
A typical climate science workflow often involves a combination of data acquisition, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provides a myriad of challenges when running in a high-performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well-tested and newly developed functionality to move data, perform analysis, apply statistical routines, and, finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project, we highlight a stack of tools our team utilizes and has developed to ensure that large-scale simulation and analysis work is commonplace. These tools assist in everything from generation/procurement of data (HTAR/Globus) to automated publication of results to portals like the Earth Systems Grid Federation (ESGF), while executing everything in between in a scalable, task-parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases to which they have been applied.
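The task-parallel analysis step can be pictured with a small sketch; it uses Python multiprocessing rather than MPI purely to stay self-contained, and the file names and per-file statistic are placeholders, not the CASCADE tools.

```python
# Small illustration of task-parallel per-file analysis (the CASCADE stack uses MPI;
# multiprocessing is used here only to keep the sketch self-contained). File names
# and the per-file statistic are placeholders.
from multiprocessing import Pool
import numpy as np

def analyze_one(path):
    """Placeholder per-file analysis: pretend to load a field and return an extreme statistic."""
    year = int(path.split("_")[-1].split(".")[0])                      # "model_output_2003.nc" -> 2003
    field = np.random.default_rng(year).standard_normal((180, 360))    # stand-in for a 2-D climate field
    return path, float(np.percentile(field, 99.9))

if __name__ == "__main__":
    files = [f"model_output_{year}.nc" for year in range(2000, 2010)]
    with Pool(processes=4) as pool:
        for path, stat in pool.map(analyze_one, files):
            print(path, "99.9th percentile:", round(stat, 3))
```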
On robust parameter estimation in brain-computer interfacing
NASA Astrophysics Data System (ADS)
Samek, Wojciech; Nakajima, Shinichi; Kawanabe, Motoaki; Müller, Klaus-Robert
2017-12-01
Objective. The reliable estimation of parameters such as mean or covariance matrix from noisy and high-dimensional observations is a prerequisite for successful application of signal processing and machine learning algorithms in brain-computer interfacing (BCI). This challenging task becomes significantly more difficult if the data set contains outliers, e.g. due to subject movements, eye blinks or loose electrodes, as they may heavily bias the estimation and the subsequent statistical analysis. Although various robust estimators have been developed to tackle the outlier problem, they ignore important structural information in the data and thus may not be optimal. Typical structural elements in BCI data are the trials consisting of a few hundred EEG samples and indicating the start and end of a task. Approach. This work discusses the parameter estimation problem in BCI and introduces a novel hierarchical view on robustness which naturally comprises different types of outlierness occurring in structured data. Furthermore, the class of minimum divergence estimators is reviewed and a robust mean and covariance estimator for structured data is derived and evaluated with simulations and on a benchmark data set. Main results. The results show that state-of-the-art BCI algorithms benefit from robustly estimated parameters. Significance. Since parameter estimation is an integral part of various machine learning algorithms, the presented techniques are applicable to many problems beyond BCI.
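To make the robust-estimation idea concrete, the sketch below shows a generic iteratively reweighted mean/covariance estimator that down-weights outlier trials by Mahalanobis distance; it is a standard M-estimator-style illustration, not the minimum-divergence estimator derived in the paper.

```python
# Generic iteratively reweighted mean/covariance estimator that down-weights outlier
# trials by their Mahalanobis distance. This is a standard M-estimator-style sketch,
# not the minimum-divergence estimator derived in the paper.
import numpy as np

def robust_mean_cov(X, n_iter=20, c=3.0):
    """X: (n_trials, n_channels) feature matrix, e.g. one feature vector per EEG trial."""
    mean, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    for _ in range(n_iter):
        diff = X - mean
        d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.pinv(cov), diff)   # squared Mahalanobis distances
        w = np.minimum(1.0, c**2 / np.maximum(d2, 1e-12))                # down-weight outlying trials
        mean = (w[:, None] * X).sum(axis=0) / w.sum()
        diff = X - mean
        cov = (w[:, None] * diff).T @ diff / w.sum()
    return mean, cov

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 8))
    X[:5] += 25.0                       # a few artifact-contaminated trials
    m_rob, _ = robust_mean_cov(X)
    print("classical mean norm:", round(np.linalg.norm(X.mean(0)), 2),
          " robust mean norm:", round(np.linalg.norm(m_rob), 2))
```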
How should Fitts' Law be applied to human-computer interaction?
NASA Technical Reports Server (NTRS)
Gillan, D. J.; Holden, K.; Adam, S.; Rudisill, M.; Magee, L.
1992-01-01
The paper challenges the notion that any Fitts' Law model can be applied generally to human-computer interaction, and proposes instead that applying Fitts' Law requires knowledge of the users' sequence of movements, direction of movement, and typical movement amplitudes as well as target sizes. Two experiments examined a text selection task with sequences of controlled movements (point-click and point-drag). For the point-click sequence, a Fitts' Law model that used the diagonal across the text object in the direction of pointing (rather than the horizontal extent of the text object) as the target size provided the best fit for the pointing time data, whereas for the point-drag sequence, a Fitts' Law model that used the vertical size of the text object as the target size gave the best fit. Dragging times were fitted well by Fitts' Law models that used either the vertical or horizontal size of the terminal character in the text object. Additional results of note were that pointing in the point-click sequence was consistently faster than in the point-drag sequence, and that pointing in either sequence was consistently faster than dragging. The discussion centres around the need to define task characteristics before applying Fitts' Law to an interface design or analysis, analyses of pointing and of dragging, and implications for interface design.
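The model behind all of these variants is Fitts' Law, commonly written MT = a + b·log2(2A/W); what the experiments vary is which dimension of the text object is taken as the effective target width W. A tiny sketch follows, with illustrative (not fitted) coefficients.

```python
# Fitts' Law sketch: predicted movement time MT = a + b * log2(2A / W).
# The paper's point is that the effective target width W should depend on the task
# (e.g. the diagonal of the text object for point-click pointing, its vertical size
# for point-drag). The coefficients a and b below are illustrative, not fitted values.
import math

def fitts_mt(amplitude, width, a=0.10, b=0.15):
    """Predicted movement time (s) for a movement of `amplitude` to a target of `width`."""
    return a + b * math.log2(2 * amplitude / width)

text_height, text_width, amplitude = 0.5, 4.0, 20.0   # cm, illustrative text object and movement
diagonal = math.hypot(text_height, text_width)

print("point-click (W = diagonal):", round(fitts_mt(amplitude, diagonal), 3), "s")
print("point-drag  (W = height):  ", round(fitts_mt(amplitude, text_height), 3), "s")
```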
NASA Astrophysics Data System (ADS)
Widge, Alik S.; Moritz, Chet T.
2014-04-01
Objective. There is great interest in closed-loop neurostimulators that sense and respond to a patient's brain state. Such systems may have value for neurological and psychiatric illnesses where symptoms have high intraday variability. Animal models of closed-loop stimulators would aid preclinical testing. We therefore sought to demonstrate that rodents can directly control a closed-loop limbic neurostimulator via a brain-computer interface (BCI). Approach. We trained rats to use an auditory BCI controlled by single units in prefrontal cortex (PFC). The BCI controlled electrical stimulation in the medial forebrain bundle, a limbic structure involved in reward-seeking. Rigorous offline analyses were performed to confirm volitional control of the neurostimulator. Main results. All animals successfully learned to use the BCI and neurostimulator, with closed-loop control of this challenging task demonstrated at 80% of PFC recording locations. Analysis across sessions and animals confirmed statistically robust BCI control and specific, rapid modulation of PFC activity. Significance. Our results provide a preliminary demonstration of a method for emotion-regulating closed-loop neurostimulation. They further suggest that activity in PFC can be used to control a BCI without pre-training on a predicate task. This offers the potential for BCI-based treatments in refractory neurological and mental illness.
Multi-level discriminative dictionary learning with application to large scale image classification.
Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua
2015-10-01
The sparse coding technique has shown flexibility and capability in image representation and analysis, and it is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for classification tasks) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computational complexity when dealing with a large number of categories, making them less satisfactory in large-scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large-scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture information at different scales. Moreover, each node at the lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The dictionaries and associated classification models are learnt jointly by minimizing an overall tree loss. Experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy at competitive computation cost compared with other sparse coding methods for large-scale image classification.
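A flat, single-level sketch of the "sparse codes as classification features" idea is shown below using scikit-learn; the paper's hierarchical dictionaries and joint tree loss are not reproduced, and the data are synthetic.

```python
# Flat, single-level sketch of "sparse codes as classification features" using
# scikit-learn. The paper's multi-level dictionaries and joint tree loss are not
# reproduced here; this only shows the basic dictionary -> sparse code -> classifier
# pipeline on synthetic data.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_per_class, dim = 200, 64

# Two synthetic classes separated along a random direction.
offset = 2.0 * rng.standard_normal(dim)
X = np.vstack([
    rng.standard_normal((n_per_class, dim)) + offset,
    rng.standard_normal((n_per_class, dim)) - offset,
])
y = np.repeat([0, 1], n_per_class)

dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0).fit(X)
codes = dico.transform(X)                       # sparse codes w.r.t. the learnt dictionary
clf = LogisticRegression(max_iter=2000).fit(codes, y)
print("training accuracy on sparse codes:", clf.score(codes, y))
```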
Advanced information processing system: Local system services
NASA Technical Reports Server (NTRS)
Burkhardt, Laura; Alger, Linda; Whittredge, Roy; Stasiowski, Peter
1989-01-01
The Advanced Information Processing System (AIPS) is a multi-computer architecture composed of hardware and software building blocks that can be configured to meet a broad range of application requirements. The hardware building blocks are fault-tolerant, general-purpose computers, fault- and damage-tolerant networks (both computer and input/output), and interfaces between the networks and the computers. The software building blocks are the major software functions: local system services, input/output, system services, inter-computer system services, and the system manager. The foundation of the local system services is an operating system with the functions required for a traditional real-time multi-tasking computer, such as task scheduling, inter-task communication, memory management, interrupt handling, and time maintenance. Resting on this foundation are the redundancy management functions necessary in a redundant computer and the status reporting functions required for an operator interface. The functional requirements, functional design and detailed specifications for all the local system services are documented.